Congress: ECR24
Poster Number: C-19422
Type: EPOS Radiologist (scientific)
Authorblock: T. Santner1, C. Ruppert2, S. Gianolini3, J-G. Stalheim4, S. Frei5, M. Hondl Adametz6, V. Fröhlich7, S. Hofvind8, G. Widmann1; 1Innsbruck/AT, 2Zürich/CH, 3Glattpark/CH, 4Bergen/NO, 5Lausanne/CH, 6Vienna/AT, 7Wiener Neustadt/AT, 8Oslo/NO
Disclosures:
Tina Santner: Nothing to disclose
Carlotta Ruppert: Employee: b-rayZ AG
Stefano Gianolini: Nothing to disclose
Johanne-Gro Stalheim: Nothing to disclose
Stephanie Frei: Nothing to disclose
Michaela Hondl Adametz: Nothing to disclose
Vanessa Fröhlich: Nothing to disclose
Solveig Hofvind: Nothing to disclose
Gerlig Widmann: Nothing to disclose
Keywords: Artificial Intelligence, Breast, Mammography, Screening, Quality assurance
Purpose: The aim of this study was to evaluate human inter-reader agreement and the impact of subjectivity in diagnostic image quality assessment (PGMI: perfect–good–moderate–inadequate [1]) of screening mammograms, and to investigate the role of artificial intelligence (AI) as an alternative reader.
Methods and materials: It has already been extensively discussed that breast diagnostics, and breast cancer screening in particular, requires the highest possible quality in all aspects of the diagnostic pathway to achieve the necessary cancer detection rate and overall reliability. With regard to technical image quality, there are usually clear guidelines on how, by whom, when, and to what extent checks must be carried out (e.g., constancy checks). However, it is much more difficult to measure diagnostic image quality – involving the ideal positioning...
Results: Significant inter-reader variability among human readers, with poor to moderate agreement (κ = -0.018 to κ = 0.41), was observed; some readers showed more homogeneous interpretations of quality features and overall quality than others. Interestingly, we found no evident direct correlation between individual experience or background and the differences in ratings. In comparison, the AI software demonstrated higher consistency with fewer outliers (both positive and negative), highlighting its generalization capability. For a comprehensive evaluation of overall image quality, we conducted an analysis...
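The inter-reader agreement figures above are Cohen's kappa values, which correct observed per-case agreement for the agreement expected by chance from each reader's category frequencies. A minimal sketch of the statistic, using hypothetical PGMI ratings (P/G/M/I) rather than study data:

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same cases."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of cases rated identically
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of the two raters' marginal frequencies
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(ratings_a) | set(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical PGMI ratings from two readers (not data from this study)
reader1 = ["P", "G", "G", "M", "I", "G", "P", "M"]
reader2 = ["G", "G", "M", "M", "I", "P", "P", "G"]
print(round(cohen_kappa(reader1, reader2), 3))  # → 0.304
```

A kappa near 0.3, as in this toy example, falls in the poor-to-moderate band reported above; κ = 1 would indicate perfect agreement and κ ≤ 0 agreement no better than chance.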
Conclusion: Notably, human inter-reader disagreement in PGMI assessment of screening mammography is substantial. The results emphasize the need to rethink the assessment of both individual quality features and overall image quality. AI software may reliably categorize such quality. This offers the potential for objective standardization, comprehensive long-term monitoring, and also immediate feedback that will help radiographers achieve and maintain the required high level of quality in screening programs. Consideration and evaluation should be given to detailed functions, possible combinations with...
References
[1] National Health Service Breast Screening Programme (1989): Quality Assurance Guidelines for Mammography (Pritchard Report), Oxford: NHS Breast Screening Programme. Cited by: Klabunde, Carrie/Bouchard, Francoise/Taplin, Stephen/Scharpantgen, Astrid/Ballard-Barbash, Rachel (2001): Quality assurance for screening mammography: an international comparison, in: Journal of Epidemiology and Community Health, 55, 204-212.
[2] Taplin, Stephen H./Rutter, Carolyn M./Finder, Charles/Mandelson, Margaret T./Houn, Florence/White, Emily (2002): Screening Mammography: Clinical Image Quality and the Risk of Interval Breast Cancer, in: American Journal of Roentgenology, 178, 797-803.
[3] Waade, Gunvor G./Danielsen, Anders Skyrud/Holen, Åsne S./Larsen, Marthe/Hanestad,...