Breast cancer is one of the leading causes of cancer death worldwide. The mammography quality control manual includes a section on positioning that defines six evaluation areas, including the inframammary fold (IMF), the nipple, and the retromammary space. Mammography guidelines define the imaging techniques required for proper mammography. However, the breast positioning criteria in the guidelines are limited to visual, qualitative assessments of whether the positioning is appropriate. This visual evaluation is rated on a three-point scale, but it lacks quantitativeness and is subject to inter-observer variation. Therefore, a quantitative evaluation of the positioning criteria is needed.
Positioning evaluation using artificial intelligence has been investigated in previous studies [1, 2]. However, these reports cover only the automatic detection and classification of the IMF and the nipple. Furthermore, the effect of density-related image processing on the evaluation has not been assessed.
In this study, we propose a quantitative evaluation of mammographic positioning criteria using a deep convolutional neural network that automatically detects each relevant part of the mammogram. The target areas were three of the six items in the quality control manual: the IMF, the nipple, and the retromammary space, which were automatically detected and classified. Additionally, the effectiveness of grayscale processing using contrast-limited adaptive histogram equalization (CLAHE) was verified.
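As a minimal sketch of the CLAHE preprocessing step described above, the snippet below applies contrast-limited adaptive histogram equalization to a grayscale mammogram using OpenCV. The library choice, file path, clip limit, and tile grid size are illustrative assumptions and are not specified in this study.

```python
import cv2
import numpy as np


def apply_clahe(image: np.ndarray, clip_limit: float = 2.0,
                tile_grid_size: tuple = (8, 8)) -> np.ndarray:
    """Apply CLAHE to a single-channel (grayscale) mammogram.

    clip_limit and tile_grid_size are illustrative defaults, not the
    parameters used in this study.
    """
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    return clahe.apply(image)


if __name__ == "__main__":
    # Load a mammogram as an 8-bit grayscale image (placeholder path).
    mammogram = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)
    # Enhance local contrast before feeding the image to the detector.
    enhanced = apply_clahe(mammogram)
    cv2.imwrite("mammogram_clahe.png", enhanced)
```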