Congress:
ECR25
Poster Number:
C-26542
Type:
Poster: EPOS Radiologist (scientific)
DOI:
10.26044/ecr2025/C-26542
Authorblock:
A. Borji1, G. Kronreif1, B. Angermayr2, S. Hatamikia2; 1Wiener Neustadt/AT, 2Krems an der Donau/AT
Disclosures:
Arezoo Borji:
Other: PhD student
Gernot Kronreif:
Other: Austrian Center for Medical Innovation and Technology
Other: Chief Scientific Officer
Bernhard Angermayr:
Other: professor
Sepideh Hatamikia:
Other: Assistant professor
Keywords:
Artificial Intelligence, Bones, Vascular, CAD, Cost-effectiveness, Experimental investigations, Image verification, Outcomes
The results demonstrate that the ViT model outperformed the EfficientNet-B3 model in both classification accuracy and computational efficiency, achieving 96% accuracy, 94.3% precision, 95.8% recall, and a 97% F1 score.
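For readers less familiar with these metrics, the sketch below shows how accuracy, precision, recall, and F1 are defined from confusion-matrix counts. The counts used here are hypothetical, purely to illustrate the formulas; they are not the study's data.

```python
# Illustrative metric definitions; tp/fp/fn/tn counts below are made up.
def classification_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)                       # of predicted positives, how many are right
    recall = tp / (tp + fn)                          # of true positives, how many are found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(tp=95, fp=6, fn=4, tn=95)
```

Note that F1 is the harmonic mean of precision and recall, so it always lies between those two values.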
The superior performance of the ViT model can be attributed to the following factors:
- Enhanced feature representation: Unlike CNNs, which rely on local feature extraction, ViT processes all image patches simultaneously, allowing it to distinguish subtle differences between viable and non-viable tumor tissues more effectively.
- Better class-wide generalization: The use of SMOTE ensured that the model learned balanced representations for all classes, particularly improving classification accuracy for underrepresented non-viable tumor samples.
- Efficient handling of high-dimensional data: ViT's self-attention mechanism reduces dependence on large convolutional kernels, making it more effective in analyzing high-resolution histopathological images while maintaining computational efficiency.
- Context-aware feature learning: Unlike CNNs that focus on small receptive fields, ViT models global spatial relationships between tissue structures, which is crucial for histopathology, where tumor patterns extend beyond localized regions.
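The class-balancing step mentioned above (SMOTE) can be sketched as follows: a synthetic minority sample is created by interpolating between a real minority sample and one of its nearest minority-class neighbours. This is a minimal NumPy illustration of the idea, with toy data, not the imbalanced-learn implementation or the study's features.

```python
import numpy as np

rng = np.random.default_rng(0)
minority = rng.normal(size=(10, 4))  # toy: 10 minority samples, 4 features


def smote_sample(X, k=3, rng=rng):
    """Synthesize one minority sample by linear interpolation (SMOTE idea)."""
    i = rng.integers(len(X))
    d = np.linalg.norm(X - X[i], axis=1)     # distances to every minority sample
    neighbours = np.argsort(d)[1:k + 1]      # k nearest, excluding the sample itself
    j = rng.choice(neighbours)
    lam = rng.random()                       # interpolation factor in [0, 1)
    return X[i] + lam * (X[j] - X[i])        # point on the segment between the two


new_point = smote_sample(minority)
```

Because the new point lies on the line segment between two existing minority samples, oversampling this way enlarges the minority class without exactly duplicating any sample.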
The results indicate that the ViT model is well-suited for automated histopathological classification, providing a reliable and efficient alternative to conventional CNN-based models.
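The global-context property attributed to ViT above comes from scaled dot-product self-attention, where every image patch attends to every other patch in a single layer. The toy NumPy sketch below illustrates only that mechanism; the shapes and random weights are illustrative and are not the trained model's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patches, dim = 16, 8                       # e.g. a 4x4 grid of patch embeddings
patches = rng.normal(size=(n_patches, dim))


def self_attention(x, w_q, w_k, w_v):
    """One head of scaled dot-product self-attention over all patches."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])              # (n_patches, n_patches)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ v                                   # each output mixes all patches


w_q, w_k, w_v = (rng.normal(size=(dim, dim)) for _ in range(3))
out = self_attention(patches, w_q, w_k, w_v)
```

Each row of the attention matrix sums to one, so every output patch embedding is a weighted mixture over the whole image, which is the contrast with a CNN's local receptive field.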