Congress: ECR25
Poster Number: C-17761
Type: Poster: EPOS Radiologist (scientific)
DOI: 10.26044/ecr2025/C-17761
Authorblock: N. Vrancsik, V. Shabani, M. Allaria, P-A. A. Poletti, K-O. Loevblad, F. Kurz, M. Scheffler; Geneva/CH
Disclosures:
Nóra Vrancsik: Nothing to disclose
Venera Shabani: Nothing to disclose
Marie Allaria: Nothing to disclose
Pierre-Alexandre Aloïs Poletti: Nothing to disclose
Karl-Olof Loevblad: Nothing to disclose
Felix Kurz: Nothing to disclose
Max Scheffler: Nothing to disclose
Keywords: eHealth, Experimental, Computer Applications-Detection, diagnosis, Workforce
Conclusion

While AI systems are remarkably adept at handling text-based queries and at accessing specialist knowledge that would challenge a general radiologist, they stumble when detailed image interpretation is required, with accuracy dropping by almost 40% compared with human radiologists. Conversely, radiologists shine in their natural domain, confidently interpreting all types of medical images, recognising imaging signs and integrating visual information, and they maintain superior performance in image-dependent and atypical cases. In our comparative analysis of performance on multiple-choice questions featured on the Radiopaedia website, the overall score of the best-performing AI, the 06/2024 online version of ChatGPT-4o, was 67%, placing it between radiologists (74%) and residents (64%). Notably, an updated version of GPT-4o tested in 11/2024 scored only 56% on the same question set. Despite the small sample size, these results offer a clear insight: both AI systems perform considerably worse in image analysis than their text-processing abilities would suggest.

The subgroup analyses we conducted allowed us to address an intriguing question: does multimodal AI truly understand the submitted text or images? AI employs language models that analyse statistical relationships between words and image patterns to predict and produce coherent responses. However, it does not possess consciousness; its output is based on learned patterns rather than true understanding. This implies that while AI can effectively process and generate text and pictures in a way that appears meaningful, it lacks genuine comprehension or awareness, which currently hinders its practical use in real-world medical settings. An illustrative example is provided.

Fig 5: A beautiful but rather artistic depiction of the human pelvis, according to GPT-4o
While AI can serve as a useful diagnostic adjunct, its current limitations raise concerns and warrant cautious use. This research supports an inspiring conclusion: the fusion of AI and radiological expertise represents not a compromise, but rather an evolution in medical imaging that maximises the strengths of both human insight and technological capability.
