Congress: ECR25
Poster Number: C-25294
Type: Poster: EPOS Radiologist (scientific)
Authorblock: J. F. Ojeda Esparza1, D. Botta1, A. Fitisiori1, C. Santarosa1, M. Pucci1, C. Meinzer2, Y-C. Yun2, K-O. Loevblad1, F. T. Kurz1; 1Geneva/CH, 2Heidelberg/DE
Disclosures:
Jose Federico Ojeda Esparza: Nothing to disclose
Daniele Botta: Nothing to disclose
Aikaterini Fitisiori: Nothing to disclose
Corrado Santarosa: Nothing to disclose
Marcella Pucci: Nothing to disclose
Clara Meinzer: Nothing to disclose
Yeong-Chul Yun: Nothing to disclose
Karl-Olof Loevblad: Nothing to disclose
Felix T Kurz: Nothing to disclose
Keywords: Artificial Intelligence, CNS, Catheter arteriography, CT, MR, Computer Applications-General, Diagnostic procedure, Technology assessment, Education and training, Image verification
Purpose

Artificial intelligence (AI) tools are evolving rapidly, with large language models (LLMs) now demonstrating the ability to process both text- and image-based data. In radiology, AI has primarily focused on task-specific solutions for image analysis and diagnostic support [1]. Recent developments in multimodal models, however, have introduced LLMs capable of integrating text and image inputs, opening new possibilities for clinical applications. Unlike conventional AI algorithms, which are designed for specific imaging tasks (mainly interpretative and workflow-enhancing deep learning applications), LLMs offer broader utility by generating detailed responses to image-related questions and assisting in decision-making processes [2]. This versatility could make them valuable tools in radiology, especially with the emergence of widely available commercial models such as GPT-4V and Google’s Gemini, which can analyze images and provide context-aware explanations [3,4].

Multiple studies have shown that these models perform well on text-based medical exams, including in fields such as dermatology, neurology, and ophthalmology [5–7]. However, their ability to interpret radiological images and match the expertise of human radiologists remains under investigation. This presents an opportunity to evaluate their potential role in radiology beyond traditional text-based applications.

In our study, we compared the performance of widely available commercial LLMs with that of neuroradiologists and radiology residents in answering image-based multiple-choice questions (MCQs) from Radiopaedia, a widely consulted, free online radiology education resource [8]. The evaluation included response accuracy, overall success rates, and the number of correct answers across imaging categories, as illustrated in the sketch below.
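The following is a purely illustrative sketch, not the study's actual analysis code: it shows one way the per-category comparison described above could be tallied. All variable names, responder groups, and example records are hypothetical.

from collections import defaultdict

# Hypothetical records: (responder_group, imaging_category, answered_correctly)
responses = [
    ("LLM", "CT", True),
    ("LLM", "MR", False),
    ("Neuroradiologist", "CT", True),
    ("Resident", "MR", True),
]

def accuracy_by_category(records):
    """Return {group: {category: (n_correct, n_total, accuracy)}}."""
    counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for group, category, correct in records:
        counts[group][category][1] += 1
        if correct:
            counts[group][category][0] += 1
    return {
        group: {cat: (c, n, c / n) for cat, (c, n) in cats.items()}
        for group, cats in counts.items()
    }

print(accuracy_by_category(responses))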
