Congress: ECR25
Poster Number: C-24137
Type: Poster: EPOS Radiologist (scientific)
Authorblock: D. Männle, M. Langhals, N. Santhanam, C. G. Cho, H. Wenz, C. Groden, F. Siegel, M. E. Maros; Mannheim/DE
Disclosures:
David Männle: Nothing to disclose
Martina Langhals: Nothing to disclose
Nandhini Santhanam: Nothing to disclose
Chang Gyu Cho: Nothing to disclose
Holger Wenz: Nothing to disclose
Christoph Groden: Nothing to disclose
Fabian Siegel: Nothing to disclose
Máté Elöd Maros: Consultant: Non-related consultancy, EppData GmbH; Consultant: Non-related consultancy, Siemens Healthineers AG
Keywords: Artificial Intelligence, Computer applications, Neuroradiology brain, CT, CT-Angiography, RIS, Computer Applications-General, Technology assessment, Ischaemia / Infarction
Purpose

Large language models (LLMs) have been widely recognized for their ability to encode clinical knowledge and effectively summarize medical texts. Despite these capabilities, the optimal strategies for prompting or fine-tuning these models for specific tasks, particularly on non-English corpora, remain poorly understood. To address this gap, we systematically investigated various in-context learning (ICL) strategies, evaluating a broad and diverse set of state-of-the-art open-source large language models (OS-LLMs) on their ability to summarize key findings from stroke CT reports.
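To illustrate what an in-context learning strategy looks like in practice, the minimal sketch below builds a zero-shot versus few-shot prompt for summarizing a stroke CT report. It is not the study's actual prompt design: all instruction wording, the example report, and the helper names (ZERO_SHOT_TEMPLATE, FEW_SHOT_EXAMPLE, build_prompt) are illustrative assumptions, and the study's reports are German-language, whereas the sketch uses English text for readability.

```python
# Minimal sketch of zero-shot vs. few-shot (in-context learning) prompt
# construction for stroke CT report summarization.
# All report texts and template wording below are synthetic illustrations,
# not prompts or data from the study (whose reports are in German).

ZERO_SHOT_TEMPLATE = (
    "You are a neuroradiology assistant. Summarize the key findings of the "
    "following stroke CT report in one sentence.\n\n"
    "Report:\n{report}\n\nSummary:"
)

# A single worked example; repeating it (or adding distinct examples)
# turns the zero-shot prompt into a few-shot ICL prompt.
FEW_SHOT_EXAMPLE = (
    "Report:\nNo hemorrhage. Demarcated infarct in the left MCA territory.\n"
    "Summary: Subacute left MCA infarct without hemorrhagic transformation.\n\n"
)

def build_prompt(report: str, n_shots: int = 0) -> str:
    """Prepend n_shots in-context examples before the instruction block."""
    return FEW_SHOT_EXAMPLE * n_shots + ZERO_SHOT_TEMPLATE.format(report=report)

if __name__ == "__main__":
    synthetic_report = (
        "CT and CT angiography: occlusion of the right middle cerebral artery (M1). "
        "Early infarct demarcation in the right basal ganglia. No hemorrhage."
    )
    # The resulting string could be passed to any open-source LLM,
    # e.g. via a Hugging Face `transformers` text-generation pipeline.
    print(build_prompt(synthetic_report, n_shots=1))
```

In such a setup, the number and choice of in-context examples are the main levers being compared across prompting strategies and models.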