
Exploring the potential of ChatGPT for clinical reasoning and decision-making: a cross-sectional study on the Italian Medical Residency Exam

Scaioli, Giacomo; Lo Moro, Giuseppina; Conrado, Francesco; Rosset, Lorenzo; Bert, Fabrizio; Siliquini, Roberta
2024-01-01

Abstract

Background: This study aimed to assess the performance of ChatGPT, a large language model (LLM), on the Italian State Exam for Medical Residency (SSM) to determine its potential as a tool for medical education and clinical decision-making support. Materials and methods: A total of 136 questions were obtained from the official SSM test. ChatGPT responses were analyzed and compared to the performance of the medical doctors who took the test in 2022. Questions were classified into clinical cases (CC) and notional questions (NQ). Results: ChatGPT achieved an overall accuracy of 90.44%, with higher performance on clinical cases (92.45%) than on notional questions (89.15%). Compared to medical doctors' scores, ChatGPT's performance was higher than that of 99.6% of the participants. Conclusions: These results suggest that ChatGPT holds promise as a valuable tool in clinical decision-making, particularly in the context of clinical reasoning. Further research is needed to explore the potential applications and implementation of large language models (LLMs) in medical education and medical practice.
2024; Volume 59, Issue 4, pp. 267–270
https://www.iss.it/documents/20126/0/ANN_23_04_05.pdf
Files for this item:

File: ANN_23_04_05.pdf (open access, publisher's PDF, Adobe PDF format, 122.13 kB)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/1955430
Citations
  • PMC: 4
  • Scopus: 5
  • Web of Science: 7