Objectives To evaluate and compare the accuracy and reliability of large language models (LLMs) ChatGPT-o1, DeepSeek R1, and Gemini 2.0 in answering general primary care medical questions, assessing their reasoning approaches and potential applications in medical education and clinical decision-making.
Design A cross-sectional study using an automated evaluation process where three large language models (LLMs) answered a standardized set of multiple-choice medical questions.
Setting From February 1, 2025 to February 15, 2025, the test questions were administered to the three models. For each model, every question was posed in a new chat session.
Questions were presented in Italian, with no additional instructions. Responses were compared to official test solutions.
Participants Three LLMs were evaluated: ChatGPT-o1 (OpenAI), DeepSeek R1 (DeepSeek), and Gemini 2.0 Flash Thinking Experimental (Google). No human subjects or patient data were used.
Intervention Each model received the same 100 multiple-choice questions and provided a single response per question, without follow-up interactions. Scoring awarded one point (+1) for each correct answer and zero (0) for each incorrect answer.
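As a minimal sketch of this scoring scheme (the score_model helper and its answer dictionaries are hypothetical illustrations, not the study's code):

def score_model(model_answers: dict[int, str], official_solutions: dict[int, str]) -> int:
    """Award +1 for each correct answer and 0 for each incorrect answer."""
    return sum(
        1 if model_answers.get(qid) == correct_option else 0
        for qid, correct_option in official_solutions.items()
    )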
Main Outcome Measures Accuracy was measured as the percentage of correct responses. Inter-model agreement was assessed through Cohen’s Kappa, and statistical significance was evaluated using McNemar’s test.
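A minimal sketch of how these outcome measures can be computed, assuming per-question correctness vectors (1 = correct, 0 = incorrect) for two models; the vectors below are illustrative dummy data, not the study's results, and the calls use scikit-learn's cohen_kappa_score and statsmodels' mcnemar:

import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

model_a = np.array([1] * 98 + [0] * 2)       # dummy per-question correctness, model A
model_b = np.array([1] * 95 + [0] * 5)       # dummy per-question correctness, model B

accuracy_a = model_a.mean() * 100            # accuracy as percentage of correct responses
kappa = cohen_kappa_score(model_a, model_b)  # inter-model agreement (Cohen's kappa)

# Paired 2x2 table: [[both correct, A correct & B wrong], [A wrong & B correct, both wrong]]
table = np.array([
    [np.sum((model_a == 1) & (model_b == 1)), np.sum((model_a == 1) & (model_b == 0))],
    [np.sum((model_a == 0) & (model_b == 1)), np.sum((model_a == 0) & (model_b == 0))],
])
result = mcnemar(table, exact=True)          # exact binomial test, suited to few discordant pairs
print(f"accuracy={accuracy_a:.0f}%  kappa={kappa:.3f}  McNemar p={result.pvalue:.3f}")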
Results ChatGPT-o1 achieved the highest accuracy (98%), followed by Gemini 2.0 (96%) and DeepSeek R1 (95%). Statistical analysis found no significant differences (p > 0.05) between the three models. Cohen’s Kappa indicated low agreement (ChatGPT-o1 vs. DeepSeek R1 = 0.2647; ChatGPT-o1 vs. Gemini 2.0 = 0.315), suggesting variations in reasoning.
Conclusion LLMs exhibited high accuracy in answering primary care medical questions, highlighting their potential for medical education and clinical decision support in primary care. However, inconsistencies between models suggest that a multi-model or AI-assisted approach is preferable to relying on a single AI system. Future research should explore performance in real clinical cases and different medical specialties.
Competing Interest Statement The authors have declared no competing interest.
Funding Statement This study did not receive any funding.
Author Declarations I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.
Yes
I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.
Yes
I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).
Yes
I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.
Yes
Data Availability All data produced in the present study are available upon reasonable request to the authors.