Evaluating the Efficacy of AI Chatbots as Tutors in Urology: A Comparative Analysis of Responses to the 2022 In-Service Assessment of the European Board of Urology

Abstract: Introduction: This study assessed the potential of large language models (LLMs) as educational tools by evaluating their accuracy in answering questions across urological subtopics. Methods: Three LLMs (ChatGPT-3.5, ChatGPT-4, and Bing AI) were tested in two rounds, separated by 48 h, on 100 multiple-choice questions (MCQs) from the 2022 European Board of Urology (EBU) In-Service Assessment (ISA), covering five subtopics. A response was scored for "formal accuracy" (FA) when it matched the designated single best answer (SBA) among the four options. Alternative answers selected by the LLMs that were not the SBA but were still deemed correct were labeled "extended accuracy" (EA), and their capacity to raise the overall accuracy rate when combined with FA was examined. Results: Across the two testing rounds, FA scores were 58% and 62% for ChatGPT-3.5, 63% and 77% for ChatGPT-4, and 81% and 73% for Bing AI. Incorporating EA did not significantly improve overall performance: the resulting gains were 7% and 5% for ChatGPT-3.5, 5% and 2% for ChatGPT-4, and 3% and 1% for Bing AI (p > 0.3). Among the urological subtopics, the LLMs performed best in Pediatrics/Congenital and worst in Functional/BPS/Incontinence. Conclusion: The LLMs exhibited suboptimal urology knowledge and insufficient proficiency for educational purposes. Overall accuracy did not improve significantly when EA was added to FA, and error rates remained high, ranging from 16% to 35%. Proficiency varied substantially across subtopics. Further development of medicine-specific LLMs is required before they can be integrated into urological training programs.

Location
Deutsche Nationalbibliothek Frankfurt am Main
Extent
Online-Ressource
Language
English

Bibliographic citation
Evaluating the Efficacy of AI Chatbots as Tutors in Urology: A Comparative Analysis of Responses to the 2022 In-Service Assessment of the European Board of Urology. Urologia internationalis 108(4), 2024, pp. 359-366 (8 pages in total).

Creator
May, Matthias
Körner-Riffard, Katharina
Kollitsch, Lisa
Burger, Maximilian
Brookman-May, Sabine D.
Rauchenwald, Michael
Marszalek, Martin
Eredics, Klaus

DOI
10.1159/000537854
URN
urn:nbn:de:101:1-2408150125148.597240213486
Rights
Open Access; access to the object is unrestricted.
Last update
14.08.2025, 10:45 AM CEST

Data provider
Deutsche Nationalbibliothek

