Lung Cancer Staging Using Chest CT and FDG PET/CT Free-Text Reports: Comparison Among Three ChatGPT Large-Language Models and Six Human Readers of Varying Experience.
Lee, Jong Eun; Park, Ki-Seong; Kim, Yun-Hyeon; Song, Ho-Chun; Park, Byunggeon; Jeong, Yeon Joo.
Affiliation
  • Lee JE; Department of Radiology and Research Institute of Radiology, Asan Medical Center, Seoul, Korea.
  • Park KS; Department of Nuclear Medicine, Chonnam National University Hospital, Gwangju, Korea.
  • Kim YH; Department of Radiology, Chonnam National University Hospital, Gwangju, Korea.
  • Song HC; Department of Nuclear Medicine, Chonnam National University Hospital, Gwangju, Korea.
  • Park B; Department of Radiology, Kyungpook National University Chilgok Hospital, Daegu, Korea.
  • Jeong YJ; Department of Radiology, Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Pusan National University School of Medicine, Yangsan, Korea.
AJR Am J Roentgenol ; 2024 Sep 04.
Article in En | MEDLINE | ID: mdl-39230409
ABSTRACT

Background:

Although radiology reports are commonly used for lung cancer staging, this task can be challenging given radiologists' variable reporting styles as well as reports' potentially ambiguous and/or incomplete staging-related information.

Objective:

To compare performance of ChatGPT large-language models (LLMs) and human readers of varying experience in lung cancer staging using chest CT and FDG PET/CT free-text reports.

Methods:

This retrospective study included 700 patients (mean age, 73.8±29.5 years; 509 male, 191 female) from four institutions in Korea who underwent chest CT or FDG PET/CT for initial staging of non-small cell lung cancer from January 2020 to December 2023. Reports were in free-text format, written either exclusively in English or in mixed English and Korean. Two thoracic radiologists in consensus determined the overall stage group (IA, IB, IIA, IIB, IIIA, IIIB, IIIC, IVA, IVB) for each report using the AJCC 8th-edition staging system, establishing the reference standard. Three ChatGPT models (GPT-4o, GPT-4, GPT-3.5) determined an overall stage group for each report using a script-based application programming interface (API), zero-shot learning, and a prompt incorporating a summary of the staging system. Six human readers (two fellowship-trained radiologists with less experience than the radiologists who determined the reference standard, two fellows, two residents) also independently determined overall stage groups. GPT-4o's overall accuracy for determining the correct stage among the nine groups was compared with that of the other LLMs and human readers using McNemar tests.
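The script-based, zero-shot querying described above can be sketched as follows. This is an illustrative assumption, not the authors' actual script: the prompt wording, the `STAGING_SUMMARY` placeholder, and the function names are all hypothetical, and the sketch only builds the JSON payload that would be POSTed to the OpenAI Chat Completions endpoint (the network call itself is not made here).

```python
import json

# The nine overall stage groups the models were asked to choose among.
STAGE_GROUPS = ["IA", "IB", "IIA", "IIB", "IIIA", "IIIB", "IIIC", "IVA", "IVB"]

# Placeholder standing in for the AJCC 8th-edition staging summary that the
# study's prompt incorporated; the real summary text is not reproduced here.
STAGING_SUMMARY = "AJCC 8th-edition NSCLC stage-group criteria (summary text)."

def build_payload(report_text: str, model: str = "gpt-4o") -> dict:
    """Build a zero-shot request payload (no worked examples in the prompt)
    asking the model for a single overall stage group for one report."""
    system = (
        "You stage non-small cell lung cancer from radiology reports. "
        f"Staging reference: {STAGING_SUMMARY} "
        "Reply with exactly one overall stage group from: "
        + ", ".join(STAGE_GROUPS) + "."
    )
    user = f"Free-text report:\n{report_text}\n\nOverall stage group:"
    return {
        "model": model,
        "temperature": 0,  # deterministic output for evaluation
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }

# Example with a fabricated, purely illustrative report snippet; the resulting
# JSON would be POSTed to https://api.openai.com/v1/chat/completions.
payload = build_payload("Right upper lobe mass, no nodal or distant metastasis.")
request_body = json.dumps(payload)
```

Scoring against the reference standard would then reduce to string comparison of the model's answer with the consensus stage group, which is what the McNemar tests in the Results compare across readers.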

Results:

GPT-4o had an overall staging accuracy of 74.1%, significantly better than the accuracy of GPT-4 (70.1%, p=.02), GPT-3.5 (57.4%, p<.001), and resident 2 (65.7%, p<.001); significantly worse than the accuracy of fellowship-trained radiologist 1 (82.3%, p<.001) and fellowship-trained radiologist 2 (85.4%, p<.001); and not significantly different from the accuracy of fellow 1 (77.7%, p=.09), fellow 2 (75.6%, p=.53), and resident 1 (72.3%, p=.42).

Conclusions:

The best-performing model, GPT-4o, showed no significant difference in staging accuracy versus fellows but significantly worse performance versus fellowship-trained radiologists. The findings do not support use of LLMs for lung cancer staging in place of expert healthcare professionals.

Clinical Impact:

The findings indicate the importance of domain expertise for performing complex specialized tasks such as cancer staging.

Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: AJR Am J Roentgenol Year: 2024 Document type: Article Country of publication: United States