ABSTRACT
PURPOSE: This work assesses standard evaluation practices used by the research community for medical imaging classifiers, with a specific focus on the implications of class imbalance. The analysis uses chest X-rays as a case study and adopts a comprehensive definition of model performance that considers both discriminative ability and model calibration.
MATERIALS AND METHODS: We conduct a concise literature review to examine prevailing scientific practices in the evaluation of X-ray classifiers. We then perform a systematic experiment on two major chest X-ray datasets to provide a didactic example of how several performance metrics behave under different class ratios, highlighting how widely adopted metrics can conceal poor performance in the minority class.
RESULTS: Our literature study confirms that: (1) even when dealing with highly imbalanced datasets, the community tends to use metrics that are dominated by the majority class; and (2) it is still uncommon to include calibration studies for chest X-ray classifiers, despite their importance in the context of healthcare. Moreover, our systematic experiments confirm that current evaluation practices may not reflect model performance in real clinical scenarios, and we suggest complementary metrics that better reflect system performance in such scenarios.
CONCLUSION: Our analysis underscores the need for improved evaluation practices, particularly for class-imbalanced chest X-ray classifiers. We recommend including complementary metrics such as the area under the precision-recall curve (AUC-PR), adjusted AUC-PR, and balanced Brier score, which reflect both discrimination and calibration and offer a more accurate depiction of system performance in real clinical scenarios.
CLINICAL RELEVANCE STATEMENT: This study underscores the critical need for refined evaluation metrics in medical imaging classifiers, emphasizing that prevalent metrics may mask poor performance in minority classes, potentially impacting clinical diagnoses and healthcare outcomes.
KEY POINTS: Common scientific practices in papers dealing with X-ray computer-assisted diagnosis (CAD) systems may be misleading. We highlight limitations in the reporting of evaluation metrics for X-ray CAD systems in highly imbalanced scenarios. We propose adopting alternative metrics based on experimental evaluation on large-scale datasets.
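The recommended complementary metrics can be computed with standard tooling. Below is a minimal sketch, assuming binary ground-truth labels and predicted probabilities; the adjusted AUC-PR (prevalence-normalized average precision) and balanced Brier score (per-class-averaged Brier score) are plausible formulations, not necessarily the exact definitions used in the study.

```python
# Sketch of the complementary metrics discussed in the abstract above.
# The adjusted AUC-PR and balanced Brier score formulations are
# assumptions, not the paper's reference implementation.
import numpy as np
from sklearn.metrics import average_precision_score

def auc_pr(y_true, y_prob):
    # Average precision summarizes the precision-recall curve and is
    # sensitive to performance on the minority (positive) class.
    return average_precision_score(y_true, y_prob)

def adjusted_auc_pr(y_true, y_prob):
    # Rescale AUC-PR against its chance level (the positive prevalence),
    # so 0 = random and 1 = perfect regardless of the class ratio.
    prevalence = float(np.mean(y_true))
    return (auc_pr(y_true, y_prob) - prevalence) / (1.0 - prevalence)

def balanced_brier_score(y_true, y_prob):
    # Average the Brier score separately over positives and negatives so
    # the minority class contributes equally to the calibration estimate.
    y_true = np.asarray(y_true, dtype=float)
    sq_err = (np.asarray(y_prob, dtype=float) - y_true) ** 2
    return 0.5 * (sq_err[y_true == 1].mean() + sq_err[y_true == 0].mean())

# Toy usage with a roughly 1:9 class ratio:
rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.1).astype(int)
p = np.clip(0.6 * y + 0.3 * rng.random(1000), 0.0, 1.0)
print(auc_pr(y, p), adjusted_auc_pr(y, p), balanced_brier_score(y, p))
```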
ABSTRACT
The development of successful artificial intelligence models for chest X-ray analysis relies on large, diverse datasets with high-quality annotations. While several databases of chest X-ray images have been released, most include disease diagnosis labels but lack detailed pixel-level anatomical segmentation labels. To address this gap, we introduce an extensive chest X-ray multi-center segmentation dataset with uniform and fine-grained anatomical annotations for images coming from five well-known publicly available databases: ChestX-ray8, CheXpert, MIMIC-CXR-JPG, Padchest, and VinDr-CXR, resulting in 657,566 segmentation masks. Our methodology utilizes the HybridGNet model to ensure consistent and high-quality segmentations across all datasets. Rigorous validation of the resulting masks was conducted, including expert physician evaluation and automatic quality control. Additionally, we provide individualized quality indices per mask and an overall quality estimation per dataset. This dataset serves as a valuable resource for the broader scientific community, streamlining the development and assessment of innovative methodologies in chest X-ray analysis.
Subjects
Radiography, Thoracic; Humans; Databases, Factual; Artificial Intelligence; Lung/diagnostic imaging
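As an illustration of the kind of automatic quality control mentioned in the abstract above, the following sketch flags binary masks on which two independent segmentation methods disagree. This is not the dataset's actual quality index, which is not specified here.

```python
# Illustrative automatic quality control for binary segmentation masks.
# NOT the dataset's actual quality index; it only shows the common
# heuristic of flagging masks where two independent methods disagree.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def mask_is_reliable(mask_a: np.ndarray, mask_b: np.ndarray,
                     threshold: float = 0.9) -> bool:
    """Accept a mask when inter-method agreement exceeds the threshold."""
    return dice(mask_a, mask_b) >= threshold
```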
ABSTRACT
Background: The delay in referring patients with potentially surgical vertebral metastases (VM) to a spine surgeon is strongly associated with worse outcomes. The Spinal Instability Neoplastic Score (SINS) makes it possible to determine the risk of instability of a spinal segment with VM; however, it is used almost exclusively by specialists or residents in neurosurgery or orthopedics. The objective of this work is to report the delay in surgical consultation of patients with potentially unstable and unstable VM (SINS >6) at our center.
Material: We performed a 5-year single-center retrospective analysis of patients with spine metastases on computed tomography (CT). Patients were divided into Group 1 (G1), potentially unstable VM (SINS 7-12), and Group 2 (G2), unstable VM (SINS 13-18). Time to surgical referral was calculated as the number of days between the report of the VM on the CT and the first clinical assessment by a spine surgeon documented in the medical records.
Results: We analyzed 220 CT scans, of which 98 met the selection criteria. Group 1 had 85 patients (86.7%) and Group 2 had 13 (13.3%). We observed a mean time to referral of 83.5 days in the entire cohort (SD = 127.6): 87.6 days (SD = 135.1) for G1 and 57.2 days (SD = 53.8) for G2. The delay in referral showed no significant correlation with the SINS score.
Conclusion: We report a mean delay of 83.5 days in the surgical referral of VM (SINS >6, n = 98). Both groups included cases of serious referral delay, with 25% of patients having their first surgical consultation more than three months after the CT study.
Subjects
Spinal Neoplasms; Humans; Latin America; Retrospective Studies; Spinal Neoplasms/diagnostic imaging; Spinal Neoplasms/pathology; Spinal Neoplasms/secondary; Spinal Neoplasms/surgery; Surgeons; Referral and Consultation; Time-to-Treatment; Tomography, X-Ray Computed; Spine/diagnostic imaging; Spine/pathology; Spine/surgery
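The two derived variables in the study above, time to surgical referral and SINS-based grouping, are straightforward to compute. The following is a minimal sketch with hypothetical column names and made-up example dates and scores.

```python
# Sketch of the study's derived variables: days from CT report to first
# spine-surgeon assessment, and SINS group. Column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "ct_report_date": pd.to_datetime(["2019-01-10", "2019-03-02"]),
    "first_spine_consult": pd.to_datetime(["2019-04-28", "2019-03-20"]),
    "sins": [9, 14],
})

# Days between the CT report of the VM and the first documented
# assessment by a spine surgeon.
df["days_to_referral"] = (df["first_spine_consult"] - df["ct_report_date"]).dt.days

# SINS 7-12 = potentially unstable (G1); SINS 13-18 = unstable (G2).
df["group"] = pd.cut(df["sins"], bins=[6, 12, 18], labels=["G1", "G2"])

print(df.groupby("group", observed=True)["days_to_referral"].agg(["mean", "std"]))
```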
ABSTRACT
The acceptance of artificial intelligence (AI) systems by health professionals is crucial to achieving a positive impact on the diagnostic pathway. We evaluated user satisfaction with an AI system for the automated detection of findings in chest X-rays after five months of use in the Emergency Department. We collected quantitative and qualitative data to analyze the main aspects of user satisfaction, following the Technology Acceptance Model. We selected the intended users of the system as study participants: radiology residents and emergency physicians. We found that both groups of users shared high satisfaction with the system's ease of use, while their perceptions of output quality (i.e., diagnostic performance) differed notably. Perceived usefulness also received positive evaluations, centered on the system's utility for confirming that no findings had been omitted, and likewise showed distinct patterns across the two groups of users. Our results highlight the importance of clearly differentiating the intended users of AI applications in clinical workflows, to enable the design of specific modifications that better suit their particular needs. This study confirms that measuring user acceptance and understanding how professionals perceive the AI system after daily use can provide important insights for future implementations.
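For readers unfamiliar with the Technology Acceptance Model, the sketch below shows one plausible way to score its two core constructs per user group from Likert-scale responses. Item names and the 5-point scale are hypothetical; the study's actual questionnaire is not reproduced here.

```python
# Sketch of Technology Acceptance Model construct scoring: perceived
# usefulness (PU) and perceived ease of use (PEOU). Item names, values,
# and the 5-point Likert scale are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "group": ["radiology_resident", "emergency_physician", "radiology_resident"],
    "pu_1": [4, 3, 5], "pu_2": [4, 2, 4],      # usefulness items
    "peou_1": [5, 5, 4], "peou_2": [5, 4, 5],  # ease-of-use items
})

# Each construct score is the mean of its Likert items per respondent.
responses["PU"] = responses[["pu_1", "pu_2"]].mean(axis=1)
responses["PEOU"] = responses[["peou_1", "peou_2"]].mean(axis=1)

# Compare constructs across the two intended user groups.
print(responses.groupby("group")[["PU", "PEOU"]].mean())
```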