Results 1 - 20 of 37
1.
J Digit Imaging; 36(6): 2392-2401, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37580483

ABSTRACT

Thyroid nodules occur in up to 68% of people, 95% of which are benign. Of the 5% of malignant nodules, many would not result in symptoms or death, yet 600,000 fine-needle aspirations (FNAs) are still performed annually, with a positive predictive value (PPV) of 5-7% (up to 30%). Artificial intelligence (AI) systems have the capacity to improve diagnostic accuracy and workflow efficiency when integrated into clinical decision pathways. Whereas previous studies have evaluated AI systems against physicians, we aimed to measure the benefit of incorporating AI into physicians' final diagnostic decisions. This work analyzed the potential for AI-based decision support systems to improve physician accuracy and efficiency and to reduce interpretation variability. The decision support system (DSS) assessed was Koios DS, which provides automated sonographic nodule descriptor predictions and a direct cancer risk assessment aligned to ACR TI-RADS. The study was conducted retrospectively between August 2020 and October 2020. The case set included 650 patients (21% male, 79% female) with a mean age of 53 ± 15 years. Fifteen physicians assessed each case in the set, both unassisted and aided by the DSS. The order of the reading conditions was randomized, and reading blocks were separated by a period of 4 weeks. The system's impact on reader accuracy was measured by comparing the area under the ROC curve (AUC), sensitivity, and specificity of readers with and without the DSS, with FNA as ground truth. The impact on reader variability was evaluated using Pearson's correlation coefficient. The impact on efficiency was determined by comparing the average time per read. There was a statistically significant increase in average AUC of 0.083 [0.066, 0.099] and increases in sensitivity and specificity of 8.4% [5.4%, 11.3%] and 14% [12.5%, 15.5%], respectively, when readers were aided by Koios DS. The average time per case decreased by 23.6% (p = 0.00017), and the observed Pearson's correlation coefficient increased from r = 0.622 to r = 0.876. These results indicate that providing physicians with automated clinical decision support significantly improved diagnostic accuracy, as measured by AUC, sensitivity, and specificity, and reduced inter-reader variability and interpretation times.
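
As a concrete illustration of the reported endpoints, the sketch below computes a reader's AUC with and without decision support and an inter-reader Pearson correlation. It is a minimal sketch on synthetic scores, not the study's analysis code; the score distributions are invented placeholders.

```python
# A minimal sketch (not the study's actual analysis) of comparing one
# reader's performance with vs. without a decision support system.
# Reader scores and FNA ground truth below are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cases = 650
fna_truth = rng.integers(0, 2, n_cases)  # 1 = malignant on FNA

# Hypothetical malignancy-risk scores, unaided vs. DSS-aided
unaided = np.clip(fna_truth * 0.30 + rng.normal(0.40, 0.25, n_cases), 0, 1)
aided = np.clip(fna_truth * 0.45 + rng.normal(0.35, 0.20, n_cases), 0, 1)

print(f"AUC unaided: {roc_auc_score(fna_truth, unaided):.3f}")
print(f"AUC aided:   {roc_auc_score(fna_truth, aided):.3f}")

# Inter-reader variability: correlation between two readers' scores
reader_b = np.clip(aided + rng.normal(0, 0.1, n_cases), 0, 1)
r, _ = pearsonr(aided, reader_b)
print(f"Pearson r between readers (aided): {r:.3f}")
```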


Subject(s)
Deep Learning , Thyroid Nodule , Humans , Male , Female , Adult , Middle Aged , Aged , Retrospective Studies , Artificial Intelligence , Thyroid Nodule/pathology , Ultrasonography/methods
2.
Radiographics; 43(3): e220098, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36757882

ABSTRACT

From basic research to the bedside, precise terminology is key to advancing medicine and ensuring optimal and appropriate patient care. However, the wide spectrum of diseases and their manifestations superimposed on medical team-specific and discipline-specific communication patterns often impairs shared understanding and the shared use of common medical terminology. Common terms are currently used in medicine to ensure interoperability and facilitate integration of biomedical information for clinical practice and emerging scientific and educational applications alike, from database integration to supporting basic clinical operations such as billing. Such common terminologies can be provided in ontologies, which are formalized representations of knowledge in a particular domain. Ontologies unambiguously specify common concepts and describe the relationships between those concepts by using a form that is mathematically precise and accessible to humans and machines alike. RadLex® is a key RSNA initiative that provides a shared domain model, or ontology, of radiology to facilitate integration of information in radiology education, clinical care, and research. As the contributions of the computational components of common radiologic workflows continue to increase with the ongoing development of big data, artificial intelligence, and novel image analysis and visualization tools, the use of common terminologies is becoming increasingly important for supporting seamless computational resource integration across medicine. This article introduces ontologies, outlines the fundamental semantic web technologies used to create and apply RadLex, and presents examples of RadLex applications in everyday radiology and research. It concludes with a discussion of emerging applications of RadLex, including artificial intelligence applications. © RSNA, 2023. Quiz questions for this article are available in the supplemental material.
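
The sketch below illustrates, at the smallest possible scale, how an ontology distributed as an OWL/RDF file can be loaded and traversed programmatically. It assumes a local copy of the ontology and uses a hypothetical term IRI; neither the path nor the ID is a verified RadLex identifier.

```python
# A minimal sketch of loading an OWL ontology with rdflib and walking a
# term's parent concepts. The file path and term IRI are placeholders,
# not verified RadLex identifiers.
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

g = Graph()
g.parse("radlex.owl", format="xml")  # placeholder path to a local OWL file

term = URIRef("http://radlex.org/RID/RID0000")  # hypothetical term IRI
for parent in g.objects(term, RDFS.subClassOf):
    print(parent)  # each parent concept the term is a subclass of
```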


Subject(s)
Biological Ontologies , Radiology , Humans , Artificial Intelligence , Semantics , Workflow , Diagnostic Imaging
3.
J Digit Imaging; 36(1): 1-10, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36316619

ABSTRACT

The existing fellowship imaging informatics curriculum, established in 2004, has not undergone formal revision since its inception and no longer accurately reflects present-day radiology infrastructure. It insufficiently equips trainees for today's informatics challenges, as current practice requires an understanding of advanced informatics processes and more complex system integration. We addressed this issue by surveying imaging informatics fellowship program directors across the country to determine the components of, and the cutline for, essential topics in a standardized imaging informatics curriculum; the consensus on essential versus supplementary knowledge; and the factors individual programs may use to decide whether a newly developed topic is essential. We further identified typical program structural elements and sought fellowship director consensus on offering official graduate trainee certification to imaging informatics fellows. Here, we aim to provide an imaging informatics fellowship director consensus on topics considered essential while still providing a framework for informatics fellowship programs to customize their individual curricula.


Subject(s)
Education, Medical, Graduate , Fellowships and Scholarships , Humans , Education, Medical, Graduate/methods , Consensus , Curriculum , Diagnostic Imaging , Surveys and Questionnaires
4.
J Digit Imaging; 35(5): 1419, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35840870
5.
Radiographics; 42(4): 1062-1080, 2022.
Article in English | MEDLINE | ID: mdl-35594198

ABSTRACT

The pancreaticoduodenal groove (PDG) is a small space between the pancreatic head and duodenum where vital interactions between multiple organs and physiologic processes take place. Muscles, nerves, and hormones perform a coordinated dance, allowing bile and pancreatic enzymes to aid in digestion and absorption of critical nutrition. Given the multitude of organs and cells working together, a variety of benign and malignant entities can arise in or adjacent to this space. Management of lesions in this region is also complex and can involve observation, endoscopic resection, or challenging surgeries such as the Whipple procedure. The radiologist plays an important role in evaluation of abnormalities involving the PDG. While CT is usually the first-line examination for evaluation of this complex region, MRI offers complementary information. Although features of abnormalities involving the PDG can often overlap, understanding the characteristic imaging and pathologic features generally allows categorization of disease entities based on the suspected organ of origin and the presence of ancillary features. The goal of the authors is to provide radiologists with a conceptual approach to entities involving the PDG to increase the accuracy of diagnosis and assist in appropriate management or presurgical planning. They briefly discuss the anatomy of the PDG, followed by a more in-depth presentation of the features of disease categories. A table summarizing the entities that occur in this region by underlying cause and anatomic location is provided. ©RSNA, 2022.


Subject(s)
Duodenum , Pancreas , Duodenum/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods , Pancreas/diagnostic imaging
6.
J Digit Imaging; 35(2): 335-339, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35018541

ABSTRACT

Preparing radiology examinations for interpretation requires prefetching relevant prior examinations and implementing hanging protocols to optimally display the examination along with comparisons. Body part is a critical piece of information to facilitate both prefetching and hanging protocols, but body part information encoded using the Digital Imaging and Communications in Medicine (DICOM) standard is widely variable, error-prone, not granular enough, or missing altogether. This results in inappropriate examinations being prefetched or relevant examinations left behind; hanging protocol optimization suffers as well. Modern artificial intelligence (AI) techniques, particularly when harnessing federated deep learning techniques, allow for highly accurate automatic detection of body part based on the image data within a radiological examination; this allows for much more reliable implementation of this categorization and workflow. Additionally, new avenues to further optimize examination viewing such as dynamic hanging protocol and image display can be implemented using these techniques.
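
A minimal sketch of the problem the abstract describes: the DICOM BodyPartExamined element is often missing or free-text, forcing fallbacks to other fields. The file path is a placeholder; the tag keywords are standard DICOM.

```python
# A minimal sketch of why DICOM-encoded body part is unreliable: the
# BodyPartExamined tag may be absent or inconsistently populated, so
# code often falls back to free-text elements. Path is a placeholder.
import pydicom

ds = pydicom.dcmread("exam.dcm")  # placeholder path to one instance

body_part = getattr(ds, "BodyPartExamined", "") or ""
if not body_part:
    # Fall back to free-text fields, which vary widely between sites
    body_part = getattr(ds, "StudyDescription", "") or getattr(
        ds, "SeriesDescription", "UNKNOWN"
    )
print(f"Inferred body part: {body_part}")
```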


Subject(s)
Artificial Intelligence , Deep Learning , Human Body , Humans , Radiography , Workflow
7.
AJR Am J Roentgenol; 218(4): 714-715, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34755522

ABSTRACT

Convolutional neural networks (CNNs) trained to identify abnormalities on upper extremity radiographs achieved an AUC of 0.844 with a frequent emphasis on radiograph laterality and/or technologist labels for decision-making. Covering the labels increased the AUC to 0.857 (p = .02) and redirected CNN attention from the labels to the bones. Using images of radiograph labels alone, the AUC was 0.638, indicating that radiograph labels are associated with abnormal examinations. Potential radiographic confounding features should be considered when curating data for radiology CNN development.
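
One way to neutralize this confounder, sketched below under the assumption that markers sit near the image corners, is to mask those regions before training. The corner fraction is an illustrative guess, not the study's protocol.

```python
# A minimal sketch of the label-covering idea: mask the image corners
# where laterality/technologist markers often sit before feeding a CNN.
# The masked regions are illustrative assumptions, not the study method.
import numpy as np

def cover_labels(image: np.ndarray, frac: float = 0.15) -> np.ndarray:
    """Zero out the four corner blocks of a 2D radiograph array."""
    h, w = image.shape
    dh, dw = int(h * frac), int(w * frac)
    out = image.copy()
    for ys in (slice(0, dh), slice(h - dh, h)):
        for xs in (slice(0, dw), slice(w - dw, w)):
            out[ys, xs] = 0
    return out

img = np.random.rand(512, 512).astype(np.float32)  # stand-in radiograph
masked = cover_labels(img)  # corners zeroed before training
```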


Subject(s)
Deep Learning , Algorithms , Humans , Neural Networks, Computer , Radiography , Upper Extremity
8.
J Digit Imaging; 34(6): 1331-1341, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34724143

ABSTRACT

The advent of deep learning has engendered renewed and rapidly growing interest in artificial intelligence (AI) in radiology to analyze images, manipulate textual reports, and plan interventions. Applications of deep learning and other AI approaches must be guided by sound medical knowledge to assure that they are developed successfully and that they address important problems in biomedical research or patient care. To date, AI has been applied to a limited number of real-world radiology applications. As AI systems become more pervasive and are applied more broadly, they will benefit from medical knowledge on a larger scale, such as that available through computer-based approaches. A key approach to represent computer-based knowledge in a particular domain is an ontology. As defined in informatics, an ontology defines a domain's terms through their relationships with other terms in the ontology. Those relationships, then, define the terms' semantics, or "meaning." Biomedical ontologies commonly define the relationships between terms and more general terms, and can express causal, part-whole, and anatomic relationships. Ontologies express knowledge in a form that is both human-readable and machine-computable. Some ontologies, such as RSNA's RadLex radiology lexicon, have been applied to applications in clinical practice and research, and may be familiar to many radiologists. This article describes how ontologies can support research and guide emerging applications of AI in radiology, including natural language processing, image-based machine learning, radiomics, and planning.
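
As one small, concrete example of ontology-backed natural language processing, the sketch below maps synonymous report terms to a shared concept ID. The mini-lexicon and concept IDs are invented for illustration and are not actual RadLex content.

```python
# A minimal sketch of one way an ontology supports NLP: normalizing
# synonyms in report text to a single concept ID. The lexicon and IDs
# are hypothetical, not actual RadLex terms.
SYNONYMS = {
    "hepatic": "C_LIVER",   # hypothetical concept IDs
    "liver": "C_LIVER",
    "renal": "C_KIDNEY",
    "kidney": "C_KIDNEY",
}

def normalize(report: str) -> set[str]:
    """Map words in a report to ontology concept IDs."""
    return {SYNONYMS[w] for w in report.lower().split() if w in SYNONYMS}

print(normalize("Hepatic lesion without renal involvement"))
# -> {'C_LIVER', 'C_KIDNEY'}
```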


Subject(s)
Biological Ontologies , Radiology , Artificial Intelligence , Humans , Natural Language Processing , Radiography
9.
Radiology; 301(3): 692-699, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34581608

ABSTRACT

Background Previous studies suggest that use of artificial intelligence (AI) algorithms as diagnostic aids may improve the quality of skeletal age assessment, though these studies lack evidence from clinical practice. Purpose To compare the accuracy and interpretation time of skeletal age assessment on hand radiograph examinations with and without the use of an AI algorithm as a diagnostic aid. Materials and Methods In this prospective randomized controlled trial, skeletal age assessment on hand radiograph examinations was performed with (n = 792) and without (n = 739) the AI algorithm as a diagnostic aid. For examinations with the AI algorithm, the radiologist was shown the AI interpretation as part of their routine clinical work and was permitted to accept or modify it. Hand radiographs were interpreted by 93 radiologists from six centers. The primary efficacy outcome was the mean absolute difference between the skeletal age dictated into the radiologists' signed report and the average interpretation of a panel of four radiologists not using a diagnostic aid. The secondary outcome was the interpretation time. A linear mixed-effects regression model with random center- and radiologist-level effects was used to compare the two experimental groups. Results Overall mean absolute difference was lower when radiologists used the AI algorithm compared with when they did not (5.36 months vs 5.95 months; P = .04). The proportions at which the absolute difference exceeded 12 months (9.3% vs 13.0%, P = .02) and 24 months (0.5% vs 1.8%, P = .02) were lower with the AI algorithm than without it. Median radiologist interpretation time was lower with the AI algorithm than without it (102 seconds vs 142 seconds, P = .001). Conclusion Use of an artificial intelligence algorithm improved skeletal age assessment accuracy and reduced interpretation times for radiologists, although differences were observed between centers. Clinical trial registration no. NCT03530098 © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Rubin in this issue.
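
The primary analysis can be pictured with the sketch below: a linear mixed-effects model with a random center intercept and a radiologist variance component, fit with statsmodels on synthetic data. Column names and values are placeholders, not trial data.

```python
# A minimal sketch (synthetic data, not trial data) of a linear
# mixed-effects model with random center- and radiologist-level effects,
# as described for the primary analysis. Column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "abs_diff_months": rng.gamma(2.0, 3.0, n),  # |reader - panel| age
    "ai_aided": rng.integers(0, 2, n),          # 1 = AI-aided read
    "center": rng.integers(0, 6, n).astype(str),
    "radiologist": rng.integers(0, 93, n).astype(str),
})

# Random intercept for center; radiologist modeled as a variance component
model = smf.mixedlm(
    "abs_diff_months ~ ai_aided",
    df,
    groups="center",
    vc_formula={"radiologist": "0 + C(radiologist)"},
)
print(model.fit().summary())
```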


Subject(s)
Age Determination by Skeleton/methods , Artificial Intelligence , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography/methods , Adolescent , Adult , Child , Child, Preschool , Female , Humans , Infant , Male , Prospective Studies , Radiologists , Reproducibility of Results , Sensitivity and Specificity
10.
NPJ Digit Med; 4(1): 88, 2021 Jun 01.
Article in English | MEDLINE | ID: mdl-34075194

ABSTRACT

Coronary artery disease (CAD), the most common manifestation of cardiovascular disease, remains the most common cause of mortality in the United States. Risk assessment is key for primary prevention of coronary events, and coronary artery calcium (CAC) scoring using computed tomography (CT) is one such non-invasive tool. Despite the proven clinical value of CAC scoring, its current clinical implementation has limitations, such as the lack of insurance coverage for the test and the need for capital-intensive CT machines, specialized imaging protocols, and accredited 3D imaging labs for analysis (including personnel and software). Perhaps the greatest gap is the millions of patients who undergo routine chest CT exams that demonstrate coronary artery calcification, yet its presence is often not reported or quantitation is not feasible. We present two deep learning models that automate CAC scoring, demonstrating advantages both for dedicated gated coronary CT exams and for routine non-gated chest CTs performed for other reasons, enabling opportunistic screening. First, we trained a gated coronary CT model for CAC scoring that showed near-perfect agreement (mean difference in scores = -2.86; Cohen's kappa = 0.89, P < 0.0001) with conventional manual scoring on a retrospective dataset of 79 patients and performed the task faster (average time for automated CAC scoring using a graphics processing unit (GPU) was 3.5 ± 2.1 s vs. 261 s for manual scoring) in a prospective trial of 55 patients, with little difference in scores compared with three technologists (mean difference in scores = 3.24, 5.12, and 5.48, respectively). Then, using CAC scores from paired gated coronary CT as a reference standard, we trained a deep learning model on our internal data and a cohort from the Multi-Ethnic Study of Atherosclerosis (MESA) study (total training n = 341, Stanford test n = 42, MESA test n = 46) to perform CAC scoring on routine non-gated chest CT exams, with validation on external datasets (total n = 303) obtained from four geographically disparate health systems. For identifying patients with any CAC (i.e., CAC ≥ 1), sensitivity and PPV were high across all datasets (ranges: 80-100% and 87-100%, respectively). For CAC ≥ 100 on routine non-gated chest CTs, the latest recommended threshold to initiate statin therapy, our model showed sensitivities of 71-94% and positive predictive values of 88-100% across all sites. Adoption of this model could allow more patients to be screened with CAC scoring, potentially allowing opportunistic early preventive interventions.
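
For orientation, the sketch below computes an Agatston-style score for a single axial slice, the quantity these models automate. It is deliberately simplified: real scoring segments individual lesions, applies the density weight per lesion, and sums across slices.

```python
# A simplified, single-slice sketch of Agatston-style CAC scoring.
# Real pipelines group voxels into lesions, weight each lesion by its
# peak attenuation, and sum across the whole scan.
import numpy as np

def agatston_slice(hu: np.ndarray, pixel_area_mm2: float) -> float:
    """Score one slice: calcium area (>= 130 HU) times a density weight."""
    calcium = hu >= 130
    if not calcium.any():
        return 0.0
    peak = hu[calcium].max()
    weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
    return float(calcium.sum()) * pixel_area_mm2 * weight

slice_hu = np.array([[0, 150], [320, 90]], dtype=float)  # toy 2x2 slice
print(agatston_slice(slice_hu, pixel_area_mm2=0.25))  # -> 1.5
```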

11.
J Digit Imaging; 34(2): 229-230, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33846888
12.
J Digit Imaging; 34(2): 367-373, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33742332

ABSTRACT

Radiology reports are consumed not only by referring physicians and healthcare providers, but also by patients. We assessed report readability in our enterprise and implemented a two-part quality improvement intervention with the goal of improving report accessibility. A total of 491,813 radiology reports from ten hospitals within the enterprise from May to October 2018 were collected. We excluded echocardiograms, rehabilitation reports, administrator reports, and reports with negative scores, leaving 461,219 reports and report impressions for analysis. A grade level (GL) was calculated for each report and impression by averaging four readability metrics. Next, we conducted a readability workshop and distributed weekly emails with readability GLs over a period of 6 months to each attending radiologist at our primary institution. Following this intervention, we applied the same exclusion criteria and analyzed 473,612 reports from May to October 2019. The mean GL for all reports and report impressions was above 13 at every hospital in the enterprise. Following our intervention, a statistically significant drop in GL for reports and impressions was demonstrated at all locations, but a larger, significant improvement was observed in impressions at our primary site. Radiology reports across the enterprise are written at an advanced reading level, making them difficult for patients and their families to understand. We observed a significantly larger drop in GL for impressions at our primary site than at all other sites following our intervention. Radiologists at our home institution improved their report readability after becoming more aware of their writing practices.
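
A minimal sketch of the grade-level calculation: average several standard readability indices over a report impression. The paper does not list which four metrics it averaged, so the four shown (via the textstat package) are illustrative choices.

```python
# A minimal sketch of an averaged reading grade level for report text.
# The four indices below are common choices; the study does not specify
# which four it used, so treat this selection as an assumption.
import textstat

def grade_level(text: str) -> float:
    metrics = [
        textstat.flesch_kincaid_grade(text),
        textstat.gunning_fog(text),
        textstat.smog_index(text),
        textstat.coleman_liau_index(text),
    ]
    return sum(metrics) / len(metrics)

impression = "No acute cardiopulmonary abnormality."
print(f"GL: {grade_level(impression):.1f}")
```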


Subject(s)
Comprehension , Radiology , Humans , Internet , Patient-Centered Care , Radiography , Radiologists
13.
J Am Coll Radiol; 17(11): 1405-1409, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33035503

ABSTRACT

Many radiologists are considering investments in artificial intelligence (AI) to improve the quality of care for our patients. This article outlines considerations for the purchasing process beginning with performance evaluation. Practices should decide whether there is a need to independently verify performance or accept vendor-provided data. Successful implementations will consider who will receive AI results, how results will be presented, and the impact on efficiency. The article provides education on infrastructure considerations including the benefits and drawbacks of best-of-breed and platform approaches in addition to highly specialized server requirements like graphical processing unit availability. Finally, the article presents financial and quality and safety considerations, some of which are unique to AI. Examples include whether additional revenue could be obtained, as in the case of mammography, and whether an AI model unintentionally leads to reinforcing healthcare disparities.


Subject(s)
Artificial Intelligence , Radiologists , Humans , Mammography
14.
J Digit Imaging; 33(3): 792-796, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32026219

ABSTRACT

The presentation of radiology exams can be enhanced through the use of dynamic images. Dynamic images differ from static images in their use of animation and are especially useful for depicting real-time activity, such as scrolling through an image stack or the flow of contrast material enhancing pathology. This is generally superior to a collection of static images as a representation of clinical workflow and provides a more robust appreciation of the case in question. Dynamic images can be shared electronically to facilitate teaching, case review, presentation, and sharing of interesting cases to be viewed in detail on a computer or mobile device for education. The creation of movies or animated images from radiology data has traditionally been challenging because of technological limitations inherent in converting the Digital Imaging and Communications in Medicine (DICOM) standard to other formats and concerns related to the presence of protected health information (PHI). The solution presented here, named Cinebot, allows simple "one-click" generation of anonymized dynamic movies or animated images within the picture archiving and communication system (PACS) workflow. This approach works across all imaging modalities, including stacked cross-sectional and multi-frame cine formats. Usage statistics over 2 years have shown this method to be well-received and useful throughout our enterprise.
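
The core conversion such a tool performs might look like the sketch below: read a DICOM series, window each slice to 8 bits, and write an animated GIF. Paths are placeholders, and a production tool like the one described would also scrub burned-in PHI and apply modality-appropriate windowing.

```python
# A minimal sketch of DICOM-stack-to-GIF conversion, the core step of a
# tool like Cinebot. Paths are placeholders; real tooling would also
# anonymize burned-in PHI and window per modality.
import glob
import numpy as np
import pydicom
import imageio.v2 as imageio

files = sorted(glob.glob("series/*.dcm"))  # placeholder series directory
frames = []
for f in files:
    ds = pydicom.dcmread(f)
    img = ds.pixel_array.astype(np.float32)
    # Rescale each slice to 8-bit grayscale for GIF output
    img = (255 * (img - img.min()) / max(np.ptp(img), 1)).astype(np.uint8)
    frames.append(img)

imageio.mimsave("series.gif", frames, duration=0.1)  # frame duration
```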


Subject(s)
Radiology Department, Hospital , Radiology Information Systems , Radiology , Cross-Sectional Studies , Humans , Motion Pictures
15.
J Digit Imaging; 33(3): 602-606, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31898038

ABSTRACT

Radiologists are an integral component in patient care and provide valuable information at multidisciplinary tumor boards. However, the radiologists' role at such meetings can be compromised by technical and workflow limitations, typically including the need for complex software such as picture archiving and communication system (PACS) applications, which are difficult to install and manage in disparate locations amid increasing security and network restrictions. Our purpose was to develop a web-based system for easy retrieval of images and notes for presentation during multidisciplinary conferences and tumor boards. Our system allows images to be viewed from any computer with a web browser and does not require a stand-alone PACS software installation. The tool is launched by the radiologist marking the exam in PACS. It stores relevant text-based information in a MySQL server and is indexed to the conference for which it is to be used. The exams are then viewed through a web browser, via the hospital intranet or virtual private network (VPN). A web-based viewing platform, provided by our PACS vendor, is used for image display. In the 28 months following implementation, our web-based conference system was well-received by our radiologists and is now fully integrated into daily practice. Our method streamlines radiologist workflow in preparing and presenting medical imaging at multidisciplinary conferences and overcomes many previous technical obstacles. In addition to its primary role for interdepartmental conferences, our system also functions as a teaching file, fostering radiologist education within our department.
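
A minimal sketch, under assumed credentials and schema, of the storage step described: recording a PACS-marked exam against a conference in a MySQL table. All connection details, table, and column names are hypothetical.

```python
# A minimal sketch of the tool's storage step: when a radiologist marks
# an exam in PACS, record its accession number against a conference in
# MySQL. Host, credentials, table, and columns are all hypothetical.
import mysql.connector  # assumes the mysql-connector-python package

conn = mysql.connector.connect(
    host="db.example.org", user="conf", password="***",
    database="conferences",
)
cur = conn.cursor()
cur.execute(
    "INSERT INTO conference_cases (conference_id, accession, note) "
    "VALUES (%s, %s, %s)",
    (42, "ACC123456", "Tumor board: correlate with pathology"),
)
conn.commit()
cur.close()
conn.close()
```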


Subject(s)
Radiology Information Systems , Radiology , Humans , Internet , Radiologists , Workflow
16.
Radiol Artif Intell; 2(3): e190095, 2020 May.
Article in English | MEDLINE | ID: mdl-33937824

ABSTRACT

Past technology transition successes and failures have demonstrated the importance of user-centered design and the science of human factors; these approaches will be critical to the success of artificial intelligence in radiology.

17.
J Digit Imaging; 33(2): 490-496, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31768897

ABSTRACT

Pneumothorax is a potentially life-threatening condition that requires prompt recognition and often urgent intervention. In the ICU setting, large numbers of chest radiographs are performed and must be interpreted each day, which may delay diagnosis of this entity. Development of artificial intelligence (AI) techniques to detect pneumothorax could help expedite detection as well as localize and potentially quantify pneumothorax. Open image analysis competitions are useful in advancing state-of-the-art AI algorithms but generally require large expert-annotated datasets. We have annotated and adjudicated a large dataset of chest radiographs to be made public with the goal of sparking innovation in this space. Because of the cumbersome and time-consuming nature of image labeling, we explored the value of using AI models to generate annotations for review. Utilization of this machine learning annotation (MLA) technique appeared to expedite our annotation process, with relatively high sensitivity at the expense of specificity. Further research is required to confirm and better characterize the value of MLAs. Our adjudicated dataset is now available for public consumption in the form of a challenge.
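
The MLA workflow can be summarized by the sketch below: keep even low-confidence model detections as draft annotations so that reviewers reject false positives rather than hunt for misses, trading specificity for sensitivity. The prediction format and threshold are illustrative assumptions.

```python
# A minimal sketch of the machine-learning-annotation (MLA) idea: a
# deliberately low threshold keeps marginal candidates as draft labels
# for human review. Prediction format and cutoff are illustrative.
def draft_annotations(predictions, threshold=0.2):
    """Keep low-confidence candidates too; reviewers reject the false
    positives, which is faster than searching for missed findings."""
    return [p for p in predictions if p["score"] >= threshold]

preds = [
    {"region": "right apex", "score": 0.91},
    {"region": "left base", "score": 0.27},
]
print(draft_annotations(preds))  # both kept for adjudication
```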


Subject(s)
Crowdsourcing , Pneumothorax , Artificial Intelligence , Datasets as Topic , Humans , Machine Learning , Pneumothorax/diagnostic imaging , X-Rays
18.
J Am Coll Radiol; 16(9 Pt B): 1279-1285, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31492406

ABSTRACT

Correlation of pathology reports with radiology examinations has long been of interest to radiologists and helps to facilitate peer learning. Such correlation also helps meet regulatory requirements, ensures quality, and supports multidisciplinary conferences and patient care. Additional offshoots of such correlation include evaluating for and ensuring concordance of pathology results with radiology interpretation and procedures, as well as ensuring specimen adequacy after biopsy. For much of the history of radiology, this correlation has been done manually, which is time-consuming and cumbersome and provides coverage of only a fraction of radiology examinations performed. Electronic storage and indexing of radiology and pathology information laid the foundation for easier access and for the development of automated artificial intelligence methods to match pathology information with radiology reports. More recent techniques have resulted in near comprehensive coverage of radiology examinations with methods to present results and solicit feedback from end users. Newer deep learning language modeling techniques will advance these methods by providing more robust automated and comprehensive radiology-pathology correlation with the ability to rapidly, flexibly, and iteratively tune models to site and user preference.


Subject(s)
Deep Learning , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Quality Improvement , Radiology/trends , Artificial Intelligence , Biopsy, Needle , Female , Humans , Immunohistochemistry , Male , Radiology/methods
19.
J Am Coll Radiol; 16(9 Pt B): 1286-1291, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31173746

ABSTRACT

PURPOSE: Radiology-pathology correlation has long been foundational to continuing education, peer learning, quality assurance, and multidisciplinary patient care. The objective of this study was to determine whether modern deep-learning language-modeling techniques could reliably match pathology reports to pertinent radiology reports. METHODS: The recently proposed Universal Language Model Fine-Tuning for Text Classification (ULMFiT) methodology was used. Two hundred thousand radiology and pathology reports were used for adaptation to the radiology-pathology space. One hundred thousand candidate radiology-pathology pairs, evenly split into match and no-match categories, were used for training the final binary classification model. Matches were defined by a previous-generation artificial intelligence anatomic-concept radiology-pathology correlation system. RESULTS: The language model rapidly adapted to closely match the prior anatomic-concept approach, with 100% specificity, 65.1% sensitivity, and 73.7% accuracy. For comparison, the previous methodology, which was intentionally designed to be specific at the expense of sensitivity, had 98.0% specificity, 65.1% sensitivity, and 73.2% accuracy. CONCLUSIONS: Modern deep-learning language-modeling approaches are promising for radiology-pathology correlation. Because of their rapid adaptation to underlying training labels, these models advance previous artificial intelligence work in that they can be continuously improved and tuned to improve performance and adjust to user and site-level preference.
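
A minimal sketch of a ULMFiT-style setup using the fastai library: a binary match/no-match text classifier over concatenated radiology-pathology pairs. The tiny DataFrame is a placeholder far too small to learn from, and the paper's pipeline additionally fine-tuned the language model on 200,000 reports before classification.

```python
# A minimal sketch, via fastai's ULMFiT-style API, of a binary
# match/no-match classifier over radiology-pathology report pairs.
# The DataFrame is a placeholder; the paper also adapted the language
# model to 200,000 reports before training the classifier.
import pandas as pd
from fastai.text.all import (
    AWD_LSTM, TextDataLoaders, accuracy, text_classifier_learner,
)

df = pd.DataFrame({
    "text": ["RAD: ... PATH: ..."] * 8,   # placeholder rad+path pair text
    "label": ["match", "no_match"] * 4,
})

dls = TextDataLoaders.from_df(
    df, text_col="text", label_col="label", valid_pct=0.25, bs=4,
)
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5,
                                metrics=accuracy)
learn.fine_tune(1)  # one epoch, purely to show the training call
```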


Subject(s)
Deep Learning/trends , Natural Language Processing , Precision Medicine/trends , Quality Improvement , Radiology Information Systems/statistics & numerical data , Artificial Intelligence , Automation , Databases, Factual , Forecasting , Humans , Pathology, Clinical , Radiology/methods , Radiology/trends
20.
J Digit Imaging; 32(4): 656-664, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31065828

ABSTRACT

We sought to develop a highly accurate deep learning model to reliably classify radiographs by laterality. Digital Imaging and Communications in Medicine (DICOM) data for nine body parts were extracted retrospectively. Laterality was determined directly if encoded properly or inferred using other elements. Curation confirmed categorization and identified inaccurate labels due to human error. Augmentation enriched the training data to semi-equilibrate classes. Classification and object detection models were developed on a dedicated workstation and tested on novel images. Receiver operating characteristic (ROC) curves, sensitivity, specificity, and accuracy were calculated. Study-level accuracy was determined, and both image- and study-level results were compared with human performance. An ensemble model was tested for the rigorous use case of automatically classifying exams retrospectively. The final classification model identified novel images with an ROC area under the curve (AUC) of 0.999, improving on previous work and comparable to human performance. A similar ROC curve was observed for per-study analysis, with an AUC of 0.999. The object detection model classified images with accuracy of 99% or greater at both the image and study level. Confidence scores allow adjustment of sensitivity and specificity as needed; the ensemble model designed for the highly specific use case of automatically classifying exams was comparable to, and arguably better than, human performance, demonstrating 99% accuracy with 1% of exams left unchanged and no incorrect classifications. Deep learning models can classify radiographs by laterality with high accuracy and may be applied in a variety of settings that could improve patient safety and radiologist satisfaction. Rigorous use cases requiring high specificity are achievable.
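
The high-specificity ensemble behavior can be captured in a few lines: act on a laterality prediction only above a confidence cutoff and otherwise leave the exam untouched for human review. The threshold and probability format below are illustrative assumptions, not the study's published parameters.

```python
# A minimal sketch of the high-specificity use case: auto-classify an
# exam's laterality only above a confidence cutoff, else leave it
# unchanged. Threshold and probability format are illustrative.
from typing import Optional

def classify_laterality(probs: dict[str, float],
                        threshold: float = 0.99) -> Optional[str]:
    """Return the predicted laterality, or None to leave the exam as-is."""
    label, conf = max(probs.items(), key=lambda kv: kv[1])
    return label if conf >= threshold else None

print(classify_laterality({"left": 0.995, "right": 0.005}))  # -> "left"
print(classify_laterality({"left": 0.60, "right": 0.40}))    # -> None
```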


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Radiography/methods , Algorithms , Datasets as Topic , Functional Laterality , Humans , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity