Results 1 - 8 of 8
1.
Eur Radiol ; 34(2): 810-822, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37606663

ABSTRACT

OBJECTIVES: Non-contrast computed tomography of the brain (NCCTB) is commonly used to detect intracranial pathology but is subject to interpretation errors. Machine learning can augment clinical decision-making and improve NCCTB scan interpretation. This retrospective detection accuracy study assessed the performance of radiologists assisted by a deep learning model and compared the standalone performance of the model with that of unassisted radiologists. METHODS: A deep learning model was trained on 212,484 NCCTB scans drawn from a private radiology group in Australia. Scans from inpatient, outpatient, and emergency settings were included. Scan inclusion criteria were age ≥ 18 years and series slice thickness ≤ 1.5 mm. Thirty-two radiologists reviewed 2848 scans with and without the assistance of the deep learning system and rated their confidence in the presence of each finding using a 7-point scale. Differences in AUC and Matthews correlation coefficient (MCC) were calculated using a ground-truth gold standard. RESULTS: The model demonstrated an average area under the receiver operating characteristic curve (AUC) of 0.93 across 144 NCCTB findings and significantly improved radiologist interpretation performance. Assisted and unassisted radiologists demonstrated an average AUC of 0.79 and 0.73 across 22 grouped parent findings and 0.72 and 0.68 across 189 child findings, respectively. When assisted by the model, radiologist AUC was significantly improved for 91 findings (158 findings were non-inferior), and reading time was significantly reduced. CONCLUSIONS: The assistance of a comprehensive deep learning model significantly improved radiologist detection accuracy across a wide range of clinical findings and demonstrated the potential to improve NCCTB interpretation. CLINICAL RELEVANCE STATEMENT: This study evaluated a comprehensive CT brain deep learning model, which performed strongly, improved the performance of radiologists, and reduced interpretation time. 
The model may reduce errors, improve efficiency, facilitate triage, and better enable the delivery of timely patient care. KEY POINTS:
• This study demonstrated that the use of a comprehensive deep learning system assisted radiologists in the detection of a wide range of abnormalities on non-contrast brain computed tomography scans.
• The deep learning model demonstrated an average area under the receiver operating characteristic curve of 0.93 across 144 findings and significantly improved radiologist interpretation performance.
• The assistance of the comprehensive deep learning model significantly reduced the time required for radiologists to interpret computed tomography scans of the brain.
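The two headline metrics in the abstract above (AUC computed from 7-point confidence ratings, and the Matthews correlation coefficient from binary calls) can be sketched in a few lines of Python. The function names and toy data below are illustrative, not from the study.

```python
import math

def auc_from_ratings(ratings, labels):
    """Mann-Whitney AUC: the probability that a positive case receives a
    higher confidence rating than a negative case, counting ties as 0.5."""
    pos = [r for r, y in zip(ratings, labels) if y == 1]
    neg = [r for r, y in zip(ratings, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def mcc(preds, labels):
    """Matthews correlation coefficient from binary predictions."""
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Toy example: 7-point confidence ratings for six scans plus ground truth
ratings = [7, 6, 5, 3, 2, 1]
labels = [1, 1, 0, 1, 0, 0]
print(auc_from_ratings(ratings, labels))   # 8/9 on this toy data
print(mcc([1, 1, 1, 0, 0, 0], labels))     # 1/3, binarising at rating >= 4
```

Rating-based AUC of this kind is the natural reader-study statistic, since it uses the full 7-point scale rather than a single threshold.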


Subject(s)
Deep Learning , Adolescent , Humans , Radiography , Radiologists , Retrospective Studies , Tomography, X-Ray Computed/methods , Adult
2.
Diagnostics (Basel) ; 13(14)2023 Jul 09.
Article in English | MEDLINE | ID: mdl-37510062

ABSTRACT

This retrospective case-control study evaluated the diagnostic performance of a commercially available chest radiography deep convolutional neural network (DCNN) in identifying the presence and position of central venous catheters, enteric tubes, and endotracheal tubes, in addition to a subgroup analysis of different types of lines/tubes. A held-out test dataset of 2568 studies was sourced from community radiology clinics and hospitals in Australia and the USA, and was then ground-truth labelled for the presence, position, and type of line or tube from the consensus of a thoracic specialist radiologist and an intensive care clinician. DCNN model performance for identifying and assessing the positioning of central venous catheters, enteric tubes, and endotracheal tubes over the entire dataset, as well as within each subgroup, was evaluated. The area under the receiver operating characteristic curve (AUC) was assessed. The DCNN algorithm displayed high performance in detecting the presence of lines and tubes in the test dataset with AUCs > 0.99, and good position classification performance over a subpopulation of ground truth positive cases with AUCs of 0.86-0.91. The subgroup analysis showed that model performance was robust across the various subtypes of lines or tubes, although position classification performance of peripherally inserted central catheters was relatively lower. Our findings indicated that the DCNN algorithm performed well in the detection and position classification of lines and tubes, supporting its use as an assistant for clinicians. Further work is required to evaluate performance in rarer scenarios, as well as in less common subgroups.

3.
Diagnostics (Basel) ; 13(4)2023 Feb 15.
Article in English | MEDLINE | ID: mdl-36832231

ABSTRACT

Limitations of the chest X-ray (CXR) have resulted in attempts to create machine learning systems to assist clinicians and improve interpretation accuracy. An understanding of the capabilities and limitations of modern machine learning systems is necessary for clinicians as these tools begin to permeate practice. This systematic review aimed to provide an overview of machine learning applications designed to facilitate CXR interpretation. A systematic search strategy was executed to identify research into machine learning algorithms capable of detecting >2 radiographic findings on CXRs published between January 2020 and September 2022. Model details and study characteristics, including risk of bias and quality, were summarized. Initially, 2248 articles were retrieved, with 46 included in the final review. Published models demonstrated strong standalone performance and were typically as accurate, or more accurate, than radiologists or non-radiologist clinicians. Multiple studies demonstrated an improvement in the clinical finding classification performance of clinicians when models acted as a diagnostic assistance device. Device performance was compared with that of clinicians in 30% of studies, while effects on clinical perception and diagnosis were evaluated in 19%. Only one study was prospectively run. On average, 128,662 images were used to train and validate models. Most classified less than eight clinical findings, while the three most comprehensive models classified 54, 72, and 124 findings. This review suggests that machine learning devices designed to facilitate CXR interpretation perform strongly, improve the detection performance of clinicians, and improve the efficiency of radiology workflow. Several limitations were identified, and clinician involvement and expertise will be key to driving the safe implementation of quality CXR machine learning systems.

4.
BMJ Open ; 11(12): e053024, 2021 12 07.
Article in English | MEDLINE | ID: mdl-34876430

ABSTRACT

OBJECTIVES: To evaluate the ability of a commercially available comprehensive chest radiography deep convolutional neural network (DCNN) to detect simple and tension pneumothorax, as stratified by the following subgroups: the presence of an intercostal drain; rib, clavicular, scapular or humeral fractures or rib resections; subcutaneous emphysema and erect versus non-erect positioning. The hypothesis was that performance would not differ significantly in each of these subgroups when compared with the overall test dataset. DESIGN: A retrospective case-control study was undertaken. SETTING: Community radiology clinics and hospitals in Australia and the USA. PARTICIPANTS: A test dataset of 2557 chest radiography studies was ground-truthed by three subspecialty thoracic radiologists for the presence of simple or tension pneumothorax as well as each subgroup other than positioning. Radiograph positioning was derived from radiographer annotations on the images. OUTCOME MEASURES: DCNN performance for detecting simple and tension pneumothorax was evaluated over the entire test set, as well as within each subgroup, using the area under the receiver operating characteristic curve (AUC). A difference in AUC of more than 0.05 was considered clinically significant. RESULTS: When compared with the overall test set, performance of the DCNN for detecting simple and tension pneumothorax was statistically non-inferior in all subgroups. The DCNN had an AUC of 0.981 (0.976-0.986) for detecting simple pneumothorax and 0.997 (0.995-0.999) for detecting tension pneumothorax. CONCLUSIONS: Hidden stratification has significant implications for potential failures of deep learning when applied in clinical practice. This study demonstrated that a comprehensively trained DCNN can be resilient to hidden stratification in several clinically meaningful subgroups in detecting pneumothorax.
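The study's decision rule (a subgroup AUC more than 0.05 below the overall AUC counts as a clinically significant drop) is simple to encode. In the sketch below, only the overall simple-pneumothorax AUC of 0.981 comes from the abstract; the subgroup names and AUC values are invented for illustration.

```python
# Flag subgroups whose AUC falls more than the 0.05 margin below the
# overall test-set AUC. All subgroup values here are hypothetical.
OVERALL_AUC = 0.981   # simple pneumothorax, from the abstract
MARGIN = 0.05

subgroup_aucs = {
    "intercostal_drain": 0.975,
    "rib_fracture": 0.968,
    "subcutaneous_emphysema": 0.920,  # deliberately failing example
}

flagged = {name: auc for name, auc in subgroup_aucs.items()
           if OVERALL_AUC - auc > MARGIN}
print(flagged)  # only the subgroup breaching the 0.05 margin
```

This kind of per-subgroup audit is exactly what the hidden-stratification argument calls for: a model can look excellent in aggregate while failing quietly on one clinically meaningful slice.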


Subject(s)
Deep Learning , Pneumothorax , Algorithms , Case-Control Studies , Humans , Pneumothorax/diagnostic imaging , Radiography , Radiography, Thoracic/methods , Retrospective Studies
5.
BMJ Open ; 11(12): e052902, 2021 12 20.
Article in English | MEDLINE | ID: mdl-34930738

ABSTRACT

OBJECTIVES: Artificial intelligence (AI) algorithms have been developed to detect imaging features on chest X-ray (CXR); a comprehensive AI model capable of detecting 124 CXR findings was recently developed. The aim of this study was to evaluate the real-world usefulness of the model as a diagnostic assistance device for radiologists. DESIGN: This prospective real-world multicentre study involved a group of radiologists who used the model in their daily reporting workflow to report consecutive CXRs and recorded their feedback on their level of agreement with the model's findings and on whether this significantly affected their reporting. SETTING: The study took place at radiology clinics and hospitals within a large radiology network in Australia between November and December 2020. PARTICIPANTS: Eleven consultant diagnostic radiologists of varying levels of experience participated in this study. PRIMARY AND SECONDARY OUTCOME MEASURES: Proportion of CXR cases where use of the AI model led to significant material changes to the radiologist report, to patient management, or to imaging recommendations. Additionally, the level of agreement between radiologists and the model findings, and radiologist attitudes towards the model, were assessed. RESULTS: Of 2972 cases reviewed with the model, 92 cases (3.1%) had significant report changes, 43 cases (1.4%) had changed patient management and 29 cases (1.0%) had further imaging recommendations. In terms of agreement with the model, 2569 cases (86.5%) showed complete agreement, while 390 cases (13%) had one or more findings rejected by the radiologist. There were 16 findings across 13 cases (0.5%) deemed to be missed by the model. Nine out of 10 radiologists felt their accuracy was improved with the model and were more positive towards AI post-study.
CONCLUSIONS: Use of an AI model in a real-world reporting environment significantly improved radiologist reporting and showed good agreement with radiologists, highlighting the potential for AI diagnostic support to improve clinical practice.


Subject(s)
Artificial Intelligence , Deep Learning , Algorithms , Humans , Prospective Studies , Radiologists
6.
Anaesth Intensive Care ; 49(6): 448-454, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34772298

ABSTRACT

Clinicians assessing cardiac risk as part of a comprehensive consultation before surgery can use an expanding set of tools, including predictive risk calculators, cardiac stress tests and measuring serum natriuretic peptides. The optimal assessment strategy is unclear, with conflicting international guidelines. We investigated the prognostic accuracy of the Revised Cardiac Risk Index for risk stratification and cardiac outcomes in patients undergoing elective non-cardiac surgery in a contemporary Australian cohort. We audited the records for 1465 consecutive patients 45 years and older presenting to the perioperative clinic for elective non-cardiac surgery in our tertiary hospital. We calculated individual Revised Cardiac Risk Index scores and documented any use of preoperative cardiac tests. The primary outcome was any major adverse cardiac event within 30 days of surgery, including myocardial infarction, pulmonary oedema, complete heart block or cardiac death. Myocardial perfusion imaging was the most common preoperative stress test (4.2%, 61/1465). There was no routine investigation of natriuretic peptide levels for cardiac risk assessment before surgery. Major adverse cardiac events occurred in 1.3% (18/1366) of patients who had surgery. The Revised Cardiac Risk Index score had modest prognostic accuracy for major cardiac complications (area under the receiver operating characteristic curve 0.73, 95% confidence interval 0.60 to 0.86). Stratifying major adverse cardiac events by Revised Cardiac Risk Index scores of 0, 1, 2 and 3 or greater corresponded to event rates of 0.6% (4/683), 0.8% (4/488), 4.1% (6/145) and 8.0% (4/50), respectively. The Revised Cardiac Risk Index had only modest predictive value in our single-centre experience. Patients with a Revised Cardiac Risk Index score of 2 or more had an elevated risk of early cardiac complications after elective non-cardiac surgery.
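The stratified event rates reported above can be reproduced directly from the quoted counts; the short helper below is illustrative only, using the numbers from the abstract.

```python
# MACE (major adverse cardiac event) rate per RCRI score band, using the
# event and patient counts quoted in the abstract.
mace_by_rcri = {      # score band: (events, patients)
    "0": (4, 683),
    "1": (4, 488),
    "2": (6, 145),
    "3+": (4, 50),
}

rates = {band: 100 * events / n for band, (events, n) in mace_by_rcri.items()}
for band, rate in rates.items():
    print(f"RCRI {band}: {rate:.1f}% MACE")   # 0.6%, 0.8%, 4.1%, 8.0%
```

The jump from 0.8% at a score of 1 to 4.1% at a score of 2 is what motivates the abstract's conclusion that a score of 2 or more marks an elevated-risk group.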


Subject(s)
Postoperative Complications , Australia/epidemiology , Humans , Postoperative Complications/diagnosis , Postoperative Complications/epidemiology , Predictive Value of Tests , Retrospective Studies , Risk Assessment , Risk Factors , Tertiary Care Centers
7.
Lancet Digit Health ; 3(8): e496-e506, 2021 08.
Article in English | MEDLINE | ID: mdl-34219054

ABSTRACT

BACKGROUND: Chest x-rays are widely used in clinical practice; however, interpretation can be hindered by human error and a lack of experienced thoracic radiologists. Deep learning has the potential to improve the accuracy of chest x-ray interpretation. We therefore aimed to assess the accuracy of radiologists with and without the assistance of a deep-learning model. METHODS: In this retrospective study, a deep-learning model was trained on 821 681 images (284 649 patients) from five data sets from Australia, Europe, and the USA. 2568 enriched chest x-ray cases from adult patients (≥16 years) who had at least one frontal chest x-ray were included in the test dataset; cases were representative of inpatient, outpatient, and emergency settings. 20 radiologists reviewed cases with and without the assistance of the deep-learning model with a 3-month washout period. We assessed the change in accuracy of chest x-ray interpretation across 127 clinical findings when the deep-learning model was used as a decision support by calculating area under the receiver operating characteristic curve (AUC) for each radiologist with and without the deep-learning model. We also compared AUCs for the model alone with those of unassisted radiologists. If the lower bound of the adjusted 95% CI of the difference in AUC between the model and the unassisted radiologists was more than -0·05, the model was considered to be non-inferior for that finding. If the lower bound exceeded 0, the model was considered to be superior. FINDINGS: Unassisted radiologists had a macroaveraged AUC of 0·713 (95% CI 0·645-0·785) across the 127 clinical findings, compared with 0·808 (0·763-0·839) when assisted by the model. The deep-learning model statistically significantly improved the classification accuracy of radiologists for 102 (80%) of 127 clinical findings, was statistically non-inferior for 19 (15%) findings, and no findings showed a decrease in accuracy when radiologists used the deep-learning model. 
Unassisted radiologists had a macroaveraged mean AUC of 0·713 (0·645-0·785) across all findings, compared with 0·957 (0·954-0·959) for the model alone. Model classification alone was significantly more accurate than unassisted radiologists for 117 (94%) of 124 clinical findings predicted by the model and was non-inferior to unassisted radiologists for all other clinical findings. INTERPRETATION: This study shows the potential of a comprehensive deep-learning model to improve chest x-ray interpretation across a large breadth of clinical practice. FUNDING: Annalise.ai.
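The non-inferiority rule described in the methods above (the model is non-inferior for a finding if the lower bound of the adjusted 95% CI of the AUC difference exceeds -0.05, and superior if it exceeds 0) can be sketched as follows. The function name and CI values are invented for illustration.

```python
def classify_finding(ci_lower, margin=-0.05):
    """Classify a model-vs-radiologist AUC difference from the lower
    bound of its adjusted 95% CI, per the criteria stated in the study."""
    if ci_lower > 0:
        return "superior"
    if ci_lower > margin:
        return "non-inferior"
    return "not non-inferior"

# Hypothetical CI lower bounds for three findings
print(classify_finding(0.03))    # superior
print(classify_finding(-0.02))   # non-inferior
print(classify_finding(-0.08))   # not non-inferior
```

Note that only the lower bound matters for this one-sided criterion; the point estimate of the AUC difference plays no role in the classification.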


Subject(s)
Deep Learning , Mass Screening/methods , Models, Biological , Radiographic Image Interpretation, Computer-Assisted , Radiography, Thoracic , X-Rays , Adolescent , Adult , Aged , Aged, 80 and over , Area Under Curve , Artificial Intelligence , Female , Humans , Infections/diagnosis , Infections/diagnostic imaging , Male , Middle Aged , ROC Curve , Radiologists , Retrospective Studies , Thoracic Injuries/diagnosis , Thoracic Injuries/diagnostic imaging , Thoracic Neoplasms/diagnosis , Thoracic Neoplasms/diagnostic imaging , Young Adult
8.
Beilstein J Org Chem ; 11: 37-41, 2015.
Article in English | MEDLINE | ID: mdl-25670990

ABSTRACT

The effective and efficient removal of the BF2 moiety from F-BODIPY derivatives has been achieved using two common Brønsted acids: treatment with trifluoroacetic acid (TFA) or methanolic hydrogen chloride (HCl), followed by work-up with Ambersep® 900 resin (hydroxide form), effects this conversion in near-quantitative yields. Compared to existing methods, these conditions are relatively mild and operationally simple, requiring only reaction at room temperature for six hours (TFA) or overnight (HCl).
