Results 1 - 20 of 21
1.
Radiol Artif Intell ; 6(3): e230079, 2024 05.
Article in English | MEDLINE | ID: mdl-38477661

ABSTRACT

Purpose: To evaluate the impact of an artificial intelligence (AI) assistant for lung cancer screening on multinational clinical workflows. Materials and Methods: An AI assistant for lung cancer screening was evaluated in two retrospective randomized multireader multicase studies in which 627 low-dose chest CT cases (141 cancer-positive) were each read twice (with and without AI assistance) by experienced thoracic radiologists (six U.S.-based or six Japan-based radiologists), resulting in a total of 7524 interpretations. Positive cases were defined as those within 2 years before a pathology-confirmed lung cancer diagnosis. Negative cases were defined as those without any subsequent cancer diagnosis for at least 2 years and were enriched for a spectrum of diverse nodules. The studies measured the readers' level of suspicion (on a 0-100 scale), country-specific screening system scoring categories, and management recommendations. Evaluation metrics included the area under the receiver operating characteristic curve (AUC) for level of suspicion and the sensitivity and specificity of recall recommendations. Results: With AI assistance, the radiologists' AUC increased by 0.023 (0.70 to 0.72; P = .02) for the U.S. study and by 0.023 (0.93 to 0.96; P = .18) for the Japan study. Scoring system specificity for actionable findings increased 5.5% (57% to 63%; P < .001) for the U.S. study and 6.7% (23% to 30%; P < .001) for the Japan study. There was no evidence of a difference in corresponding sensitivity between unassisted and AI-assisted reads for the U.S. (67.3% to 67.5%; P = .88) and Japan (98% to 100%; P > .99) studies. Corresponding stand-alone AI AUC system performance was 0.75 (95% CI: 0.70, 0.81) and 0.88 (95% CI: 0.78, 0.97) for the U.S.- and Japan-based datasets, respectively. Conclusion: The concurrent AI interface improved lung cancer screening specificity in both U.S.- and Japan-based reader studies, meriting further study in additional international screening environments. Keywords: Assistive Artificial Intelligence, Lung Cancer Screening, CT. Supplemental material is available for this article. Published under a CC BY 4.0 license.
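
As context for the reader-study metrics above, the sketch below shows one standard way to compute an AUC from 0-100 level-of-suspicion scores and sensitivity/specificity from binary recall recommendations. It is a minimal illustration using scikit-learn with made-up values; the variable names and data are not from the study.

```python
# Minimal sketch of reader-study metrics: AUC of level-of-suspicion scores and
# sensitivity/specificity of recall recommendations. Data are illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0])                 # 1 = cancer-positive case
suspicion_unassisted = np.array([60, 20, 35, 55, 10, 70, 40, 25])
suspicion_assisted   = np.array([65, 15, 30, 60, 10, 75, 35, 20])
recall_assisted      = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # binary recall recommendation

auc_unassisted = roc_auc_score(y_true, suspicion_unassisted)
auc_assisted = roc_auc_score(y_true, suspicion_assisted)

tn, fp, fn, tp = confusion_matrix(y_true, recall_assisted).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(auc_unassisted, auc_assisted, sensitivity, specificity)
```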


Subject(s)
Artificial Intelligence , Early Detection of Cancer , Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnosis , Lung Neoplasms/epidemiology , Japan , United States/epidemiology , Retrospective Studies , Early Detection of Cancer/methods , Female , Male , Middle Aged , Aged , Sensitivity and Specificity , Radiographic Image Interpretation, Computer-Assisted/methods
2.
NPJ Digit Med ; 4(1): 146, 2021 Oct 08.
Article in English | MEDLINE | ID: mdl-34625656

ABSTRACT

The COVID-19 pandemic has highlighted the global need for reliable models of disease spread. We propose an AI-augmented forecast modeling framework that provides daily predictions of the expected number of confirmed COVID-19 deaths, cases, and hospitalizations during the following 4 weeks. We present an international, prospective evaluation of our models' performance across all states and counties in the USA and prefectures in Japan. Nationally, incident mean absolute percentage error (MAPE) for predicting COVID-19-associated deaths during prospective deployment remained consistently <8% (US) and <29% (Japan), while cumulative MAPE remained <2% (US) and <10% (Japan). We show that our models perform well even during periods of considerable change in population behavior, and are robust to demographic differences across different geographic locations. We further demonstrate that our framework provides meaningful explanatory insights, with the models accurately adapting to local and national policy interventions. Our framework enables counterfactual simulations, which indicate that continuing non-pharmaceutical interventions alongside vaccinations is essential for faster recovery from the pandemic and that delaying the application of interventions has a detrimental effect, and which allow exploration of the consequences of different vaccination strategies. The COVID-19 pandemic remains a global emergency. In the face of substantial challenges ahead, the approach presented here has the potential to inform critical decisions.
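
The MAPE figures quoted above follow the usual definition of mean absolute percentage error, applied to incident and cumulative counts. The sketch below is a minimal illustration with invented numbers, not the authors' evaluation code.

```python
# Mean absolute percentage error (MAPE) for incident and cumulative counts,
# the accuracy summary used above. Numbers are illustrative.
import numpy as np

def mape(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs(actual - predicted) / actual)

incident_actual    = [120, 95, 110, 130]    # e.g. weekly incident deaths
incident_predicted = [112, 101, 104, 138]

print("incident MAPE:   %.1f%%" % mape(incident_actual, incident_predicted))
print("cumulative MAPE: %.1f%%" % mape(np.cumsum(incident_actual),
                                       np.cumsum(incident_predicted)))
```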

3.
Nat Methods ; 18(10): 1196-1203, 2021 10.
Article in English | MEDLINE | ID: mdl-34608324

ABSTRACT

How noncoding DNA determines gene expression in different cell types is a major unsolved problem, and critical downstream applications in human genetics depend on improved solutions. Here, we report substantially improved gene expression prediction accuracy from DNA sequences through the use of a deep learning architecture, called Enformer, that is able to integrate information from long-range interactions (up to 100 kb away) in the genome. This improvement yielded more accurate variant effect predictions on gene expression for both natural genetic variants and saturation mutagenesis measured by massively parallel reporter assays. Furthermore, Enformer learned to predict enhancer-promoter interactions directly from the DNA sequence competitively with methods that take direct experimental data as input. We expect that these advances will enable more effective fine-mapping of human disease associations and provide a framework to interpret cis-regulatory evolution.
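
Variant effect prediction of the kind described above is commonly computed by contrasting model outputs for the reference and alternate alleles. The sketch below assumes a hypothetical `predict_tracks` callable standing in for a trained sequence model such as Enformer; the interface and dummy data are illustrative, not the published API.

```python
# Hypothetical sketch of sequence-based variant effect scoring: predict
# expression tracks for the reference and alternate sequences, then take the
# difference. `predict_tracks` is a stand-in for a trained model.
import numpy as np

def variant_effect(predict_tracks, ref_seq, pos, alt_base):
    """Score a single-nucleotide variant as the change in predicted tracks."""
    alt_seq = ref_seq[:pos] + alt_base + ref_seq[pos + 1:]
    ref_pred = predict_tracks(ref_seq)    # expected shape: (bins, tracks)
    alt_pred = predict_tracks(alt_seq)
    return (alt_pred - ref_pred).sum(axis=0)   # one summary score per track

# Dummy stand-in model returning random "expression" tracks, for illustration.
dummy_model = lambda seq: np.random.rand(896, 5)
scores = variant_effect(dummy_model, "ACGT" * 1000, pos=2000, alt_base="G")
print(scores.shape)   # (5,)
```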


Subject(s)
DNA/genetics , Databases, Genetic , Epigenesis, Genetic , Gene Expression Regulation , Machine Learning , Nerve Net , Animals , Cell Line , Genome , Genomics/methods , Humans , Mice , Quantitative Trait Loci
4.
J Med Internet Res ; 23(7): e26151, 2021 07 12.
Article in English | MEDLINE | ID: mdl-34255661

ABSTRACT

BACKGROUND: Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but it requires time-consuming manual delineation of radiosensitive organs at risk. This planning process can delay treatment and introduces interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying, and achieving expert performance remain. OBJECTIVE: Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice. METHODS: The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice, using both segmentations taken from clinical practice and segmentations created by experienced radiographers as part of this research, all in accordance with consensus organ at risk definitions. RESULTS: We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced the surface Dice similarity coefficient, a new metric for the comparison of organ delineation, to quantify the deviation between organ at risk surface contours rather than volumes, better reflecting the clinical task of correcting errors in automated organ segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets, representing centers and countries different from those used in model training. CONCLUSIONS: Deep learning is an effective and clinically applicable technique for the segmentation of the head and neck anatomy for radiotherapy. With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
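
The surface Dice similarity coefficient mentioned above scores agreement between organ surfaces within a tolerance rather than volumetric overlap. The sketch below is a simplified, isotropic-voxel illustration of that idea, not the authors' released implementation.

```python
# Simplified surface Dice similarity coefficient at tolerance `tau` (in voxels).
# Illustration only: assumes isotropic voxels and ignores spacing metadata.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask):
    """Boundary voxels of a binary mask."""
    return mask & ~binary_erosion(mask)

def surface_dice(pred, gt, tau=1.0):
    s_pred, s_gt = surface(pred.astype(bool)), surface(gt.astype(bool))
    dist_to_gt = distance_transform_edt(~s_gt)      # distance to ground-truth surface
    dist_to_pred = distance_transform_edt(~s_pred)  # distance to predicted surface
    overlap = (dist_to_gt[s_pred] <= tau).sum() + (dist_to_pred[s_gt] <= tau).sum()
    return overlap / (s_pred.sum() + s_gt.sum())

pred = np.zeros((32, 32, 32), bool); pred[8:20, 8:20, 8:20] = True
gt   = np.zeros((32, 32, 32), bool); gt[9:21, 8:20, 8:20] = True
print(surface_dice(pred, gt, tau=1.0))
```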


Subject(s)
Deep Learning , Head and Neck Neoplasms , Algorithms , Head and Neck Neoplasms/diagnostic imaging , Head and Neck Neoplasms/radiotherapy , Humans , Tomography, X-Ray Computed
5.
Nat Protoc ; 16(6): 2765-2787, 2021 06.
Article in English | MEDLINE | ID: mdl-33953393

ABSTRACT

Early prediction of patient outcomes is important for targeting preventive care. This protocol describes a practical workflow for developing deep-learning risk models that can predict various clinical and operational outcomes from structured electronic health record (EHR) data. The protocol comprises five main stages: formal problem definition, data pre-processing, architecture selection, calibration and uncertainty, and generalizability evaluation. We have applied the workflow to four endpoints (acute kidney injury, mortality, length of stay and 30-day hospital readmission). The workflow can enable continuous (e.g., triggered every 6 h) and static (e.g., triggered at 24 h after admission) predictions. We also provide an open-source codebase that illustrates some key principles in EHR modeling. This protocol can be used by interdisciplinary teams with programming and clinical expertise to build deep-learning prediction models with alternate data sources and prediction tasks.
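
One part of the protocol's calibration-and-uncertainty stage can be illustrated with a simple reliability check on held-out risk predictions. The sketch below uses scikit-learn's calibration curve on synthetic probabilities; it illustrates the concept and is not code from the accompanying open-source repository.

```python
# Illustrative calibration (reliability) check for a binary risk model,
# e.g. 30-day readmission. Synthetic data, well calibrated by construction.
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
risk = rng.uniform(0, 1, 5000)          # predicted probabilities on a held-out set
outcome = rng.binomial(1, risk)         # observed outcomes

frac_pos, mean_pred = calibration_curve(outcome, risk, n_bins=10)
for p, f in zip(mean_pred, frac_pos):
    print(f"predicted {p:.2f} -> observed {f:.2f}")
```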


Subject(s)
Deep Learning , Electronic Health Records , Research Design , Risk Assessment/methods , Humans , Software , Workflow
7.
Quant Imaging Med Surg ; 10(6): 1298-1306, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32550138

ABSTRACT

BACKGROUND: Dynamic susceptibility contrast MR imaging (DSC-MRI) offers direct evaluation of neovascularity. Ferucarbotran does not accumulate in the interstitial space, instead remaining in the intravascular space during early-phase imaging. We investigated tracer kinetic analysis of DSC-MRI with ferucarbotran and of single-level CT during hepatic arteriography (SL-CTHA) in the assessment of hypervascular hepatocellular lesions, and evaluated the usefulness of DSC-MRI with ferucarbotran. METHODS: Six patients with hypervascular hepatocellular carcinoma (HCC) and three patients with focal nodular hyperplasia (FNH) were included in the study. SL-CTHA was performed with the infusion of 3 mL of contrast media at a rate of 1 mL/s and scanned at a rate of 0.8 seconds per rotation. DSC-MRI was acquired with an echo-planar method on a 1.5-T system. A total dose of 1.4 mL (0.5 mol Fe/L) of ferucarbotran was used. Ferucarbotran was injected at a rate of 2 mL/s with 40 mL of physiological saline. Imaging was obtained at a temporal resolution of 1.2 or 0.46 seconds in 5 and 4 patients, respectively. For both CT and MRI, a model-free analysis method was used to derive region of interest-based perfusion parameters. Plasma flow, distribution volume (DV) of contrast agent and estimated mean transit time (EMTT) were estimated. RESULTS: A strong correlation was obtained for plasma flow (r=0.8231, P=0.0064) between DSC-MRI and SL-CTHA. No significant correlation was obtained for DV and EMTT between DSC-MRI and SL-CTHA. All perfusion parameters showed no significant difference between SL-CTHA and DSC-MRI in FNH. In HCC, on the other hand, DV and EMTT showed significant differences (P=0.046 and 0.046), whereas plasma flow showed no significant difference between DSC-MRI and SL-CTHA. CONCLUSIONS: This pilot study demonstrates the feasibility of quantitative analysis of liver tumors using a superparamagnetic iron oxide (SPIO)-based agent and highlights the potential of SPIO-based agents to assess the perfusion characteristics of hypervascular liver tumors more precisely than extracellular contrast media.
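
The abstract's model-free ROI analysis is not specified in detail here; the sketch below shows one common model-free formulation (distribution volume from the ratio of curve areas, plasma flow from the maximum slope, and mean transit time from the central volume principle), which may differ from the study's exact implementation. Curves and units are illustrative.

```python
# One common model-free approach to ROI perfusion parameters from tissue and
# arterial input concentration-time curves. Illustrative only; not necessarily
# the method used in the study above.
import numpy as np

def model_free_perfusion(t, c_tissue, c_aif):
    dv = np.trapz(c_tissue, t) / np.trapz(c_aif, t)        # distribution volume (fractional)
    pf = np.max(np.gradient(c_tissue, t)) / np.max(c_aif)  # plasma flow, maximum-slope method
    emtt = dv / pf                                          # mean transit time (central volume)
    return pf, dv, emtt

t = np.arange(0, 60, 1.2)                                   # 1.2 s temporal resolution
c_aif = 5.0 * np.exp(-((t - 12.0) ** 2) / 30.0)             # illustrative arterial input
c_tissue = 1.2 * np.exp(-((t - 18.0) ** 2) / 90.0)          # illustrative tissue curve
print(model_free_perfusion(t, c_tissue, c_aif))
```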

8.
Nat Med ; 26(6): 892-899, 2020 06.
Article in English | MEDLINE | ID: mdl-32424211

ABSTRACT

Progression to exudative 'wet' age-related macular degeneration (exAMD) is a major cause of visual deterioration. In patients diagnosed with exAMD in one eye, we introduce an artificial intelligence (AI) system to predict progression to exAMD in the second eye. By combining models based on three-dimensional (3D) optical coherence tomography images and corresponding automatic tissue maps, our system predicts conversion to exAMD within a clinically actionable 6-month time window, achieving a per-volumetric-scan sensitivity of 80% at 55% specificity, and 34% sensitivity at 90% specificity. This level of performance corresponds to true positives in 78% and 41% of individual eyes, and false positives in 56% and 17% of individual eyes at the high sensitivity and high specificity points, respectively. Moreover, we show that automatic tissue segmentation can identify anatomical changes before conversion and high-risk subgroups. This AI system overcomes substantial interobserver variability in expert predictions, performing better than five out of six experts, and demonstrates the potential of using AI to predict disease progression.


Subject(s)
Deep Learning , Geographic Atrophy/diagnostic imaging , Tomography, Optical Coherence , Wet Macular Degeneration/diagnosis , Aged , Aged, 80 and over , Disease Progression , Early Diagnosis , Early Medical Intervention , Female , Humans , Imaging, Three-Dimensional , Macular Degeneration/diagnostic imaging , Male , Prognosis , Wet Macular Degeneration/diagnostic imaging , Wet Macular Degeneration/therapy
9.
Nat Commun ; 11(1): 130, 2020 01 08.
Article in English | MEDLINE | ID: mdl-31913272

ABSTRACT

Center-involved diabetic macular edema (ci-DME) is a major cause of vision loss. Although the gold standard for diagnosis involves 3D imaging, 2D imaging by fundus photography is usually used in screening settings, resulting in high false-positive and false-negative calls. To address this, we train a deep learning model to predict ci-DME from fundus photographs, with an ROC-AUC of 0.89 (95% CI: 0.87-0.91), corresponding to 85% sensitivity at 80% specificity. In comparison, retinal specialists have similar sensitivities (82-85%), but only half the specificity (45-50%, p < 0.001). Our model can also detect the presence of intraretinal fluid (AUC: 0.81; 95% CI: 0.81-0.86) and subretinal fluid (AUC 0.88; 95% CI: 0.85-0.91). Using deep learning to make predictions from simple 2D images, without sophisticated 3D imaging equipment and with better-than-specialist performance, has broad relevance to many other applications in medical imaging.


Subject(s)
Diabetic Retinopathy/diagnostic imaging , Macular Edema/diagnostic imaging , Aged , Deep Learning , Diabetic Retinopathy/genetics , Female , Humans , Imaging, Three-Dimensional , Macular Edema/genetics , Male , Middle Aged , Mutation , Photography , Retina/diagnostic imaging , Tomography, Optical Coherence
10.
Nature ; 577(7788): 89-94, 2020 01.
Article in English | MEDLINE | ID: mdl-31894144

ABSTRACT

Screening mammography aims to identify breast cancer at earlier stages of the disease, when treatment can be more successful [1]. Despite the existence of screening programmes worldwide, the interpretation of mammograms is affected by high rates of false positives and false negatives [2]. Here we present an artificial intelligence (AI) system that is capable of surpassing human experts in breast cancer prediction. To assess its performance in the clinical setting, we curated a large representative dataset from the UK and a large enriched dataset from the USA. We show an absolute reduction of 5.7% and 1.2% (USA and UK) in false positives and 9.4% and 2.7% in false negatives. We provide evidence of the ability of the system to generalize from the UK to the USA. In an independent study of six radiologists, the AI system outperformed all of the human readers: the area under the receiver operating characteristic curve (AUC-ROC) for the AI system was greater than the AUC-ROC for the average radiologist by an absolute margin of 11.5%. We ran a simulation in which the AI system participated in the double-reading process that is used in the UK, and found that the AI system maintained non-inferior performance and reduced the workload of the second reader by 88%. This robust assessment of the AI system paves the way for clinical trials to improve the accuracy and efficiency of breast cancer screening.
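
The double-reading simulation can be pictured as the AI system standing in for the second human reader, with disagreements sent to arbitration. The sketch below illustrates that decision logic in simplified form; it is not the study's actual simulation protocol.

```python
# Simplified sketch of AI-assisted double reading: the AI replaces the second
# reader; consensus cases are decided immediately, disagreements go to a human
# arbitrator. Illustrative logic and cases only.
def double_read(first_reader_recall, ai_recall, arbitrate):
    """Decision for one case when the AI stands in for the second reader."""
    if first_reader_recall == ai_recall:
        return first_reader_recall        # consensus: no further human read needed
    return arbitrate()                    # disagreement: human arbitration

# Illustrative cases: (first reader recall, AI recall)
cases = [(True, True), (False, False), (True, False), (False, True), (False, False)]
decisions = [double_read(r1, ai, arbitrate=lambda: True) for r1, ai in cases]
arbitrations = sum(1 for r1, ai in cases if r1 != ai)
print(decisions, f"{arbitrations}/{len(cases)} cases needed human arbitration")
```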


Subject(s)
Artificial Intelligence/standards , Breast Neoplasms/diagnostic imaging , Early Detection of Cancer/methods , Early Detection of Cancer/standards , Female , Humans , Mammography/standards , Reproducibility of Results , United Kingdom , United States
11.
Nature ; 572(7767): 116-119, 2019 08.
Article in English | MEDLINE | ID: mdl-31367026

ABSTRACT

The early prediction of deterioration could have an important role in supporting healthcare professionals, as an estimated 11% of deaths in hospital follow a failure to promptly recognize and treat deteriorating patients [1]. Achieving this goal requires predictions of patient risk that are continuously updated and accurate, and delivered at an individual level with sufficient context and enough time to act. Here we develop a deep learning approach for the continuous risk prediction of future deterioration in patients, building on recent work that models adverse events from electronic health records [2-17] and using acute kidney injury, a common and potentially life-threatening condition [18], as an exemplar. Our model was developed on a large, longitudinal dataset of electronic health records that cover diverse clinical environments, comprising 703,782 adult patients across 172 inpatient and 1,062 outpatient sites. Our model predicts 55.8% of all inpatient episodes of acute kidney injury, and 90.2% of all acute kidney injuries that required subsequent administration of dialysis, with a lead time of up to 48 h and a ratio of 2 false alerts for every true alert. In addition to predicting future acute kidney injury, our model provides confidence assessments and a list of the clinical features that are most salient to each prediction, alongside predicted future trajectories for clinically relevant blood tests [9]. Although the recognition and prompt treatment of acute kidney injury is known to be challenging, our approach may offer opportunities for identifying patients at risk within a time window that enables early treatment.
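
The quoted ratio of two false alerts for every true alert corresponds to the precision of the alerting policy once the continuous risk score is thresholded. A minimal illustration with invented scores and outcomes:

```python
# Illustrative alerting trade-off: threshold a continuously updated risk score,
# then count false alerts per true alert. Data and threshold are made up.
import numpy as np

risk_scores = np.array([0.1, 0.4, 0.8, 0.2, 0.9, 0.7, 0.05, 0.6])
will_develop_aki = np.array([0, 0, 1, 0, 1, 0, 0, 0], bool)   # within the prediction window
threshold = 0.5

alerts = risk_scores >= threshold
true_alerts = np.sum(alerts & will_develop_aki)
false_alerts = np.sum(alerts & ~will_develop_aki)
print(f"{false_alerts / true_alerts:.1f} false alerts per true alert")
```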


Subject(s)
Acute Kidney Injury/diagnosis , Clinical Laboratory Techniques/methods , Acute Kidney Injury/complications , Adolescent , Adult , Aged , Aged, 80 and over , Computer Simulation , Datasets as Topic , False Positive Reactions , Female , Humans , Male , Middle Aged , Pulmonary Disease, Chronic Obstructive/complications , ROC Curve , Risk Assessment , Uncertainty , Young Adult
13.
Lancet Digit Health ; 1(6): e271-e297, 2019 10.
Article in English | MEDLINE | ID: mdl-33323251

ABSTRACT

BACKGROUND: Deep learning offers considerable promise for medical diagnostics. We aimed to evaluate the diagnostic accuracy of deep learning algorithms versus health-care professionals in classifying diseases using medical imaging. METHODS: In this systematic review and meta-analysis, we searched Ovid-MEDLINE, Embase, Science Citation Index, and Conference Proceedings Citation Index for studies published from Jan 1, 2012, to June 6, 2019. Studies comparing the diagnostic performance of deep learning models and health-care professionals based on medical imaging, for any disease, were included. We excluded studies that used medical waveform data graphics material or investigated the accuracy of image segmentation rather than disease classification. We extracted binary diagnostic accuracy data and constructed contingency tables to derive the outcomes of interest: sensitivity and specificity. Studies undertaking an out-of-sample external validation were included in a meta-analysis, using a unified hierarchical model. This study is registered with PROSPERO, CRD42018091176. FINDINGS: Our search identified 31 587 studies, of which 82 (describing 147 patient cohorts) were included. 69 studies provided enough data to construct contingency tables, enabling calculation of test accuracy, with sensitivity ranging from 9·7% to 100·0% (mean 79·1%, SD 0·2) and specificity ranging from 38·9% to 100·0% (mean 88·3%, SD 0·1). An out-of-sample external validation was done in 25 studies, of which 14 made the comparison between deep learning models and health-care professionals in the same sample. Comparison of the performance of deep learning models and health-care professionals in these 14 studies, when restricting the analysis to the contingency table for each study reporting the highest accuracy, found a pooled sensitivity of 87·0% (95% CI 83·0-90·2) for deep learning models and 86·4% (79·9-91·0) for health-care professionals, and a pooled specificity of 92·5% (95% CI 85·1-96·4) for deep learning models and 90·5% (80·6-95·7) for health-care professionals. INTERPRETATION: Our review found the diagnostic performance of deep learning models to be equivalent to that of health-care professionals. However, a major finding of the review is that few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample. Additionally, poor reporting is prevalent in deep learning studies, which limits reliable interpretation of the reported diagnostic accuracy. New reporting standards that address specific challenges of deep learning could improve future studies, enabling greater confidence in the results of future evaluations of this promising technology. FUNDING: None.
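
Sensitivity and specificity were derived from extracted 2 x 2 contingency tables; the sketch below shows that calculation with illustrative counts. (The pooled estimates reported above come from a hierarchical bivariate meta-analysis model, not from averaging per-study values.)

```python
# Sensitivity and specificity from a 2x2 contingency table, as extracted for
# each included study. Counts are illustrative, not from the review.
def diagnostic_accuracy(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

sens, spec = diagnostic_accuracy(tp=87, fp=9, fn=13, tn=91)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```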


Subject(s)
Deep Learning , Diagnostic Imaging , Health Personnel , Humans
14.
Lancet Digit Health ; 1(5): e232-e242, 2019 09.
Article in English | MEDLINE | ID: mdl-33323271

ABSTRACT

BACKGROUND: Deep learning has the potential to transform health care; however, substantial expertise is required to train such models. We sought to evaluate the utility of automated deep learning software for developing medical image diagnostic classifiers by health-care professionals with no coding and no deep learning expertise. METHODS: We used five publicly available open-source datasets: retinal fundus images (MESSIDOR); optical coherence tomography (OCT) images (Guangzhou Medical University and Shiley Eye Institute, version 3); images of skin lesions (Human Against Machine [HAM] 10000); and both paediatric and adult chest x-ray (CXR) images (Guangzhou Medical University and Shiley Eye Institute, version 3 and the National Institutes of Health [NIH] dataset, respectively) to separately feed into a neural architecture search framework, hosted through Google Cloud AutoML, that automatically developed a deep learning architecture to classify common diseases. Sensitivity (recall), specificity, and positive predictive value (precision) were used to evaluate the diagnostic properties of the models. The discriminative performance was assessed using the area under the precision-recall curve (AUPRC). In the case of the deep learning model developed on a subset of the HAM10000 dataset, we did external validation using the Edinburgh Dermofit Library dataset. FINDINGS: Diagnostic properties and discriminative performance from internal validations were high in the binary classification tasks (sensitivity 73·3-97·0%; specificity 67-100%; AUPRC 0·87-1·00). In the multiple classification tasks, the diagnostic properties ranged from 38% to 100% for sensitivity and from 67% to 100% for specificity. The discriminative performance in terms of AUPRC ranged from 0·57 to 1·00 in the five automated deep learning models. In an external validation using the Edinburgh Dermofit Library dataset, the automated deep learning model showed an AUPRC of 0·47, with a sensitivity of 49% and a positive predictive value of 52%. INTERPRETATION: All models, except the automated deep learning model trained on the multilabel classification task of the NIH CXR14 dataset, showed comparable discriminative performance and diagnostic properties to state-of-the-art performing deep learning algorithms. The performance in the external validation study was low. The quality of the open-access datasets (including insufficient information about patient flow and demographics) and the absence of measures of precision, such as confidence intervals, constituted the major limitations of this study. The availability of automated deep learning platforms provides an opportunity for the medical community to enhance its understanding of model development and evaluation. Although the derivation of classification models without requiring a deep understanding of the mathematical, statistical, and programming principles is attractive, comparable performance to expertly designed models is limited to more elementary classification tasks. Furthermore, care should be taken to adhere to ethical principles when using these automated models, to avoid discrimination and harm. Future studies should compare several application programming interfaces on thoroughly curated datasets. FUNDING: National Institute for Health Research and Moorfields Eye Charity.
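
The AUPRC reported above can be computed with a standard precision-recall routine; a minimal scikit-learn sketch with invented predictions, not the study's AutoML outputs:

```python
# Area under the precision-recall curve (AUPRC), the discrimination metric
# reported above, computed on illustrative predictions.
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve

y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1, 0, 0])
y_score = np.array([0.2, 0.9, 0.7, 0.3, 0.8, 0.1, 0.4, 0.6, 0.35, 0.15])

auprc = average_precision_score(y_true, y_score)             # step-wise AUPRC estimate
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
print(f"AUPRC = {auprc:.2f}")
```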


Subject(s)
Algorithms , Data Interpretation, Statistical , Deep Learning , Software , Adult , Feasibility Studies , Fundus Oculi , Humans , Skin Neoplasms/diagnosis , Tomography, Optical Coherence/statistics & numerical data
15.
Nat Med ; 24(9): 1342-1350, 2018 09.
Article in English | MEDLINE | ID: mdl-30104768

ABSTRACT

The volume and complexity of diagnostic imaging is increasing at a pace faster than the availability of human expertise to interpret it. Artificial intelligence has shown great promise in classifying two-dimensional photographs of some common diseases and typically relies on databases of millions of annotated images. Until now, the challenge of reaching the performance of expert clinicians in a real-world clinical pathway with three-dimensional diagnostic scans has remained unsolved. Here, we apply a novel deep learning architecture to a clinically heterogeneous set of three-dimensional optical coherence tomography scans from patients referred to a major eye hospital. We demonstrate performance in making a referral recommendation that reaches or exceeds that of experts on a range of sight-threatening retinal diseases after training on only 14,884 scans. Moreover, we demonstrate that the tissue segmentations produced by our architecture act as a device-independent representation; referral accuracy is maintained when using tissue segmentations from a different type of device. Our work removes previous barriers to wider clinical use without prohibitive training data requirements across multiple pathologies in a real-world setting.


Subject(s)
Deep Learning , Referral and Consultation , Retinal Diseases/diagnosis , Aged , Clinical Decision-Making , Female , Humans , Male , Middle Aged , Retina/diagnostic imaging , Retina/pathology , Retinal Diseases/diagnostic imaging , Tomography, Optical Coherence
16.
J Belg Soc Radiol ; 102(1): 40, 2018 Apr 20.
Article in English | MEDLINE | ID: mdl-30039052

ABSTRACT

OBJECTIVE: Dynamic contrast-enhanced MRI (DCE-MRI) can measure changes in tumor blood flow, vascular permeability, and interstitial and intravascular volume. The objective was to evaluate the efficacy of DCE-MRI in predicting the response of Barcelona Clinic Liver Cancer (BCLC) stage B or C hepatocellular carcinoma (HCC) after treatment with transcatheter arterial chemoembolization (TACE) followed by sorafenib therapy. METHODS: Sorafenib was administered four days after TACE of BCLC stage B or C HCC in 11 patients (21 lesions). DCE-MRI was performed with Gd-EOB-DTPA contrast before TACE and three and 10 days after TACE. DCE-MRI acquisitions were obtained before contrast, during the hepatic arterial-dominant phase, and at 60, 120, 180, 240, 330, 420, 510 and 600 seconds post-contrast. The distribution volume of contrast agent (DV) and the transfer constant Ktrans were calculated. Patients were grouped by mRECIST at one month or more post-TACE into responders (complete response, partial response) and non-responders (stable disease, progressive disease). RESULTS: DV was reduced in responders at three and 10 days post-TACE (p = 0.008 and p = 0.008, respectively). DV fell in non-responders at three days (p = 0.025) but was not significantly changed from pre-TACE values after sorafenib. Sensitivity and specificity for DV at 10 days post-TACE were 88% and 77%, respectively. CONCLUSION: DV may be a useful biomarker for early prediction of therapeutic outcome in intermediate HCC.

17.
F1000Res ; 6: 1033, 2017.
Article in English | MEDLINE | ID: mdl-28751970

ABSTRACT

Acute Kidney Injury (AKI), an abrupt deterioration in kidney function, is defined by changes in urine output or serum creatinine. AKI is common (affecting up to 20% of acute hospital admissions in the United Kingdom), associated with significant morbidity and mortality, and expensive (excess costs to the National Health Service in England alone may exceed £1 billion per year). NHS England has mandated the implementation of an automated algorithm to detect AKI based on changes in serum creatinine, and to alert clinicians. It is uncertain, however, whether 'alerting' alone improves care quality. We have thus developed a digitally-enabled care pathway as a clinical service to inpatients in the Royal Free Hospital (RFH), a large London hospital. This pathway incorporates a mobile software application - the "Streams-AKI" app, developed by DeepMind Health - that applies the NHS AKI algorithm to routinely collected serum creatinine data in hospital inpatients. Streams-AKI alerts clinicians to potential AKI cases, furnishing them with a trend view of kidney function alongside other relevant data, in real-time, on a mobile device. A clinical response team comprising nephrologists and critical care nurses responds to these AKI alerts by reviewing individual patients and administering interventions according to existing clinical practice guidelines. We propose a mixed methods service evaluation of the implementation of this care pathway. This evaluation will assess how the care pathway meets the health and care needs of service users (RFH inpatients), in terms of clinical outcome, processes of care, and NHS costs. It will also seek to assess acceptance of the pathway by members of the response team and wider hospital community. All analyses will be undertaken by the service evaluation team from UCL (Department of Applied Health Research) and St George's, University of London (Population Health Research Institute).
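
For orientation only, the sketch below shows a deliberately simplified creatinine-ratio alerting rule in the spirit of KDIGO-style staging. It is not the NHS England AKI algorithm that Streams-AKI implements, which uses specific baseline look-back windows and additional rules.

```python
# Deliberately simplified illustration of ratio-based AKI alerting on serum
# creatinine. The production NHS England algorithm is more involved; this only
# conveys the general idea of comparing a new result against a baseline.
def aki_alert(current_creatinine, baseline_creatinine):
    """Return an alert stage if the creatinine ratio crosses a threshold, else None."""
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return "AKI stage 3 alert"
    if ratio >= 2.0:
        return "AKI stage 2 alert"
    if ratio >= 1.5:
        return "AKI stage 1 alert"
    return None

print(aki_alert(current_creatinine=180.0, baseline_creatinine=90.0))  # AKI stage 2 alert
```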

18.
F1000Res ; 5: 1573, 2016.
Article in English | MEDLINE | ID: mdl-27830057

ABSTRACT

There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies, including neovascular ("wet") age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the 'back' of the eye) and Optical Coherence Tomography (OCT, a modality that uses light waves in a similar way to how ultrasound uses sound waves). Changes in population demographics and expectations, and the changing pattern of chronic diseases, create a rising demand for such imaging. Meanwhile, interrogation of such images is time consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to the automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring therapeutic success.

19.
Eur Radiol ; 24(1): 112-9, 2014 Jan.
Article in English | MEDLINE | ID: mdl-23949726

ABSTRACT

OBJECTIVE: To investigate whether tracer kinetic modelling of low temporal resolution dynamic contrast-enhanced (DCE) MRI with Gd-EOB-DTPA could replace technetium-99m galactosyl human serum albumin (GSA) single-photon emission computed tomography (SPECT) and indocyanine green (ICG) retention for the measurement of liver functional reserve. METHODS: Twenty-eight patients awaiting liver resection for various cancers were included in this retrospective study, which was approved by the institutional review board. The Gd-EOB-DTPA MRI sequence acquired five images: unenhanced, double arterial phase, portal phase, and 4 min after injection. The intracellular contrast uptake rate (UR) and extracellular volume (Ve) were calculated from DCE-MRI, along with the ratio of GSA radioactivity of liver to heart-plus-liver and the per cent of cumulative uptake from 15-16 min (LHL15 and LU15, respectively) from GSA scintigraphy. ICG retention at 15 min (ICG15), Child-Pugh cirrhosis score (CPS) and postoperative Inuyama fibrosis criteria were also recorded. Statistical analysis was performed with Spearman rank correlation. RESULTS: Comparing MRI parameters with the reference methods, significant correlations were obtained for UR and LHL15, LU15, ICG15 (all 0.4-0.6, P < 0.05); UR and CPS (-0.64, P < 0.001); Ve and Inuyama (0.44, P < 0.05). CONCLUSION: Measures of liver function obtained by routine Gd-EOB-DTPA DCE-MRI with tracer kinetic modelling may provide a suitable method for the evaluation of liver functional reserve. KEY POINTS: • Magnetic resonance imaging (MRI) provides new methods of measuring hepatic functional reserve. • DCE-MRI with Gd-EOB-DTPA offers the possibility of replacing scintigraphy. • The analysis method can be used for preoperative liver function evaluation.
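
The correlations reported above are Spearman rank correlations; a minimal sketch of that calculation with illustrative values, not the study data:

```python
# Spearman rank correlation between an MRI-derived parameter (e.g. uptake rate
# UR) and a reference measure (e.g. LHL15). Values are illustrative.
from scipy.stats import spearmanr

ur    = [0.12, 0.25, 0.18, 0.30, 0.22, 0.15, 0.28, 0.10]
lhl15 = [0.85, 0.93, 0.88, 0.95, 0.90, 0.87, 0.94, 0.82]

rho, p_value = spearmanr(ur, lhl15)
print(f"rho = {rho:.2f}, P = {p_value:.3f}")
```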


Subject(s)
Gadolinium DTPA , Indocyanine Green , Liver Neoplasms/diagnosis , Magnetic Resonance Imaging/methods , Technetium Tc 99m Aggregated Albumin , Technetium Tc 99m Pentetate , Tomography, Emission-Computed, Single-Photon/methods , Aged , Coloring Agents , Contrast Media , Female , Follow-Up Studies , Hepatectomy , Humans , Liver/diagnostic imaging , Liver/pathology , Liver Function Tests , Liver Neoplasms/metabolism , Liver Neoplasms/surgery , Male , Radiopharmaceuticals , Reproducibility of Results , Retrospective Studies
20.
J Magn Reson Imaging ; 38(6): 1554-63, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23857776

ABSTRACT

PURPOSE: To identify the optimal tracer-kinetic modeling strategy for dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data acquired at low temporal resolution. MATERIALS AND METHODS: DCE-MRI was performed on 13 patients with rheumatoid arthritis of the hand before and after anti-tumor necrosis factor alpha (TNFα) therapy, using a 3D sequence with a temporal resolution of 13 seconds, imaging for 4 minutes postcontrast injection. Concentration-time curves were extracted from regions of interest (ROIs) in enhancing synovium and fitted to the 3-parameter modified Tofts model (MT) and the 4-parameter two-compartment exchange model (2CXM). To assist the interpretation of the data, the same analysis was applied to simulated data with similar characteristics. RESULTS: Both models fitted the data closely, and showed similar therapy effects. The MT plasma volume was significantly lower than with 2CXM, but the differences in permeability and interstitial volume were not significant. 2CXM was less precise than MT, with larger standard deviations relative to the mean in most parameters. The additional perfusion parameter determined with 2CXM did not provide a statistically significant trend due to low precision. CONCLUSION: The standard MT model is the optimal modeling strategy at low temporal resolution. Advanced models improve the accuracy and generate an additional parameter, but these benefits are offset by low precision.
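
For reference, the two models compared above have standard published forms; the equations below are the textbook formulations (not reproduced from this paper), with C_t the tissue concentration, C_p the arterial plasma concentration, K^trans the transfer constant, v_e and v_p the interstitial and plasma volume fractions, and F_p the plasma flow.

```latex
% Modified Tofts (MT) model: three parameters (K^trans, v_e, v_p)
C_t(t) = v_p\,C_p(t) + K^{\mathrm{trans}} \int_0^t C_p(\tau)\,
         e^{-K^{\mathrm{trans}}(t-\tau)/v_e}\,\mathrm{d}\tau

% Two-compartment exchange model (2CXM): four parameters (F_p, PS, v_e, v_p),
% with a biexponential impulse response R(t) whose constants A, \alpha, \beta
% are determined by F_p, PS, v_e and v_p
C_t(t) = F_p \int_0^t C_p(\tau)\, R(t-\tau)\,\mathrm{d}\tau,
\qquad R(t) = A\,e^{-\alpha t} + (1-A)\,e^{-\beta t}
```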


Subject(s)
Arthritis, Rheumatoid/diagnosis , Arthritis, Rheumatoid/metabolism , Heterocyclic Compounds/pharmacokinetics , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Models, Biological , Organometallic Compounds/pharmacokinetics , Algorithms , Computer Simulation , Female , Humans , Image Enhancement/methods , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity