Results 1 - 20 of 29
1.
Interdiscip Sci ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38954231

ABSTRACT

To elucidate the genetic basis of complex diseases, it is crucial to discover the single-nucleotide polymorphisms (SNPs) contributing to disease susceptibility. This is particularly challenging for high-order SNP epistatic interactions (HEIs), which exhibit small individual effects but potentially large joint effects. These interactions are difficult to detect due to the vast search space, encompassing billions of possible combinations, and the computational complexity of evaluating them. This study proposes a novel explicit-encoding-based multitasking harmony search algorithm (MTHS-EE-DHEI) specifically designed to address this challenge. The algorithm operates in three stages. First, a harmony search algorithm is employed, utilizing four lightweight evaluation functions, such as Bayesian network and entropy, to efficiently explore potential SNP combinations related to disease status. Second, a G-test statistical method is applied to filter out insignificant SNP combinations. Finally, two machine learning-based methods, multifactor dimensionality reduction (MDR) and random forest (RF), are employed to validate the classification performance of the remaining significant SNP combinations. This research aims to demonstrate the effectiveness of MTHS-EE-DHEI in identifying HEIs compared to existing methods, potentially providing valuable insights into the genetic architecture of complex diseases. The performance of MTHS-EE-DHEI was evaluated on twenty simulated disease datasets and three real-world datasets encompassing age-related macular degeneration (AMD), rheumatoid arthritis (RA), and breast cancer (BC). The results indicate that MTHS-EE-DHEI outperforms four state-of-the-art algorithms in terms of both detection power and computational efficiency. The source code is available at https://github.com/shouhengtuo/MTHS-EE-DHEI.git.
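For readers unfamiliar with the second-stage filter mentioned above, the following is a minimal, illustrative Python sketch (not the authors' implementation) of a G-test applied to one candidate SNP combination: the joint genotypes are cross-tabulated against case/control status and scored with the log-likelihood-ratio variant of the chi-squared test. The function name, encoding, and significance threshold are placeholders.

```python
# Minimal sketch of a G-test filter for one candidate SNP combination (not the
# authors' implementation). A k-SNP combination is collapsed into joint
# genotype classes and tested against case/control status.
import numpy as np
from scipy.stats import chi2_contingency

def g_test_snp_combination(genotypes, status, alpha=0.05):
    """genotypes: (n_samples, k) array of 0/1/2 codes; status: (n_samples,) 0/1."""
    genotypes = np.asarray(genotypes)
    status = np.asarray(status)
    # Encode each joint genotype, e.g. (0, 2, 1), as one categorical class.
    joint = np.array([hash(tuple(row)) for row in genotypes])
    classes = np.unique(joint)
    # Build the 2 x m contingency table of status vs joint genotype class.
    table = np.array([[np.sum((joint == c) & (status == s)) for c in classes]
                      for s in (0, 1)])
    table = table[:, table.sum(axis=0) > 0]          # drop empty genotype columns
    g, p, dof, _ = chi2_contingency(table, lambda_="log-likelihood")  # G-test
    return g, p, p < alpha

# Example with random data (no real signal expected):
rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(200, 2))
stat = rng.integers(0, 2, size=200)
print(g_test_snp_combination(geno, stat))
```

In a real screen this test would be applied to every combination surviving the harmony-search stage, typically with a multiple-testing-corrected threshold rather than the nominal alpha used here.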

2.
NPJ Digit Med ; 7(1): 181, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38971902

ABSTRACT

The main cause of corneal blindness worldwide is keratitis, especially the infectious form caused by bacteria, fungi, viruses, and Acanthamoeba. Effective management of infectious keratitis hinges on prompt and precise diagnosis. Nevertheless, the current gold standard, culture of corneal scrapings, remains time-consuming and frequently yields false-negative results. Here, using 23,055 slit-lamp images collected from 12 clinical centers nationwide, this study constructed a clinically feasible deep learning system, DeepIK, that could emulate the diagnostic process of a human expert to identify and differentiate bacterial, fungal, viral, amebic, and noninfectious keratitis. DeepIK exhibited remarkable performance in internal, external, and prospective datasets (all areas under the receiver operating characteristic curves > 0.96) and outperformed three other state-of-the-art algorithms (DenseNet121, InceptionResNetV2, and Swin-Transformer). Our study indicates that DeepIK can assist ophthalmologists in accurately and swiftly identifying various types of infectious keratitis from slit-lamp images, thereby facilitating timely and targeted treatment.
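As a rough illustration of the kind of backbone the paper compares against (not DeepIK itself), a minimal transfer-learning sketch for five-way keratitis classification from slit-lamp images might look like the following; the dataset path, image size, and hyperparameters are placeholders.

```python
# Minimal transfer-learning sketch (not DeepIK): fine-tune DenseNet121 for
# 5-way keratitis classification. Paths and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

NUM_CLASSES = 5  # bacterial, fungal, viral, amebic, noninfectious

model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("slitlamp/train", transform=preprocess)  # placeholder path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one pass shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```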

3.
bioRxiv ; 2024 May 21.
Article in English | MEDLINE | ID: mdl-38826238

ABSTRACT

Over 95% of pancreatic ductal adenocarcinomas (PDAC) harbor oncogenic mutations in K-Ras. Upon treatment with K-Ras inhibitors, PDAC cancer cells undergo metabolic reprogramming towards an oxidative phosphorylation-dependent, drug-resistant state. However, direct inhibition of complex I is poorly tolerated in patients due to on-target induction of peripheral neuropathy. In this work, we develop molecular glue degraders against ZBTB11, a C2H2 zinc finger transcription factor that regulates the nuclear transcription of components of the mitoribosome and electron transport chain. Our ZBTB11 degraders leverage the differences in demand for biogenesis of mitochondrial components between human neurons and rapidly dividing pancreatic cancer cells to selectively target the K-Ras inhibitor-resistant state in PDAC. Combination treatment of both K-Ras inhibitor-resistant cell lines and multidrug-resistant patient-derived organoids resulted in superior anti-cancer activity compared to single-agent treatment, while sparing hiPSC-derived neurons. Proteomic and stable isotope tracing studies revealed mitoribosome depletion and impairment of the TCA cycle as key events that mediate this response. Together, this work validates ZBTB11 as a vulnerability in K-Ras inhibitor-resistant PDAC and provides a suite of molecular glue degrader tool compounds to investigate its function.

4.
Biomed Eng Online ; 23(1): 25, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38419078

ABSTRACT

BACKGROUND: The accurate detection of eyelid tumors is essential for effective treatment, but it can be challenging due to small and unevenly distributed lesions surrounded by irrelevant noise. Moreover, early symptoms of eyelid tumors are atypical, and some categories of eyelid tumors exhibit similar color and texture features, making it difficult to distinguish between benign and malignant eyelid tumors, particularly for ophthalmologists with limited clinical experience. METHODS: We propose a hybrid model, HM_ADET, for automatic detection of eyelid tumors, including YOLOv7_CNFG to locate eyelid tumors and a vision transformer (ViT) to classify benign and malignant eyelid tumors. First, the ConvNeXt module with an inverted bottleneck layer in the backbone of YOLOv7_CNFG is employed to prevent information loss of small eyelid tumors. Then, the flexible rectified linear unit (FReLU) is applied to capture multi-scale features such as texture, edge, and shape, thereby improving the localization accuracy of eyelid tumors. In addition, considering the geometric center and area difference between the predicted box (PB) and the ground truth box (GT), the GIoU loss is utilized to handle eyelid tumors with varying shapes and irregular boundaries. Finally, the multi-head attention (MHA) module is applied in ViT to extract discriminative features of eyelid tumors for benign and malignant classification. RESULTS: Experimental results demonstrate that the HM_ADET model achieves excellent performance in the detection of eyelid tumors. Specifically, YOLOv7_CNFG outperforms YOLOv7, with AP increasing from 0.763 to 0.893 on the internal test set and from 0.647 to 0.765 on the external test set. ViT achieves AUCs of 0.945 (95% CI 0.894-0.981) and 0.915 (95% CI 0.860-0.955) for the classification of benign and malignant tumors on the internal and external test sets, respectively. CONCLUSIONS: Our study provides a promising strategy for the automatic diagnosis of eyelid tumors, which could potentially improve patient outcomes and reduce healthcare costs.
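Since the localization branch relies on a GIoU loss, the sketch below works through the generalized IoU between a predicted box (PB) and a ground-truth box (GT) for axis-aligned (x1, y1, x2, y2) boxes; the corresponding regression loss is 1 - GIoU. This is a generic illustration, not code from HM_ADET.

```python
# Minimal sketch of generalized IoU (GIoU) between two axis-aligned boxes
# given as (x1, y1, x2, y2); the box-regression loss is then 1 - GIoU.
def giou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection area
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union area
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box: penalizes predictions far from the ground truth
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    enclose = (cx2 - cx1) * (cy2 - cy1)
    return iou - (enclose - union) / enclose

pred, gt = (10, 10, 50, 60), (20, 15, 55, 70)
print("GIoU:", round(giou(pred, gt), 3), "loss:", round(1 - giou(pred, gt), 3))
```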


Subjects
Eyelid Neoplasms, Humans, Eyelid Neoplasms/diagnosis, Area Under Curve, Health Care Costs
5.
Nat Commun ; 14(1): 8016, 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38049406

ABSTRACT

Understanding how small molecules bind to specific protein complexes in living cells is critical to understanding their mechanism-of-action. Unbiased chemical biology strategies for direct readout of protein interactome remodelling by small molecules would provide advantages over target-focused approaches, including the ability to detect previously unknown ligand targets and complexes. However, there are few current methods for unbiased profiling of small molecule interactomes. To address this, we envisioned a technology that would combine the sensitivity and live-cell compatibility of proximity labelling coupled to mass spectrometry, with the specificity and unbiased nature of chemoproteomics. In this manuscript, we describe the BioTAC system, a small-molecule guided proximity labelling platform that can rapidly identify both direct and complexed small molecule binding proteins. We benchmark the system against µMap, photoaffinity labelling, affinity purification coupled to mass spectrometry and proximity labelling coupled to mass spectrometry datasets. We also apply the BioTAC system to provide interactome maps of Trametinib and analogues. The BioTAC system overcomes a limitation of current approaches and supports identification of both inhibitor bound and molecular glue bound complexes.


Subjects
Biotin, Proteins, Proteins/metabolism, Affinity Chromatography, Mass Spectrometry/methods, Photoaffinity Labels/chemistry
6.
bioRxiv ; 2023 Aug 22.
Article in English | MEDLINE | ID: mdl-37662262

ABSTRACT

Unbiased chemical biology strategies for direct readout of protein interactome remodelling by small molecules provide advantages over target-focused approaches, including the ability to detect previously unknown targets, and the inclusion of chemical off-compete controls leading to high-confidence identifications. We describe the BioTAC system, a small-molecule guided proximity labelling platform, to rapidly identify both direct and complexed small molecule binding proteins. The BioTAC system overcomes a limitation of current approaches, and supports identification of both inhibitor bound and molecular glue bound complexes.

7.
Front Cell Dev Biol ; 11: 1197239, 2023.
Article in English | MEDLINE | ID: mdl-37576595

ABSTRACT

Purpose: To develop a visual function-based deep learning system (DLS) using fundus images to screen for visually impaired cataracts. Materials and methods: A total of 8,395 fundus images (5,245 subjects) with corresponding visual function parameters collected from three clinical centers were used to develop and evaluate a DLS for classifying non-cataracts, mild cataracts, and visually impaired cataracts. Three deep learning algorithms (DenseNet121, Inception V3, and ResNet50) were leveraged to train models to obtain the best one for the system. The performance of the system was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. Results: The AUCs of the best algorithm (DenseNet121) on the internal test dataset and the two external test datasets were 0.998 (95% CI, 0.996-0.999) to 0.999 (95% CI, 0.998-1.000), 0.938 (95% CI, 0.924-0.951) to 0.966 (95% CI, 0.946-0.983), and 0.937 (95% CI, 0.918-0.953) to 0.977 (95% CI, 0.962-0.989), respectively. In the comparison between the system and cataract specialists, the system performed better at detecting visually impaired cataracts (p < 0.05). Conclusion: Our study shows the potential of a function-focused screening tool to identify visually impaired cataracts from fundus images, enabling timely patient referral to tertiary eye hospitals.
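The AUCs above are reported with 95% confidence intervals; one common way to obtain such intervals is nonparametric bootstrapping of the test set, sketched below for a single class-versus-rest task. This is illustrative only; the abstract does not state which CI method the authors used.

```python
# Minimal sketch of an AUC with a bootstrap 95% CI for one class-vs-rest task.
# y_true and y_score below are placeholder data.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    point = roc_auc_score(y_true, y_score)
    boot = []
    n = len(y_true)
    while len(boot) < n_boot:
        idx = rng.integers(0, n, n)          # resample with replacement
        if len(np.unique(y_true[idx])) < 2:  # need both classes in the resample
            continue
        boot.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return point, lo, hi

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.3, 0.8, 0.6, 0.2, 0.9, 0.4, 0.7, 0.55, 0.35])
print(auc_with_ci(y_true, y_score))
```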

8.
Cell Rep Med ; 4(7): 101095, 2023 07 18.
Article in English | MEDLINE | ID: mdl-37385253

ABSTRACT

Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In the field of ophthalmology, AI systems have shown performance comparable with or even better than that of experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. However, despite these promising results, very few AI systems have been deployed in real-world clinical settings, calling the true value of these systems into question. This review provides an overview of the current main AI applications in ophthalmology, describes the challenges that need to be overcome prior to clinical implementation of the AI systems, and discusses the strategies that may pave the way to the clinical translation of these systems.


Subjects
Artificial Intelligence, Ophthalmology, Humans, Ophthalmology/methods
9.
Arch Pharm (Weinheim) ; 355(11): e2200288, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35941525

ABSTRACT

Based on a previously reported 1,4-dihydropyridine butyrolactone virtual screening hit, nine lactone ring-opened ester and seven amide analogs were prepared. The analogs were designed to provide interactions with residues at the entrance of the ZA channel of the testis-specific bromodomain (BRDT) to enhance the affinity and selectivity for the bromodomain and extra-terminal (BET) subfamily of bromodomains. Compound testing by AlphaScreen showed that neither the affinity nor the selectivity of the ester and lactam analogs was improved for BRD4-1 and the first bromodomain of BRDT (BRDT-1). The esters retained affinity comparable to the parent compound, whereas the affinity of the amide analogs was reduced 10-fold. A representative benzyl ester analog was found to retain high selectivity for BET bromodomains as shown by a BROMOscan. X-ray analysis of the allyl ester analog in complex with BRD4-1 and BRDT-1 revealed that the ester side chain is located next to the ZA loop and solvent exposed.


Subjects
Nuclear Proteins, Transcription Factors, Humans, Male, Amides/pharmacology, Cell Cycle Proteins, Esters/pharmacology, Nuclear Proteins/chemistry, Nuclear Proteins/metabolism, Structure-Activity Relationship, Lactones/chemistry
10.
NPJ Digit Med ; 5(1): 23, 2022 Mar 02.
Article in English | MEDLINE | ID: mdl-35236921

ABSTRACT

Malignant eyelid tumors can invade adjacent structures and pose a threat to vision and even life. Early identification of malignant eyelid tumors is crucial to avoiding substantial morbidity and mortality. However, differentiating malignant eyelid tumors from benign ones can be challenging for primary care physicians and even some ophthalmologists. Here, based on 1,417 photographic images from 851 patients across three hospitals, we developed an artificial intelligence system using a faster region-based convolutional neural network and deep learning classification networks to automatically locate eyelid tumors and then distinguish between malignant and benign eyelid tumors. The system performed well in both internal and external test sets (AUCs ranged from 0.899 to 0.955). The performance of the system is comparable to that of a senior ophthalmologist, indicating that this system has the potential to be used at the screening stage for promoting the early detection and treatment of malignant eyelid tumors.
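A minimal sketch of a detect-then-classify pipeline of the kind described above (not the published system): an off-the-shelf Faster R-CNN proposes candidate boxes, and each crop is passed to a separate benign/malignant classifier. The classifier module and score threshold are placeholders.

```python
# Minimal two-stage sketch (not the published system): generic Faster R-CNN
# detection followed by per-crop benign/malignant classification.
import torch
import torch.nn.functional as F
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_and_classify(pil_image, classifier, score_thresh=0.5):
    """classifier: any module mapping a (1, 3, 224, 224) crop to one malignancy logit."""
    img = to_tensor(pil_image)
    with torch.no_grad():
        det = detector([img])[0]              # dict with 'boxes', 'scores', 'labels'
    results = []
    for box, score in zip(det["boxes"], det["scores"]):
        if score < score_thresh:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        crop = F.interpolate(img[:, y1:y2, x1:x2].unsqueeze(0), size=(224, 224))
        with torch.no_grad():
            p_malignant = torch.sigmoid(classifier(crop)).item()
        results.append({"box": (x1, y1, x2, y2), "p_malignant": p_malignant})
    return results
```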

11.
Physiol Meas ; 43(2)2022 03 17.
Article in English | MEDLINE | ID: mdl-35297780

ABSTRACT

Objective. Cardiac activity changes during sleep enable real-time sleep staging. We developed a deep neural network (DNN) to detect sleep stages using interbeat intervals (IBIs) extracted from electrocardiogram signals. Approach. Data from healthy and apnea subjects were used for training and validation; 2 additional datasets (healthy and sleep disorders subjects) were used for testing. R-peak detection was used to determine IBIs before resampling at 2 Hz; the resulting signal was segmented into 150 s windows (30 s shift). DNN output approximated the probabilities of a window belonging to light, deep, REM, or wake stages. Cohen's Kappa, accuracy, and sensitivity/specificity per stage were determined, and Kappa was optimized using thresholds on probability ratios for each stage versus light sleep. Main results. Mean (SD) Kappa and accuracy for 4 sleep stages were 0.44 (0.09) and 0.65 (0.07), respectively, in healthy subjects. For 3 sleep stages (light+deep, REM, and wake), Kappa and accuracy were 0.52 (0.12) and 0.76 (0.07), respectively. Algorithm performance on data from subjects with REM behavior disorder or periodic limb movement disorder was significantly worse, with Kappa of 0.24 (0.09) and 0.36 (0.12), respectively. Average processing time by an ARM microprocessor for a 300-sample window was 19.2 ms. Significance. IBIs can be obtained from a variety of cardiac signals, including electrocardiogram, photoplethysmography, and ballistocardiography. The DNN algorithm presented is 3 orders of magnitude smaller compared with state-of-the-art algorithms and was developed to perform real-time, IBI-based sleep staging. With high specificity and moderate sensitivity for deep and REM sleep, small footprint, and causal processing, this algorithm may be used across different platforms to perform real-time sleep staging and direct intervention strategies. Novelty & Significance. This article describes the development and testing of a deep neural network-based algorithm to detect sleep stages using interbeat intervals, which can be obtained from a variety of cardiac signals including photoplethysmography, electrocardiogram, and ballistocardiography. Based on the interbeat intervals identified in electrocardiogram signals, the algorithm architecture included a group of convolution layers and a group of long short-term memory layers. With its small footprint, fast processing time, high specificity and good sensitivity for deep and REM sleep, this algorithm may provide a good option for real-time sleep staging to direct interventions.
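The preprocessing described in the Approach is concrete enough to sketch: R-peak times are converted to an interbeat-interval series, resampled to 2 Hz, and cut into 150 s windows advanced in 30 s steps (300 samples per window, matching the window size quoted above). This is an illustrative reconstruction, not the authors' code.

```python
# Minimal sketch of the IBI preprocessing described above (not the authors' code):
# R-peak times -> interbeat intervals -> 2 Hz resampling -> 150 s windows, 30 s shift.
import numpy as np

def ibi_windows(r_peak_times, fs=2.0, win_s=150, step_s=30):
    """r_peak_times: R-peak timestamps in seconds, ascending."""
    r = np.asarray(r_peak_times, dtype=float)
    ibi = np.diff(r)                                   # interbeat intervals (s)
    t_ibi = r[1:]                                      # each IBI stamped at its ending beat
    t_grid = np.arange(t_ibi[0], t_ibi[-1], 1.0 / fs)  # uniform 2 Hz time grid
    ibi_2hz = np.interp(t_grid, t_ibi, ibi)            # resampled IBI signal
    win, step = int(win_s * fs), int(step_s * fs)      # 300 and 60 samples
    starts = range(0, len(ibi_2hz) - win + 1, step)
    return np.stack([ibi_2hz[s:s + win] for s in starts])  # shape (n_windows, 300)

# Example: a synthetic ~10-minute recording with roughly 1 s beats.
rng = np.random.default_rng(0)
peaks = np.cumsum(rng.normal(1.0, 0.05, size=600))
print(ibi_windows(peaks).shape)   # about 15 windows of length 300
```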


Subjects
Photoplethysmography, Sleep Stages, Algorithms, Humans, Neural Networks (Computer), Sleep
12.
ChemMedChem ; 17(1): e202100407, 2022 01 05.
Article in English | MEDLINE | ID: mdl-34932262

ABSTRACT

Inhibitors of Bromodomain and Extra Terminal (BET) proteins are investigated for various therapeutic indications, but selectivity for BRD2, BRD3, BRD4, BRDT and their respective tandem bromodomains BD1 and BD2 remains suboptimal. Here we report selectivity-focused structural modifications of previously reported dihydropyridine lactam 6 by changing linker length and linker type of the lactam side chain in efforts to engage the unique arginine 54 (R54) residue in BRDT-BD1 to achieve BRDT-selective affinity. We found that the analogs were highly selective for BET bromodomains, and generally more selective for the first (BD1) and second (BD2) bromodomains of BRD4 rather than for those of BRDT. Based on AlphaScreen and BromoScan results and on crystallographic data for analog 10j, we concluded that the lack of selectivity for BRDT is most likely due to the high flexibility of the protein and the unfavorable trajectory of the lactam side chain that do not allow interaction with R54. A 15-fold preference for BD2 over BD1 in BRDT was observed for analogs 10h and 10m, which was supported by protein-based 19F NMR experiments with a BRDT tandem bromodomain protein construct.


Subjects
Dihydropyridines/pharmacology, Lactams/pharmacology, Nuclear Proteins/antagonists & inhibitors, Dihydropyridines/chemistry, Dose-Response Relationship (Drug), Humans, Lactams/chemistry, Molecular Structure, Nuclear Proteins/metabolism, Structure-Activity Relationship
13.
iScience ; 24(11): 103317, 2021 Nov 19.
Article in English | MEDLINE | ID: mdl-34778732

ABSTRACT

The performance of deep learning in disease detection from high-quality clinical images is comparable to, and sometimes even greater than, that of human doctors. However, in low-quality images, deep learning performs poorly. Whether human doctors also perform poorly on low-quality images is unknown. Here, we compared the performance of deep learning systems with that of cornea specialists in detecting corneal diseases from low-quality slit-lamp images. The results showed that the cornea specialists performed better than our previously established deep learning system (PEDLS) trained on only high-quality images. The performance of the system trained on both high- and low-quality images was superior to that of the PEDLS but inferior to that of a senior cornea specialist. This study highlights that cornea specialists perform better on low-quality images than a system trained on high-quality images alone. Adding low-quality images with sufficient diagnostic certainty to the training set can reduce this performance gap.

14.
Nat Commun ; 12(1): 3738, 2021 06 18.
Article in English | MEDLINE | ID: mdl-34145294

ABSTRACT

Keratitis is the main cause of corneal blindness worldwide. Most vision loss caused by keratitis can be avoided through early detection and treatment. The diagnosis of keratitis often requires skilled ophthalmologists. However, the world is short of ophthalmologists, especially in resource-limited settings, making the early diagnosis of keratitis challenging. Here, we develop a deep learning system for the automated classification of keratitis, other cornea abnormalities, and normal cornea based on 6,567 slit-lamp images. Our system exhibits remarkable performance on cornea images captured by different types of digital slit-lamp cameras and by a smartphone with the super macro mode (all AUCs > 0.96). Comparable sensitivity and specificity in keratitis detection are observed between the system and experienced cornea specialists. Our system has the potential to be applied to both digital slit-lamp cameras and smartphones to promote the early diagnosis and treatment of keratitis, preventing corneal blindness caused by keratitis.


Subjects
Blindness/prevention & control, Cornea/pathology, Deep Learning, Keratitis/diagnosis, Early Diagnosis, Humans, Keratitis/therapy, Medically Underserved Area, Sensitivity and Specificity
15.
Front Med (Lausanne) ; 8: 664023, 2021.
Article in English | MEDLINE | ID: mdl-34026791

ABSTRACT

Infantile cataract is the main cause of infant blindness worldwide. Although previous studies developed artificial intelligence (AI) diagnostic systems for detecting infantile cataracts in a single center, their generalizability is not ideal because of the complex noise and heterogeneity of multicenter slit-lamp images, which impedes the application of these AI systems in real-world clinics. In this study, we developed two lens partition strategies (LPSs) based on deep learning Faster R-CNN and the Hough transform to improve the generalizability of infantile cataract detection. A total of 1,643 multicenter slit-lamp images collected from five ophthalmic clinics were used to evaluate the performance of the LPSs. The generalizability of Faster R-CNN for screening and grading was explored by sequentially adding multicenter images to the training dataset. For the partition of normal and abnormal lenses, Faster R-CNN achieved average intersection-over-union values of 0.9419 and 0.9107, respectively, and its average precisions were both > 95%. Compared with the Hough transform, the accuracy, specificity, and sensitivity of Faster R-CNN for opacity area grading were improved by 5.31%, 8.09%, and 3.29%, respectively. Similar improvements were observed for the grading of opacity density and location. The minimal training sample size required by Faster R-CNN was determined on multicenter slit-lamp images. Furthermore, Faster R-CNN achieved real-time lens partition in only 0.25 s per image, whereas the Hough transform needs 34.46 s. Finally, using Grad-CAM and t-SNE techniques, the most relevant lesion regions were highlighted in heatmaps, and the high-level features were discriminated. This study provides an effective LPS for improving the generalizability of infantile cataract detection. This system has the potential to be applied to multicenter slit-lamp images.
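For context on the Hough-transform baseline, a minimal OpenCV sketch for locating the roughly circular lens region in a grayscale slit-lamp image is shown below; the parameter values and file name are placeholders that would need tuning for a real imaging setup, and this is not the published implementation.

```python
# Minimal sketch of a Hough-transform lens localization baseline (not the
# published code): detect the strongest circular region in a grayscale image.
import cv2
import numpy as np

def locate_lens(gray_image):
    blurred = cv2.medianBlur(gray_image, 5)             # suppress speckle noise
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=200,
        param1=100, param2=40, minRadius=80, maxRadius=400)  # placeholder parameters
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)        # strongest circle
    return x, y, r                                        # centre and radius in pixels

img = cv2.imread("slitlamp.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
if img is not None:
    print(locate_lens(img))
```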

16.
Ann Transl Med ; 9(7): 550, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33987248

ABSTRACT

BACKGROUND: Lens opacity seriously affects the visual development of infants. Slit-illumination images play an irreplaceable role in lens opacity detection; however, these images exhibit varied phenotypes with severe heterogeneity and complexity, particularly among pediatric cataracts. An effective computer-aided method is therefore urgently needed to automatically diagnose heterogeneous lens opacity and to provide appropriate treatment recommendations in a timely manner. METHODS: We integrated three different deep learning networks and a cost-sensitive method into an ensemble learning architecture and proposed an effective model called CCNN-Ensemble [ensemble of cost-sensitive convolutional neural networks (CNNs)] for automatic lens opacity detection. A total of 470 slit-illumination images of pediatric cataracts were used for training and for comparison between the CCNN-Ensemble model and conventional methods. Finally, we used two external datasets (132 independent test images and 79 Internet-based images) to further evaluate the model's generalizability and effectiveness. RESULTS: Experimental results and comparative analyses demonstrated that the proposed method was superior to conventional approaches and provided clinically meaningful performance in terms of three grading indices of lens opacity: area (specificity and sensitivity: 92.00% and 92.31%), density (93.85% and 91.43%), and opacity location (95.25% and 89.29%). Furthermore, the comparable performance on the independent test dataset and the Internet-based images verified the effectiveness and generalizability of the model. Finally, we developed and implemented a website-based automatic diagnosis software for pediatric cataract grading in ophthalmology clinics. CONCLUSIONS: The CCNN-Ensemble method demonstrates higher specificity and sensitivity than conventional methods on multi-source datasets. This study provides a practical strategy for heterogeneous lens opacity diagnosis and has the potential to be applied to the analysis of other medical images.
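Two of the ingredients named above, cost-sensitive training and ensembling, can be sketched generically in PyTorch (this is not the CCNN-Ensemble code): a class-weighted cross-entropy loss that penalizes errors on rarer grading classes more heavily, and soft-voting over the member networks' softmax outputs. The class counts and network list are placeholders.

```python
# Minimal sketch of a cost-sensitive loss and soft-voting ensemble (not the
# CCNN-Ensemble implementation). Counts, weights, and models are placeholders.
import torch
import torch.nn as nn

NUM_CLASSES = 3                                    # e.g. grading levels of one index
class_counts = torch.tensor([320.0, 100.0, 50.0])  # placeholder per-class sample counts
cost_weights = class_counts.sum() / (NUM_CLASSES * class_counts)
criterion = nn.CrossEntropyLoss(weight=cost_weights)  # rarer classes cost more

def ensemble_predict(models, images):
    """Soft-voting: average the softmax probabilities of the member CNNs."""
    probs = [torch.softmax(m(images), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)
```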

17.
Comput Methods Programs Biomed ; 203: 106048, 2021 May.
Article in English | MEDLINE | ID: mdl-33765481

ABSTRACT

BACKGROUND AND OBJECTIVE: Previous studies developed artificial intelligence (AI) diagnostic systems using only eligible slit-lamp images for detecting corneal diseases. However, images of ineligible quality (including poor-field, defocused, and poor-location images), which are inevitable in the real world, can cause loss of diagnostic information and thus affect downstream AI-based image analysis. Manual evaluation of the eligibility of slit-lamp images often requires an ophthalmologist, and this procedure can be time-consuming and labor-intensive when applied on a large scale. Here, we aimed to develop a deep learning-based image quality control system (DLIQCS) to automatically detect and filter out ineligible slit-lamp images (poor-field, defocused, and poor-location images). METHODS: We developed and externally evaluated the DLIQCS based on 48,530 slit-lamp images (19,890 individuals) that were derived from 4 independent institutions using different types of digital slit-lamp cameras. To find the best deep learning model for the DLIQCS, we used 3 algorithms (AlexNet, DenseNet121, and InceptionV3) to train models. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy were leveraged to assess the performance of each algorithm for the classification of poor-field, defocused, poor-location, and eligible images. RESULTS: In an internal test dataset, the best algorithm, DenseNet121, had AUCs of 0.999, 1.000, 1.000, and 1.000 in the detection of poor-field, defocused, poor-location, and eligible images, respectively. In external test datasets, the AUCs of the best algorithm, DenseNet121, for identifying poor-field, defocused, poor-location, and eligible images ranged from 0.997 to 0.997, 0.983 to 0.995, 0.995 to 0.998, and 0.999 to 0.999, respectively. CONCLUSIONS: Our DLIQCS can accurately detect poor-field, defocused, poor-location, and eligible slit-lamp images in an automated fashion. This system may serve as a prescreening tool to filter out ineligible images and ensure that only eligible images are transferred to subsequent AI diagnostic systems.
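The per-class AUCs quoted for the four image categories are typically computed one-versus-rest from the predicted class probabilities; a minimal sketch of that evaluation is given below, illustrative only and run here on synthetic data.

```python
# Minimal sketch (not the DLIQCS code) of one-vs-rest AUC evaluation for a
# 4-way image quality model.
import numpy as np
from sklearn.metrics import roc_auc_score

CLASSES = ["poor-field", "defocused", "poor-location", "eligible"]

def per_class_auc(y_true, y_prob):
    """y_true: (n,) integer labels; y_prob: (n, 4) predicted class probabilities."""
    return {name: roc_auc_score((y_true == k).astype(int), y_prob[:, k])
            for k, name in enumerate(CLASSES)}

# Tiny synthetic example (no real signal expected).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=200)
y_prob = rng.dirichlet(np.ones(4), size=200)
print(per_class_auc(y_true, y_prob))
```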


Subjects
Artificial Intelligence, Deep Learning, Algorithms, Humans, Quality Control, Slit Lamp
18.
Int J Med Inform ; 147: 104363, 2021 03.
Article in English | MEDLINE | ID: mdl-33388480

ABSTRACT

BACKGROUND: Recent advances in artificial intelligence (AI) have shown great promise in detecting some diseases based on medical images. Most studies developed AI diagnostic systems using only eligible images. However, in real-world settings, ineligible images (including poor-quality and poor-location images) that can compromise downstream analysis are inevitable, leading to uncertainty about the performance of these AI systems. This study aims to develop a deep learning-based image eligibility verification system (DLIEVS) for detecting and filtering out ineligible fundus images. METHODS: A total of 18,031 fundus images (9,188 subjects) collected from 4 clinical centres were used to develop and evaluate the DLIEVS for detecting eligible, poor-location, and poor-quality fundus images. Four deep learning algorithms (AlexNet, DenseNet121, Inception V3, and ResNet50) were leveraged to train models to obtain the best model for the DLIEVS. The performance of the DLIEVS was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity, as compared with a reference standard determined by retina experts. RESULTS: In the internal test dataset, the best algorithm (DenseNet121) achieved AUCs of 1.000, 0.999, and 1.000 for the classification of eligible, poor-location, and poor-quality images, respectively. In the external test datasets, the AUCs of the best algorithm (DenseNet121) for detecting eligible, poor-location, and poor-quality images ranged from 0.999 to 1.000, 0.997 to 1.000, and 0.997 to 0.999, respectively. CONCLUSIONS: Our DLIEVS can accurately discriminate poor-quality and poor-location images from eligible images. This system has the potential to serve as a pre-screening technique to filter out ineligible images obtained from real-world settings, ensuring that only eligible images are used in subsequent image-based AI diagnostic analyses.


Subjects
Deep Learning, Algorithms, Area Under Curve, Artificial Intelligence, Fundus Oculi, Humans, ROC Curve
19.
NPJ Digit Med ; 3: 112, 2020.
Article in English | MEDLINE | ID: mdl-32904507

ABSTRACT

A challenge of chronic diseases that remains to be solved is how to liberate patients and medical resources from the burdens of long-term monitoring and periodic visits. Precise management based on artificial intelligence (AI) holds great promise; however, a clinical application that fully integrates prediction and telehealth computing has not been achieved, and further efforts are required to validate its real-world benefits. Taking congenital cataract as a representative, we used Bayesian and deep-learning algorithms to create CC-Guardian, an AI agent that incorporates individualized prediction and scheduling, and intelligent telehealth follow-up computing. Our agent exhibits high sensitivity and specificity in both internal and multi-resource validation. We integrate our agent with a web-based smartphone app and prototype a prediction-telehealth cloud platform to support our intelligent follow-up system. We then conduct a retrospective self-controlled test validating that our system not only accurately detects and addresses complications at earlier stages, but also reduces the socioeconomic burdens compared to conventional methods. This study represents a pioneering step in applying AI to achieve real medical benefits and demonstrates a novel strategy for the effective management of chronic diseases.

20.
Br J Ophthalmol ; 103(11): 1553-1560, 2019 11.
Article in English | MEDLINE | ID: mdl-31481392

ABSTRACT

PURPOSE: To establish and validate a universal artificial intelligence (AI) platform for collaborative management of cataracts involving multilevel clinical scenarios, and to explore an AI-based medical referral pattern to improve collaborative efficiency and resource coverage. METHODS: The training and validation datasets were derived from the Chinese Medical Alliance for Artificial Intelligence, covering multilevel healthcare facilities and capture modes. The datasets were labelled using a three-step strategy: (1) capture mode recognition; (2) cataract diagnosis as a normal lens, cataract or a postoperative eye and (3) detection of referable cataracts with respect to aetiology and severity. Moreover, we integrated the cataract AI agent with a real-world multilevel referral pattern involving self-monitoring at home, primary healthcare and specialised hospital services. RESULTS: The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance in three-step tasks: (1) capture mode recognition (area under the curve (AUC) 99.28%-99.71%), (2) cataract diagnosis (normal lens, cataract or postoperative eye, with AUCs of 99.82%, 99.96% and 99.93% for mydriatic-slit lamp mode and AUCs >99% for other capture modes) and (3) detection of referable cataracts (AUCs >91% in all tests). In the real-world tertiary referral pattern, the agent suggested that 30.3% of people be 'referred', substantially increasing the ophthalmologist-to-population service ratio by 10.2-fold compared with the traditional pattern. CONCLUSIONS: The universal AI platform and multilevel collaborative pattern showed robust diagnostic performance and effective service for cataracts. The context of our AI-based medical referral pattern will be extended to other common disease conditions and resource-intensive situations.
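The three-step labelling strategy maps naturally onto a decision cascade; the sketch below shows one plausible arrangement (the classifier callables, their interfaces, and the routing of postoperative eyes are assumptions for illustration, not details from the paper).

```python
# Minimal sketch of a three-step decision cascade of the kind described above
# (not the published platform). The three classifiers are placeholder callables.
def cataract_triage(image, mode_clf, diagnosis_clf, referral_clf):
    mode = mode_clf(image)                  # step 1: capture mode recognition
    diagnosis = diagnosis_clf(image, mode)  # step 2: normal lens / cataract / postoperative eye
    if diagnosis != "cataract":
        # Assumption for this sketch: only cataract cases proceed to step 3.
        return {"mode": mode, "diagnosis": diagnosis, "refer": False}
    refer = referral_clf(image, mode)       # step 3: referable by aetiology and severity
    return {"mode": mode, "diagnosis": diagnosis, "refer": bool(refer)}
```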


Subjects
Artificial Intelligence, Cataract/diagnosis, Intersectoral Collaboration, Adolescent, Adult, Aged, Aged 80 and over, Area Under Curve, Cataract/classification, Cataract/epidemiology, Cataract Extraction, Female, Humans, Male, Mass Screening, Middle Aged, ROC Curve, Slit Lamp Microscopy, Vision Disorders/rehabilitation