1.
Innovation (Camb) ; 5(4): 100648, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39021525

ABSTRACT

Pulmonary infections pose formidable challenges in clinical settings with high mortality rates across all age groups worldwide. Accurate diagnosis and early intervention are crucial to improve patient outcomes. Artificial intelligence (AI) has the capability to mine imaging features specific to different pathogens and fuse multimodal features to reach a synergistic diagnosis, enabling more precise investigation and individualized clinical management. In this study, we successfully developed a multimodal integration (MMI) pipeline to differentiate among bacterial, fungal, and viral pneumonia and pulmonary tuberculosis based on a real-world dataset of 24,107 patients. The area under the curve (AUC) of the MMI system comprising clinical text and computed tomography (CT) image scans yielded 0.910 (95% confidence interval [CI]: 0.904-0.916) and 0.887 (95% CI: 0.867-0.909) in the internal and external testing datasets respectively, which were comparable to those of experienced physicians. Furthermore, the MMI system was utilized to rapidly differentiate between viral subtypes with a mean AUC of 0.822 (95% CI: 0.805-0.837) and bacterial subtypes with a mean AUC of 0.803 (95% CI: 0.775-0.830). Here, the MMI system harbors the potential to guide tailored medication recommendations, thus mitigating the risk of antibiotic misuse. Additionally, the integration of multimodal factors in the AI-driven system also provided an evident advantage in predicting risks of developing critical illness, contributing to more informed clinical decision-making. To revolutionize medical care, embracing multimodal AI tools in pulmonary infections will pave the way to further facilitate early intervention and precise management in the foreseeable future.
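AUC values with 95% confidence intervals like those reported above are commonly obtained by bootstrap resampling of the test set. A minimal NumPy sketch of that procedure (illustrative only, with made-up data; not the authors' code):

```python
import numpy as np

def auc(labels, scores):
    """Rank-based AUC: probability that a random positive outranks a negative."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def bootstrap_auc_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Point estimate plus a percentile bootstrap (1 - alpha) CI for the AUC."""
    rng = np.random.default_rng(seed)
    stats = []
    n = len(labels)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if labels[idx].min() == labels[idx].max():
            continue  # resample lacks both classes; AUC undefined
        stats.append(auc(labels[idx], scores[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return auc(labels, scores), (lo, hi)
```

The percentile bootstrap makes no distributional assumptions, which suits composite multimodal scores.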

2.
Can J Ophthalmol ; 2024 May 18.
Article in English | MEDLINE | ID: mdl-38768649

ABSTRACT

OBJECTIVE: Uveal melanoma is the most common intraocular malignancy in adults. Current screening and triaging methods for melanocytic choroidal tumours face inherent limitations, particularly in regions with limited access to specialized ocular oncologists. This study explores the potential of machine learning to automate tumour segmentation. We develop and evaluate a machine-learning model for lesion segmentation using ultra-wide-field fundus photography. METHOD: A retrospective chart review was conducted of patients diagnosed with uveal melanoma, choroidal nevi, or congenital hypertrophy of the retinal pigmented epithelium at a tertiary academic medical centre. Included patients had a single ultra-wide-field fundus photograph (Optos PLC, Dunfermline, Fife, Scotland) of adequate quality to visualize the lesion of interest, as confirmed by a single ocular oncologist. These images were used to develop and test a machine-learning algorithm for lesion segmentation. RESULTS: A total of 396 images were used to develop a machine-learning algorithm for lesion segmentation. Ninety additional images were used in the testing data set along with images of 30 healthy control individuals. Of the images with successfully detected lesions, the machine-learning segmentation yielded Dice coefficients of 0.86, 0.81, and 0.85 for uveal melanoma, choroidal nevi, and congenital hypertrophy of the retinal pigmented epithelium, respectively. Sensitivities for any lesion detection per image were 1.00, 0.90, and 0.87, respectively. For images without lesions, specificity was 0.93. CONCLUSION: Our study demonstrates a novel machine-learning algorithm's performance, suggesting its potential clinical utility as a widely accessible method of screening choroidal tumours. Additional evaluation methods are necessary to further enhance the model's lesion classification and diagnostic accuracy.
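The Dice coefficients used to score the segmentations are a simple overlap measure between predicted and reference masks; a small sketch (illustrative, not the study's implementation):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|).
    eps avoids division by zero when both masks are empty."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A Dice of 0.86 means roughly 86% of the combined lesion area is shared between the model's mask and the oncologist's reference.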

3.
Signal Transduct Target Ther ; 8(1): 416, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37907497

ABSTRACT

There have been hundreds of millions of cases of coronavirus disease 2019 (COVID-19), which is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). With the growing population of recovered patients, it is crucial to understand the long-term consequences of the disease and management strategies. Although COVID-19 was initially considered an acute respiratory illness, recent evidence suggests that manifestations including but not limited to those of the cardiovascular, respiratory, neuropsychiatric, gastrointestinal, reproductive, and musculoskeletal systems may persist long after the acute phase. These persistent manifestations, also referred to as long COVID, could impact all patients with COVID-19 across the full spectrum of illness severity. Herein, we comprehensively review the current literature on long COVID, highlighting its epidemiological understanding, the impact of vaccinations, organ-specific sequelae, pathophysiological mechanisms, and multidisciplinary management strategies. In addition, the impact of psychological and psychosomatic factors is also underscored. Despite these crucial findings on long COVID, the current diagnostic and therapeutic strategies based on previous experience and pilot studies remain inadequate, and well-designed clinical trials should be prioritized to validate existing hypotheses. Thus, we propose the primary challenges concerning biological knowledge gaps and efficient remedies as well as discuss the corresponding recommendations.


Subjects
COVID-19, Humans, SARS-CoV-2, Post-Acute COVID-19 Syndrome, Outcome Assessment (Health Care)
4.
Semin Cancer Biol ; 91: 1-15, 2023 06.
Article in English | MEDLINE | ID: mdl-36801447

ABSTRACT

Personalized cancer treatment strategies frequently rely on the detection of genetic alterations determined by molecular biology assays. Historically, these processes required single-gene sequencing, next-generation sequencing, or visual inspection of histopathology slides by experienced pathologists in a clinical context. In the past decade, advances in artificial intelligence (AI) technologies have demonstrated remarkable potential in assisting physicians with accurate diagnosis in oncology image-recognition tasks. Meanwhile, AI techniques make it possible to integrate multimodal data such as radiology, histology, and genomics, providing critical guidance for patient stratification in the context of precision therapy. Given that mutation detection is unaffordable and time-consuming for a considerable number of patients, predicting gene mutations from routine clinical radiological scans or whole-slide tissue images with AI-based methods has become a topic of intense interest in clinical practice. In this review, we synthesize the general framework of multimodal integration (MMI) for molecular intelligent diagnostics beyond standard techniques. We then summarize emerging applications of AI in the prediction of mutational and molecular profiles of common cancers (lung, brain, breast, and other tumor types) from radiology and histology imaging. Furthermore, we identify multiple challenges on the way to real-world application of AI techniques in medicine, including data curation, feature fusion, model interpretability, and practice regulations. Despite these challenges, we anticipate the clinical implementation of AI as a promising decision-support tool to aid oncologists in future cancer treatment management.


Subjects
Artificial Intelligence, Neoplasms, Humans, Neoplasms/diagnosis, Neoplasms/genetics, Neoplasms/therapy, Precision Medicine/methods, Medical Oncology/methods, Diagnostic Imaging/methods
5.
Res Sq ; 2023 Dec 21.
Article in English | MEDLINE | ID: mdl-38196619

ABSTRACT

Objective: This study aims to assess a machine learning (ML) algorithm using multimodal imaging to accurately identify risk factors for uveal melanoma (UM) and aid in the diagnosis of melanocytic choroidal tumors. Subjects and Methods: This study included 223 eyes from 221 patients with melanocytic choroidal lesions seen at the eye clinic of the University of Illinois at Chicago between 01/2010 and 07/2022. An ML algorithm was developed and trained on ultra-widefield fundus imaging and B-scan ultrasonography to detect risk factors of malignant transformation of choroidal lesions into UM. The risk factors were verified using all multimodal imaging available from the time of diagnosis. We also explore classification of lesions into UM and choroidal nevi using the ML algorithm. Results: The ML algorithm assessed features of ultra-widefield fundus imaging and B-scan ultrasonography to determine the presence of the following risk factors for malignant transformation: lesion thickness, subretinal fluid, orange pigment, proximity to optic nerve, ultrasound hollowness, and drusen. The algorithm also provided classification of lesions into UM and choroidal nevi. A total of 115 patients with choroidal nevi and 108 patients with UM were included. The mean lesion thickness for choroidal nevi was 1.6 mm and for UM was 5.9 mm. Eleven ML models were implemented and achieved high accuracy, with an area under the curve of 0.982 for thickness prediction and 0.964 for subretinal fluid prediction. Sensitivity/specificity values ranged from 0.900/0.818 to 1.000/0.727 for different features. The ML algorithm demonstrated high accuracy in identifying risk factors and differentiating lesions based on the analyzed imaging data. Conclusions: This study provides proof of concept that ML can accurately identify risk factors for malignant transformation in melanocytic choroidal tumors based on a single ultra-widefield fundus image or B-scan ultrasound at the time of initial presentation. 
By leveraging the efficiency and availability of ML, this study has the potential to provide a non-invasive tool that helps to prevent unnecessary treatment, improve our ability to predict malignant transformation, reduce the risk of metastasis, and potentially save patient lives.

6.
Front Immunol ; 13: 987018, 2022.
Article in English | MEDLINE | ID: mdl-36311754

ABSTRACT

Tuberculosis, caused by Mycobacterium tuberculosis, imposes a heavy burden on public health. Innate and adaptive immunity act as robust defenses against the pathogen. However, through coevolution with humans, this microbe has acquired multiple mechanisms to circumvent the immune response and sustain its intracellular persistence and long-term survival inside a host. Moreover, emerging evidence has revealed that this stealthy bacterium can alter the expression of host noncoding RNAs (ncRNAs), subsequently dysregulating biological processes, which may underlie the pathogenesis of tuberculosis. Meanwhile, their differential accumulation in clinical samples makes ncRNAs candidate indicators of active disease. In this article, we review the latest insights into the impact of ncRNAs on immune-response modulation during Mycobacterium tuberculosis infection and their potential as biomarkers for the diagnosis, drug-resistance identification, treatment evaluation, and adverse-drug-reaction prediction of tuberculosis, aiming to inspire the development of novel, precise therapies against this pathogen.


Subjects
Mycobacterium tuberculosis, Tuberculosis, Humans, Untranslated RNA/genetics, Adaptive Immunity, Biomarkers/metabolism
7.
Cancers (Basel) ; 14(19)2022 Oct 02.
Article in English | MEDLINE | ID: mdl-36230746

ABSTRACT

PURPOSE: Personalized treatments such as targeted therapy and immunotherapy have revolutionized the predominantly therapeutic paradigm for non-small cell lung cancer (NSCLC). However, these treatment decisions require the determination of targetable genomic and molecular alterations through invasive genetic or immunohistochemistry (IHC) tests. Numerous previous studies have demonstrated that artificial intelligence can accurately predict the single-gene status of tumors based on radiologic imaging, but few studies have achieved the simultaneous evaluation of multiple genes to reflect more realistic clinical scenarios. METHODS: We proposed a multi-label multi-task deep learning (MMDL) system for non-invasively predicting actionable NSCLC mutations and PD-L1 expression utilizing routinely acquired computed tomography (CT) images. This radiogenomic system integrated transformer-based deep learning features and radiomic features of CT volumes from 1096 NSCLC patients based on next-generation sequencing (NGS) and IHC tests. RESULTS: For each task cohort, we randomly split the corresponding dataset into training (80%), validation (10%), and testing (10%) subsets. The area under the receiver operating characteristic curves (AUCs) of the MMDL system achieved 0.862 (95% confidence interval (CI), 0.758-0.969) for discrimination of a panel of 8 mutated genes, including EGFR, ALK, ERBB2, BRAF, MET, ROS1, RET and KRAS, 0.856 (95% CI, 0.663-0.948) for identification of a 10-molecular status panel (previous 8 genes plus TP53 and PD-L1); and 0.868 (95% CI, 0.641-0.972) for classifying EGFR / PD-L1 subtype, respectively. CONCLUSIONS: To the best of our knowledge, this study is the first deep learning system to simultaneously analyze 10 molecular expressions, which might be utilized as an assistive tool in conjunction with or in lieu of ancillary testing to support precision treatment options.
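Predicting an 8- to 10-item molecular panel at once is typically framed as multi-label classification: one sigmoid output per gene or marker, trained with binary cross-entropy summed over labels. A hedged NumPy sketch of that loss (names and shapes are illustrative, not the MMDL system's code):

```python
import numpy as np

def multilabel_bce(logits, targets):
    """Mean binary cross-entropy over a (batch, n_labels) grid, treating each
    gene/marker (e.g. EGFR, ALK, ..., PD-L1) as an independent binary label."""
    p = 1.0 / (1.0 + np.exp(-logits))  # per-label sigmoid probability
    eps = 1e-12                        # guard against log(0)
    return -np.mean(targets * np.log(p + eps)
                    + (1 - targets) * np.log(1 - p + eps))
```

Unlike softmax classification, each label is scored independently, so a tumor can be positive for several mutations at once.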

8.
Front Med (Lausanne) ; 9: 935080, 2022.
Article in English | MEDLINE | ID: mdl-35966878

ABSTRACT

Pulmonary tuberculosis continues to rise in incidence and mortality, and its management remains difficult and contested; conventional approaches to its diagnosis and differential diagnosis are slow and resource-intensive, especially in low-resource countries with a high tuberculosis burden. In the meantime, the climbing proportion of drug-resistant tuberculosis poses a significant hazard to public health. Thus, auxiliary diagnostic tools with higher efficiency and accuracy are urgently required. Artificial intelligence (AI), long established but recently surging in popularity, provides researchers with opportunities and technical underpinnings to develop novel, precise, rapid, and automated tools for pulmonary tuberculosis care, including but not limited to tuberculosis detection. In this review, we introduce representative AI methods, focusing on deep learning and radiomics, and then describe state-of-the-art AI models developed from medical images and genetic data to detect pulmonary tuberculosis, distinguish the infection from other pulmonary diseases, and identify drug resistance, with the aim of assisting physicians in choosing an appropriate therapeutic schedule early in the disease. We also enumerate challenges in maximizing the impact of AI in this field, such as the generalization and clinical utility of deep learning models.

9.
NPJ Digit Med ; 5(1): 124, 2022 Aug 23.
Article in English | MEDLINE | ID: mdl-35999467

ABSTRACT

Respiratory diseases impose a tremendous global health burden on large patient populations. In this study, we aimed to develop DeepMRDTR, a deep learning-based medical image interpretation system for the diagnosis of major respiratory diseases based on the automated identification of a wide range of radiological abnormalities through computed tomography (CT) and chest X-ray (CXR) from real-world, large-scale datasets. DeepMRDTR comprises four networks (two CT-Nets and two CXR-Nets) that exploit contrastive learning to generate pre-training parameters that are fine-tuned on a retrospective dataset collected from a single institution. The performance of DeepMRDTR was evaluated for abnormality identification and disease diagnosis on data from two different institutions: an internal testing dataset from the same institution as the training data, and an external dataset collected from another institution to evaluate the model's generalizability and robustness on an unrelated population. In this difficult multi-class diagnosis task, our system achieved average areas under the receiver operating characteristic curve (AUCs) of 0.856 (95% CI: 0.843-0.868) and 0.841 (95% CI: 0.832-0.887) for abnormality identification, and 0.900 (95% CI: 0.872-0.958) and 0.866 (95% CI: 0.832-0.887) for major respiratory disease diagnosis, on the CT and CXR datasets, respectively. Furthermore, to achieve a clinically actionable diagnosis, we deployed a preliminary version of DeepMRDTR into the clinical workflow, where it performed on par with senior experts in disease diagnosis, with an AUC of 0.890 and a Cohen's κ of 0.746-0.877 at a reasonable timescale. These findings demonstrate the potential of DeepMRDTR as a triage tool that accelerates the medical workflow, facilitates early diagnosis of respiratory diseases, and supports improved clinical decision-making.
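The Cohen's κ reported for model-versus-expert agreement can be computed directly from two raters' labels; a minimal sketch (toy data, not the study's readings):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement between two raters, corrected for
    the agreement expected by chance from each rater's label frequencies."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                                   # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)
```

κ of 1 is perfect agreement; κ near 0 means the raters agree no more than chance would predict.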

10.
Sci Rep ; 12(1): 13850, 2022 08 16.
Article in English | MEDLINE | ID: mdl-35974053

ABSTRACT

A wide-field fundus camera that can selectively evaluate the retina and choroid is desirable for better detection and treatment evaluation of eye diseases. Trans-palpebral illumination has been demonstrated for wide-field fundus photography, but its application to true-color retinal imaging is challenging because the light efficiency delivered through the eyelid and sclera is highly wavelength dependent. This study tests the feasibility of true-color retinal imaging using efficiency-balanced visible-light illumination and validates multispectral imaging (MSI) of the retina and choroid. Light-emitting diodes (LEDs) at 530 nm, 625 nm, 780 nm, and 970 nm are used to quantitatively evaluate the spectral efficiency of trans-palpebral illumination. Compared with 530 nm illumination, the 625 nm, 780 nm, and 970 nm light efficiencies are 30.25, 523.05, and 1238.35 times higher. Light-efficiency-balanced 530 nm and 625 nm illumination can be used to produce a true-color retinal image with contrast enhancement. The 780 nm image enhances the visibility of choroidal vasculature, and the 970 nm image is dominated by large veins in the choroid. Without pharmacological pupillary dilation, a 140° eye-angle field of view (FOV) is demonstrated in a snapshot fundus image. In coordination with a fixation target, the FOV can be readily expanded over the equator of the eye to visualize vortex ampullae.


Subjects
Eye Diseases, Lighting, Choroid/blood supply, Choroid/diagnostic imaging, Eye Diseases/diagnosis, Eyelids, Fluorescein Angiography/methods, Fundus Oculi, Humans, Photography/methods, Retina/diagnostic imaging
11.
Front Immunol ; 13: 828560, 2022.
Article in English | MEDLINE | ID: mdl-35464416

ABSTRACT

Background: Immunohistochemical assessment of programmed death-ligand 1 (PD-L1) is the only approved diagnostic biomarker for immunotherapy in lung cancer, but determining the PD-L1 tumor proportion score (TPS) is challenging owing to invasive sampling and intertumoral heterogeneity. There is therefore a strong demand for an artificial intelligence (AI) system that measures the PD-L1 expression signature (ES) non-invasively. Methods: We developed an AI system using deep learning (DL), radiomics, and combination models based on computed tomography (CT) images of 1,135 non-small cell lung cancer (NSCLC) patients with known PD-L1 status. The deep learning features were obtained through a 3D ResNet feature-map extractor, and specialized classifiers were constructed for the prediction and evaluation tasks. A Cox proportional-hazards model combining clinical factors and PD-L1 ES was then utilized to evaluate prognosis in the survival cohort. Results: The combination model achieved robustly high performance, with areas under the receiver operating characteristic curve (AUCs) of 0.950 (95% CI, 0.938-0.960), 0.934 (95% CI, 0.906-0.964), and 0.946 (95% CI, 0.933-0.958) for predicting PD-L1 ES <1%, 1-49%, and ≥50% in the validation cohort, respectively. Additionally, when the combination model was trained on multi-source features, its overall survival evaluation (C-index: 0.89) was superior to that of the clinical model alone (C-index: 0.86). Conclusion: A non-invasive deep learning measurement was proposed to assess PD-L1 expression and survival outcomes in NSCLC. This study also indicated that a deep learning model combined with clinical characteristics improves prediction capability, which would assist physicians in making rapid decisions on clinical treatment options.
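The C-index used above to compare the survival models measures how often a higher predicted risk pairs with an earlier observed event. A plain O(n²) sketch of Harrell's C-index (toy data, not the study's cohort):

```python
def concordance_index(time, event, risk):
    """Harrell's C-index: among comparable patient pairs, the fraction where
    the higher predicted risk corresponds to the earlier observed event
    (ties in predicted risk count as 0.5)."""
    num = den = 0.0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue  # only an observed event can anchor a comparable pair
        for j in range(n):
            if time[j] > time[i]:  # patient j outlived patient i's event time
                den += 1
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den
```

A C-index of 0.5 is chance-level ranking; 1.0 means risk scores order every comparable pair correctly, so 0.89 vs 0.86 is a modest but real gain.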


Subjects
Non-Small Cell Lung Carcinoma, Deep Learning, Lung Neoplasms, Algorithms, Artificial Intelligence, B7-H1 Antigen/metabolism, Non-Small Cell Lung Carcinoma/pathology, Humans, Lung Neoplasms/pathology
12.
Front Immunol ; 13: 813072, 2022.
Article in English | MEDLINE | ID: mdl-35250988

ABSTRACT

BACKGROUND: Epidermal growth factor receptor (EGFR) genotyping and programmed death ligand-1 (PD-L1) expression are of paramount importance for treatment guidelines such as the use of tyrosine kinase inhibitors (TKIs) and immune checkpoint inhibitors (ICIs) in lung cancer. Conventional identification of EGFR or PD-L1 status requires surgical or biopsied tumor specimens, which are obtained through invasive procedures that carry a risk of morbidity, and tissue samples may be inaccessible in some patients. Here, we developed an artificial intelligence (AI) system that can predict EGFR and PD-L1 status using non-invasive computed tomography (CT) images. METHODS: A multitask AI system comprising a deep learning (DL) module, a radiomics (RA) module, and a joint (JO) module combining the DL, RA, and clinical features was developed, trained, and optimized with CT images to predict EGFR and PD-L1 status. We used feature selectors and feature fusion methods to find the best model among combinations of module types. The models were evaluated using the areas under the receiver operating characteristic curves (AUCs). RESULTS: Our multitask AI system yielded promising performance for gene expression status, subtype classification, and joint prediction. The AUCs of the DL module reached 0.842 (95% CI, 0.825-0.855) for EGFR mutation status and 0.805 (95% CI, 0.779-0.829) for discriminating mutated-EGFR subtypes (19Del, L858R, other mutations). The DL module also demonstrated AUCs of 0.799 (95% CI, 0.762-0.854) for PD-L1 expression status and 0.837 (95% CI, 0.775-0.911) for positive-PD-L1 subtypes (PD-L1 tumor proportion score, 1%-49% and ≥50%). Furthermore, the JO module of our AI system performed well in the EGFR and PD-L1 joint cohort, with an AUC of 0.928 (95% CI, 0.909-0.946) for distinguishing EGFR mutation status and 0.905 (95% CI, 0.886-0.930) for discriminating PD-L1 expression status.
CONCLUSION: Our AI system demonstrated encouraging results for identifying gene status and further assessing genotypes. Clinical indicators and radiomics features played complementary roles in prediction and provided accurate estimates of EGFR and PD-L1 status. Furthermore, this non-invasive, high-throughput, and interpretable AI system can be used as an assistive tool in conjunction with or in lieu of ancillary tests and extensive diagnostic workups to facilitate early intervention.


Subjects
Non-Small Cell Lung Carcinoma, Lung Neoplasms, Artificial Intelligence, B7-H1 Antigen/metabolism, Non-Small Cell Lung Carcinoma/diagnostic imaging, Non-Small Cell Lung Carcinoma/drug therapy, Non-Small Cell Lung Carcinoma/genetics, ErbB Receptors/metabolism, Humans, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/drug therapy, Lung Neoplasms/genetics, X-Ray Computed Tomography
13.
Exp Biol Med (Maywood) ; 247(4): 289-299, 2022 02.
Article in English | MEDLINE | ID: mdl-34878934

ABSTRACT

A portable, low cost, widefield fundus camera is essential for developing affordable teleophthalmology. However, conventional trans-pupillary illumination used in traditional fundus cameras limits the field of view (FOV) in a snapshot image, and frequently requires pharmacologically pupillary dilation for reliable examination of eye conditions. This minireview summarizes recent developments in alternative illumination approaches for widefield fundus photography. Miniaturized indirect illumination has been used to enable compact design for developing low cost, portable, widefield fundus camera. Contact mode trans-pars-planar illumination has been validated for ultra-widefield fundus imaging of infant eyes. Contact-free trans-pars-planar illumination has been explored for widefield imaging of adult eyes. Trans-palpebral illumination has been also demonstrated in a smartphone-based widefield fundus imager to foster affordable teleophthalmology.


Subjects
Eye Diseases, Ophthalmology, Telemedicine, Ophthalmological Diagnostic Techniques, Eye Diseases/diagnosis, Fundus Oculi, Humans, Ophthalmology/methods, Photography/methods
14.
Front Med (Lausanne) ; 8: 753055, 2021.
Article in English | MEDLINE | ID: mdl-34926501

ABSTRACT

Objective: To assess the performance of a novel deep learning (DL)-based artificial intelligence (AI) system in classifying computed tomography (CT) scans of pneumonia patients into different groups, and to present an effective, clinically relevant machine learning (ML) system based on medical image identification and clinical feature interpretation to assist radiologists in triage and diagnosis. Methods: The 3,463 CT images of pneumonia used in this multi-center retrospective study were divided into four categories: bacterial pneumonia (n = 507), fungal pneumonia (n = 126), common viral pneumonia (n = 777), and COVID-19 (n = 2,053). We used image-based DL methods to distinguish pulmonary infections. An ML model for risk interpretation was developed using key imaging features (learned by the DL methods) and clinical features. The algorithms were evaluated using the areas under the receiver operating characteristic curves (AUCs). Results: The AUCs of the DL models for differentiating pulmonary infections were 99.5% (COVID-19), 98.6% (viral pneumonia), 98.4% (bacterial pneumonia), and 99.1% (fungal pneumonia). By combining chest CT results and clinical symptoms, the ML model performed well, with an AUC of 99.7% for SARS-CoV-2, 99.4% for common viruses, 98.9% for bacteria, and 99.6% for fungi. Regarding clinical feature interpretation, the model revealed distinctive CT characteristics associated with specific pneumonias: ground-glass opacity (GGO) with COVID-19 [92.5%; odds ratio (OR), 1.76; 95% confidence interval (CI): 1.71-1.86]; larger lesions in the right upper lung with viral pneumonia (75.0%; OR, 1.12; 95% CI: 1.03-1.25); older age with bacterial pneumonia (57.0 years ± 14.2; OR, 1.84; 95% CI: 1.73-1.99); and consolidation with fungal pneumonia (95.8%; OR, 1.29; 95% CI: 1.05-1.40). Conclusion: For classifying common types of pneumonia and assessing the influential factors for triage, our AI system has shown promising results.
Our ultimate goal is to assist clinicians in making quick and accurate diagnoses, resulting in the potential for early therapeutic intervention.
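Odds ratios with Wald confidence intervals, like those reported for GGO and consolidation above, come from a 2x2 contingency table. A small sketch (generic formula with made-up counts, not the study's data):

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] (feature present/absent x
    class/other), with a Wald 95% CI computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)  # SE of log(OR)
    lo = np.exp(np.log(or_) - z * se)
    hi = np.exp(np.log(or_) + z * se)
    return or_, (lo, hi)
```

An OR above 1 whose CI excludes 1 (e.g. GGO, OR 1.76, CI 1.71-1.86) indicates the feature is genuinely enriched in that pneumonia class.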

15.
Biomed Opt Express ; 12(10): 6651-6659, 2021 Oct 01.
Article in English | MEDLINE | ID: mdl-34745762

ABSTRACT

Visual-angle has been used as the conventional unit to determine the field-of-view (FOV) in traditional fundus photography. Recently emerging usage of eye-angle as the unit in wide field fundus photography creates confusion about FOV interpretation in instrumentation design and clinical application. This study aims to systematically derive the relationship between the visual-angle θv and eye-angle θe, and thus to enable reliable determination of the FOV in wide field fundus photography. FOV conversion ratio θe/θv, angular conversion ratio Δθe/Δθv, retinal conversion ratio Δd/Δθv, retinal distance and area are quantitatively evaluated. Systematic analysis indicates that reliable conversion between the θv and θe requires determined nodal point and spherical radius of the eye; and the conversion ratio is not linear from the central field to peripheral region. Based on the eye model with average parameters, both angular conversion (Δθe/Δθv) and retinal conversion (Δd/Δθv) ratios are observed to have a 1.51-fold difference at the central field and far peripheral region. A conversion table, including θe/θv, Δθe/Δθv, Δd/Δθv, retinal area and percentage ratio, is created for reliable assessment of imaging systems with variable FOV.
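The nonlinearity between eye-angle and visual-angle can be illustrated with a reduced-eye sketch. The radius and nodal-point offset below are assumed round numbers for illustration, not the paper's model parameters:

```python
import numpy as np

# Assumed reduced-eye geometry: a spherical eye of radius R, with the nodal
# point located a distance d anterior of the eye center on the optical axis.
R = 12.0   # assumed eyeball radius, mm
d = 6.3    # assumed center-to-nodal-point distance, mm

def visual_angle(theta_e_deg):
    """Visual-angle (at the nodal point) subtended by the retinal point located
    at eye-angle theta_e (measured at the eye center from the posterior pole)."""
    t = np.radians(theta_e_deg)
    return np.degrees(np.arctan2(R * np.sin(t), d + R * np.cos(t)))

# Near the central field the conversion ratio theta_e/theta_v approaches
# (R + d)/R, about 1.5 with these assumed parameters, and falls toward 1 in
# the far periphery: the two angle units are not linearly related.
central_ratio = 1.0 / visual_angle(1.0)
```

This toy geometry reproduces the qualitative behavior described in the abstract: the conversion ratio is largest centrally and varies by roughly 1.5-fold out to the far periphery.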

16.
Health Data Sci ; 2021: 8786793, 2021.
Article in English | MEDLINE | ID: mdl-38487506

ABSTRACT

Importance. With the booming growth of artificial intelligence (AI), especially recent advancements in deep learning, utilizing advanced deep learning-based methods for medical image analysis has become an active research area in both the medical industry and academia. This paper reviews the recent progress of deep learning research in medical image analysis and clinical applications. It also discusses existing problems in the field and provides possible solutions and future directions.
Highlights. This paper reviews the advancement of convolutional neural network-based techniques in clinical applications. State-of-the-art clinical applications span four major human body systems: the nervous system, the cardiovascular system, the digestive system, and the skeletal system. Overall, according to the best available evidence, deep learning models perform well in medical image analysis, but algorithms derived from small-scale medical datasets still impede clinical applicability. Future directions could include federated learning, benchmark dataset collection, and the use of domain knowledge as priors.
Conclusion. Recent deep learning technologies have achieved great success in medical image analysis with high accuracy, efficiency, stability, and scalability. Technological advancements that can alleviate the demand for high-quality large-scale datasets could be one of the future developments in this area.

17.
Front Med ; 14(4): 450-469, 2020 Aug.
Article in English | MEDLINE | ID: mdl-31840200

ABSTRACT

As a promising method in artificial intelligence, deep learning has proven successful in several domains ranging from acoustics and images to natural language processing. With medical imaging becoming an important part of disease screening and diagnosis, deep learning-based approaches have emerged as powerful techniques in medical imaging. In this process, feature representations are learned directly and automatically from data, leading to remarkable breakthroughs in the medical field. Deep learning has been widely applied in medical imaging for improved image analysis. This paper reviews the major deep learning techniques in this time of rapid evolution and summarizes some of their key contributions and state-of-the-art outcomes. The topics include classification, detection, and segmentation tasks in medical image analysis with respect to pulmonary medical images, datasets, and benchmarks. A comprehensive overview of these methods applied to various lung diseases, including pulmonary nodules, pulmonary embolism, pneumonia, and interstitial lung disease, is also provided. Lastly, the application of deep learning techniques to medical imaging is discussed, along with an analysis of future challenges and potential directions.


Subjects
Deep Learning, Artificial Intelligence, Diagnostic Imaging, Humans, Computer-Assisted Image Processing, Lung
18.
Math Biosci ; 311: 39-48, 2019 05.
Article in English | MEDLINE | ID: mdl-30825482

ABSTRACT

Tissue-based gene expression analyses, while powerful, are significantly more challenging than cell-based analyses, even for the simplest differential expression tests. Whether a gene is called differentially expressed in tumor versus non-tumorous control tissue depends not only on the two expression values but also on the percentage of tissue cells that are tumor cells, i.e., the tumor purity. We developed a novel matched-pairs feature selection method that takes tumor purity fully into consideration when deciding whether a gene is differentially expressed in tumor-vs-control experiments; the method is simple, effective, and accurate. To evaluate its validity and performance, we compared it with four published methods on both simulated datasets and actual cancer tissue datasets and found that our method achieved better performance, with higher sensitivity and specificity. Ours is a matched-pairs feature selection method for gene expression analysis under a matched case-control design that incorporates tumor purity information, which can set a foundation for further development of other gene expression analyses.
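The purity correction at the heart of the method can be sketched as a simple two-component mixture. This is an illustrative reading of the idea, not the authors' estimator:

```python
import numpy as np

def purity_adjusted_diff(tumor_obs, control, purity):
    """Assume each bulk tumor measurement is a purity-weighted mixture,
        obs = p * tumor + (1 - p) * normal,
    approximate the normal component by the matched control, back out the
    tumor-cell expression, and return per-pair tumor-minus-control differences."""
    tumor_obs = np.asarray(tumor_obs, dtype=float)
    control = np.asarray(control, dtype=float)
    purity = np.asarray(purity, dtype=float)
    tumor = (tumor_obs - (1.0 - purity) * control) / purity
    return tumor - control
```

Without this correction, a low-purity sample dilutes a real tumor-specific difference toward zero; the deconvolution step recovers it before any downstream test is applied.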


Subjects
Gene Expression Profiling, Biological Models, Statistical Models, Neoplasms/genetics, Neoplasms/pathology, Research Design/standards, Case-Control Studies, Humans, Sensitivity and Specificity
19.
Radiol Artif Intell ; 1(3): e180084, 2019 May.
Article in English | MEDLINE | ID: mdl-33937792

ABSTRACT

PURPOSE: To compare sensitivity in the detection of lung nodules between a deep learning (DL) model and radiologists across various patient populations and scanning parameters, and to assess whether the radiologists' detection performance is enhanced when using the DL model for assistance. MATERIALS AND METHODS: A total of 12,754 thin-section chest CT scans from January 2012 to June 2017 were retrospectively collected for DL model training, validation, and testing. Pulmonary nodules from these scans were categorized into four types: solid, subsolid, calcified, and pleural. The testing dataset was divided into three cohorts based on radiation dose, patient age, and CT manufacturer. Detection performance of the DL model was analyzed using a free-response receiver operating characteristic curve. Sensitivities of the DL model and radiologists were compared using exploratory data analysis. False-positive detection rates of the DL model were compared within each cohort. Detection performance of the same radiologist with and without the DL model was compared using nodule-level sensitivity and patient-level localization receiver operating characteristic curves. RESULTS: The DL model showed elevated overall sensitivity compared with manual review of pulmonary nodules. No significant dependence on radiation dose, patient age range, or CT manufacturer was observed. The sensitivity of the junior radiologist was significantly dependent on patient age. When radiologists used the DL model for assistance, their performance improved and reading time was reduced. CONCLUSION: DL shows promise to enhance the identification of pulmonary nodules and benefit nodule management. © RSNA, 2019. Supplemental material is available for this article.
