Results 1 - 20 of 22
1.
J Endod ; 50(2): 220-228, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37979653

ABSTRACT

INTRODUCTION: Training of Artificial Intelligence (AI) for biomedical image analysis depends on large annotated datasets. This study assessed the efficacy of Active Learning (AL) strategies for training AI models to perform accurate multilabel segmentation and detection of periapical lesions in cone-beam CTs (CBCTs) using a limited dataset. METHODS: Limited field-of-view CBCT volumes (n = 20) were segmented by clinicians (clinician segmentation [CS]) and by Bayesian U-Net-based AL strategies. Two AL acquisition functions, Bayesian Active Learning by Disagreement (BALD) and Max_Entropy (ME), were used for multilabel segmentation ("Lesion", "Tooth Structure", "Bone", "Restorative Materials", "Background") and compared to a non-AL Bayesian U-Net benchmark. The training-to-testing set ratio was 4:1. The AL and Bayesian U-Net functions were compared against CS by evaluating segmentation accuracy with Dice indices and lesion detection accuracy. The Kruskal-Wallis test was used to assess statistically significant differences. RESULTS: The final training set contained 26 images. After 8 AL iterations, lesion detection sensitivity was 84.0% for BALD, 76.0% for ME, and 32.0% for Bayesian U-Net, a significant difference (P < .0001; H = 16.989). The mean Dice index across all labels was 0.680 ± 0.155 for BALD and 0.703 ± 0.166 for ME after 8 AL iterations, compared to 0.601 ± 0.267 for Bayesian U-Net averaged over all iterations. The Dice index for "Lesion" was 0.504 for BALD and 0.501 for ME after 8 AL iterations, versus a maximum of 0.288 for Bayesian U-Net. CONCLUSIONS: Both AL strategies based on uncertainty quantification from the Bayesian U-Net (BALD and ME) improved segmentation and lesion detection accuracy for CBCTs. AL may help reduce the extensive labeling needed to train AI algorithms for biomedical image analysis in dentistry.


Subject(s)
Algorithms , Artificial Intelligence , Bayes Theorem , Uncertainty , Cone-Beam Computed Tomography , Dental Materials , Image Processing, Computer-Assisted
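As a rough illustration of the Dice index used above to score segmentation overlap, here is a minimal sketch; the function name and toy arrays are illustrative, not the study's implementation:

```python
import numpy as np

def dice_index(pred, truth, label):
    """Dice similarity for one label in a multilabel segmentation."""
    p = (pred == label)
    t = (truth == label)
    inter = np.logical_and(p, t).sum()
    denom = p.sum() + t.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy 1-D "volumes" with labels 0 = background, 1 = lesion
pred  = np.array([0, 1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 0, 0, 0])
print(round(dice_index(pred, truth, 1), 3))  # 0.8
```

In a multilabel setting such as the study's five labels, the same per-label score would be computed for each label and then averaged.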
2.
Bioengineering (Basel) ; 10(10), 2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37892871

ABSTRACT

Early diagnosis of Alzheimer's disease (AD) is an important task that facilitates the development of treatment and prevention strategies and may potentially improve patient outcomes. Neuroimaging has shown great promise, including amyloid-PET, which measures the accumulation of amyloid plaques in the brain, a hallmark of AD. It is desirable to train end-to-end deep learning models to predict the progression of AD for individuals at early stages based on 3D amyloid-PET. However, commonly used models are trained in a fully supervised manner and are inevitably biased toward the given label information. To this end, we propose a self-supervised contrastive learning (SSL) method to accurately predict the conversion to AD for individuals with mild cognitive impairment (MCI) using 3D amyloid-PET. The proposed method, SMoCo, uses both labeled and unlabeled data to capture general semantic representations underlying the images. Because the downstream task is the classification of converters vs. non-converters, unlike the general self-supervised learning problem that aims to generate task-agnostic representations, SMoCo additionally utilizes the label information in the pre-training. To demonstrate the performance of our method, we conducted experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The results confirmed that the proposed method provides appropriate data representations, resulting in accurate classification. SMoCo showed the best classification performance among the compared methods, with AUROC = 85.17%, accuracy = 81.09%, sensitivity = 77.39%, and specificity = 82.17%. While SSL has demonstrated great success in other application domains of computer vision, this study provides an initial investigation of using the proposed self-supervised contrastive learning model, SMoCo, to effectively predict MCI conversion to AD based on 3D amyloid-PET.
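The label-aware contrastive pre-training idea behind SMoCo can be sketched as a supervised contrastive loss over normalized embeddings, where same-label samples act as positives. This is a generic illustration, not the paper's SMoCo code; the function name and toy embeddings are assumptions:

```python
import numpy as np

def sup_contrastive_loss(z, labels, tau=0.1):
    """Supervised contrastive loss: same-label embeddings act as positives."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize embeddings
    sim = z @ z.T / tau                                # scaled cosine similarities
    n = len(labels)
    total = 0.0
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue
        denom = sum(np.exp(sim[i, j]) for j in range(n) if j != i)
        total += -np.mean([np.log(np.exp(sim[i, j]) / denom) for j in pos])
    return total / n

# Embeddings that cluster by class score a lower loss than mismatched labels
z = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]])
print(sup_contrastive_loss(z, [0, 0, 1, 1]) < sup_contrastive_loss(z, [0, 1, 0, 1]))  # True
```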

4.
medRxiv ; 2023 Aug 25.
Article in English | MEDLINE | ID: mdl-37662267

ABSTRACT

Early detection of Alzheimer's Disease (AD) is crucial to ensure timely interventions and optimize treatment outcomes for patients. While integrating multi-modal neuroimages, such as MRI and PET, has shown great promise, limited research has addressed how to effectively handle incomplete multi-modal image datasets in the integration. To this end, we propose a deep learning-based framework that employs Mutual Knowledge Distillation (MKD) to jointly model different sub-cohorts based on their respective available image modalities. In MKD, the model with more modalities (e.g., MRI and PET) is considered a teacher, while the model with fewer modalities (e.g., only MRI) is considered a student. Our proposed MKD framework includes three key components. First, we design a teacher model that is student-oriented, namely the Student-oriented Multi-modal Teacher (SMT), through multi-modal information disentanglement. Second, we train the student model by not only minimizing its classification errors but also learning from the SMT teacher. Third, we update the teacher model by transfer learning from the student's feature extractor, because the student model is trained with more samples. Evaluations on Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets highlight the effectiveness of our method. Our work demonstrates the potential of using AI to address the challenges of incomplete multi-modal neuroimage datasets, opening new avenues for advancing early AD detection and treatment strategies.
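The teacher-to-student transfer in MKD rests on a standard distillation loss: the student matches the teacher's temperature-softened output distribution via KL divergence. A minimal sketch of generic distillation, not the paper's exact objective; the logit values are toy assumptions:

```python
import numpy as np

def softmax(x, T=1.0):
    """Temperature-scaled softmax over a logit vector."""
    e = np.exp((x - x.max()) / T)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))))

teacher = np.array([2.0, 0.5, -1.0])   # e.g., a multi-modal (MRI+PET) teacher
student = np.array([1.5, 0.7, -0.8])   # e.g., an MRI-only student
loss = distillation_loss(student, teacher)
```

In training, this term would be added to the student's ordinary classification loss, as the abstract's second component describes.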

5.
IEEE Sens J ; 23(10): 10998-11006, 2023 May 15.
Article in English | MEDLINE | ID: mdl-37547101

ABSTRACT

Abnormal gait is a significant non-cognitive biomarker for Alzheimer's disease (AD) and AD-related dementia (ADRD). Micro-Doppler radar, a non-wearable technology, can capture human gait movements for potential early ADRD risk assessment. In this research, we propose STRIDE, a system integrating micro-Doppler radar sensors with advanced artificial intelligence (AI) technologies. STRIDE embeds a new deep learning (DL) classification framework. As a proof of concept, we develop a "digital twin" of STRIDE, consisting of a human walking simulation model and a micro-Doppler radar simulation model, to generate a gait signature dataset. Using established human walking parameters, the walking model simulates individuals with ADRD under various conditions. The radar model, based on electromagnetic scattering and the Doppler frequency shift, is employed to generate micro-Doppler signatures from different moving body parts (e.g., foot, limb, joint, torso, and shoulder). A band-dependent DL framework is developed to predict ADRD risk. The experimental results demonstrate the effectiveness and feasibility of STRIDE for evaluating ADRD risk.
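The micro-Doppler signature rests on the monostatic Doppler shift relation f_d = 2 * v * f_c / c, with each moving body part contributing a shift proportional to its radial velocity. A minimal sketch; the 24 GHz carrier and 3 m/s foot speed are illustrative assumptions, not parameters from the paper:

```python
C = 3.0e8  # speed of light, m/s

def doppler_shift(radial_velocity_mps, carrier_hz):
    """Doppler frequency shift of a monostatic radar return: f_d = 2 v f_c / c."""
    return 2.0 * radial_velocity_mps * carrier_hz / C

# A foot swinging toward a hypothetical 24 GHz radar at 3 m/s
print(doppler_shift(3.0, 24e9))  # 480.0 (Hz)
```

Slower-moving parts (torso, shoulder) produce proportionally smaller shifts, which is what makes the combined spectrogram a distinctive gait signature.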

6.
medRxiv ; 2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37162842

ABSTRACT

Early diagnosis of Alzheimer's disease (AD) is an important task that facilitates the development of treatment and prevention strategies and may potentially improve patient outcomes. Neuroimaging has shown great promise, including amyloid-PET, which measures the accumulation of amyloid plaques in the brain, a hallmark of AD. It is desirable to train end-to-end deep learning models to predict the progression of AD for individuals at early stages based on 3D amyloid-PET. However, commonly used models are trained in a fully supervised manner, and they are inevitably biased toward the given label information. To this end, we propose a self-supervised contrastive learning method to predict AD progression with 3D amyloid-PET. It uses unlabeled data to capture general representations underlying the images. Because the downstream task is classification, unlike the general self-supervised learning problem that aims to generate task-agnostic representations, we also propose a loss function that utilizes the label information in the pre-training. To demonstrate the performance of our method, we conducted experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The results confirmed that the proposed method is capable of providing appropriate data representations, resulting in accurate classification.

7.
Article in English | MEDLINE | ID: mdl-37022061

ABSTRACT

Indoor fall monitoring is challenging for community-dwelling older adults due to the need for high accuracy and privacy concerns. Doppler radar is promising, given its low cost and contactless sensing mechanism. However, the line-of-sight restriction limits the application of radar sensing in practice, as the Doppler signature varies when the sensing angle changes, and signal strength is substantially degraded at large aspect angles. Additionally, the similarity of the Doppler signatures among different fall types makes classification extremely challenging. To address these problems, in this paper we first present a comprehensive experimental study to obtain Doppler radar signals under large and arbitrary aspect angles for diverse types of simulated falls and daily living activities. We then develop a novel, explainable, multi-stream, feature-resonated neural network (eMSFRNet) that achieves fall detection and, for the first time, classification of seven fall types. eMSFRNet is robust to both radar sensing angles and subjects. It is also the first method that can resonate and enhance feature information from noisy/weak Doppler signatures. The multiple feature extractors - including partial pre-trained layers from ResNet, DenseNet, and VGGNet - extract diverse feature information with various spatial abstractions from a pair of Doppler signals. The feature-resonated-fusion design translates the multi-stream features into a single salient feature that is critical to fall detection and classification. eMSFRNet achieved 99.3% accuracy in detecting falls and 76.8% accuracy in classifying seven fall types. Our work is the first effective multistatic robust sensing system that overcomes the challenges associated with Doppler signatures under large and arbitrary aspect angles, via our comprehensible feature-resonated deep neural network. Our work also demonstrates the potential to accommodate different radar monitoring tasks that demand precise and robust sensing.

8.
Respir Res ; 23(1): 105, 2022 Apr 29.
Article in English | MEDLINE | ID: mdl-35488261

ABSTRACT

BACKGROUND: Quantitative computed tomography (QCT) analysis may serve as a tool for assessing the severity of coronavirus disease 2019 (COVID-19) and for monitoring its progress. The present study aimed to assess the association between steroid therapy and quantitative CT parameters in a longitudinal cohort with COVID-19. METHODS: Between February 7 and February 17, 2020, 72 patients with severe COVID-19 were retrospectively enrolled. All 300 chest CT scans from these patients were collected and classified into five stages according to the interval between hospital admission and follow-up CT scans: Stage 1 (at admission); Stage 2 (3-7 days); Stage 3 (8-14 days); Stage 4 (15-21 days); and Stage 5 (22-31 days). QCT was performed using a threshold-based quantitative analysis to segment the lung according to different Hounsfield unit (HU) intervals. The primary outcomes were changes in the percentage of compromised lung volume (%CL, -500 to 100 HU) at different stages. Multivariate generalized estimating equations were performed after adjusting for potential confounders. RESULTS: Of 72 patients, 31 (43.1%) received steroid therapy. Steroid therapy was associated with a decrease in %CL (-3.27% [95% CI, -5.86 to -0.68]; P = 0.01) after adjusting for duration and baseline %CL. Associations between steroid therapy and changes in %CL varied between stages and baseline %CL (all interactions, P < 0.01). Steroid therapy was associated with a decrease in %CL after Stage 3 (all P < 0.05), but not at Stage 2. Similarly, steroid therapy was associated with a more pronounced decrease in %CL in the high-CL group (P < 0.05), but not in the low-CL group. CONCLUSIONS: Steroid administration was independently associated with a decrease in %CL, with interaction by duration or disease severity, in this longitudinal cohort. The quantitative CT parameters, particularly compromised lung volume, may provide a useful tool to monitor COVID-19 progression during treatment. Trial registration: ClinicalTrials.gov NCT04953247. Registered July 7, 2021, https://clinicaltrials.gov/ct2/show/NCT04953247.


Subject(s)
COVID-19 Drug Treatment , Humans , Lung/diagnostic imaging , Lung Volume Measurements/methods , Retrospective Studies , Steroids/therapeutic use
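The threshold-based %CL measure described above reduces to counting lung voxels whose attenuation falls inside the HU interval. A minimal sketch with toy HU values; the function name and arrays are illustrative, not the study's pipeline:

```python
import numpy as np

def percent_compromised(hu_volume, lung_mask, lo=-500, hi=100):
    """Percentage of lung voxels whose HU value falls in the compromised interval."""
    lung = hu_volume[lung_mask]
    compromised = np.logical_and(lung >= lo, lung <= hi).sum()
    return 100.0 * compromised / lung.size

hu = np.array([[-900, -700],    # well-aerated voxels
               [-300,   50]])   # voxels inside the compromised interval
mask = np.ones_like(hu, dtype=bool)
print(percent_compromised(hu, mask))  # 50.0
```

In practice the lung mask would come from a segmentation step, and %CL would be tracked per scan across the five stages.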
9.
Quant Imaging Med Surg ; 12(4): 2344-2355, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35371946

ABSTRACT

Background: It is critical to validate a deep learning-based system on an external dataset before it is used to assist clinical prognoses. The aim of this study was to assess the performance of an artificial intelligence (AI) system in detecting tuberculosis (TB) in a large-scale external dataset. Methods: A deep convolutional neural network (DCNN) was developed to differentiate TB from other common abnormalities of the lung on large-scale chest X-ray radiographs. An internal dataset with 7,025 images from five sources in the U.S. and China was used to develop the AI system, after which a 6-year dynamic cohort accumulation dataset with 358,169 images was used to conduct an independent external validation of the trained AI system. Results: The developed AI system delineated the boundaries of the lung region with a Dice coefficient of 0.958. It achieved an AUC of 0.99 and an accuracy of 0.948 on the internal dataset, and an AUC of 0.95 and an accuracy of 0.931 on the external dataset, when used to detect TB from normal images. The AI system achieved an AUC of more than 0.9 on the internal dataset and over 0.8 on the external dataset when applied to classify TB, non-TB abnormal, and normal images. Conclusions: We conducted a real-world independent validation showing that the trained system can be used as a TB screening tool to flag possible cases for rapid radiologic review and to guide further examinations by radiologists.

10.
Eur Radiol ; 32(4): 2235-2245, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34988656

ABSTRACT

BACKGROUND: Main challenges for COVID-19 include the lack of a rapid diagnostic test, a suitable tool to monitor and predict a patient's clinical course, and an efficient way to share data among multiple centers. We thus developed a novel artificial intelligence system based on deep learning (DL) and federated learning (FL) for the diagnosis, monitoring, and prediction of a patient's clinical course. METHODS: CT images derived from 6 multicenter cohorts were used for a stepwise diagnostic algorithm to diagnose COVID-19, with or without clinical data. Patients with more than 3 consecutive CT scans were used to train the monitoring algorithm. FL was applied for decentralized refinement of independently built DL models. RESULTS: A total of 1,552,988 CT slices from 4,804 patients were used. The model can diagnose COVID-19 based on CT alone, with an AUC of 0.98 (95% CI 0.97-0.99), outperforming the radiologists' assessment. We also successfully tested the incorporation of the DL diagnostic model into the FL framework. Its auto-segmentation analyses correlated well with those by radiologists and achieved a high Dice coefficient of 0.77. It can produce a predictive curve of a patient's clinical course if serial CT assessments are available. INTERPRETATION: The system has high consistency in diagnosing COVID-19 based on CT, with or without clinical data. Alternatively, it can be implemented on an FL platform, which could encourage data sharing in the future. It can also produce an objective predictive curve of a patient's clinical course for visualization. KEY POINTS: • CoviDet could diagnose COVID-19 based on chest CT with high consistency, outperforming the radiologists' assessment. Its auto-segmentation analyses correlated well with those by radiologists and could potentially monitor and predict a patient's clinical course if serial CT assessments are available. It can be integrated into the federated learning framework.
• CoviDet can be used as an adjunct to aid clinicians with the CT diagnosis of COVID-19 and can potentially be used for disease monitoring; federated learning can potentially open opportunities for global collaboration.


Subject(s)
Artificial Intelligence , COVID-19 , Algorithms , Humans , Radiologists , Tomography, X-Ray Computed/methods
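The decentralized refinement step can be sketched with the standard FedAvg rule, weighting each site's model parameters by its sample count; this is a generic illustration of federated averaging, not CoviDet's implementation, and the toy weight vectors are assumptions:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client models weighted by sample counts."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two hospitals with different amounts of local CT data
w = fed_avg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [100, 300])
print(w)  # [2.5 3.5]
```

Each round, sites train locally and only these averaged parameters are exchanged, which is what lets data stay at each center.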
11.
J Xray Sci Technol ; 29(5): 741-762, 2021.
Article in English | MEDLINE | ID: mdl-34397444

ABSTRACT

BACKGROUND AND OBJECTIVE: Monitoring the recovery of coronavirus disease 2019 (COVID-19) patients after hospital discharge is crucial for exploring the residual effects of COVID-19 and beneficial for clinical care. In this study, a comprehensive analysis was carried out to clarify the residual effects of COVID-19 on discharged patients. METHODS: Two hundred sixty-eight cases with laboratory data at hospital discharge and five follow-up visits were retrospectively collected for comprehensive statistical analysis, using multiple methods (e.g., chi-square tests, t-tests, and regression). RESULTS: The study found that 13 of 21 hematologic parameters in the laboratory dataset, as well as the volume ratio of right-lung lesions on CT images, were highly associated with COVID-19. Moderate patients had statistically significantly lower neutrophil counts than mild and severe patients after discharge, probably because severe patients received more clinical attention while moderate patients were somewhat neglected. COVID-19 had residual effects on the neutrophil-to-lymphocyte ratio (NLR) of patients with hypertension or chronic obstructive pulmonary disease (COPD). After discharge, females recovered better in T-lymphocyte subset cells, especially T helper lymphocyte% (16% higher than males). Given this sex-based differentiation of COVID-19, males should be recommended to undergo clinical tests more frequently to monitor immune system recovery. Patients over 60 years old showed an unstable recovery of immune cells (e.g., CD45+ lymphocytes) within 75 days after discharge, requiring longer clinical care. Additionally, the right lung was more vulnerable to COVID-19 and required more time to recover than the left lung. CONCLUSIONS: Criteria for hospital discharge and strategies for clinical care should be flexible across cases because of the residual effects of COVID-19, which depend on several factors. Revealing the remaining effects of COVID-19 is also an effective way to address mental health disorders caused by COVID-19 infection.


Subject(s)
COVID-19/diagnosis , Patient Discharge/statistics & numerical data , Adolescent , Adult , Aged , Aged, 80 and over , Biomarkers/blood , China , Female , Humans , Longitudinal Studies , Lung/diagnostic imaging , Male , Middle Aged , Retrospective Studies , SARS-CoV-2 , Tomography, X-Ray Computed , Young Adult
12.
J Med Imaging (Bellingham) ; 8(Suppl 1): 014501, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33415179

ABSTRACT

Purpose: Given the COVID-19 pandemic and its stress on global medical resources, we present the development of a machine intelligence method for thoracic computed tomography (CT) to inform the management of patients on steroid treatment. Approach: Transfer learning has demonstrated strong performance when applied to medical imaging, particularly when only limited data are available. A cascaded transfer learning approach extracted quantitative features from thoracic CT sections using a fine-tuned VGG19 network. The extracted slice features were axially pooled to provide a CT-scan-level representation of thoracic characteristics, and a support vector machine was trained to distinguish between patients who required steroid administration and those who did not, with performance evaluated through receiver operating characteristic (ROC) curve analysis. Least-squares fitting was used to assess temporal trends in the transfer learning-based predictions, providing a preliminary method for monitoring disease progression. Results: In the task of identifying patients who should receive steroid treatment, this approach yielded an area under the ROC curve of 0.85 ± 0.10 and demonstrated significant separation between patients who received steroids and those who did not. Furthermore, the temporal trend of the prediction score matched the expected progression during hospitalization for both groups, with separation at early timepoints followed by convergence near the end of hospitalization. Conclusions: The proposed cascaded deep learning method has strong clinical potential for informing clinical decision-making and monitoring patient treatment.
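The area under the ROC curve reported above can be computed directly from prediction scores via the Mann-Whitney formulation: AUC equals the probability that a randomly chosen positive case scores above a randomly chosen negative one. A generic sketch with toy scores, not the study's evaluation code:

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the fraction of positive/negative pairs ranked correctly."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5   # ties count half
    return wins / (len(scores_pos) * len(scores_neg))

# Toy SVM decision scores for steroid vs. non-steroid patients
print(auc_mann_whitney([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # ≈ 0.889
```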

13.
IISE Trans ; 53(9): 1010-1022, 2021.
Article in English | MEDLINE | ID: mdl-37397785

ABSTRACT

Multimodality datasets are becoming increasingly common in various domains to provide complementary information for predictive analytics. One significant challenge in fusing multimodality data is that the multiple modalities are not universally available for all samples due to cost and accessibility constraints. This results in a unique data structure called Incomplete Multimodality Dataset (IMD). We propose a novel Incomplete-Multimodality Transfer Learning (IMTL) model that builds a predictive model for each sub-cohort of samples with the same missing modality pattern, and meanwhile couples the model estimation processes for different sub-cohorts to allow for transfer learning. We develop an Expectation-Maximization (EM) algorithm to estimate the parameters of IMTL and further extend it to a collaborative learning paradigm that is specifically valuable for patient privacy preservation in health care applications. We prove two advantageous properties of IMTL: the ability for out-of-sample prediction and a theoretical guarantee for a larger Fisher information compared with models without transfer learning. IMTL is applied to diagnosis and prognosis of the Alzheimer's Disease (AD) at an early stage called Mild Cognitive Impairment (MCI) using incomplete multimodality imaging data. IMTL achieves higher accuracy than competing methods without transfer learning. Supplementary materials are available for this article on the publisher's website.
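The EM estimation used by IMTL follows the usual alternation of an E-step (compute expected quantities under the current parameters) and an M-step (re-estimate the parameters). A toy EM for a 1-D mixture of two unit-variance Gaussians shows the template; it is deliberately much simpler than the IMTL model, and all names and data are illustrative:

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Tiny EM for a 1-D mixture of two unit-variance Gaussians."""
    mu = np.array([x.min(), x.max()])   # crude initialization
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        d = np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2) * pi
        r = d / d.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights and component means
        pi = r.mean(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    return mu, pi

x = np.concatenate([np.full(50, -2.0), np.full(50, 2.0)])
mu, pi = em_two_gaussians(x)
# mu ≈ [-2, 2], pi ≈ [0.5, 0.5]
```

In IMTL the "missing" quantities handled by the E-step are the absent modalities of each sub-cohort rather than latent cluster labels, but the alternating structure is the same.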

14.
J Xray Sci Technol ; 29(1): 1-17, 2021.
Article in English | MEDLINE | ID: mdl-33164982

ABSTRACT

BACKGROUND: Accurate and rapid diagnosis of coronavirus disease (COVID-19) is crucial for timely quarantine and treatment. PURPOSE: In this study, a deep learning-based AI model using the ResUNet network was developed to evaluate the performance of radiologists, with and without AI assistance, in distinguishing COVID-19-infected pneumonia patients from other pulmonary infections on CT scans. METHODS: For model development and validation, a total of 694 cases with 111,066 CT slices were retrospectively collected as training data and independent test data. Among them, 118 were confirmed COVID-19-infected pneumonia cases and 576 were other pulmonary infection cases (e.g., tuberculosis, common pneumonia, and non-COVID-19 viral pneumonia). The cases were divided into training and testing datasets. The independent test evaluated and compared the performance of three radiologists with different years of practice experience in distinguishing COVID-19-infected pneumonia cases, with and without AI assistance. RESULTS: Our final model achieved an overall test accuracy of 0.914 with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.903, with sensitivity and specificity of 0.918 and 0.909, respectively. With AI assistance, the radiologists' average accuracy and sensitivity in distinguishing COVID-19 from other pulmonary infections improved from 0.941 to 0.951 and from 0.895 to 0.942, respectively, compared to reading without AI assistance. CONCLUSION: The deep learning-based AI model developed in this study successfully improved radiologists' performance in distinguishing COVID-19 from other pulmonary infections using chest CT images.


Subject(s)
Artificial Intelligence , COVID-19/diagnostic imaging , Radiologists , Tomography, X-Ray Computed/methods , Adult , Aged , Algorithms , Clinical Competence/statistics & numerical data , Deep Learning , Diagnosis, Differential , Female , Humans , Lung/diagnostic imaging , Lung/pathology , Male , Middle Aged , Radiologists/statistics & numerical data , Respiratory Tract Infections/diagnostic imaging , SARS-CoV-2 , Sensitivity and Specificity , Young Adult
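The accuracy, sensitivity, and specificity figures above come straight from confusion-matrix counts. A minimal sketch with toy labels, illustrative only:

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives), and specificity."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return acc, sens, spec

# 1 = COVID-19, 0 = other pulmonary infection (toy data)
acc, sens, spec = confusion_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
# acc = 0.8, sens = 2/3, spec = 1.0
```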
15.
J Xray Sci Technol ; 28(5): 939-951, 2020.
Article in English | MEDLINE | ID: mdl-32651351

ABSTRACT

OBJECTIVE: Diagnosis of tuberculosis (TB) in multi-slice spiral computed tomography (CT) images is a difficult task in many TB-prevalent locations where experienced radiologists are lacking. To address this difficulty, we developed an automated detection system based on artificial intelligence (AI) to simplify the diagnostic process for active tuberculosis (ATB) and improve diagnostic accuracy using CT images. DATA: A CT image dataset of 846 patients was retrospectively collected from a large teaching hospital. The gold standard for ATB patients was the sputum smear, and the gold standard for normal and pneumonia patients was the CT report. The dataset was divided into independent training and testing subsets. The training data contained 337 ATB, 110 pneumonia, and 120 normal cases, while the testing data contained 139 ATB, 40 pneumonia, and 100 normal cases. METHODS: A U-Net deep learning algorithm was applied for automatic detection and segmentation of ATB lesions. Image processing methods were then applied to the CT layers diagnosed as containing ATB lesions by the U-Net, which can detect potentially misdiagnosed layers and turn 2D ATB lesions into 3D lesions based on consecutive U-Net annotations. Finally, the independent test data were used to evaluate the performance of the developed AI tool. RESULTS: On the independent test, the AI tool yielded an AUC value of 0.980. Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were 0.968, 0.964, 0.971, 0.971, and 0.964, respectively, showing that the AI tool performs well in detecting ATB and differentiating it from non-ATB cases (i.e., pneumonia and normal cases). CONCLUSION: An AI tool for automatic detection of ATB in chest CT was successfully developed in this study. The AI tool can accurately detect ATB patients and distinguish between ATB and non-ATB cases, which simplifies the diagnostic process and lays a solid foundation for the clinical application of AI in CT diagnosis of ATB.


Subject(s)
Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Tuberculosis, Pulmonary/diagnostic imaging , Adolescent , Adult , Aged , Aged, 80 and over , Algorithms , Child , Child, Preschool , Female , Humans , Lung/diagnostic imaging , Male , Middle Aged , Young Adult
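Turning 2D detections into 3D lesions from consecutive slice annotations can be sketched by grouping runs of adjacent slice indices. This simplification ignores the in-plane overlap checks a full pipeline would also apply; the function name and indices are illustrative:

```python
def group_consecutive(slice_indices):
    """Group sorted slice indices with consecutive detections into 3-D lesions."""
    groups, current = [], [slice_indices[0]]
    for i in slice_indices[1:]:
        if i == current[-1] + 1:
            current.append(i)      # same lesion continues on the next slice
        else:
            groups.append(current) # gap: start a new 3-D lesion
            current = [i]
    groups.append(current)
    return groups

# U-Net flagged slices 3-5 and 9-10: two candidate 3-D lesions
print(group_consecutive([3, 4, 5, 9, 10]))  # [[3, 4, 5], [9, 10]]
```

An isolated single-slice "lesion" surfaced by this grouping is also a natural candidate for the misdiagnosed-layer check the abstract mentions.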
16.
J Xray Sci Technol ; 28(5): 885-892, 2020.
Article in English | MEDLINE | ID: mdl-32675436

ABSTRACT

In this article, we analyze and report the cases of three patients who were admitted to Renmin Hospital, Wuhan University, China, for treatment of COVID-19 pneumonia in February 2020 and were unresponsive to initial steroid treatment. They then received titrated steroid treatment based on the assessment of computed tomography (CT) images augmented and analyzed with an artificial intelligence (AI) tool. All three patients ultimately recovered and were discharged. The results indicate that sufficient steroids may be effective in treating COVID-19 patients when the dose is frequently evaluated and timely adjusted according to disease severity assessed through quantitative analysis of serial CT scans.


Subject(s)
Coronavirus Infections/diagnostic imaging , Coronavirus Infections/drug therapy , Glucocorticoids/therapeutic use , Pneumonia, Viral/diagnostic imaging , Pneumonia, Viral/drug therapy , Tomography, X-Ray Computed/methods , Aged , Artificial Intelligence , Betacoronavirus , COVID-19 , China , Coronavirus Infections/pathology , Coronavirus Infections/physiopathology , Dose-Response Relationship, Drug , Female , Humans , Lung/diagnostic imaging , Lung/drug effects , Lung/pathology , Lung/physiopathology , Male , Middle Aged , Pandemics , Pneumonia, Viral/pathology , Pneumonia, Viral/physiopathology , Retrospective Studies , SARS-CoV-2
17.
J Xray Sci Technol ; 28(3): 391-404, 2020.
Article in English | MEDLINE | ID: mdl-32538893

ABSTRACT

Recently, COVID-19 has spread to more than 100 countries and regions around the world, raising grave global concerns. COVID-19 transmits mainly through respiratory droplets and close contact, causing cluster infections. The dominant symptoms are fever, fatigue, and dry cough, which may be accompanied by tiredness, sore throat, and headache. A few patients have symptoms such as stuffy nose, runny nose, and diarrhea. Severe disease can progress rapidly into acute respiratory distress syndrome (ARDS). Reverse transcription polymerase chain reaction (RT-PCR) and next-generation sequencing (NGS) are the gold standard for diagnosing COVID-19, with chest imaging used for cross-validation. Chest CT is highly recommended as the preferred imaging method for COVID-19 due to its high density resolution and high spatial resolution. The common CT manifestations of COVID-19 include multiple segmental ground-glass opacities (GGOs) distributed dominantly in peripheral/subpleural zones and along bronchovascular bundles, with crazy-paving sign, interlobular septal thickening, and consolidation. Pleural effusion or mediastinal lymphadenopathy is rarely seen. On CT imaging, COVID-19 manifests differently across its stages: the early stage, the progression (consolidation) stage, and the absorption stage. In the early stage, it manifests as scattered flaky GGOs of various sizes, dominated by peripheral pulmonary/subpleural distributions. In the progression stage, GGOs increase in number and/or size, and lung consolidations may become visible. The main manifestation in the absorption stage is interstitial change in both lungs, such as fibrous cords and reticular opacities. Differentiation between COVID-19 pneumonia and other viral pneumonias is also analyzed. Thus, CT examination can help reduce false negatives of nucleic acid tests.


Subject(s)
Betacoronavirus/pathogenicity , Coronavirus Infections/diagnosis , Coronavirus Infections/pathology , Lung/diagnostic imaging , Lung/pathology , Pneumonia, Viral/diagnosis , Pneumonia, Viral/pathology , Tomography, X-Ray Computed/methods , COVID-19 , Coronavirus Infections/complications , Diagnosis, Differential , Disease Progression , Humans , Pandemics , Pleural Effusion/etiology , Pleural Effusion/pathology , Pneumonia, Viral/complications , Real-Time Polymerase Chain Reaction , SARS-CoV-2
18.
Transl Res ; 194: 56-67, 2018 04.
Article in English | MEDLINE | ID: mdl-29352978

ABSTRACT

Alzheimer's disease (AD) is a major neurodegenerative disease and the most common cause of dementia. Currently, no treatment exists to slow or stop the progression of AD. There is a converging belief that disease-modifying treatments should focus on the early stages of the disease, that is, the mild cognitive impairment (MCI) and preclinical stages. Making a diagnosis of AD and offering a prognosis (the likelihood of converting to AD) at these early stages are challenging tasks, but they are possible with the help of multimodality imaging, such as magnetic resonance imaging (MRI), fluorodeoxyglucose (FDG)-positron emission tomography (PET), amyloid-PET, and the recently introduced tau-PET, which provide different but complementary information. This article is a focused review of research in the recent decade that used statistical machine learning and artificial intelligence methods to perform quantitative analysis of multimodality image data for diagnosis and prognosis of AD at the MCI or preclinical stages. We review the existing work in three subareas: diagnosis, prognosis, and methods for handling modality-wise missing data, a commonly encountered problem when using multimodality imaging for prediction or classification. Factors contributing to missing data include lack of imaging equipment, cost, difficulty of obtaining patient consent, and patient dropout (in longitudinal studies). Finally, we summarize our major findings and provide recommendations for potential future research directions.
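The modality-wise missing-data problem this review highlights can be illustrated with a minimal sketch. The feature values below are hypothetical, and column-wise mean imputation is only one of the simpler strategies such a survey would cover:

```python
import numpy as np

# Hypothetical subjects-by-features matrix: columns 0-1 are MRI features,
# columns 2-3 are FDG-PET features. NaN marks a missing modality.
X = np.array([
    [1.0, 2.0, 0.5, 0.7],
    [1.2, 1.8, np.nan, np.nan],  # subject scanned with MRI only
    [0.8, 2.2, 0.6, 0.9],
])

# Column-wise mean imputation: fill each missing value with the mean
# computed over the subjects who do have that modality.
col_means = np.nanmean(X, axis=0)
X_imputed = np.where(np.isnan(X), col_means, X)
print(X_imputed)
```

More sophisticated approaches reviewed in this literature model the missing modality jointly with the prediction task rather than imputing it independently.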


Subject(s)
Alzheimer Disease/diagnostic imaging , Artificial Intelligence , Multimodal Imaging/methods , Cognitive Dysfunction/diagnostic imaging , Fluorodeoxyglucose F18 , Humans , Machine Learning , Magnetic Resonance Imaging , Positron-Emission Tomography , Prognosis
19.
Med Phys ; 42(6): 2853-62, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26127038

ABSTRACT

PURPOSE: To help improve the efficacy of screening mammography by eventually establishing a new, optimal personalized screening paradigm, the authors investigated the potential of using quantitative multiscale texture and density feature analysis of digital mammograms to predict near-term breast cancer risk. METHODS: The authors' dataset includes digital mammograms acquired from 340 women, of whom 141 were positive and 199 were negative/benign cases. The negative digital mammograms acquired in the "prior" screening examinations were used in the study. Based on intensity value distributions, five subregions at different scales were extracted from each mammogram. Five groups of features, including density and texture features, were computed on each subregion. Sequential forward floating selection was used to search for effective feature combinations. Using the selected features, a support vector machine (SVM) was optimized with tenfold cross-validation to predict the risk of each woman having image-detectable cancer at the next sequential mammography screening. The area under the receiver operating characteristic curve (AUC) was used as the performance assessment index. RESULTS: From a total of 765 features computed from the multiscale subregions, an optimal set of 12 features was selected. Applying this feature set, an SVM classifier yielded a performance of AUC = 0.729 ± 0.021. The positive predictive value was 0.657 (92 of 140) and the negative predictive value was 0.755 (151 of 200). CONCLUSIONS: The results demonstrated a moderately high positive association between the risk prediction scores generated by quantitative multiscale mammographic image feature analysis and the actual risk of a woman having image-detectable breast cancer in subsequent examinations.
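The evaluation pipeline this abstract describes (feature selection, an SVM, tenfold validation, AUC) can be sketched roughly with scikit-learn. Synthetic data stands in for the authors' mammographic features, the problem is scaled down (20 features, 5 selected, rather than 765 and 12) to keep the sketch fast, and plain sequential forward selection stands in for the paper's floating variant (SFFS), which scikit-learn does not provide out of the box:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 340-case mammographic feature dataset.
X, y = make_classification(n_samples=340, n_features=20, n_informative=5,
                           random_state=0)

# Forward feature selection with a linear SVM as the wrapper estimator.
selector = SequentialFeatureSelector(SVC(kernel="linear"),
                                     n_features_to_select=5,
                                     direction="forward", cv=3)

clf = make_pipeline(StandardScaler(), selector, SVC(probability=True))

# Tenfold cross-validated AUC, mirroring the paper's assessment index.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"AUC = {aucs.mean():.3f} +/- {aucs.std():.3f}")
```

Putting the selector inside the pipeline ensures feature selection is refit on each training fold, avoiding the selection bias that leaks in when features are chosen on the full dataset first.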


Subject(s)
Breast Neoplasms/diagnostic imaging , Breast/cytology , Breast/pathology , Image Processing, Computer-Assisted/methods , Female , Humans , Mammography , Predictive Value of Tests , ROC Curve , Radiographic Image Enhancement , Risk Assessment , Support Vector Machine
20.
Comput Med Imaging Graph ; 38(5): 348-57, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24725671

ABSTRACT

Asymmetry of bilateral mammographic tissue density and patterns is a potentially strong indicator of having or developing breast abnormalities or early cancers. The purpose of this study was to design and test global asymmetry features from bilateral mammograms to predict the near-term risk of women developing detectable high-risk breast lesions or cancer at the next sequential screening mammography examination. The image dataset includes mammograms acquired from 90 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, namely image preprocessing, suspicious region segmentation, image feature extraction, and classification, was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) was 0.754 ± 0.024 when the new computer-aided diagnosis (CAD) scheme was applied to our testing dataset. The positive predictive value and the negative predictive value were 0.58 and 0.80, respectively.
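The asymmetry computation at the core of such a scheme can be illustrated with a minimal sketch: hypothetical per-breast feature vectors are reduced to normalized absolute-difference asymmetry features. The actual pipeline also includes the preprocessing, segmentation, and classification modules the abstract lists:

```python
import numpy as np

# Hypothetical per-breast feature vectors (e.g., density and texture
# statistics) extracted from a bilateral mammogram pair.
left_features = np.array([0.42, 1.30, 7.1, 0.88])
right_features = np.array([0.40, 1.55, 6.4, 0.91])

# Global asymmetry features: absolute left-right difference for each
# feature, normalized by the bilateral mean so features on different
# scales remain comparable.
asymmetry = np.abs(left_features - right_features) / (
    (left_features + right_features) / 2.0)
print(asymmetry.round(3))
```

A classifier trained on such asymmetry features would then score each woman's near-term risk, as in the scheme described above.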


Subject(s)
Breast Neoplasms/diagnostic imaging , Mammography/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Breast Density , Female , Humans , Mammary Glands, Human/abnormalities , Predictive Value of Tests , Risk Assessment