Results 1 - 8 of 8
1.
Article in English | MEDLINE | ID: mdl-38465203

ABSTRACT

Whole-head segmentation from magnetic resonance images (MRI) establishes the foundation for individualized computational models built with the finite element method (FEM). This foundation paves the path for computer-aided solutions in many fields, particularly non-invasive brain stimulation. Most current automatic head segmentation tools are developed using MRIs of healthy young adults; thus, they may neglect the older population, which is more prone to age-related structural decline such as brain atrophy. In this work, we present a new deep learning method called GRACE, which stands for General, Rapid, And Comprehensive whole-hEad tissue segmentation. GRACE is trained and validated on a novel dataset of 177 manually corrected MR-derived reference segmentations. Each T1-weighted MRI volume is segmented into 11 tissue types: white matter, grey matter, eyes, cerebrospinal fluid, air, blood vessel, cancellous bone, cortical bone, skin, fat, and muscle. To the best of our knowledge, this is the largest manually corrected dataset to date in terms of number of MRIs and segmented tissues. GRACE outperforms five freely available software tools and a traditional 3D U-Net on a five-tissue segmentation task, achieving an average Hausdorff distance of 0.21 versus 0.36 for the runner-up (lower is better). GRACE segments a whole-head MRI in about 3 seconds, while the fastest competing tool takes about 3 minutes. In summary, GRACE segments a spectrum of tissue types from older adults' T1-weighted MRI scans with favorable accuracy and speed, and the trained model is optimized on older adult heads to enable high-precision modeling in age-related brain disorders. To support open science, the GRACE code and trained weights are available to the research community at https://github.com/lab-smile/GRACE.
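As an illustration of the boundary-based metric reported above, here is a minimal pure-Python sketch of the symmetric Hausdorff distance between two 2D boundary point sets. The point sets and scale are invented for illustration and are not drawn from the GRACE evaluation:

```python
def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two 2D point sets:
    the largest distance from any point in one set to its
    nearest neighbor in the other set, taken in both directions."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def directed(xs, ys):
        return max(min(dist(x, y) for y in ys) for x in xs)

    return max(directed(a, b), directed(b, a))

# Hypothetical boundary points from a predicted and a reference segmentation
pred = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
ref = [(0.0, 0.2), (1.0, 0.0), (1.0, 1.1)]
print(hausdorff_distance(pred, ref))
```

In practice the distance would be computed over 3D surface voxels of each tissue class, but the max-of-min structure is the same.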

2.
Brain Stimul ; 16(3): 969-974, 2023.
Article in English | MEDLINE | ID: mdl-37279860

ABSTRACT

BACKGROUND: Transcranial direct current stimulation (tDCS) paired with cognitive training (CT) is widely investigated as a therapeutic tool to enhance cognitive function in older adults with and without neurodegenerative disease. Prior research demonstrates that the level of benefit from tDCS paired with CT varies from person to person, likely due to individual differences in neuroanatomical structure. OBJECTIVE: The current study aims to develop a method to objectively optimize and personalize current dosage to maximize the functional gains of non-invasive brain stimulation. METHODS: A support vector machine (SVM) model was trained to predict treatment response based on computational models of current density in a sample dataset (n = 14). Feature weights of the deployed SVM were used in a weighted Gaussian mixture model (GMM) to maximize the likelihood of converting tDCS non-responders to responders by finding the optimal electrode montage and applied current intensity (optimized models). RESULTS: Current distributions optimized by the proposed SVM-GMM model demonstrated 93% voxel-wise coherence within target brain regions between the original non-responders and responders. The optimized current distribution in original non-responders was 3.38 standard deviations closer to the current dose of responders compared to the pre-optimized models. Optimized models also achieved an average treatment response likelihood and normalized mutual information of 99.993% and 91.21%, respectively. Following tDCS dose optimization, the SVM model predicted all tDCS non-responders with optimized doses as responders. CONCLUSIONS: The results of this study serve as a foundation for a custom dose optimization strategy toward precision medicine in tDCS, aimed at improving outcomes in cognitive decline remediation for older adults.
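The core idea of scoring candidate doses against a responder distribution, with dimensions weighted by SVM importance, can be caricatured in a few lines. Everything below (feature values, weights, the single-Gaussian simplification, and the intensity grid) is invented for illustration and is not the paper's SVM-GMM implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical current-density features for responders (n=7, 3 ROIs)
responders = rng.normal(loc=[0.8, 0.5, 0.3], scale=0.05, size=(7, 3))
mu, var = responders.mean(axis=0), responders.var(axis=0) + 1e-6

# Feature weights as a linear SVM might assign (assumed, not the paper's)
w = np.array([0.6, 0.3, 0.1])

def weighted_log_likelihood(x):
    # Diagonal-Gaussian log-likelihood, reweighted toward SVM-important ROIs
    ll = -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return float(np.sum(w * ll))

# Grid-search a global intensity scale for a non-responder's dose pattern
base = np.array([0.5, 0.3, 0.2])  # hypothetical non-responder field
scales = np.linspace(0.5, 3.0, 51)
best = max(scales, key=lambda s: weighted_log_likelihood(s * base))
print(f"best intensity scale: {best:.2f}")
```

The actual study additionally searches over electrode montages and uses a mixture rather than a single Gaussian, but the optimization target, the weighted likelihood of resembling a responder, has this shape.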


Subject(s)
Neurodegenerative Diseases , Transcranial Direct Current Stimulation , Humans , Aged , Transcranial Direct Current Stimulation/methods , Cognition , Brain/physiology , Electrodes
3.
Softw Impacts ; 15, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37091721

ABSTRACT

Deep learning has achieved state-of-the-art performance across medical imaging tasks; however, model calibration is often not considered. Uncalibrated models are potentially dangerous in high-risk applications because the user does not know when they will fail. This paper therefore proposes a novel domain-aware loss function to calibrate deep learning models. The proposed loss function applies a class-wise penalty based on the similarity between classes within a given target domain. The approach thus improves calibration while also ensuring that the model makes less risky errors even when incorrect. The code for this software is available at https://github.com/lab-smile/DOMINO.
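A minimal sketch of the general idea, a class-wise penalty added to cross-entropy and weighted by an assumed class-similarity matrix, is shown below. This illustrates the concept only; the penalty matrix, the additive combination, and all values are assumptions, not the exact DOMINO loss:

```python
import numpy as np

# Hypothetical penalty matrix W: W[true, pred] is the cost of confusing
# that pair (0 on the diagonal, higher for more dissimilar tissue classes)
W = np.array([[0.0, 0.2, 1.0],
              [0.2, 0.0, 0.8],
              [1.0, 0.8, 0.0]])

def domain_aware_loss(logits, y, W, alpha=1.0):
    """Cross-entropy plus an expected class-confusion penalty
    (a sketch of the idea, not the published formulation)."""
    z = logits - logits.max(axis=1, keepdims=True)   # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    ce = -np.log(p[np.arange(len(y)), y])            # standard cross-entropy
    penalty = (p * W[y]).sum(axis=1)                 # expected confusion cost
    return float(np.mean(ce + alpha * penalty))

logits = np.array([[2.0, 1.0, -1.0], [0.5, 2.0, 0.0]])
y = np.array([0, 1])
print(domain_aware_loss(logits, y, W))
```

Because the penalty is the model's expected confusion cost under `W`, probability mass placed on classes dissimilar to the true one is punished more than mass on similar classes, which is the "less risky errors" behavior described above.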

4.
BMC Med Inform Decis Mak ; 22(Suppl 3): 255, 2022 09 27.
Article in English | MEDLINE | ID: mdl-36167551

ABSTRACT

BACKGROUND: Diabetic retinopathy (DR) is a leading cause of blindness in American adults. If detected early, DR can be treated to prevent further damage and blindness. There is increasing interest in developing artificial intelligence (AI) technologies to help detect DR using electronic health records. The lesion-related information documented in fundus image reports is a valuable resource that could support DR diagnosis in clinical decision support systems. However, most studies of AI-based DR diagnosis rely on medical images; few have explored the lesion-related information captured in free-text image reports. METHODS: In this study, we examined two state-of-the-art transformer-based natural language processing (NLP) models, BERT and RoBERTa, and compared them with a recurrent neural network implemented using long short-term memory (LSTM) for extracting DR-related concepts from clinical narratives. We identified four categories of DR-related clinical concepts (lesions, eye parts, laterality, and severity), developed annotation guidelines, annotated a DR corpus of 536 image reports, and developed transformer-based NLP models for clinical concept extraction and relation extraction. We also examined relation extraction under two settings: a 'gold-standard' setting, where gold-standard concepts were used, and an end-to-end setting. RESULTS: For concept extraction, the BERT model pretrained on the MIMIC III dataset achieved the best performance (0.9503 and 0.9645 for strict and lenient evaluation, respectively). For relation extraction, the BERT model pretrained on general English text achieved the best strict/lenient F1-score of 0.9316. The end-to-end system, BERT_general_e2e, achieved the best strict and lenient F1-scores of 0.8578 and 0.8881, respectively. Another end-to-end system based on the RoBERTa architecture, RoBERTa_general_e2e, matched BERT_general_e2e on strict scores.
CONCLUSIONS: This study demonstrated the effectiveness of transformer-based NLP models for clinical concept extraction and relation extraction. Our results show that pretraining transformer models on clinical text is necessary to optimize performance for clinical concept extraction, whereas for relation extraction, transformers pretrained on general English text performed better.
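The strict versus lenient evaluation distinction can be sketched with a toy span-level F1: strict requires exact boundaries, lenient accepts any overlapping span with the same label (a common NER convention). The spans and labels below are hypothetical, not from the DR corpus:

```python
def f1(pred, gold, lenient=False):
    """Micro F1 over (start, end, label) spans. Lenient mode counts
    any same-label overlap as a match; strict requires exact offsets."""
    def match(p, g):
        if p[2] != g[2]:
            return False
        if lenient:
            return p[0] < g[1] and g[0] < p[1]   # spans overlap
        return p[:2] == g[:2]                    # exact boundaries

    tp = sum(any(match(p, g) for g in gold) for p in pred)
    prec = tp / len(pred) if pred else 0.0
    rec = sum(any(match(p, g) for p in pred) for g in gold) / len(gold)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = [(0, 5, "lesion"), (10, 14, "eye_part")]
pred = [(0, 5, "lesion"), (11, 14, "eye_part")]   # second span off by one
print(f1(pred, gold), f1(pred, gold, lenient=True))
```

The off-by-one boundary costs the strict score but not the lenient one, which is why the two numbers reported in RESULTS differ.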


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Artificial Intelligence , Blindness , Diabetic Retinopathy/diagnosis , Electronic Health Records , Humans , Natural Language Processing
5.
Brain Stimul ; 13(6): 1753-1764, 2020.
Article in English | MEDLINE | ID: mdl-33049412

ABSTRACT

BACKGROUND: Transcranial direct current stimulation (tDCS) is widely investigated as a therapeutic tool to enhance cognitive function in older adults with and without neurodegenerative disease. Prior research demonstrates that electric current delivery to the brain can vary significantly across individuals. Quantifying this variability could enable person-specific optimization of tDCS outcomes. This pilot study used machine learning and MRI-derived electric field models to predict working memory improvements as a proof of concept for precision cognitive intervention. METHODS: Fourteen healthy older adults received 20 minutes of 2 mA tDCS (F3/F4) during a two-week cognitive training intervention. Participants performed an N-back working memory task pre- and post-intervention. MRI-derived current models were passed to a linear support vector machine (SVM) learning algorithm to characterize the tDCS current components (intensity and direction) that induced working memory improvements in tDCS responders versus non-responders. MAIN RESULTS: SVM models of tDCS current components classified treatment responders vs. non-responders with 86% overall accuracy, with current intensity producing the best overall model differentiating changes in working memory performance. Median current intensity and direction in brain regions near the electrodes were positively related to intervention responses (r = 0.811, p < 0.001 and r = 0.774, p = 0.001, respectively). CONCLUSIONS: This study provides the first evidence that pattern recognition analyses of MRI-derived tDCS current models can provide individual prognostic classification of tDCS treatment response with 86% accuracy. Individual differences in current intensity and direction play important roles in determining treatment response to tDCS. These findings provide insight into the mechanisms of tDCS response as well as proof of concept for future precision dosing models of tDCS intervention.
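The reported brain-behavior relationships are Pearson correlations. A minimal pure-Python sketch, using invented intensity and working-memory change values rather than the study's data, is:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical median current intensities (A/m^2) near the electrodes
# vs. hypothetical working-memory change scores for six participants
intensity = [0.12, 0.18, 0.15, 0.22, 0.10, 0.25]
wm_change = [1.0, 2.1, 1.6, 2.8, 0.7, 3.2]
print(round(pearson_r(intensity, wm_change), 3))
```

A value near +1 would indicate, as in the study, that higher current intensity in electrode-adjacent regions tracks larger behavioral gains.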


Subject(s)
Brain/diagnostic imaging , Individuality , Machine Learning , Transcranial Direct Current Stimulation/methods , Aged , Aged, 80 and over , Brain/physiology , Cognition/physiology , Double-Blind Method , Female , Forecasting , Humans , Machine Learning/trends , Magnetic Resonance Imaging/methods , Male , Memory, Short-Term/physiology , Pilot Projects , Transcranial Direct Current Stimulation/trends , Treatment Outcome
6.
Med Image Anal ; 64: 101742, 2020 08.
Article in English | MEDLINE | ID: mdl-32540699

ABSTRACT

Diabetic retinopathy (DR) is a highly prevalent complication of diabetes in which individuals suffer damage to the blood vessels in the retina. The disease manifests through lesion presence, starting with microaneurysms in the nonproliferative stage, before being characterized by neovascularization in the proliferative stage. Retinal specialists strive to detect DR early so that the disease can be treated before substantial, irreversible vision loss occurs. The level of DR severity indicates the extent of treatment necessary: in mild (early) stages, vision loss may be preventable by effective diabetes management rather than by subjecting the patient to invasive laser surgery. Using artificial intelligence (AI), highly accurate and efficient systems can be developed to assist medical professionals in screening and diagnosing DR earlier and without the full resources available in specialty clinics. In particular, deep learning facilitates earlier diagnosis with higher sensitivity and specificity. Such systems make decisions based on minimally handcrafted features and pave the way for personalized therapies. This survey therefore provides a comprehensive description of the current technology used in each step of DR diagnosis. It begins with an introduction to the disease and the current technologies and resources available in this space, then discusses the frameworks that different teams have used to detect and classify DR. Ultimately, we conclude that deep learning systems offer revolutionary potential for DR identification and the prevention of vision loss.


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Artificial Intelligence , Diabetic Retinopathy/diagnostic imaging , Humans , Mass Screening , Retina , Sensitivity and Specificity
7.
Med Image Anal ; 61: 101654, 2020 04.
Article in English | MEDLINE | ID: mdl-32066065

ABSTRACT

Objective and quantitative assessment of fundus image quality is essential for the diagnosis of retinal diseases. The major factors in fundus image quality assessment are image artifact, clarity, and field definition. Unfortunately, most existing quality assessment methods focus on overall image quality without providing interpretable feedback for real-time adjustment. Furthermore, these models are often sensitive to specific imaging devices and cannot generalize well under different imaging conditions. This paper presents a new multi-task domain adaptation framework to automatically assess fundus image quality. The proposed framework provides interpretable quality assessment with both quantitative scores and quality visualization for potential real-time image recapture with proper adjustment. In particular, the approach can detect optic disc and fovea structures as landmarks to assist the assessment through coarse-to-fine feature encoding. The framework also exploits semi-tied adversarial discriminative domain adaptation to make the model generalizable across different data sources. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art approaches and achieves an area under the ROC curve of 0.9455 for overall quality classification.
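The reported AUC can be computed via the rank-sum (Mann-Whitney) identity: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal sketch with hypothetical quality scores and gradability labels (not the paper's data) follows:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the fraction of positive/negative pairs ranked correctly, with ties
    counted as half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model quality scores vs. good-quality (1) / poor (0) labels
scores = [0.9, 0.8, 0.75, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(auc(scores, labels))
```

An AUC of 0.9455, as reported, means roughly 94.6% of such positive/negative pairs are ordered correctly by the model's score.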


Subject(s)
Fundus Oculi , Image Interpretation, Computer-Assisted/methods , Machine Learning , Optic Disk/diagnostic imaging , Photography , Retinal Diseases/diagnostic imaging , Artifacts , Datasets as Topic , Humans
8.
Front Neurol ; 10: 647, 2019.
Article in English | MEDLINE | ID: mdl-31297079

ABSTRACT

Computed tomography perfusion (CTP) imaging is a cost-effective and fast approach for providing diagnostic images in acute stroke treatment. Its cine scanning mode allows visualization of anatomic brain structures and blood flow; however, it requires contrast agent injection and continuous CT scanning over an extended time. The cumulative radiation dose to patients increases health risks such as skin irritation, hair loss, cataract formation, and even cancer. Solutions for reducing radiation exposure include reducing the tube current and/or shortening the X-ray exposure time. However, images scanned at lower tube currents are usually accompanied by higher levels of noise and artifacts, while shorter X-ray exposure with longer scanning intervals yields image information insufficient to capture the blood flow dynamics between frames. It is therefore critical to seek a solution that preserves image quality when both the tube current and the temporal frequency are low. In this paper, we propose STIR-Net, an end-to-end spatial-temporal convolutional neural network that exploits multi-directional automatic feature extraction and an image reconstruction schema to recover high-quality CT slices effectively. With inputs of low-dose, low-resolution patches at different cross-sections of the spatio-temporal data, STIR-Net blends features from both spatial and temporal domains to reconstruct high-quality CT volumes. In this study, we conduct extensive experiments to evaluate image restoration performance at different levels of tube current and at different spatial and temporal resolution scales. The results demonstrate the capability of STIR-Net to restore high-quality scans at as low as 11% of the absorbed radiation dose of the current imaging protocol, yielding an average 10% improvement in perfusion maps compared to the patch-based log-likelihood method.
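The multi-directional (cross-sectional) input idea can be sketched by slicing a 4D (time, z, y, x) volume along both a spatial and a temporal plane. The array, indices, and patch size below are invented for illustration and do not reflect STIR-Net's actual patch pipeline:

```python
import numpy as np

# Hypothetical low-dose CTP volume: (time, z, y, x)
vol = np.arange(4 * 2 * 8 * 8, dtype=float).reshape(4, 2, 8, 8)

def cross_section_patches(vol, size=4):
    """Extract one patch from a spatial (y-x) plane and one from a
    temporal (t-x) plane, mimicking multi-directional inputs that mix
    spatial detail with blood-flow dynamics across frames."""
    spatial = vol[0, 0, :size, :size]    # y-x plane at fixed (t, z)
    temporal = vol[:, 0, 0, :size]       # t-x plane at fixed (z, y)
    return spatial, temporal

sp, tp = cross_section_patches(vol)
print(sp.shape, tp.shape)
```

Feeding a network patches from both orientations lets it learn spatial structure and temporal dynamics jointly, which is the motivation stated in the abstract.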
