Results 1 - 20 of 58
1.
Acad Radiol ; 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38997881

ABSTRACT

RATIONALE AND OBJECTIVES: Given the high volume of chest radiographs, radiologists frequently encounter heavy workloads. In outpatient imaging, a substantial portion of chest radiographs show no actionable findings. Automatically identifying these cases could improve efficiency by facilitating shorter reading workflows. PURPOSE: To assess, in a large-scale study, the performance of AI in identifying chest radiographs with no actionable disease (NAD) in an outpatient imaging population, using comprehensive, objective, and reproducible criteria for NAD. MATERIALS AND METHODS: This independent validation study includes 15000 patients with chest radiographs in posterior-anterior (PA) and lateral projections from an outpatient imaging center in the United States. Ground truth was established by reviewing CXR reports and classifying cases as NAD or actionable disease (AD). The NAD definition includes completely normal chest radiographs and radiographs with well-defined non-actionable findings. The AI NAD Analyzer (trained with 100 million multimodal images and fine-tuned on 1.3 million radiographs) utilizes a tandem system with image-level rule-in and compartment-level rule-out to provide case-level output as NAD or potential actionable disease (PAD). RESULTS: A total of 14057 cases met our eligibility criteria (age 56 ± 16.1 years; 55% women and 45% men). The prevalence of NAD cases in the study population was 70.7%. The AI NAD Analyzer correctly classified NAD cases with a sensitivity of 29.1% and a yield of 20.6%. The specificity was 98.9%, which corresponds to a miss rate of 0.3% of cases. Significant findings were missed in 0.06% of cases, while no cases with critical findings were missed by AI. CONCLUSION: In an outpatient population, AI can identify 20% of chest radiographs as NAD with a very low rate of missed findings. These cases could potentially be read using a streamlined protocol, thus improving efficiency and reducing the daily workload for radiologists.
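The tandem gating described above (image-level rule-in combined with compartment-level rule-out) can be sketched as follows. This is an illustrative sketch, not the product's logic: the function name, score semantics, and thresholds are assumptions.

```python
import numpy as np

def classify_case(image_score, compartment_scores,
                  rule_in_thr=0.9, rule_out_thr=0.1):
    """Tandem gating sketch: a case is returned as NAD only when the
    image-level model rules it in as likely normal AND every anatomic
    compartment independently rules out disease; otherwise it is PAD.
    Thresholds are illustrative, not the system's calibration."""
    rule_in = image_score >= rule_in_thr                      # image-level: likely NAD
    rule_out = np.all(np.asarray(compartment_scores) <= rule_out_thr)
    return "NAD" if (rule_in and rule_out) else "PAD"

# image model confident the film is normal; all compartments quiet
assert classify_case(0.97, [0.02, 0.01, 0.05]) == "NAD"
# one compartment (e.g. a lung zone) raises a flag -> potential disease
assert classify_case(0.97, [0.02, 0.45, 0.05]) == "PAD"
```

Requiring both gates to agree is what keeps the miss rate low: a single suspicious compartment is enough to route the study back to the standard reading workflow.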

2.
J Clin Med ; 12(24)2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38137606

ABSTRACT

BACKGROUND: Chronic rhinosinusitis with nasal polyps (CRSwNP) is a disease of real interest for researchers due to its heterogeneity and complex pathophysiological mechanisms. Identification of the factors that ensure success after treatment represents one of the main challenges in CRSwNP research. No consensus in this direction has been reached so far. Biomarkers for poor outcomes have been noted; nonetheless, their prognostic value has not been extensively investigated and remains to be established. We aimed to evaluate the correlation between potential prognostic predictors for recalcitrant disease in patients with CRSwNP. METHODS: The study group consisted of CRSwNP patients who underwent surgical treatment and nasal polyp (NP) tissue sampling. The preoperative workup included Lund-Mackay assessment, nasal endoscopy, eosinophil blood count, and an asthma and environmental allergy questionnaire. Postoperatively, in subjects with poor outcomes, osteitis severity was evaluated on imaging, and IL-33 expression was measured. RESULTS: IL-33 expression in NP was positively and significantly correlated with postoperative osteitis on CT scans (p = 0.01). Furthermore, high osteitis CT scores were related to high blood eosinophilia (p = 0.01). A strong positive correlation was found between postoperative osteitis and the Lund-Mackay preoperative score (p = 0.01), as well as the nasal endoscopy score (p = 0.01). CONCLUSIONS: Our research analyzed the levels of polyp IL-33 relative to blood eosinophilia, overall disease severity score, and osteitis severity in patients with CRSwNP. These variables are prognostic predictors for poor outcomes and recalcitrant disease. Considering the importance of bone involvement in CRSwNP, this research aims to provide better insight into the correlations of osteitis with clinical and biological factors.

3.
Sci Rep ; 13(1): 21097, 2023 11 30.
Article in English | MEDLINE | ID: mdl-38036602

ABSTRACT

The evaluation of deep-learning (DL) systems typically relies on the area under the receiver operating characteristic curve (AU-ROC) as a performance metric. However, AU-ROC, in its holistic form, does not sufficiently consider performance within specific ranges of sensitivity and specificity, which are critical for the intended operational context of the system. Consequently, two systems with identical AU-ROC values can exhibit significantly divergent real-world performance. This issue is particularly pronounced in the context of anomaly detection tasks, a commonly employed application of DL systems across various research domains, including medical imaging, industrial automation, manufacturing, cybersecurity, fraud detection, and drug research, among others. The challenge arises from the heavy class imbalance in training datasets, with the abnormality class often incurring a considerably higher misclassification cost compared to the normal class. Traditional DL systems address this by adjusting the weighting of the cost function or optimizing for specific points along the ROC curve. While these approaches yield reasonable results in many cases, they do not actively seek to maximize performance for the desired operating point. In this study, we introduce a novel technique known as AUCReshaping, designed to reshape the ROC curve exclusively within the specified sensitivity and specificity range, by optimizing sensitivity at a predetermined specificity level. This reshaping is achieved through an adaptive and iterative boosting mechanism that allows the network to focus on pertinent samples during the learning process. We primarily investigated the impact of AUCReshaping in the context of abnormality detection tasks, specifically in Chest X-Ray (CXR) analysis, followed by breast mammogram and credit card fraud detection tasks. The results reveal a substantial improvement, ranging from 2% to 40%, in sensitivity at high-specificity levels for binary classification tasks.


Subjects
Algorithms , Mammography , Sensitivity and Specificity , ROC Curve , Radiography
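The core idea behind the boosting mechanism above can be sketched in numpy: measure sensitivity at a fixed operating specificity, then up-weight the positives the model still misses at that threshold so the next training pass concentrates on them. This is an illustrative sketch, not the authors' objective; the function names and the boosting factor `gamma` are assumptions.

```python
import numpy as np

def sensitivity_at_specificity(scores, labels, target_spec=0.95):
    """Pick the threshold that yields `target_spec` on the negatives,
    then measure sensitivity on the positives at that threshold."""
    neg = np.sort(scores[labels == 0])
    thr = neg[int(np.ceil(target_spec * len(neg))) - 1]
    return float(np.mean(scores[labels == 1] > thr)), float(thr)

def boost_weights(scores, labels, thr, weights, gamma=1.5):
    """One boosting step: up-weight the positives still missed at the
    operating threshold, then renormalize the sample weights."""
    missed = (labels == 1) & (scores <= thr)
    new_w = weights.copy()
    new_w[missed] *= gamma
    return new_w / new_w.sum()

# toy detector scores: negatives near 0.3, positives near 0.6
rng = np.random.default_rng(0)
labels = np.array([0] * 500 + [1] * 500)
scores = np.concatenate([rng.normal(0.3, 0.10, 500), rng.normal(0.6, 0.15, 500)])
sens, thr = sensitivity_at_specificity(scores, labels, target_spec=0.95)
w = np.full(len(labels), 1.0 / len(labels))
w = boost_weights(scores, labels, thr, w)
```

Iterating this reweighting during training pushes the ROC curve up specifically in the high-specificity region, rather than maximizing area under the whole curve.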
4.
Med Image Anal ; 84: 102680, 2023 02.
Article in English | MEDLINE | ID: mdl-36481607

ABSTRACT

In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that not a single algorithm performed best for both liver and liver tumors in the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.


Subjects
Benchmarking , Liver Neoplasms , Humans , Retrospective Studies , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/pathology , Liver/diagnostic imaging , Liver/pathology , Algorithms , Image Processing, Computer-Assisted/methods
5.
J Med Imaging (Bellingham) ; 9(6): 064503, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36466078

ABSTRACT

Purpose: Building accurate and robust artificial intelligence systems for medical image assessment requires the creation of large sets of annotated training examples. However, constructing such datasets is very costly due to the complex nature of annotation tasks, which often require expert knowledge (e.g., a radiologist). To counter this limitation, we propose a method to learn from medical images at scale in a self-supervised way. Approach: Our approach, based on contrastive learning and online feature clustering, leverages training datasets of over 100,000,000 medical images of various modalities, including radiography, computed tomography (CT), magnetic resonance (MR) imaging, and ultrasonography (US). We propose to use the learned features to guide model training in supervised and hybrid self-supervised/supervised regimes on various downstream tasks. Results: We highlight a number of advantages of this strategy on challenging image assessment problems in radiography, CT, and MR: (1) significant increase in accuracy compared to the state-of-the-art (e.g., area under the curve boost of 3% to 7% for detection of abnormalities from chest radiography scans and hemorrhage detection on brain CT); (2) acceleration of model convergence during training by up to 85% compared with using no pretraining (e.g., 83% when training a model for detection of brain metastases in MR scans); and (3) increase in robustness to various image augmentations, such as intensity variations, rotations, or scaling, reflective of data variation seen in the field. Conclusions: The proposed approach enables large gains in accuracy and robustness on challenging image assessment problems. The improvement is significant compared with other state-of-the-art approaches trained on medical or vision images (e.g., ImageNet).
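The contrastive part of such pretraining can be sketched with an InfoNCE-style loss: two augmented views of the same image should embed close together, while unrelated images in the batch act as negatives. A minimal numpy sketch, assuming paired embeddings and an illustrative temperature; this is not the authors' implementation, which also uses online feature clustering.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss for paired embeddings:
    z1[i] and z2[i] are two augmented views of the same image."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                 # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the positive pair for each row sits on the diagonal
    return float(-np.mean(np.diag(log_prob)))

rng = np.random.default_rng(1)
anchor = rng.normal(size=(8, 32))
positive = anchor + 0.05 * rng.normal(size=(8, 32))  # slight "augmentation"
unrelated = rng.normal(size=(8, 32))                 # mismatched views
# aligned pairs yield a much lower loss than mismatched ones
assert info_nce(anchor, positive) < info_nce(anchor, unrelated)
```

Minimizing this loss over a large unlabeled corpus yields features that can then initialize or regularize the supervised downstream models.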

6.
Insects ; 13(9)2022 Sep 01.
Article in English | MEDLINE | ID: mdl-36135502

ABSTRACT

Edible insects such as the black soldier fly Hermetia illucens L. represent a potential and sustainable source of nutrients for food and feed due to their valuable nutritional composition, which can be modulated through dietary enrichment. The high saturated fatty acid (FA) content of Hermetia illucens larval fat can be modulated by adding vegetable oils to the rearing substrate. Therefore, the present research aims to highlight the effects of a 10% addition of vegetable oils from five dietary fat sources (linseed oil, soybean oil, sunflower oil, rapeseed oil, and hempseed oil) on the growth, development, reproductive performance, and the fat and fatty acid profile of H. illucens. Oil inclusion in the larval diet improved (p < 0.05) the weight of larvae, prepupae, pupae, and imago without influencing (p > 0.05) the egg clutch weight or the number of eggs in the clutch. In addition, the larval fatty acid profile differed (p < 0.001) according to oil type, as the unsaturated FAs (UFA) increased from 11.23 to 48.74% of FAME, and according to larval age, as the saturated FAs decreased from 85.86 to 49.56% of FAME. Linseed oil inclusion led to the greatest improvement of the FA profile at 10 days of larval age, followed by hempseed and rapeseed oil. These three dietary treatments recorded the highest concentrations of UFA (29.94-48.74% of FAME), especially polyunsaturated FA (18.91-37.22% of FAME) from the omega-3 series (3.19-15.55% of FAME), and an appropriate n-6/n-3 ratio. As a result, the lipid polyunsaturation index increased (17.76-41.44) and the atherogenic (3.22-1.22) and thrombogenic (1.43-0.48) indices decreased. Based on the obtained results, it can be concluded that enriching the larval diet with these oils rich in UFA can modulate the larval FA profile, making the larvae suitable sources of quality fats for feed and, indirectly, for food.

7.
Radiol Artif Intell ; 4(3): e210115, 2022 May.
Article in English | MEDLINE | ID: mdl-35652116

ABSTRACT

Purpose: To present a method that automatically detects, subtypes, and locates acute or subacute intracranial hemorrhage (ICH) on noncontrast CT (NCCT) head scans; generates detection confidence scores to identify high-confidence data subsets with higher accuracy; and improves radiology worklist prioritization. Such scores may enable clinicians to better use artificial intelligence (AI) tools. Materials and Methods: This retrospective study included 46 057 studies from seven "internal" centers for development (training, architecture selection, hyperparameter tuning, and operating-point calibration; n = 25 946) and evaluation (n = 2947) and three "external" centers for calibration (n = 400) and evaluation (n = 16 764). Internal centers contributed developmental data, whereas external centers did not. Deep neural networks predicted the presence of ICH and subtypes (intraparenchymal, intraventricular, subarachnoid, subdural, and/or epidural hemorrhage) and segmentations per case. Two ICH confidence scores are discussed: a calibrated classifier entropy score and a Dempster-Shafer score. Evaluation was completed by using receiver operating characteristic curve analysis and report turnaround time (RTAT) modeling on the evaluation set and on confidence score-defined subsets using bootstrapping. Results: The areas under the receiver operating characteristic curve for ICH were 0.97 (0.97, 0.98) and 0.95 (0.94, 0.95) on internal and external center data, respectively. On 80% of the data stratified by calibrated classifier and Dempster-Shafer scores, the system improved the Youden indexes, increasing them from 0.84 to 0.93 (calibrated classifier) and from 0.84 to 0.92 (Dempster-Shafer) for internal centers and increasing them from 0.78 to 0.88 (calibrated classifier) and from 0.78 to 0.89 (Dempster-Shafer) for external centers (P < .001). 
Models estimated shorter RTAT for AI-prioritized worklists with confidence measures than for AI-prioritized worklists without confidence measures, shortening RTAT by 27% (calibrated classifier) and 27% (Dempster-Shafer) for internal centers and by 25% (calibrated classifier) and 27% (Dempster-Shafer) for external centers (P < .001). Conclusion: AI that provided statistical confidence measures for ICH detection on NCCT scans reliably detected and subtyped hemorrhages, identified high-confidence predictions, and improved worklist prioritization in simulation. Keywords: CT, Head/Neck, Hemorrhage, Convolutional Neural Network (CNN) Supplemental material is available for this article. © RSNA, 2022.
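One of the two scores discussed, the classifier-entropy confidence, can be sketched as follows: confidence is high when the predicted class distribution is peaked, and a fixed fraction of the highest-confidence cases forms the stratified subset. This is an illustrative sketch (the paper additionally calibrates the classifier, which is not reproduced here).

```python
import numpy as np

def entropy_confidence(probs):
    """Confidence = 1 - normalized Shannon entropy of the predicted
    class distribution; near 1 for peaked (confident) predictions."""
    probs = np.clip(probs, 1e-12, 1.0)
    h = -np.sum(probs * np.log(probs), axis=1)
    return 1.0 - h / np.log(probs.shape[1])

def high_confidence_subset(probs, keep=0.8):
    """Indices of the `keep` fraction with the highest confidence."""
    conf = entropy_confidence(probs)
    k = int(keep * len(conf))
    return np.argsort(conf)[::-1][:k]

preds = np.array([[0.99, 0.01],   # confidently ICH-negative
                  [0.55, 0.45],   # ambiguous
                  [0.05, 0.95]])  # confidently ICH-positive
conf = entropy_confidence(preds)
```

Reporting performance on the retained subset (here the top 80%) is exactly the stratification used for the improved Youden indexes above; the ambiguous remainder is flagged for ordinary reading.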

8.
Nutrients ; 14(6)2022 Mar 09.
Article in English | MEDLINE | ID: mdl-35334808

ABSTRACT

Knowledge regarding the influence of the microbial community in cancer promotion or protection has expanded through the study of bacterial metabolic products and how they can modulate cancer risk, an extremely challenging approach to characterizing the relationship between the intestinal microbiota and colorectal cancer (CRC). This review discusses research progress on the effect of bacterial dysbiosis from a metabolic point of view, particularly on the biochemical mechanisms of butyrate, one of the main short-chain fatty acids (SCFAs), with anti-inflammatory and anti-tumor properties in CRC. Increased daily intake of omega-3 polyunsaturated fatty acids (PUFAs) significantly increases the density of bacteria that are known to produce butyrate. Omega-3 PUFAs have therefore been proposed as a treatment to prevent gut microbiota dysregulation and lower the risk or progression of CRC.


Subjects
Colorectal Neoplasms , Fatty Acids, Omega-3 , Gastrointestinal Microbiome , Butyrates/pharmacology , Colorectal Neoplasms/pathology , Dysbiosis , Fatty Acids, Omega-3/pharmacology , Gastrointestinal Microbiome/physiology , Humans
9.
Med Image Anal ; 72: 102087, 2021 08.
Article in English | MEDLINE | ID: mdl-34015595

ABSTRACT

Chest radiography is the most common radiographic examination performed in daily clinical practice for the detection of various heart and lung abnormalities. The large amount of data to be read and reported, with more than 100 studies per day for a single radiologist, poses a challenge in consistently maintaining high interpretation accuracy. The introduction of large-scale public datasets has led to a series of novel systems for automated abnormality classification. However, the labels of these datasets were obtained by natural language processing of medical reports, yielding a large degree of label noise that can impact performance. In this study, we propose novel training strategies that handle label noise from such suboptimal data. Prior label probabilities were measured on a subset of training data re-read by 4 board-certified radiologists and were used during training to increase the robustness of the trained model to label noise. Furthermore, we exploit the high comorbidity of abnormalities observed in chest radiography and incorporate this information to further reduce the impact of label noise. Additionally, anatomical knowledge is incorporated by training the system to predict lung and heart segmentation, as well as spatial knowledge labels. To deal with multiple datasets and images derived from various scanners that apply different post-processing techniques, we introduce a novel image normalization strategy. Experiments were performed on an extensive collection of 297,541 chest radiographs from 86,876 patients, leading to a state-of-the-art performance level for 17 abnormalities from 2 datasets. With an average AUC score of 0.880 across all abnormalities, our proposed training strategies can be used to significantly improve performance scores.


Subjects
Lung Diseases , Lung , Humans , Lung/diagnostic imaging , Radiography
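One simple way prior label probabilities can enter training is by softening the noisy binary targets toward a per-class reliability estimated from the expert re-read subset. The sketch below assumes, for illustration, a single symmetric reliability per finding; the function names are hypothetical and this is not the paper's exact formulation.

```python
import numpy as np

def noise_aware_targets(noisy_labels, reliability):
    """Soften NLP-mined binary labels toward per-class prior label
    probabilities (e.g. estimated from a radiologist re-read subset).
    Assumes a symmetric reliability per class, for illustration."""
    y = noisy_labels.astype(float)
    return y * reliability + (1.0 - y) * (1.0 - reliability)

def bce(targets, probs):
    """Binary cross-entropy against the softened targets."""
    probs = np.clip(probs, 1e-7, 1.0 - 1e-7)
    return float(-np.mean(targets * np.log(probs)
                          + (1 - targets) * np.log(1 - probs)))

labels = np.array([[1, 0], [0, 1]])      # two studies, two findings
reliability = np.array([0.9, 0.7])       # per-finding label reliability
soft = noise_aware_targets(labels, reliability)
# soft == [[0.9, 0.3], [0.1, 0.7]]
```

Softened targets cap the loss incurred on labels that are likely wrong, so the model is not forced to fit the noise.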
10.
Eur Radiol ; 31(11): 8775-8785, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33934177

ABSTRACT

OBJECTIVES: To investigate machine learning classifiers and interpretable models using chest CT for detection of COVID-19 and differentiation from other pneumonias, interstitial lung disease (ILD) and normal CTs. METHODS: Our retrospective multi-institutional study obtained 2446 chest CTs from 16 institutions (including 1161 COVID-19 patients). Training/validation/testing cohorts included 1011/50/100 COVID-19, 388/16/33 ILD, 189/16/33 other pneumonias, and 559/17/34 normal (no pathologies) CTs. A metric-based approach for the classification of COVID-19 used interpretable features, relying on logistic regression and random forests. A deep learning-based classifier differentiated COVID-19 via 3D features extracted directly from CT attenuation and the probability distribution of airspace opacities. RESULTS: The most discriminative features of COVID-19 are the percentage of airspace opacity and peripheral and basal predominant opacities, concordant with the typical characterization of COVID-19 in the literature. Unsupervised hierarchical clustering compares feature distribution across COVID-19 and control cohorts. The metrics-based classifier achieved AUC = 0.83, sensitivity = 0.74, and specificity = 0.79, versus 0.93, 0.90, and 0.83, respectively, for the DL-based classifier. Most of the ambiguity comes from non-COVID-19 pneumonia with manifestations that overlap with COVID-19, as well as mild COVID-19 cases. Non-COVID-19 classification performance is 91% for ILD, 64% for other pneumonias, and 94% for no pathologies, which demonstrates the robustness of our method against different compositions of control groups. CONCLUSIONS: Our new method accurately discriminates COVID-19 from other types of pneumonia, ILD, and CTs with no pathologies, using quantitative imaging features derived from chest CT, while balancing interpretability of results and classification performance and, therefore, may be useful to facilitate diagnosis of COVID-19.
KEY POINTS: • Unsupervised clustering reveals the key tomographic features including percent airspace opacity and peripheral and basal opacities most typical of COVID-19 relative to control groups. • COVID-19-positive CTs were compared with COVID-19-negative chest CTs (including a balanced distribution of non-COVID-19 pneumonia, ILD, and no pathologies). Classification accuracies for COVID-19, pneumonia, ILD, and CT scans with no pathologies are respectively 90%, 64%, 91%, and 94%. • Our deep learning (DL)-based classification method demonstrates an AUC of 0.93 (sensitivity 90%, specificity 83%). Machine learning methods applied to quantitative chest CT metrics can therefore improve diagnostic accuracy in suspected COVID-19, particularly in resource-constrained environments.


Subjects
COVID-19 , Humans , Machine Learning , Retrospective Studies , SARS-CoV-2 , Thorax
11.
Sci Rep ; 11(1): 6876, 2021 03 25.
Article in English | MEDLINE | ID: mdl-33767226

ABSTRACT

With the rapid growth and increasing use of brain MRI, there is an interest in automated image classification to aid human interpretation and improve workflow. We aimed to train a deep convolutional neural network and assess its performance in identifying abnormal brain MRIs and critical intracranial findings including acute infarction, acute hemorrhage and mass effect. A total of 13,215 clinical brain MRI studies were categorized into training (74%), validation (9%), internal testing (8%) and external testing (8%) datasets. Up to eight contrasts were included from each brain MRI, and each image volume was reformatted to a common resolution to account for differences between scanners. After reviewing the radiology reports, three neuroradiologists classified each study as abnormal or normal and identified three critical findings: acute infarction, acute hemorrhage, and mass effect. A deep convolutional neural network was constructed from a combination of localization feature extraction (LFE) modules and global classifiers to identify the presence of four variables in brain MRIs: abnormal study, acute infarction, acute hemorrhage, and mass effect. Training, validation and testing sets were randomly defined on a patient basis. Training was performed on 9845 studies using balanced sampling to address class imbalance. Receiver operating characteristic (ROC) analysis was performed. The ROC analysis of our models for 1050 studies within our internal test data showed AUC/sensitivity/specificity of 0.91/83%/86% for normal versus abnormal brain MRI, 0.95/92%/88% for acute infarction, 0.90/89%/81% for acute hemorrhage, and 0.93/93%/85% for mass effect. For 1072 studies within our external test data, it showed AUC/sensitivity/specificity of 0.88/80%/80% for normal versus abnormal brain MRI, 0.97/90%/97% for acute infarction, 0.83/72%/88% for acute hemorrhage, and 0.87/79%/81% for mass effect.
Our proposed deep convolutional network can accurately identify abnormal and critical intracranial findings on individual brain MRIs, while addressing the fact that some MR contrasts might not be available in individual studies.


Subjects
Brain/anatomy & histology , Deep Learning , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Multiparametric Magnetic Resonance Imaging/methods , Neural Networks, Computer , Neuroimaging/methods , Humans , ROC Curve
12.
Med Image Anal ; 68: 101855, 2021 02.
Article in English | MEDLINE | ID: mdl-33260116

ABSTRACT

The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, limited contrast and more. Most notable is the case of chest radiography, where there is a high inter-rater variability in the detection and classification of abnormalities. This is largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views based on 2D Ultrasound images. Often, the anatomical context captured in a frame is not sufficient to recognize the underlying anatomy. Current machine learning solutions for these problems are typically limited to providing probabilistic predictions, relying on the capacity of underlying models to adapt to limited information and the high degree of label noise. In practice, however, this leads to overconfident systems with poor generalization on unseen data. To account for this, we propose a system that learns not only the probabilistic estimate for classification, but also an explicit uncertainty measure which captures the confidence of the system in the predicted output. We argue that this approach is essential to account for the inherent ambiguity characteristic of medical images from different radiologic exams including computed radiography, ultrasonography and magnetic resonance imaging. In our experiments we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks, e.g., by 8% to 0.91 with an expected rejection rate of under 25% for the classification of different abnormalities in chest radiographs. In addition, we show that using uncertainty-driven bootstrapping to filter the training data, one can achieve a significant increase in robustness and accuracy. Finally, we present a multi-reader study showing that the predictive uncertainty is indicative of reader errors.


Subjects
Artifacts , Magnetic Resonance Imaging , Humans , Machine Learning , Uncertainty
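The sample-rejection evaluation described in this abstract can be sketched as: drop the most uncertain fraction of predictions and score the retained subset. The toy below uses distance to the decision boundary as a stand-in for the learned uncertainty measure; everything here (data, proxy, rejection rate) is illustrative, not the paper's method.

```python
import numpy as np

def auc(labels, scores):
    """Rank-based AUROC (Mann-Whitney U formulation; assumes no ties)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def auc_after_rejection(labels, scores, uncertainty, reject=0.25):
    """Drop the `reject` fraction with the highest predicted uncertainty
    and evaluate AUROC on the retained, higher-confidence subset."""
    keep = uncertainty <= np.quantile(uncertainty, 1.0 - reject)
    return auc(labels[keep], scores[keep])

rng = np.random.default_rng(2)
labels = np.array([0] * 300 + [1] * 300)
scores = np.concatenate([rng.normal(0.4, 0.15, 300), rng.normal(0.6, 0.15, 300)])
uncertainty = -np.abs(scores - 0.5)   # proxy: near the boundary = uncertain
full = auc(labels, scores)
kept = auc_after_rejection(labels, scores, uncertainty)
```

The abstract's result is that with a learned uncertainty measure (rather than this toy proxy), rejecting under 25% of samples raised ROC-AUC by 8% to 0.91 on chest radiograph classification.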
13.
IEEE Trans Med Imaging ; 40(1): 335-345, 2021 01.
Article in English | MEDLINE | ID: mdl-32966215

ABSTRACT

Detecting malignant pulmonary nodules at an early stage can allow medical interventions which may increase the survival rate of lung cancer patients. Using computer vision techniques to detect nodules can improve the sensitivity and the speed of interpreting chest CT for lung cancer screening. Many studies have used CNNs to detect nodule candidates. Though such approaches have been shown to outperform conventional image-processing-based methods in detection accuracy, CNNs are also known to be limited in generalizing to under-represented samples in the training set and prone to imperceptible noise perturbations. Such limitations cannot be easily addressed by scaling up the dataset or the models. In this work, we propose to add adversarial synthetic nodules and adversarial attack samples to the training data to improve the generalization and the robustness of lung nodule detection systems. To generate hard examples of nodules from a differentiable nodule synthesizer, we use projected gradient descent (PGD) to search the latent space, within a bounded neighbourhood, for codes that generate nodules that decrease the detector response. To make the network more robust to unanticipated noise perturbations, we use PGD to search for noise patterns that can trigger the network to give over-confident mistakes. By evaluating on two different benchmark datasets containing consensus annotations from three radiologists, we show that the proposed techniques can improve detection performance on real CT data. To understand the limitations of both the conventional networks and the proposed augmented networks, we also perform stress-tests on the false positive reduction networks by feeding different types of artificially produced patches. We show that the augmented networks are more robust to under-represented nodules as well as resistant to noise perturbations.


Subjects
Lung Neoplasms , Solitary Pulmonary Nodule , Early Detection of Cancer , Humans , Image Processing, Computer-Assisted , Lung , Lung Neoplasms/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted , Solitary Pulmonary Nodule/diagnostic imaging , Tomography, X-Ray Computed
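The PGD noise search described above can be sketched on a toy linear "detector" with an analytic gradient: ascend the logistic loss within an L-infinity ball so the perturbed patch lowers the detector's response. A minimal sketch under stated assumptions (linear model, hand-picked radius and step size), not the paper's CNN setting.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_noise(x, w, b, y, eps=0.1, step=0.02, iters=20):
    """Projected gradient descent within an L-inf ball of radius eps:
    ascend the logistic loss of a linear 'detector' w.x + b so the
    perturbed patch suppresses the (positive) detector response."""
    delta = np.zeros_like(x)
    for _ in range(iters):
        p = sigmoid(np.dot(w, x + delta) + b)
        grad = (p - y) * w                    # d(loss)/d(input), logistic loss
        delta = np.clip(delta + step * np.sign(grad), -eps, eps)
    return delta

rng = np.random.default_rng(3)
w, b = rng.normal(size=16), 0.0
x, y = rng.normal(size=16), 1.0              # a "nodule" patch labeled positive
p_clean = sigmoid(np.dot(w, x) + b)
delta = pgd_noise(x, w, b, y)
p_adv = sigmoid(np.dot(w, x + delta) + b)    # suppressed detector response
```

In the paper the same projected ascent runs through a differentiable nodule synthesizer's latent code (for hard synthetic nodules) or directly in image space (for noise patterns), with gradients supplied by backpropagation rather than this closed form.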
14.
ArXiv ; 2020 Nov 18.
Article in English | MEDLINE | ID: mdl-32550252

ABSTRACT

PURPOSE: To present a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground glass opacities and consolidations. MATERIALS AND METHODS: In this retrospective study, the proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9749 chest CT volumes. The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and the presence of high opacities, based on deep learning and deep reinforcement learning. The first pair of measures (PO, PHO) is global, while the second (LSS, LHOS) is lobe-wise. Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions from Canada, Europe and the United States, collected between 2002 and present (April 2020). Ground truth is established by manual annotations of lesions, lungs, and lobes. Correlation and regression analyses were performed to compare the prediction to the ground truth. RESULTS: The Pearson correlation coefficient between method prediction and ground truth for COVID-19 cases was calculated as 0.92 for PO (P < .001), 0.97 for PHO (P < .001), 0.91 for LSS (P < .001), and 0.90 for LHOS (P < .001). 98 of 100 healthy controls had a predicted PO of less than 1%; 2 had between 1% and 2%. Automated processing time to compute the severity scores was 10 seconds per case, compared to 30 minutes required for manual annotations. CONCLUSION: A new method segments regions of CT abnormalities associated with COVID-19 and computes (PO, PHO), as well as (LSS, LHOS), severity scores.

15.
Radiol Artif Intell ; 2(4): e200048, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33928255

ABSTRACT

PURPOSE: To present a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground glass opacities and consolidations. MATERIALS AND METHODS: In this retrospective study, the proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9749 chest CT volumes. The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and the presence of high opacities, based on deep learning and deep reinforcement learning. The first pair of measures (PO, PHO) is global, while the second (LSS, LHOS) is lobe-wise. Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions from Canada, Europe and the United States, collected between 2002 and present (April 2020). Ground truth is established by manual annotations of lesions, lungs, and lobes. Correlation and regression analyses were performed to compare the prediction to the ground truth. RESULTS: The Pearson correlation coefficient between method prediction and ground truth for COVID-19 cases was calculated as 0.92 for PO (P < .001), 0.97 for PHO (P < .001), 0.91 for LSS (P < .001), and 0.90 for LHOS (P < .001). 98 of 100 healthy controls had a predicted PO of less than 1%; 2 had between 1% and 2%. Automated processing time to compute the severity scores was 10 seconds per case, compared to 30 minutes required for manual annotations. CONCLUSION: A new method segments regions of CT abnormalities associated with COVID-19 and computes (PO, PHO), as well as (LSS, LHOS), severity scores.
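Once the lesion, lung, and lobe masks are segmented, scores of this kind reduce to simple volumetry. The sketch below shows a percent-opacity measure and a summed per-lobe severity score on toy boolean masks; the bin edges used to bucket lobe involvement are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def percent_opacity(opacity_mask, lung_mask):
    """PO-style measure: opacity volume as a percentage of lung volume."""
    return 100.0 * np.logical_and(opacity_mask, lung_mask).sum() / lung_mask.sum()

def lobe_severity_score(opacity_mask, lobe_masks, bins=(0.1, 25.0, 50.0, 75.0)):
    """LSS-style measure: sum of per-lobe 0-4 involvement scores; the
    bin edges bucketing percent involvement are illustrative."""
    total = 0
    for lobe in lobe_masks:
        pct = 100.0 * np.logical_and(opacity_mask, lobe).sum() / lobe.sum()
        total += int(np.digitize(pct, bins))
    return total

lung = np.ones((4, 4, 4), dtype=bool)
opacity = np.zeros_like(lung)
opacity[:1] = True                         # a quarter of the toy volume
lobes = [lung.copy() for _ in range(5)]    # toy stand-in for the 5 lobes
po = percent_opacity(opacity, lung)        # 25.0
lss = lobe_severity_score(opacity, lobes)  # each toy lobe scores 2 -> 10
```

The hard part in practice is the segmentation itself (here deep learning plus deep reinforcement learning); the scoring on top is transparent, which is what makes the 10-second automated severity readout auditable against manual annotation.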

16.
Scand J Clin Lab Invest ; 79(6): 437-442, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31462125

ABSTRACT

Polycystic ovary syndrome (PCOS), characterized by oligo-anovulation and androgen excess, is considered a high-risk condition for metabolic disorders. Here, untargeted metabolomics analysis was applied to women with PCOS, aiming to provide deeper insight into the lipidomic biomarker signature of PCOS for better diagnosis and management. This was a cross-sectional study in which 15 Caucasian women with PCOS and 15 Caucasian healthy, age-matched women were enrolled. Lipidomics analysis was performed using ultra-high-performance liquid chromatography-quadrupole time-of-flight electrospray mass spectrometry. Partial least squares discriminant analysis retrieved the most important discriminative metabolites. Significantly increased levels of triacylglycerol (18:2/18:2/0-18:0), together with cholestane-3beta, 5alpha, 6beta-triol (18:0/0:0) and cholestane-5alpha (18:1/0:0), appeared as valuable variables for differentiating subjects with PCOS from controls. The acyl-carnitine 2-hydroxylauroylcarnitine was significantly elevated in PCOS, in contrast to decreased phosphocholine metabolites (18:1/18:4, 18:3/18:2), suggesting a metabolic pattern linked to lipid peroxidation. A high fat intake, or reduced use of fat as an energy source at night due to a diminished ability to switch to lipid oxidation during fasting, possibly contributes to the hypertriglyceridemia found in PCOS. Furthermore, inflammatory mediators, including metabolites of the prostaglandin (PG) E2 pathway and oxo-leukotrienes (LT), were increased in patients with PCOS. Potential lipidomic biomarkers were identified that could discriminate between women with PCOS and healthy controls. The results show particular alterations in acylglycerols, PGs, LTs, phosphocholines, and carnitine metabolites. The lipidomic profile of PCOS indicates a higher risk of developing metabolic diseases.


Subjects
Metabolic Diseases/complications; Polycystic Ovary Syndrome/metabolism; Adult; Biomarkers/metabolism; Chromatography, High Pressure Liquid; Cross-Sectional Studies; Female; Humans; Lipidomics; Metabolic Diseases/metabolism; Metabolomics; Polycystic Ovary Syndrome/complications; Polycystic Ovary Syndrome/diagnosis; Risk Assessment; Spectrometry, Mass, Electrospray Ionization
17.
Comput Med Imaging Graph ; 75: 24-33, 2019 07.
Article in English | MEDLINE | ID: mdl-31129477

ABSTRACT

Simultaneous segmentation of multiple organs from different medical imaging modalities is a crucial task, as it can be utilized for computer-aided diagnosis, computer-assisted surgery, and therapy planning. Thanks to recent advances in deep learning, several deep neural networks for medical image segmentation have been introduced successfully for this purpose. In this paper, we focus on learning a deep multi-organ segmentation network that labels voxels. In particular, we examine the critical choice of a loss function in order to handle the notorious imbalance problem that plagues both the input and output of a learning model. The input imbalance refers to the class imbalance in the input training samples (i.e., small foreground objects embedded in an abundance of background voxels, as well as organs of varying sizes). The output imbalance refers to the imbalance between the false positives and false negatives of the inference model. In order to tackle both types of imbalance during training and inference, we introduce a new curriculum-learning-based loss function. Specifically, we leverage the Dice similarity coefficient to deter model parameters from being held at bad local minima and, at the same time, gradually learn better model parameters by penalizing false positives/negatives using a cross-entropy term. We evaluated the proposed loss function on three datasets: whole-body positron emission tomography (PET) scans with 5 target organs, magnetic resonance imaging (MRI) prostate scans, and ultrasound echocardiography images with a single target organ, the left ventricle. We show that a simple network architecture with the proposed integrative loss function can outperform state-of-the-art methods and that the results of competing methods improve when our proposed loss is used.
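A minimal sketch of the loss idea follows, assuming a soft Dice term combined with a cross-entropy term whose false-negative weight can be scheduled over training; the exact weighting and schedule in the paper may differ.

```python
import numpy as np

def soft_dice(prob, target, eps=1e-6):
    """Soft Dice coefficient for one foreground class."""
    inter = (prob * target).sum()
    return (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def combo_loss(prob, target, w_fn=0.5, eps=1e-6):
    """Dice term plus a cross entropy whose weights trade off FP vs FN.

    w_fn close to 1 penalizes false negatives more (useful for small
    organs swamped by background); varying w_fn over epochs would form
    the curriculum. The schedule and weighting here are illustrative,
    not the paper's exact formulation.
    """
    dice_term = 1.0 - soft_dice(prob, target)
    ce = -(w_fn * target * np.log(prob + eps)
           + (1 - w_fn) * (1 - target) * np.log(1 - prob + eps))
    return dice_term + ce.mean()
```

The Dice term keeps optimization away from the trivial all-background solution, while the weighted cross-entropy term steers the false-positive/false-negative balance.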


Subjects
Image Interpretation, Computer-Assisted; Image Processing, Computer-Assisted/methods; Algorithms; Curriculum; Deep Learning; Education, Medical; Electrocardiography; Humans; Neural Networks, Computer; Positron-Emission Tomography; Tomography, X-Ray Computed; Ultrasonography
18.
IEEE Trans Pattern Anal Mach Intell ; 41(1): 176-189, 2019 01.
Article in English | MEDLINE | ID: mdl-29990011

ABSTRACT

Robust and fast detection of anatomical structures is a prerequisite for both diagnostic and interventional medical image analysis. Current solutions for anatomy detection are typically based on machine learning techniques that exploit large annotated image databases in order to learn the appearance of the captured anatomy. These solutions are subject to several limitations, including the use of suboptimal feature engineering techniques and, most importantly, the use of computationally suboptimal search schemes for anatomy detection. To address these issues, we propose a method that follows a new paradigm by reformulating the detection problem as a behavior learning task for an artificial agent. We couple the modeling of the anatomy appearance and the object search in a unified behavioral framework, using the capabilities of deep reinforcement learning and multi-scale image analysis. In other words, an artificial agent is trained not only to distinguish the target anatomical object from the rest of the body but also to find the object by learning and following an optimal navigation path to it in the imaged volumetric space. We evaluated our approach on 1487 3D-CT volumes from 532 patients, totaling over 500,000 image slices, and show that it significantly outperforms state-of-the-art solutions in detecting several anatomical structures, with no failed cases from a clinical acceptance perspective, while also achieving a 20-30 percent higher detection accuracy. Most importantly, we improve the detection speed of the reference methods by 2-3 orders of magnitude, achieving unmatched real-time performance on large 3D-CT scans.
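The behavior-learning formulation can be illustrated on a toy voxel grid: six unit-step actions, a reward equal to the decrease in distance to the landmark, and a greedy rollout. The oracle value function below is a stand-in for the trained deep Q-network, which in the actual method scores actions from image appearance rather than from the (unknown) target position.

```python
import numpy as np

# Six discrete actions: one voxel in either direction along each axis.
ACTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def reward(pos, nxt, target):
    """Decrease in Euclidean distance to the landmark: the training
    signal that teaches the navigation policy."""
    return (np.linalg.norm(np.asarray(pos) - target)
            - np.linalg.norm(np.asarray(nxt) - target))

def greedy_rollout(start, target_q, max_steps=200):
    """Follow the highest-valued action until no action improves the
    value estimate (convergence) or the step budget runs out."""
    pos = tuple(start)
    for _ in range(max_steps):
        vals = [target_q(pos, a) for a in ACTIONS]
        best = int(np.argmax(vals))
        if vals[best] <= 0:  # no action improves: agent has converged
            return pos
        pos = tuple(np.add(pos, ACTIONS[best]))
    return pos

# Oracle stand-in for the learned Q-network: score each action by its
# one-step reward toward a hypothetical landmark position.
target = np.array([10, 4, 7])
oracle_q = lambda pos, a: reward(pos, tuple(np.add(pos, a)), target)
found = greedy_rollout((0, 0, 0), oracle_q)
```

With this oracle the rollout walks axis by axis to the landmark and stops there, which is the behavior the trained agent approximates from image data.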

20.
Med Image Anal ; 48: 203-213, 2018 08.
Article in English | MEDLINE | ID: mdl-29966940

ABSTRACT

Robust and fast detection of anatomical structures represents an important component of medical image analysis technologies. Current solutions for anatomy detection are based on machine learning and are generally driven by suboptimal, exhaustive search strategies. In particular, these techniques do not effectively address cases of incomplete data, i.e., scans acquired with a partial field-of-view. We address these challenges by following a new paradigm, which reformulates the detection task as teaching an intelligent artificial agent how to actively search for an anatomical structure. Using the principles of deep reinforcement learning with multi-scale image analysis, artificial agents are taught optimal navigation paths in the scale-space representation of an image, while accounting for structures that are missing from the field-of-view. The spatial coherence of the observed anatomical landmarks is ensured using elements from statistical shape modeling and robust estimation theory. Experiments show that our solution outperforms marginal space deep learning, a powerful deep learning method, at detecting different anatomical structures without any failure. The dataset contains 5043 3D-CT volumes from over 2000 patients, totaling over 2,500,000 image slices. In particular, our solution achieves 0% false-positive and 0% false-negative rates at detecting whether the landmarks are captured in the field-of-view of the scan (excluding all border cases), with an average detection accuracy of 2.78 mm. In terms of runtime, we reduce the detection time of the marginal space deep learning method by 20-30 times, to under 40 ms, an unmatched performance for high-resolution incomplete 3D-CT data.
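The shape-based handling of landmarks missing from the field-of-view can be sketched as aligning a mean shape to the detected subset and reading off the rest. The translation-only fit and the coordinates below are illustrative simplifications; the paper uses robust estimation and a full statistical shape model (with scale and rotation) rather than this least-squares translation.

```python
import numpy as np

# Mean shape from a statistical shape model (coordinates are made up).
MEAN_SHAPE = np.array([[0., 0., 0.], [10., 0., 0.], [0., 8., 0.], [5., 5., 6.]])

def complete_landmarks(observed, visible):
    """Estimate landmarks outside the field-of-view from visible ones.

    Fits a translation aligning the mean shape to the observed subset;
    missing landmarks are read off the aligned mean shape. A real
    system would also fit scale/rotation and robustly reject outlier
    detections before trusting the fit.
    """
    visible = np.asarray(visible)
    t = (observed - MEAN_SHAPE[visible]).mean(axis=0)  # least-squares translation
    completed = MEAN_SHAPE + t
    completed[visible] = observed  # keep the actual detections unchanged
    return completed

# Landmark 3 lies outside the scan; infer it from landmarks 0-2.
obs = MEAN_SHAPE[:3] + np.array([2., -1., 4.])
full = complete_landmarks(obs, [0, 1, 2])
```

This is also how the 0% false-positive/false-negative in-view classification can be framed: a landmark whose shape-predicted position falls outside the scanned volume is flagged as not captured in the field-of-view.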


Subjects
Deep Learning; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Tomography, X-Ray Computed/methods; Algorithms; Anatomic Landmarks; Humans