Results 1 - 20 of 68
1.
Int J Biol Macromol ; 275(Pt 1): 133350, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38960255

ABSTRACT

Saccharide mapping is a promising scheme for unveiling polysaccharide structure by analyzing the fragments generated during polysaccharide decomposition. However, it has not been widely applied in polysaccharide analysis because a systematic introduction has been lacking. This review provides a detailed description of the process of establishing saccharide mapping, the pros and cons of downstream technologies, an overview of saccharide mapping applications, and practical strategies. As the available downstream technologies have been updated, saccharide mapping has expanded its scope of application to various kinds of polysaccharides. Saccharide mapping analysis comprises polysaccharide degradation and hydrolysate analysis, and the degradation process is no longer limited to acid hydrolysis. Some downstream technologies are convenient for rapid qualitative analysis, while others can achieve quantitative analysis. Because saccharide mapping can provide more detailed structural information, it has the potential to improve the quality control of polysaccharides during preparation and application. This review fills a gap in basic information about saccharide mapping and supports the establishment of a professional workflow for applying it, promoting deeper study of polysaccharide structure.

2.
Commun Med (Lond) ; 4(1): 133, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38971887

ABSTRACT

BACKGROUND: Advances in self-supervised learning (SSL) have enabled state-of-the-art automated medical image diagnosis from small, labeled datasets. This label efficiency is often desirable, given the difficulty of obtaining expert labels for medical image recognition tasks. However, most efforts toward SSL in medical imaging are not adapted to video-based modalities, such as echocardiography. METHODS: We developed a self-supervised contrastive learning approach, EchoCLR, for echocardiogram videos with the goal of learning strong representations for efficient fine-tuning on downstream cardiac disease diagnosis. EchoCLR pretraining involves (i) contrastive learning, where the model is trained to identify distinct videos of the same patient, and (ii) frame reordering, where the model is trained to predict the correct order of video frames after they have been randomly shuffled. RESULTS: When fine-tuned on small portions of labeled data, EchoCLR pretraining significantly improves classification performance for left ventricular hypertrophy (LVH) and aortic stenosis (AS) over other transfer learning and SSL approaches across internal and external test sets. When fine-tuning on 10% of available training data (519 studies), an EchoCLR-pretrained model achieves 0.72 AUROC (95% CI: [0.69, 0.75]) on LVH classification, compared to 0.61 AUROC (95% CI: [0.57, 0.64]) with a standard transfer learning approach. Similarly, using 1% of available training data (53 studies), EchoCLR pretraining achieves 0.82 AUROC (95% CI: [0.79, 0.84]) on severe AS classification, compared to 0.61 AUROC (95% CI: [0.58, 0.65]) with transfer learning. CONCLUSIONS: EchoCLR is unique in its ability to learn representations of echocardiogram videos and demonstrates that SSL can enable label-efficient disease classification from small amounts of labeled data.


Artificial intelligence (AI) has been used to develop software that can automatically diagnose diseases from medical images. However, these AI models require thousands or millions of examples to properly learn from, which can be very expensive, as diagnosis is often time-consuming and requires clinical expertise. Using a technique called self-supervised learning (SSL), we develop an AI method to effectively diagnose heart disease from as few as 50 instances. Our method, EchoCLR, is designed for echocardiography, a key imaging technique to monitor heart health, and outperforms other methods on disease diagnosis from small amounts of data. This method can advance AI for echocardiography and enable researchers with limited resources to create disease diagnosis models from small medical imaging datasets.
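The frame-reordering pretext task described in the abstract above can be sketched as follows. This is an illustrative toy, assuming a small fixed permutation set and made-up function names; it is not code from the EchoCLR paper.

```python
import numpy as np

# Sketch of a frame-reordering pretext task: shuffle a clip's frames with
# one of a fixed set of permutations; a model would then be trained to
# predict which permutation was applied. The permutation set and names are
# illustrative assumptions, not details from the EchoCLR paper.
PERMUTATIONS = [(0, 1, 2, 3), (1, 0, 3, 2), (3, 2, 1, 0), (2, 3, 0, 1)]

def make_reordering_example(clip, rng):
    """Return a frame-shuffled copy of a (T, H, W) clip and the
    permutation's class index, which serves as the self-supervised label."""
    label = int(rng.integers(len(PERMUTATIONS)))
    return clip[list(PERMUTATIONS[label])], label

rng = np.random.default_rng(0)
clip = np.arange(4 * 2 * 2).reshape(4, 2, 2)  # toy 4-frame "video"
shuffled, label = make_reordering_example(clip, rng)
# Inverting the permutation recovers the original frame order.
inverse = np.argsort(PERMUTATIONS[label])
assert np.array_equal(shuffled[inverse], clip)
```

The supervised signal is free: the label is the permutation index itself, so no expert annotation is needed.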

3.
Med Image Anal ; 97: 103224, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38850624

ABSTRACT

Many real-world image recognition problems, such as diagnostic medical imaging exams, are "long-tailed" - there are a few common findings followed by many more relatively rare conditions. In chest radiography, diagnosis is both a long-tailed and multi-label problem, as patients often present with multiple findings simultaneously. While researchers have begun to study the problem of long-tailed learning in medical image recognition, few have studied the interaction of label imbalance and label co-occurrence posed by long-tailed, multi-label disease classification. To engage with the research community on this emerging topic, we conducted an open challenge, CXR-LT, on long-tailed, multi-label thorax disease classification from chest X-rays (CXRs). We publicly release a large-scale benchmark dataset of over 350,000 CXRs, each labeled with at least one of 26 clinical findings following a long-tailed distribution. We synthesize common themes of top-performing solutions, providing practical recommendations for long-tailed, multi-label medical image classification. Finally, we use these insights to propose a path forward involving vision-language foundation models for few- and zero-shot disease classification.
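One common remedy for the label imbalance described above is to reweight the loss by class frequency. The sketch below implements class-balanced weights from the "effective number of samples" heuristic; it is a generic illustration, not the challenge's official baseline, and the `beta` value is an assumption.

```python
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    """Per-class loss weights from the 'effective number of samples'
    heuristic: eff_n = (1 - beta**n) / (1 - beta). Rare classes get
    larger weights; beta here is an assumed, not tuned, value."""
    counts = np.asarray(counts, dtype=float)
    effective_n = (1.0 - np.power(beta, counts)) / (1.0 - beta)
    weights = 1.0 / effective_n
    return weights * counts.size / weights.sum()  # normalize to mean 1

# Toy long-tailed label counts: one common finding, two rare ones.
w = class_balanced_weights([100_000, 500, 20])
assert w[2] > w[1] > w[0]   # rarer classes are upweighted
```

These weights would typically multiply a per-class binary cross-entropy term in the multi-label setting.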

4.
JAMA Cardiol ; 9(6): 534-544, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38581644

ABSTRACT

Importance: Aortic stenosis (AS) is a major public health challenge with a growing therapeutic landscape, but current biomarkers do not inform personalized screening and follow-up. A video-based artificial intelligence (AI) biomarker (Digital AS Severity index [DASSi]) can detect severe AS using single-view long-axis echocardiography without Doppler characterization. Objective: To deploy DASSi to patients with no AS or with mild or moderate AS at baseline to identify AS development and progression. Design, Setting, and Participants: This is a cohort study that examined 2 cohorts of patients without severe AS undergoing echocardiography in the Yale New Haven Health System (YNHHS; 2015-2021) and Cedars-Sinai Medical Center (CSMC; 2018-2019). A novel computational pipeline for the cross-modal translation of DASSi into cardiac magnetic resonance (CMR) imaging was further developed in the UK Biobank. Analyses were performed between August 2023 and February 2024. Exposure: DASSi (range, 0-1) derived from AI applied to echocardiography and CMR videos. Main Outcomes and Measures: Annualized change in peak aortic valve velocity (AV-Vmax) and late (>6 months) aortic valve replacement (AVR). Results: A total of 12 599 participants were included in the echocardiographic study (YNHHS: n = 8798; median [IQR] age, 71 [60-80] years; 4250 [48.3%] women; median [IQR] follow-up, 4.1 [2.4-5.4] years; and CSMC: n = 3801; median [IQR] age, 67 [54-78] years; 1685 [44.3%] women; median [IQR] follow-up, 3.4 [2.8-3.9] years). 
Higher baseline DASSi was associated with faster progression in AV-Vmax (per 0.1 DASSi increment: YNHHS, 0.033 m/s per year [95% CI, 0.028-0.038] among 5483 participants; CSMC, 0.082 m/s per year [95% CI, 0.053-0.111] among 1292 participants), with values of 0.2 or greater associated with a 4- to 5-fold higher AVR risk than values less than 0.2 (YNHHS: 715 events; adjusted hazard ratio [HR], 4.97 [95% CI, 2.71-5.82]; CSMC: 56 events; adjusted HR, 4.04 [95% CI, 0.92-17.70]), independent of age, sex, race, ethnicity, ejection fraction, and AV-Vmax. This was reproduced across 45 474 participants (median [IQR] age, 65 [59-71] years; 23 559 [51.8%] women; median [IQR] follow-up, 2.5 [1.6-3.9] years) undergoing CMR imaging in the UK Biobank (for participants with DASSi ≥0.2 vs those with DASSi <0.2, adjusted HR, 11.38 [95% CI, 2.56-50.57]). Saliency maps and phenome-wide association studies supported associations with cardiac structure and function and traditional cardiovascular risk factors. Conclusions and Relevance: In this cohort study of patients without severe AS undergoing echocardiography or CMR imaging, a new AI-based video biomarker was independently associated with AS development and progression, enabling opportunistic risk stratification across cardiovascular imaging modalities as well as potential application on handheld devices.
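The primary outcome above, annualized change in AV-Vmax, can be estimated for a single patient as the least-squares slope of velocity against follow-up time. The sketch below is a generic estimator under that assumption, not the study's actual analysis pipeline.

```python
import numpy as np

def annualized_slope(years, vmax):
    """Least-squares slope of peak aortic valve velocity (m/s) against
    follow-up time in years: one patient's annualized AV-Vmax change.
    Generic estimator sketch, not the study's pipeline."""
    return float(np.polyfit(np.asarray(years, float),
                            np.asarray(vmax, float), 1)[0])

# A hypothetical patient measured at baseline, 1 year, and 2 years.
slope = annualized_slope([0.0, 1.0, 2.0], [2.0, 2.1, 2.2])
assert abs(slope - 0.10) < 1e-9   # +0.10 m/s per year
```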


Subjects
Aortic Valve Stenosis , Artificial Intelligence , Disease Progression , Echocardiography , Severity of Illness Index , Humans , Aortic Valve Stenosis/diagnostic imaging , Aortic Valve Stenosis/surgery , Aortic Valve Stenosis/physiopathology , Female , Male , Aged , Echocardiography/methods , Middle Aged , Biomarkers , Aged, 80 and over , Cohort Studies , Video Recording , Multimodal Imaging/methods , Magnetic Resonance Imaging/methods
5.
Article in English | MEDLINE | ID: mdl-38687659

ABSTRACT

Recently, zero-shot (or training-free) Neural Architecture Search (NAS) approaches have been proposed to liberate NAS from the expensive training process. The key idea behind zero-shot NAS approaches is to design proxies that can predict the accuracy of some given networks without training the network parameters. The proxies proposed so far are usually inspired by recent progress in theoretical understanding of deep learning and have shown great potential on several datasets and NAS benchmarks. This paper aims to comprehensively review and compare the state-of-the-art (SOTA) zero-shot NAS approaches, with an emphasis on their hardware awareness. To this end, we first review the mainstream zero-shot proxies and discuss their theoretical underpinnings. We then compare these zero-shot proxies through large-scale experiments and demonstrate their effectiveness in both hardware-aware and hardware-oblivious NAS scenarios. Finally, we point out several promising ideas to design better proxies. Our source code and the list of related papers are available on https://github.com/SLDGroup/survey-zero-shot-nas.
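As a concrete example of the kind of training-free proxy surveyed above, the sketch below computes a SynFlow-style score for a toy fully connected linear network: run an all-ones input through absolute-valued weights and sum θ·∂R/∂θ over all parameters, where R is the summed output. This is an illustrative reimplementation under simplifying assumptions (no nonlinearity, dense layers only), not code from the survey.

```python
import numpy as np

def synflow_score(widths, seed=0):
    """SynFlow-style zero-shot proxy on a toy linear MLP: score the
    architecture without any training by summing theta * dR/dtheta,
    where R is the network output on an all-ones input with
    absolute-valued weights. Illustrative, not the survey's code."""
    rng = np.random.default_rng(seed)
    Ws = [np.abs(rng.normal(size=(widths[i + 1], widths[i])))
          for i in range(len(widths) - 1)]
    acts = [np.ones(widths[0])]
    for W in Ws:                        # forward pass on all-ones input
        acts.append(W @ acts[-1])
    grad = np.ones(widths[-1])          # dR/d(output) for R = sum(output)
    score = 0.0
    for W, a in zip(reversed(Ws), reversed(acts[:-1])):
        score += float((W * np.outer(grad, a)).sum())  # theta * gradient
        grad = W.T @ grad               # backprop through the linear layer
    return score

# Higher-capacity architectures tend to receive larger scores.
assert synflow_score([3, 8, 2]) > synflow_score([3, 2, 2]) > 0.0
```

No parameter is ever updated, which is exactly what makes such proxies cheap enough for large-scale NAS.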

6.
medRxiv ; 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38559021

ABSTRACT

BACKGROUND: Point-of-care ultrasonography (POCUS) enables cardiac imaging at the bedside and in communities but is limited by abbreviated protocols and variation in quality. We developed and tested artificial intelligence (AI) models to automate the detection of under-diagnosed cardiomyopathies from cardiac POCUS. METHODS: In a development set of 290,245 transthoracic echocardiographic videos across the Yale-New Haven Health System (YNHHS), we used augmentation approaches and a customized loss function weighted for view quality to derive a POCUS-adapted, multi-label, video-based convolutional neural network (CNN) that discriminates HCM (hypertrophic cardiomyopathy) and ATTR-CM (transthyretin amyloid cardiomyopathy) from controls without known disease. We evaluated the final model across independent, internal and external, retrospective cohorts of individuals who underwent cardiac POCUS across YNHHS and Mount Sinai Health System (MSHS) emergency departments (EDs) (2011-2024) to prioritize key views and validate the diagnostic and prognostic performance of single-view screening protocols. FINDINGS: We identified 33,127 patients (median age 61 [IQR: 45-75] years, n=17,276 [52.2%] female) at YNHHS and 5,624 (57 [IQR: 39-71] years, n=1,953 [34.7%] female) at MSHS with 78,054 and 13,796 eligible cardiac POCUS videos, respectively. An AI-enabled single-view screening approach successfully discriminated HCM (AUROC of 0.90 [YNHHS] & 0.89 [MSHS]) and ATTR-CM (AUROC of 0.92 [YNHHS] & 0.99 [MSHS]). In YNHHS, 40 (58.0%) HCM and 23 (47.9%) ATTR-CM cases had a positive screen at a median of 2.1 [IQR: 0.9-4.5] and 1.9 [IQR: 1.0-3.4] years before clinical diagnosis.
Moreover, among 24,448 participants without known cardiomyopathy followed over 2.2 [IQR: 1.1-5.8] years, AI-POCUS probabilities in the highest (vs lowest) quintile for HCM and ATTR-CM conferred a 15% (adj.HR 1.15 [95%CI: 1.02-1.29]) and 39% (adj.HR 1.39 [95%CI: 1.22-1.59]) higher age- and sex-adjusted mortality risk, respectively. INTERPRETATION: We developed and validated an AI framework that enables scalable, opportunistic screening of treatable cardiomyopathies wherever POCUS is used.

7.
J Am Med Inform Assoc ; 31(4): 855-865, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38269618

ABSTRACT

OBJECTIVE: Artificial intelligence (AI) detects heart disease from images of electrocardiograms (ECGs). However, traditional supervised learning is limited by the need for large amounts of labeled data. We report the development of Biometric Contrastive Learning (BCL), a self-supervised pretraining approach for label-efficient deep learning on ECG images. MATERIALS AND METHODS: Using pairs of ECGs from 78 288 individuals from Yale (2000-2015), we trained a convolutional neural network to identify temporally separated ECG pairs that varied in layouts from the same patient. We fine-tuned BCL-pretrained models to detect atrial fibrillation (AF), gender, and LVEF < 40%, using ECGs from 2015 to 2021. We externally tested the models in cohorts from Germany and the United States. We compared BCL with ImageNet initialization and general-purpose self-supervised contrastive learning for images (simCLR). RESULTS: While with 100% labeled training data, BCL performed similarly to other approaches for detecting AF/Gender/LVEF < 40% with an AUROC of 0.98/0.90/0.90 in the held-out test sets, it consistently outperformed other methods with smaller proportions of labeled data, reaching equivalent performance at 50% of data. With 0.1% data, BCL achieved AUROC of 0.88/0.79/0.75, compared with 0.51/0.52/0.60 (ImageNet) and 0.61/0.53/0.49 (simCLR). In external validation, BCL outperformed other methods even at 100% labeled training data, with an AUROC of 0.88/0.88 for Gender and LVEF < 40% compared with 0.83/0.83 (ImageNet) and 0.84/0.83 (simCLR). DISCUSSION AND CONCLUSION: A pretraining strategy that leverages biometric signatures of different ECGs from the same patient enhances the efficiency of developing AI models for ECG images. This represents a major advance in detecting disorders from ECG images with limited labeled data.
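The contrastive objective underlying approaches like BCL and simCLR is typically an InfoNCE-style loss, in which the two ECGs of the same patient form a positive pair and the other patients in the batch act as negatives. The NumPy sketch below is illustrative only: the temperature is an assumed value, and this is not the authors' implementation.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE-style contrastive loss: z1[i] and z2[i] embed two ECGs
    from the same (hypothetical) patient; other rows in the batch act
    as negatives. Temperature tau is an assumed value."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                      # cosine similarity logits
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))    # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Perfectly matched pairs score a lower loss than mismatched ones.
assert info_nce(z, z) < info_nce(z, np.roll(z, 1, axis=0))
```

Minimizing this loss pulls embeddings of the same patient together while pushing different patients apart, which is the biometric signature the paper exploits.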


Subjects
Atrial Fibrillation , Deep Learning , Humans , Artificial Intelligence , Electrocardiography , Biometry
8.
medRxiv ; 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-37808685

ABSTRACT

Importance: Aortic stenosis (AS) is a major public health challenge with a growing therapeutic landscape, but current biomarkers do not inform personalized screening and follow-up. Objective: A video-based artificial intelligence (AI) biomarker (Digital AS Severity index [DASSi]) can detect severe AS using single-view long-axis echocardiography without Doppler. Here, we deploy DASSi to patients with no or mild/moderate AS at baseline to identify AS development and progression. Design Setting and Participants: We defined two cohorts of patients without severe AS undergoing echocardiography in the Yale-New Haven Health System (YNHHS) (2015-2021, 4.1[IQR:2.4-5.4] follow-up years) and Cedars-Sinai Medical Center (CSMC) (2018-2019, 3.4[IQR:2.8-3.9] follow-up years). We further developed a novel computational pipeline for the cross-modality translation of DASSi into cardiac magnetic resonance (CMR) imaging in the UK Biobank (2.5[IQR:1.6-3.9] follow-up years). Analyses were performed between August 2023-February 2024. Exposure: DASSi (range: 0-1) derived from AI applied to echocardiography and CMR videos. Main Outcomes and Measures: Annualized change in peak aortic valve velocity (AV-Vmax) and late (>6 months) aortic valve replacement (AVR). Results: A total of 12,599 participants were included in the echocardiographic study (YNHHS: n=8,798, median age of 71 [IQR (interquartile range):60-80] years, 4250 [48.3%] women, and CSMC: n=3,801, 67 [IQR:54-78] years, 1685 [44.3%] women). Higher baseline DASSi was associated with faster progression in AV-Vmax (per 0.1 DASSi increments: YNHHS: +0.033 m/s/year [95%CI:0.028-0.038], n=5,483, and CSMC: +0.082 m/s/year [0.053-0.111], n=1,292), with levels ≥ vs <0.2 linked to a 4-to-5-fold higher AVR risk (715 events in YNHHS; adj.HR 4.97 [95%CI: 2.71-5.82], 56 events in CSMC: 4.04 [0.92-17.7]), independent of age, sex, ethnicity/race, ejection fraction and AV-Vmax. 
This was reproduced across 45,474 participants (median age 65 [IQR:59-71] years, 23,559 [51.8%] women) undergoing CMR in the UK Biobank (adj.HR 11.4 [95%CI:2.56-50.60] for DASSi ≥vs<0.2). Saliency maps and phenome-wide association studies supported links with traditional cardiovascular risk factors and diastolic dysfunction. Conclusions and Relevance: In this cohort study of patients without severe AS undergoing echocardiography or CMR imaging, a new AI-based video biomarker is independently associated with AS development and progression, enabling opportunistic risk stratification across cardiovascular imaging modalities as well as potential application on handheld devices.

9.
ArXiv ; 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-37986726

ABSTRACT

Many real-world image recognition problems, such as diagnostic medical imaging exams, are "long-tailed" - there are a few common findings followed by many more relatively rare conditions. In chest radiography, diagnosis is both a long-tailed and multi-label problem, as patients often present with multiple findings simultaneously. While researchers have begun to study the problem of long-tailed learning in medical image recognition, few have studied the interaction of label imbalance and label co-occurrence posed by long-tailed, multi-label disease classification. To engage with the research community on this emerging topic, we conducted an open challenge, CXR-LT, on long-tailed, multi-label thorax disease classification from chest X-rays (CXRs). We publicly release a large-scale benchmark dataset of over 350,000 CXRs, each labeled with at least one of 26 clinical findings following a long-tailed distribution. We synthesize common themes of top-performing solutions, providing practical recommendations for long-tailed, multi-label medical image classification. Finally, we use these insights to propose a path forward involving vision-language foundation models for few- and zero-shot disease classification.

10.
ArXiv ; 2023 Aug 17.
Article in English | MEDLINE | ID: mdl-37791108

ABSTRACT

Pruning has emerged as a powerful technique for compressing deep neural networks, reducing memory usage and inference time without significantly affecting overall performance. However, the nuanced ways in which pruning impacts model behavior are not well understood, particularly for long-tailed, multi-label datasets commonly found in clinical settings. This knowledge gap could have dangerous implications when deploying a pruned model for diagnosis, where unexpected model behavior could impact patient well-being. To fill this gap, we perform the first analysis of pruning's effect on neural networks trained to diagnose thorax diseases from chest X-rays (CXRs). On two large CXR datasets, we examine which diseases are most affected by pruning and characterize class "forgettability" based on disease frequency and co-occurrence behavior. Further, we identify individual CXRs where uncompressed and heavily pruned models disagree, known as pruning-identified exemplars (PIEs), and conduct a human reader study to evaluate their unifying qualities. We find that radiologists perceive PIEs as having more label noise, lower image quality, and higher diagnosis difficulty. This work represents a first step toward understanding the impact of pruning on model behavior in deep long-tailed, multi-label medical image classification. All code, model weights, and data access instructions can be found at https://github.com/VITA-Group/PruneCXR.
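For reference, the simplest form of the technique studied above is global magnitude pruning: pool all weights across layers and zero out the smallest-magnitude fraction. The sketch below is an illustrative baseline, not the paper's exact procedure.

```python
import numpy as np

def global_magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights, pooled
    across all layers (global magnitude pruning). Illustrative sketch,
    not the paper's exact procedure."""
    magnitudes = np.abs(np.concatenate([w.ravel() for w in weights]))
    k = int(sparsity * magnitudes.size)
    if k == 0:
        return [w.copy() for w in weights]
    threshold = np.partition(magnitudes, k)[k]   # (k+1)-th smallest |w|
    return [np.where(np.abs(w) < threshold, 0.0, w) for w in weights]

rng = np.random.default_rng(0)
layers = [rng.normal(size=(8, 8)), rng.normal(size=(4, 8))]
pruned = global_magnitude_prune(layers, sparsity=0.5)
total = sum(w.size for w in pruned)
zeros = sum(int((w == 0).sum()) for w in pruned)
assert zeros == int(0.5 * total)   # exactly the target fraction (no ties)
```

The paper's point is that the accuracy cost of such pruning is not uniform: rare, co-occurring classes in long-tailed datasets are "forgotten" first.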

11.
Med Image Comput Comput Assist Interv ; 14224: 663-673, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37829549

ABSTRACT

Pruning has emerged as a powerful technique for compressing deep neural networks, reducing memory usage and inference time without significantly affecting overall performance. However, the nuanced ways in which pruning impacts model behavior are not well understood, particularly for long-tailed, multi-label datasets commonly found in clinical settings. This knowledge gap could have dangerous implications when deploying a pruned model for diagnosis, where unexpected model behavior could impact patient well-being. To fill this gap, we perform the first analysis of pruning's effect on neural networks trained to diagnose thorax diseases from chest X-rays (CXRs). On two large CXR datasets, we examine which diseases are most affected by pruning and characterize class "forgettability" based on disease frequency and co-occurrence behavior. Further, we identify individual CXRs where uncompressed and heavily pruned models disagree, known as pruning-identified exemplars (PIEs), and conduct a human reader study to evaluate their unifying qualities. We find that radiologists perceive PIEs as having more label noise, lower image quality, and higher diagnosis difficulty. This work represents a first step toward understanding the impact of pruning on model behavior in deep long-tailed, multi-label medical image classification. All code, model weights, and data access instructions can be found at https://github.com/VITA-Group/PruneCXR.

12.
Proc AAAI Conf Artif Intell ; 37(7): 7893-7901, 2023 Feb.
Article in English | MEDLINE | ID: mdl-37846298

ABSTRACT

Federated learning (FL) emerges as a popular distributed learning schema that learns a model from a set of participating users without sharing raw data. One major challenge of FL comes with heterogeneous users, who may have distributionally different (or non-iid) data and varying computation resources. As federated users would use the model for prediction, they often demand the trained model to be robust against malicious attackers at test time. Whereas adversarial training (AT) provides a sound solution for centralized learning, extending its usage for federated users has imposed significant challenges, as many users may have very limited training data and tight computational budgets, to afford the data-hungry and costly AT. In this paper, we study a novel FL strategy: propagating adversarial robustness from rich-resource users that can afford AT, to those with poor resources that cannot afford it, during federated learning. We show that existing FL techniques cannot be effectively integrated with the strategy to propagate robustness among non-iid users and propose an efficient propagation approach by the proper use of batch-normalization. We demonstrate the rationality and effectiveness of our method through extensive experiments. Especially, the proposed method is shown to grant federated models remarkable robustness even when only a small portion of users afford AT during learning. Source code will be released.
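The aggregation step of the federated learning schema described above is commonly plain FedAvg: a data-size-weighted average of client parameters. The sketch below shows only this generic baseline; the paper's batch-normalization-based robustness propagation is deliberately not modeled.

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Plain FedAvg aggregation: a data-size-weighted average of each
    client's parameter list. The paper's batch-normalization handling
    for robustness propagation is intentionally not modeled here."""
    total = float(sum(client_sizes))
    n_params = len(client_params[0])
    return [
        sum(n * params[i] for params, n in zip(client_params, client_sizes)) / total
        for i in range(n_params)
    ]

# Two hypothetical clients holding one weight matrix each.
client_a = [np.full((2, 2), 2.0)]   # 10 samples
client_b = [np.full((2, 2), 5.0)]   # 30 samples
avg = fed_avg([client_a, client_b], [10, 30])
assert np.allclose(avg[0], 4.25)    # (10*2 + 30*5) / 40
```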

13.
Nat Commun ; 14(1): 6261, 2023 10 06.
Article in English | MEDLINE | ID: mdl-37803009

ABSTRACT

Deep learning has become a popular tool for computer-aided diagnosis using medical images, sometimes matching or exceeding the performance of clinicians. However, these models can also reflect and amplify human bias, potentially resulting in inaccurate or missed diagnoses. Despite this concern, the problem of improving model fairness in medical image classification by deep learning has yet to be fully studied. To address this issue, we propose an algorithm that leverages the marginal pairwise equal opportunity to reduce bias in medical image classification. Our evaluations across four tasks using four independent large-scale cohorts demonstrate that our proposed algorithm not only improves fairness in individual and intersectional subgroups but also maintains overall performance. Specifically, the relative change in pairwise fairness difference between our proposed model and the baseline model was reduced by over 35%, while the relative change in AUC value was typically within 1%. By reducing the bias generated by deep learning models, our proposed approach can potentially alleviate concerns about the fairness and reliability of image-based computer-aided diagnosis.
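A simple way to quantify the kind of bias targeted above is a pairwise equal-opportunity gap: the largest difference in true-positive rate between any two subgroups. The sketch below is illustrative metric code, not the authors' debiasing algorithm.

```python
import numpy as np

def pairwise_eo_gap(y_true, y_pred, groups):
    """Largest absolute true-positive-rate difference between any two
    subgroups: a simple pairwise equal-opportunity gap. Illustrative
    metric code, not the authors' debiasing algorithm."""
    def tpr(mask):
        pos = (y_true == 1) & mask
        return float((y_pred[pos] == 1).mean())
    tprs = [tpr(groups == g) for g in np.unique(groups)]
    return max(abs(a - b) for a in tprs for b in tprs)

y_true = np.array([1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1])
groups = np.array([0, 0, 1, 1, 0, 1])
# Group 0 catches 1 of 2 positives; group 1 catches 2 of 2.
assert pairwise_eo_gap(y_true, y_pred, groups) == 0.5
```

A fairness-aware training objective would drive this gap toward zero while preserving overall AUC.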


Subjects
Algorithms , Diagnosis, Computer-Assisted , Humans , Reproducibility of Results , Diagnosis, Computer-Assisted/methods , Computers
14.
medRxiv ; 2023 Sep 14.
Article in English | MEDLINE | ID: mdl-37745527

ABSTRACT

Objective: Artificial intelligence (AI) detects heart disease from images of electrocardiograms (ECGs); however, traditional supervised learning is limited by the need for large amounts of labeled data. We report the development of Biometric Contrastive Learning (BCL), a self-supervised pretraining approach for label-efficient deep learning on ECG images. Materials and Methods: Using pairs of ECGs from 78,288 individuals from Yale (2000-2015), we trained a convolutional neural network to identify temporally-separated ECG pairs that varied in layouts from the same patient. We fine-tuned BCL-pretrained models to detect atrial fibrillation (AF), gender, and LVEF<40%, using ECGs from 2015-2021. We externally tested the models in cohorts from Germany and the US. We compared BCL with random initialization and general-purpose self-supervised contrastive learning for images (simCLR). Results: While with 100% labeled training data, BCL performed similarly to other approaches for detecting AF/Gender/LVEF<40% with AUROC of 0.98/0.90/0.90 in the held-out test sets, it consistently outperformed other methods with smaller proportions of labeled data, reaching equivalent performance at 50% of data. With 0.1% data, BCL achieved AUROC of 0.88/0.79/0.75, compared with 0.51/0.52/0.60 (random) and 0.61/0.53/0.49 (simCLR). In external validation, BCL outperformed other methods even at 100% labeled training data, with AUROC of 0.88/0.88 for Gender and LVEF<40% compared with 0.83/0.83 (random) and 0.84/0.83 (simCLR). Discussion and Conclusion: A pretraining strategy that leverages biometric signatures of different ECGs from the same patient enhances the efficiency of developing AI models for ECG images. This represents a major advance in detecting disorders from ECG images with limited labeled data.

15.
IEEE Trans Image Process ; 32: 5270-5282, 2023.
Article in English | MEDLINE | ID: mdl-37721872

ABSTRACT

In blurry images, the degree of image blur may vary drastically due to different factors, such as varying speeds of shaking cameras and moving objects, as well as defects of the camera lens. However, current end-to-end models fail to explicitly account for such diversity of blurs. This unawareness compromises the specialization at each blur level, yielding sub-optimal deblurred images as well as redundant post-processing. Therefore, how to specialize one model simultaneously at different blur levels, while still ensuring coverage and generalization, becomes an emerging challenge. In this work, we propose Ada-Deblur, a super-network that can be applied to a "broad spectrum" of blur levels with no re-training on novel blurs. To balance between individual blur level specialization and wide-range blur level coverage, the key idea is to dynamically adapt the network architectures from a single well-trained super-network structure, targeting flexible image processing with different deblurring capacities at test time. Extensive experiments demonstrate that our work outperforms strong baselines, achieving better reconstruction accuracy while incurring minimal computational overhead. Besides, we show that our method is effective for both synthetic and realistic blurs compared to these baselines. The performance gap between our model and the state-of-the-art becomes more prominent when testing with unseen and strong blur levels. Specifically, our model demonstrates surprising deblurring performance on these images with PSNR improvements of around 1 dB. Our code is publicly available at https://github.com/wuqiuche/Ada-Deblur.
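Reconstruction accuracy in deblurring is typically reported as PSNR, the metric behind the ~1 dB improvements quoted above. A minimal sketch of the metric, assuming 8-bit images with a peak value of 255:

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image x and
    a reconstruction y, assuming a peak pixel value of 255 (8-bit)."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

img = np.zeros((4, 4))
assert psnr(img, img) == float("inf")   # identical images
# A constant error of 16 gives MSE 256 -> 10*log10(255**2/256) ~ 24.05 dB.
assert abs(psnr(img, img + 16.0) - 24.05) < 0.01
```

Because the scale is logarithmic, a 1 dB gain corresponds to roughly a 21% reduction in mean squared error.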

16.
Eur Heart J ; 44(43): 4592-4604, 2023 11 14.
Article in English | MEDLINE | ID: mdl-37611002

ABSTRACT

BACKGROUND AND AIMS: Early diagnosis of aortic stenosis (AS) is critical to prevent morbidity and mortality but requires skilled examination with Doppler imaging. This study reports the development and validation of a novel deep learning model that relies on two-dimensional (2D) parasternal long axis videos from transthoracic echocardiography without Doppler imaging to identify severe AS, suitable for point-of-care ultrasonography. METHODS AND RESULTS: In a training set of 5257 studies (17 570 videos) from 2016 to 2020 [Yale-New Haven Hospital (YNHH), Connecticut], an ensemble of three-dimensional convolutional neural networks was developed to detect severe AS, leveraging self-supervised contrastive pretraining for label-efficient model development. This deep learning model was validated in a temporally distinct set of 2040 consecutive studies from 2021 from YNHH as well as two geographically distinct cohorts of 4226 and 3072 studies, from California and other hospitals in New England, respectively. The deep learning model achieved an area under the receiver operating characteristic curve (AUROC) of 0.978 (95% CI: 0.966, 0.988) for detecting severe AS in the temporally distinct test set, maintaining its diagnostic performance in geographically distinct cohorts [0.952 AUROC (95% CI: 0.941, 0.963) in California and 0.942 AUROC (95% CI: 0.909, 0.966) in New England]. The model was interpretable with saliency maps identifying the aortic valve, mitral annulus, and left atrium as the predictive regions. Among non-severe AS cases, predicted probabilities were associated with worse quantitative metrics of AS suggesting an association with various stages of AS severity. CONCLUSION: This study developed and externally validated an automated approach for severe AS detection using single-view 2D echocardiography, with potential utility for point-of-care screening.
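The AUROC figures reported above can be computed without a library via the Mann-Whitney U formulation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch:

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via the Mann-Whitney U formulation: the probability that a
    random positive outscores a random negative (ties count half)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)

y = np.array([0, 0, 1, 1])
s = np.array([0.10, 0.40, 0.35, 0.80])
assert auroc(y, s) == 0.75   # 3 of 4 positive/negative pairs ranked correctly
```

This pairwise form is O(P·N) in the number of positives and negatives; production code would use a sorting-based implementation, but the result is identical.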


Subjects
Aortic Valve Stenosis , Deep Learning , Humans , Echocardiography , Aortic Valve Stenosis/diagnostic imaging , Aortic Valve Stenosis/complications , Aortic Valve/diagnostic imaging , Ultrasonography
17.
AMIA Jt Summits Transl Sci Proc ; 2023: 370-377, 2023.
Article in English | MEDLINE | ID: mdl-37350910

ABSTRACT

In the United States, primary open-angle glaucoma (POAG) is the leading cause of blindness, especially among African American and Hispanic individuals. Deep learning has been widely used to detect POAG using fundus images, as its performance is comparable to or even surpasses diagnosis by clinicians. However, human bias in clinical diagnosis may be reflected and amplified in the widely used deep learning models, thus impacting their performance. Biases may cause (1) underdiagnosis, increasing the risks of delayed or inadequate treatment, and (2) overdiagnosis, which may increase individuals' stress and fear and lead to unnecessary and costly treatment. In this study, we examined underdiagnosis and overdiagnosis when applying deep learning in POAG detection based on the Ocular Hypertension Treatment Study (OHTS) from 22 centers across 16 states in the United States. Our results show that the widely used deep learning model can underdiagnose or overdiagnose under-served populations. The most underdiagnosed group is the younger female (<60 yrs) group, and the most overdiagnosed group is the older Black (≥60 yrs) group. Biased diagnosis through traditional deep learning methods may delay disease detection and treatment and create burdens among under-served populations, thereby raising ethical concerns about using deep learning models in ophthalmology clinics.

18.
IEEE Trans Image Process ; 32: 3481-3492, 2023.
Article in English | MEDLINE | ID: mdl-37220042

ABSTRACT

Imagery collected from outdoor visual environments is often degraded by the presence of dense smoke or haze. A key challenge for research in scene understanding in these degraded visual environments (DVE) is the lack of representative benchmark datasets, which are required to evaluate state-of-the-art object recognition and other computer vision algorithms in degraded settings. In this paper, we address some of these limitations by introducing the first realistic haze image benchmark, from both aerial and ground views, with paired haze-free images and in-situ haze density measurements. This dataset was produced in a controlled environment with professional smoke-generating machines that covered the entire scene, and consists of images captured from the perspectives of both an unmanned aerial vehicle (UAV) and an unmanned ground vehicle (UGV). We also evaluate a set of representative state-of-the-art dehazing approaches as well as object detectors on the dataset. The full dataset presented in this paper, including the ground truth object classification bounding boxes and haze density measurements, is provided for the community to evaluate their algorithms at: https://a2i2-archangel.vision. A subset of this dataset has been used for the "Object Detection in Haze" Track of the CVPR UG2 2022 challenge at https://cvpr2022.ug2challenge.org/track1.html.
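Because the benchmark provides paired hazy/haze-free images, dehazing methods can be scored with full-reference metrics such as PSNR. A minimal sketch is shown below; it is illustrative only (real evaluations typically also report SSIM and operate on 2-D image arrays rather than flat pixel lists).

```python
import math

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a haze-free reference image
    and a dehazed output, each given as a flat sequence of pixel values."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

Higher PSNR means the dehazed output is closer to its paired haze-free reference, which is exactly what the paired design of the dataset makes measurable.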


Subjects
Algorithms , Benchmarking , Visual Perception
19.
CNS Neurosci Ther ; 29(11): 3183-3198, 2023 11.
Article in English | MEDLINE | ID: mdl-37222223

ABSTRACT

AIMS: This metabolomic study aimed to evaluate the role of N-acetylneuraminic acid (Neu5Ac) in the neurological deficits of normal pressure hydrocephalus (NPH) and its potential therapeutic effect. METHODS: We analyzed the metabolic profiles of NPH using cerebrospinal fluid with multivariate and univariate statistical analyses in a set of 42 NPH patients and 38 controls. We further correlated the levels of differential metabolites with severity-related clinical parameters, including the normal pressure hydrocephalus grading scale (NPHGS). We then established kaolin-induced hydrocephalus in mice and treated them using N-acetylmannosamine (ManNAc), a precursor of Neu5Ac. We examined brain Neu5Ac, astrocyte polarization, demyelination, and neurobehavioral outcomes to explore its therapeutic effect. RESULTS: Three metabolites were significantly altered in NPH patients. Only decreased Neu5Ac levels were correlated with NPHGS scores. Decreased brain Neu5Ac levels were also observed in hydrocephalic mice. Increasing brain Neu5Ac with ManNAc suppressed the activation of astrocytes and promoted their transition from A1 to A2 polarization. ManNAc also attenuated periventricular white matter demyelination and improved neurobehavioral outcomes in hydrocephalic mice. CONCLUSION: Increasing brain Neu5Ac improved neurological outcomes through regulation of astrocyte polarization and suppression of demyelination in hydrocephalic mice, which may be a potential therapeutic strategy for NPH.


Subjects
Demyelinating Diseases , Hydrocephalus, Normal Pressure , Humans , Mice , Animals , N-Acetylneuraminic Acid/metabolism , Brain/metabolism , Metabolomics
20.
IEEE Winter Conf Appl Comput Vis ; 2023: 4976-4985, 2023 Jan.
Article in English | MEDLINE | ID: mdl-37051561

ABSTRACT

Deep neural networks (DNNs) have rapidly become a de facto choice for medical image understanding tasks. However, DNNs are notoriously fragile to class imbalance in image classification. We further point out that this imbalance fragility can be amplified in more sophisticated tasks such as pathology localization, as imbalances in such problems can take highly complex and often implicit forms. For example, different pathologies can have different sizes or colors (w.r.t. the background), different underlying demographic distributions, and in general different difficulty levels to recognize, even in a meticulously curated, balanced distribution of training data. In this paper, we propose to use pruning to automatically and adaptively identify hard-to-learn (HTL) training samples, and to improve pathology localization by attending to them explicitly during training in supervised, semi-supervised, and weakly-supervised settings. Our main inspiration is drawn from the recent finding that deep classification models have difficult-to-memorize samples that may be effectively exposed through network pruning [15], and we extend this observation beyond classification for the first time. We also present an interesting demographic analysis which illustrates the ability of HTLs to capture complex demographic imbalances. Our extensive experiments on the skin lesion localization task in multiple training settings show that paying additional attention to HTLs significantly improves localization performance, by ~2-3%.
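At its core, the pruning-based HTL idea amounts to comparing a dense model's predictions with those of a pruned copy and upweighting the samples on which they disagree. The toy sketch below shows only that bookkeeping; the helper names are hypothetical and the actual method in the paper is more involved.

```python
def flag_hard_to_learn(dense_preds, pruned_preds):
    """Indices where pruning flips the prediction: these samples rely on
    capacity removed by pruning, so they are candidate HTL samples."""
    return [i for i, (d, p) in enumerate(zip(dense_preds, pruned_preds))
            if d != p]

def reweight(sample_weights, htl_indices, boost=2.0):
    """Upweight flagged HTL samples in the training loss so the model
    attends to them explicitly during subsequent training."""
    flagged = set(htl_indices)
    return [w * boost if i in flagged else w
            for i, w in enumerate(sample_weights)]
```

In a real pipeline the predictions would come from the same network before and after magnitude pruning, and the boosted weights would feed a per-sample weighted loss in the next training round.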
