Results 1 - 6 of 6
1.
Med Biol Eng Comput ; 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38664348

ABSTRACT

In the contemporary era, artificial intelligence (AI) has undergone a transformative evolution, exerting a profound influence on neuroimaging data analysis and significantly elevating our comprehension of intricate brain functions. This study investigates the ramifications of applying AI techniques to neuroimaging data, with the specific objective of improving diagnostic capabilities and contributing to the overall progress of the field. A systematic search was conducted in prominent scientific databases, including PubMed, IEEE Xplore, and Scopus, curating 456 relevant articles on AI-driven neuroimaging analysis spanning 2013 to 2023. To maintain rigor and credibility, stringent inclusion criteria, quality assessments, and precise data extraction protocols were consistently enforced throughout this review. Following a rigorous selection process, 104 studies were selected for review, covering diverse neuroimaging modalities with an emphasis on mental and neurological disorders. Among these, 19.2% addressed mental illness and 80.7% focused on neurological disorders. The prevailing clinical tasks were disease classification (58.7%) and lesion segmentation (28.9%), whereas image reconstruction accounted for 7.3% and image regression and prediction tasks for 9.6%. AI-driven neuroimaging analysis holds tremendous potential, transforming both research and clinical applications. Machine learning and deep learning algorithms outperform traditional methods, reshaping the field significantly.

2.
Article in English | MEDLINE | ID: mdl-37021897

ABSTRACT

Deep learning techniques can help minimize inter-physician analysis variability and medical experts' workloads, thereby enabling more accurate diagnoses. However, their implementation requires large-scale annotated datasets whose acquisition incurs heavy time and human-expertise costs. Hence, to significantly reduce the annotation cost, this study presents a novel framework that enables the deployment of deep learning methods in ultrasound (US) image segmentation using only a very limited number of manually annotated samples. We propose SegMix, a fast and efficient approach that exploits a segment-paste-blend concept to generate a large number of annotated samples from a few manually acquired labels. In addition, a series of US-specific augmentation strategies built upon image enhancement algorithms is introduced to make maximum use of the limited number of manually delineated images available. The feasibility of the proposed framework is validated on left ventricle (LV) segmentation and fetal head (FH) segmentation tasks. Experimental results demonstrate that, using only 10 manually annotated images, the proposed framework achieves Dice and JI scores of 82.61% and 83.92% for LV segmentation, and 88.42% and 89.27% for FH segmentation, respectively. Compared with training on the entire training set, this represents an annotation cost reduction of over 98% while achieving comparable segmentation performance. This indicates that the proposed framework enables satisfactory deep learning performance when only a very limited number of annotated samples is available. We therefore believe it can be a reliable solution for annotation cost reduction in medical image analysis.
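The segment-paste-blend idea can be sketched minimally as follows. This is a hypothetical NumPy illustration, not SegMix's actual pipeline (the abstract does not specify its blending scheme or parameters): a labeled segment is cut from a source image, pasted into a target image, and alpha-blended, yielding a new image together with its label mask for free.

```python
import numpy as np

def paste_blend(src_img, src_mask, dst_img, alpha=0.7):
    """Paste the masked segment of src_img onto dst_img with simple
    alpha blending; return the new image and its label mask."""
    mask = src_mask.astype(float)
    out = dst_img.astype(float)
    # Inside the segment, mix source and destination intensities;
    # outside, keep the destination untouched.
    out = mask * (alpha * src_img + (1 - alpha) * dst_img) + (1 - mask) * out
    return out, src_mask.copy()

# Toy example: a bright 2x2 "segment" pasted into a dark target image.
src = np.full((4, 4), 200.0)
msk = np.zeros((4, 4), dtype=np.uint8)
msk[1:3, 1:3] = 1
dst = np.zeros((4, 4))
img, lbl = paste_blend(src, msk, dst)
```

Because the label mask travels with the pasted segment, every synthetic image comes pre-annotated, which is what makes this style of augmentation cheap.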

3.
Comput Biol Med ; 152: 106385, 2023 01.
Article in English | MEDLINE | ID: mdl-36493732

ABSTRACT

BACKGROUND: Numerous traditional filtering approaches and deep learning-based methods have been proposed to improve the quality of ultrasound (US) image data. However, their results tend to suffer from over-smoothing and loss of texture and fine detail. Moreover, they perform poorly on images with different degradation levels and focus mainly on speckle reduction, even though texture and fine-detail enhancement are of crucial importance in clinical diagnosis. METHODS: We propose an end-to-end framework, termed US-Net, for simultaneous speckle suppression and texture enhancement in US images. The architecture of US-Net is inspired by U-Net, with a feature refinement attention block (FRAB) introduced to enable effective learning of multi-level and multi-contextual representative features. Specifically, FRAB emphasizes high-frequency image information, which helps boost the restoration and preservation of fine-grained and textural details. Furthermore, US-Net is trained primarily with real US image data, with real US images embedded with simulated multi-level speckle noise used as an auxiliary training set. RESULTS: Extensive quantitative and qualitative experiments indicate that, although trained with only one type of US image data, the proposed US-Net can restore images acquired from different body parts and scanning settings with different degradation levels, while exhibiting favorable performance against state-of-the-art image enhancement approaches. Furthermore, using US-Net as a pre-processing stage for COVID-19 diagnosis yields a gain of 3.6% in diagnostic accuracy. CONCLUSIONS: The proposed framework can help improve the accuracy of ultrasound diagnosis.
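The auxiliary training set described above embeds simulated multi-level speckle into real images. A common formulation for this is multiplicative noise; the sketch below assumes that model (the abstract does not specify US-Net's actual noise simulation), with the noise standard deviation controlling the degradation level.

```python
import numpy as np

def add_speckle(img, sigma, rng=None):
    """Embed simulated multiplicative speckle noise: y = x * (1 + n),
    with n ~ N(0, sigma^2). Larger sigma means a stronger degradation level."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(0.0, sigma, size=img.shape)
    return np.clip(img * (1.0 + noise), 0.0, 255.0)

# Multi-level degradation: one clean image, several speckle strengths.
clean = np.full((8, 8), 128.0)
levels = [add_speckle(clean, s) for s in (0.05, 0.15, 0.30)]
```

Pairing each degraded image with its clean original gives supervised training samples at several degradation levels, which is presumably what lets the network generalize across scanning settings.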


Subject(s)
COVID-19 Testing; COVID-19; Humans; Ultrasonography/methods; Image Enhancement/methods; Image Processing, Computer-Assisted; Algorithms
4.
Comput Biol Med ; 149: 106090, 2022 10.
Article in English | MEDLINE | ID: mdl-36115304

ABSTRACT

BACKGROUND: In recent years, deep learning techniques have demonstrated promising performance in echocardiography (echo) data segmentation, which constitutes a critical step in the diagnosis and prognosis of cardiovascular diseases (CVDs). However, their successful implementation requires a large number of high-quality annotated samples, whose acquisition is arduous and expertise-demanding. To this end, this study aims at circumventing the tedious, time-consuming and expertise-demanding data annotation involved in deep learning-based echo data segmentation. METHODS: We propose a two-phase framework for fast generation of the annotated echo data needed to implement intelligent cardiac structure segmentation systems. First, multi-size and multi-orientation cardiac structures are simulated using a polynomial fitting method. Second, the obtained cardiac structures are embedded onto curated endoscopic ultrasound images using a Fourier transform algorithm, resulting in pairs of annotated samples. The practical significance of the proposed framework is validated by using the generated realistic annotated images as an auxiliary dataset to pretrain deep learning models for automatic segmentation of the left ventricle and the left ventricle wall in real echo data, respectively. RESULTS: Extensive experimental analyses indicate that, compared with training from scratch, fine-tuning after pretraining with the generated dataset consistently yields significant performance improvements, with margins reaching 12.9% in Dice and 7.74% in IoU. CONCLUSION: The proposed framework has great potential to overcome the shortage of labeled data hampering the deployment of deep learning approaches in echo data analysis.
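As an illustration of the first phase, a smooth cardiac-structure-like boundary can be simulated by fitting a polynomial through a handful of control points and sampling it densely. This is a hypothetical sketch: the abstract gives neither the fitting degree nor the control-point scheme, so both are assumptions here.

```python
import numpy as np

def simulate_contour(control_y, degree=3, n_points=50):
    """Fit a polynomial through a few control points and sample it densely,
    yielding one smooth synthetic boundary of a simulated structure."""
    x_ctrl = np.linspace(0.0, 1.0, len(control_y))
    coeffs = np.polyfit(x_ctrl, control_y, degree)
    x = np.linspace(0.0, 1.0, n_points)
    return x, np.polyval(coeffs, x)

# Multi-size variants: rescaling the control points changes structure size.
base = np.array([0.0, 0.6, 0.8, 0.5, 0.0])
x, y_small = simulate_contour(0.5 * base)
_, y_large = simulate_contour(1.5 * base)
```

Because least-squares fitting is linear in the data, scaling the control points scales the whole contour, which makes generating multi-size variants trivial; multi-orientation variants would follow from rotating the sampled points.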


Subject(s)
Algorithms; Echocardiography; Heart/diagnostic imaging; Heart Ventricles/diagnostic imaging
5.
Acad Radiol ; 28(11): 1507-1523, 2021 11.
Article in English | MEDLINE | ID: mdl-34649779

ABSTRACT

RATIONALE AND OBJECTIVES: To perform a meta-analysis comparing the diagnostic test accuracy (DTA) of deep learning (DL) in detecting coronavirus disease 2019 (COVID-19), and to investigate how network architecture and type of dataset affect DL performance. MATERIALS AND METHODS: We searched PubMed, Web of Science and Inspec from January 1, 2020, to December 3, 2020, for retrospective and prospective studies on deep learning-based detection that reported at least sensitivity and specificity. Pooled DTA was obtained using random-effects models. Sub-group analyses by data source and network architecture were also carried out. RESULTS: Across 19 studies, the pooled sensitivity and specificity were 91% (95% confidence interval [CI]: 88%, 93%; I2 = 69%) and 92% (95% CI: 88%, 94%; I2 = 88%), respectively. The pooled AUC and diagnostic odds ratio (DOR) were 0.95 (95% CI: 0.88, 0.92) and 112.5 (95% CI: 57.7, 219.3; I2 = 90%), respectively. The overall accuracy, recall, F1-score, LR+ and LR- were 89.5%, 89.5%, 89.7%, 23.13 and 0.13, respectively. Sub-group analysis shows that sensitivity and DOR vary significantly with network architecture and data source, with low heterogeneity (I2 = 0% and I2 = 18%) for the ResNet architecture and single-source datasets, respectively. CONCLUSION: The diagnosis of COVID-19 via deep learning has achieved remarkable performance, and the source of the datasets, as well as the network architecture, strongly affects DL performance.
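The pooled estimates above come from random-effects models. As a simplified sketch of the underlying mechanics (a fixed-effect inverse-variance pooling on the logit scale, not the authors' exact random-effects procedure), per-study proportions such as sensitivities can be combined like this; the study values and sample sizes below are hypothetical.

```python
import math

def pool_logit(props, ns):
    """Inverse-variance pooling of proportions (e.g. per-study sensitivities)
    on the logit scale; returns the back-transformed pooled proportion."""
    logits, weights = [], []
    for p, n in zip(props, ns):
        logit = math.log(p / (1 - p))
        var = 1.0 / (n * p * (1 - p))  # approximate variance of the logit
        logits.append(logit)
        weights.append(1.0 / var)     # bigger, more precise studies weigh more
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))  # back-transform to a proportion

# Hypothetical per-study sensitivities and sample sizes.
sens = pool_logit([0.88, 0.93, 0.90], [120, 250, 80])
```

A random-effects model adds a between-study variance term to each weight, which widens the confidence interval when heterogeneity (I2) is high, as it is for several of the pooled estimates reported above.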


Subject(s)
COVID-19; Deep Learning; Diagnostic Tests, Routine; Humans; Prospective Studies; Retrospective Studies; SARS-CoV-2
6.
Biomed Eng Online ; 17(1): 96, 2018 Jul 16.
Article in English | MEDLINE | ID: mdl-30012167

ABSTRACT

BACKGROUND: Early and automatic detection of pulmonary nodules from CT lung screening is a prerequisite for precise management of lung cancer. However, a large number of false positives appears when increasing sensitivity, especially for micro-nodules (diameter < 3 mm), which increases radiologists' workload and causes unnecessary anxiety for patients. To decrease the false positive rate, we propose to use CNN models to discriminate between pulmonary micro-nodules and non-nodules in CT image patches. METHODS: A total of 13,179 micro-nodules and 21,315 non-nodules marked by radiologists were extracted with three different patch sizes (16 × 16, 32 × 32 and 64 × 64) from the LIDC/IDRI database and used in the experiments. Three CNN models with different depths (1, 2 or 4 convolutional layers) were designed; their performances were evaluated by fivefold cross-validation in terms of accuracy, area under the curve (AUC), F-score and sensitivity. The network parameters were also optimized. RESULTS: The performance of the CNN models is strongly dependent on the patch size and the number of convolutional layers. The CNN model with two convolutional layers presented the best performance for the 32 × 32 patch size, achieving an accuracy of 88.28%, an AUC of 0.87, an F-score of 83.45% and a sensitivity of 83.82%. CONCLUSIONS: CNN models with an appropriate depth and image patch size can effectively discriminate between pulmonary micro-nodules and non-nodules, reducing false positives and helping to manage lung cancer precisely.
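The evaluation above relies on fivefold cross-validation. A minimal sketch of how the labeled patches might be partitioned (hypothetical; the study's exact fold assignment is not specified in the abstract):

```python
import numpy as np

def five_fold_splits(n_samples, seed=0):
    """Shuffle sample indices and yield (train_idx, val_idx) pairs for
    fivefold cross-validation; every sample is validated exactly once."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, 5)
    for k in range(5):
        val_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train_idx, val_idx

# Toy run on 100 samples; in the study this would cover all 34,494 patches.
splits = list(five_fold_splits(100))
```

Averaging accuracy, AUC, F-score and sensitivity over the five validation folds is what produces per-model figures like those reported above.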


Subject(s)
Image Processing, Computer-Assisted; Lung/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed; Databases, Factual; False Positive Reactions; Humans; Lung Neoplasms/diagnostic imaging