1.
Ultrasound Med Biol ; 50(5): 703-711, 2024 05.
Article in English | MEDLINE | ID: mdl-38350787

ABSTRACT

OBJECTIVE: The aim of this study was to address the challenges posed by the manual labeling of fetal ultrasound images by introducing an unsupervised approach, the fetal ultrasound semantic clustering (FUSC) method. The primary objective was to automatically cluster a large volume of ultrasound images into various fetal views, reducing or eliminating the need for labor-intensive manual labeling. METHODS: The FUSC method was developed using a substantial data set comprising 88,063 images. The methodology involves an unsupervised clustering approach to categorize ultrasound images into diverse fetal views. The method's effectiveness was further evaluated on an additional, unseen data set consisting of 8187 images. The evaluation included assessment of clustering purity, and the entire process is detailed to provide insight into the method's performance. RESULTS: The FUSC method exhibited notable success, achieving >92% clustering purity on the evaluation data set of 8187 images. The results signify the feasibility of automatically clustering fetal ultrasound images without relying on manual labeling. The study showcases the potential of this approach in handling the large volume of ultrasound scans encountered in clinical practice, with implications for improving efficiency and accuracy in fetal ultrasound imaging. CONCLUSION: The findings of this investigation suggest that the FUSC method holds significant promise for the field of fetal ultrasound imaging. By automating the clustering of ultrasound images, this approach has the potential to reduce the manual labeling burden, making the process more efficient. The results pave the way for advanced automated labeling solutions, contributing to the enhancement of clinical practices in fetal ultrasound imaging. Our code is available at https://github.com/BioMedIA-MBZUAI/FUSC.
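The headline metric here is clustering purity. As a rough illustration only (the authors' actual implementation is at the GitHub link above), purity assigns each cluster its majority ground-truth view label and measures the fraction of images that match:

```python
from collections import Counter

def cluster_purity(cluster_ids, true_labels):
    """Purity: for each cluster, count its most common true label,
    then divide the summed majority counts by the number of samples."""
    total = 0
    for c in set(cluster_ids):
        members = [t for k, t in zip(cluster_ids, true_labels) if k == c]
        total += Counter(members).most_common(1)[0][1]
    return total / len(true_labels)

# Toy example: 3 clusters over 6 images with known (hypothetical) view labels.
clusters = [0, 0, 1, 1, 2, 2]
views = ["head", "head", "abdomen", "femur", "femur", "femur"]
print(cluster_purity(clusters, views))  # 5 of 6 majority-label hits
```

A purity above 0.92, as reported, would mean that over 92% of images sit in a cluster dominated by their own view class.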


Subject(s)
Semantics , Ultrasonography, Prenatal , Pregnancy , Female , Humans , Pregnancy Trimester, Second , Ultrasonography, Prenatal/methods , Supervised Machine Learning , Cluster Analysis
2.
Med Image Anal ; 92: 103047, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38157647

ABSTRACT

Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Cell Nucleus/pathology , Histological Techniques/methods
3.
Med Image Anal ; 90: 102989, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37827111

ABSTRACT

The number of studies on deep learning for medical diagnosis is expanding, and these systems are often claimed to outperform clinicians. However, only a few systems have shown medical efficacy. From this perspective, we examine a wide range of deep learning algorithms for the assessment of glioblastoma, a common and lethal brain tumor in older adults. Surgery, chemotherapy, and radiation are the standard treatments for glioblastoma patients. The methylation status of the MGMT promoter, a specific genetic sequence found in the tumor, affects chemotherapy's effectiveness: MGMT promoter methylation improves chemotherapy response and survival in several cancers. MGMT promoter methylation is determined by a tumor tissue biopsy, which is then genetically tested. This lengthy and invasive procedure increases the risk of infection and other complications. Thus, researchers have used deep learning models to examine the tumor in brain MRI scans to determine the MGMT promoter's methylation state. We employ deep learning models and one of the largest public MRI datasets, of 585 participants, to predict the methylation status of the MGMT promoter in glioblastoma tumors from MRI scans. We test these models using Grad-CAM, occlusion sensitivity, feature visualizations, and training loss landscapes. Our results show no correlation between the two, indicating that external cohort data should be used to verify these models' performance to assure the accuracy and reliability of deep learning systems in cancer diagnosis.


Subject(s)
Brain Neoplasms , Deep Learning , Glioblastoma , Humans , Aged , Glioblastoma/diagnostic imaging , Glioblastoma/genetics , Methylation , Reproducibility of Results , DNA Modification Methylases/genetics , DNA Modification Methylases/metabolism , DNA Modification Methylases/therapeutic use , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/genetics , Magnetic Resonance Imaging/methods , Tumor Suppressor Proteins/genetics , Tumor Suppressor Proteins/metabolism , Tumor Suppressor Proteins/therapeutic use , DNA Repair Enzymes/genetics , DNA Repair Enzymes/metabolism , DNA Repair Enzymes/therapeutic use
4.
Bioengineering (Basel) ; 10(7)2023 Jul 24.
Article in English | MEDLINE | ID: mdl-37508906

ABSTRACT

Medical image segmentation is a vital healthcare endeavor requiring precise and efficient models for appropriate diagnosis and treatment. Vision transformer (ViT)-based segmentation models have shown great performance in accomplishing this task. However, to build a powerful backbone, the self-attention block of ViT requires large-scale pre-training data. The prevailing method of adapting pre-trained models entails updating all or some of the backbone parameters. This paper proposes a novel fine-tuning strategy for adapting a pre-trained transformer-based segmentation model to data from a new medical center. The method introduces a small number of learnable parameters, termed prompts, into the input space (less than 1% of model parameters) while keeping the rest of the model parameters frozen. Extensive studies employing data from new unseen medical centers show that prompt-based fine-tuning of medical segmentation models provides excellent performance on the new-center data with a negligible drop on the old centers. Additionally, our strategy delivers great accuracy with minimal re-training on new-center data, significantly decreasing the computational and time costs of fine-tuning pre-trained models. Our source code will be made publicly available.
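The core idea of prompt-based fine-tuning can be sketched in a few lines. This is not the paper's implementation; all sizes below are hypothetical but illustrate the mechanics: learnable prompt tokens are concatenated ahead of the (frozen) patch-token sequence, and only the prompts are updated, keeping the trainable fraction far below 1%:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a frozen ViT-style backbone plus a handful of
# learnable prompt tokens prepended to the patch-token sequence.
embed_dim, n_patches, n_prompts = 768, 196, 10
backbone_params = 86_000_000           # roughly ViT-Base scale (illustrative)
prompt_params = n_prompts * embed_dim  # the only parameters updated

patch_tokens = rng.normal(size=(n_patches, embed_dim))  # frozen features
prompts = rng.normal(size=(n_prompts, embed_dim))       # learnable

# Forward-pass input: prompts are simply concatenated ahead of the patches.
tokens = np.concatenate([prompts, patch_tokens], axis=0)

trainable_fraction = prompt_params / (backbone_params + prompt_params)
print(tokens.shape, f"{trainable_fraction:.5%}")
```

In a real training loop, gradients would flow only into `prompts`; the backbone weights stay fixed, which is what makes re-training on new-center data cheap.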

5.
NPJ Digit Med ; 6(1): 36, 2023 Mar 09.
Article in English | MEDLINE | ID: mdl-36894653

ABSTRACT

Accurate estimation of gestational age is an essential component of good obstetric care and informs clinical decision-making throughout pregnancy. As the date of the last menstrual period is often unknown or uncertain, ultrasound measurement of fetal size is currently the best method for estimating gestational age. The calculation assumes an average fetal size at each gestational age. The method is accurate in the first trimester, but less so in the second and third trimesters as growth deviates from the average and variation in fetal size increases. Consequently, fetal ultrasound late in pregnancy has a wide margin of error of at least ±2 weeks' gestation. Here, we utilise state-of-the-art machine learning methods to estimate gestational age using only image analysis of standard ultrasound planes, without any measurement information. The machine learning model is based on ultrasound images from two independent datasets: one for training and internal validation, and another for external validation. During validation, the model was blinded to the ground truth of gestational age (based on a reliable last menstrual period date and confirmatory first-trimester fetal crown rump length). We show that this approach compensates for increases in size variation and is even accurate in cases of intrauterine growth restriction. Our best machine-learning based model estimates gestational age with a mean absolute error of 3.0 (95% CI, 2.9-3.2) and 4.3 (95% CI, 4.1-4.5) days in the second and third trimesters, respectively, which outperforms current ultrasound-based clinical biometry at these gestational ages. Our method for dating the pregnancy in the second and third trimesters is, therefore, more accurate than published methods.

6.
Pac Symp Biocomput ; 28: 263-274, 2023.
Article in English | MEDLINE | ID: mdl-36540983

ABSTRACT

We have gained access to vast amounts of multi-omics data thanks to Next Generation Sequencing. However, this data is challenging to analyse because of its high dimensionality and because much of it is not annotated. Lack of annotated data is a significant problem in machine learning, and Self-Supervised Learning (SSL) methods are typically used to deal with limited labelled data. However, there is a lack of studies that use SSL methods to exploit inter-omics relationships on unlabelled multi-omics data. In this work, we develop a novel and efficient pre-training paradigm that consists of various SSL components, including but not limited to contrastive alignment, data recovery from corrupted samples, and using one type of omics data to recover other omic types. Our pre-training paradigm improves performance on downstream tasks with limited labelled data. We show that our approach outperforms the state-of-the-art method in cancer type classification on the TCGA pan-cancer dataset in a semi-supervised setting. Moreover, we show that the encoders that are pre-trained using our approach can be used as powerful feature extractors even without fine-tuning. Our ablation study shows that the method is not overly dependent on any pretext task component. The network architectures in our approach are designed to handle missing omic types and multiple datasets for pre-training and downstream training. Our pre-training paradigm can be extended to perform zero-shot classification of rare cancers.
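The contrastive-alignment component mentioned above is commonly realised with an InfoNCE-style loss that pulls together embeddings of different omic types from the same sample. The following is a generic sketch under that assumption, not the paper's code; the toy embeddings stand in for encoder outputs:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE-style loss: row i of z1 and row i of z2 (embeddings of
    two omic types for the same sample) are treated as a positive pair;
    all other rows act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau  # temperature-scaled cosine similarities
    log_softmax = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return -np.diag(log_softmax).mean()

rng = np.random.default_rng(1)
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))   # matched pairs
misaligned = info_nce(z, np.roll(z, 1, axis=0))              # shuffled pairs
print(aligned < misaligned)  # matched omic pairs yield the lower loss
```

Minimising this loss drives the encoders of different omic types toward a shared embedding space, which is what makes one omic type usable to recover another.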


Subject(s)
Multiomics , Neoplasms , Humans , Computational Biology , Neoplasms/genetics , High-Throughput Nucleotide Sequencing , Supervised Machine Learning
7.
Ultrasound Med Biol ; 47(12): 3470-3479, 2021 12.
Article in English | MEDLINE | ID: mdl-34538535

ABSTRACT

The aims of this work were to create a robust automatic software tool for measurement of the levator hiatal area on transperineal ultrasound (TPUS) volumes and to measure the potential reduction in variability and time taken for analysis in a clinical setting. The proposed tool automatically detects the C-plane (i.e., the plane of minimal hiatal dimensions) from a 3-D TPUS volume and subsequently uses the extracted plane to automatically segment the levator hiatus, using a convolutional neural network. The automatic pipeline was tested using 73 representative TPUS volumes. Reference hiatal outlines were obtained manually by two experts and compared with the pipeline's automated outlines. The Hausdorff distance, area, a clinical quality score, C-plane angle and C-plane Euclidean distance were used to evaluate C-plane detection and quantify levator hiatus segmentation accuracy. A visual Turing test was created to compare the performance of the software with that of the expert, based on visual assessment of C-plane and hiatal segmentation quality. The overall time taken to extract the hiatal area with both measurement methods (i.e., manual and automatic) was measured. Each metric was calculated both for computer-observer differences and for inter- and intra-observer differences. The automatic method gave results similar to those of the expert when determining the hiatal outline from a TPUS volume. Indeed, the hiatal areas measured by the algorithm and by an expert were within the intra-observer variability. Similarly, the method identified the C-plane with an accuracy of 5.76 ± 5.06° and 6.46 ± 5.18 mm, compared with an inter-observer variability of 9.39 ± 6.21° and 8.48 ± 6.62 mm. The visual Turing test suggested that the automatic method identified the C-plane position within the TPUS volume visually as well as the expert did. The average time taken to identify the C-plane and segment the hiatal area manually was 2 min 35 s ± 17 s, compared with 35 ± 4 s for the automatic method. This study presents a method for automatically measuring the levator hiatal area using artificial-intelligence-based methodologies, whereby the C-plane within a TPUS volume is detected and the levator hiatal outline subsequently traced. The proposed solution was determined to be accurate, relatively quick, robust and reliable and, importantly, to reduce the time and expertise required for pelvic floor disorder assessment.
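One of the segmentation-accuracy metrics above is the Hausdorff distance between a manual and an automatic outline. A minimal sketch of that metric on toy contours (not the study's software) looks like this:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets, e.g. a
    manual and an automatic hiatal outline (units follow the inputs)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy outlines: a unit circle and the same circle shifted by 1 along x.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
shifted = circle + np.array([1.0, 0.0])
print(hausdorff(circle, shifted))  # worst-case boundary mismatch
```

Unlike an area difference, the Hausdorff distance penalises the single worst boundary disagreement, which is why it is often reported alongside area for outline comparisons.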


Subject(s)
Pelvic Floor , Valsalva Maneuver , Artificial Intelligence , Humans , Imaging, Three-Dimensional , Pelvic Floor/diagnostic imaging , Ultrasonography
9.
J Med Imaging (Bellingham) ; 7(5): 057001, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32968691

ABSTRACT

Purpose: We present an original method for simulating realistic fetal neurosonography images, specifically generating third-trimester pregnancy ultrasound images from second-trimester images. Our method was developed using unpaired data, as pairwise data were not available. We also report original insights on the general appearance differences between second- and third-trimester fetal head transventricular (TV) plane images. Approach: We design a cycle-consistent adversarial network (Cycle-GAN) to simulate visually realistic third-trimester images from unpaired second- and third-trimester ultrasound images. Simulation realism is evaluated qualitatively by experienced sonographers who blindly graded real and simulated images. A quantitative evaluation is also performed whereby a validated deep-learning-based image recognition algorithm (ScanNav®) acts as the expert reference to allow hundreds of real and simulated images to be automatically analyzed and compared efficiently. Results: Qualitative evaluation shows that the human experts could not tell the difference between real and simulated third-trimester scan images: 84.2% of the simulated third-trimester images could not be distinguished from the real third-trimester images. As a quantitative baseline, on 3000 images, the visibility drop of the choroid, CSP, and mid-line falx between real second- and real third-trimester scans was computed by ScanNav® and found to be 72.5%, 61.5%, and 67%, respectively. The visibility drop of the same structures between real second-trimester and simulated third-trimester images was found to be 77.5%, 57.7%, and 56.2%, respectively. The real and simulated third-trimester images were therefore considered to be visually similar to each other. Our evaluation also shows that the third-trimester simulation of a conventional GAN is much easier to distinguish, and the visibility drop of the structures is smaller than with our proposed method.
Conclusions: The results confirm that it is possible to simulate realistic third-trimester images from second-trimester images using a modified Cycle-GAN, which may be useful for deep learning researchers with restricted availability of third-trimester scans but access to ample second-trimester images. We also show convincing simulation improvements, both qualitative and quantitative, of the Cycle-GAN method over a conventional GAN. Finally, the use of a machine-learning-based reference (in this case, ScanNav®) for large-scale quantitative image analysis evaluation is, to our knowledge, also a first.
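The distinguishing ingredient of a Cycle-GAN over a conventional GAN is the cycle-consistency term. As a hedged illustration (the real generators are CNNs; the intensity maps below are hypothetical stand-ins chosen only to make the loss computable), the term penalises any translation that cannot be undone by the reverse generator:

```python
import numpy as np

# Stand-ins for the two generators: G maps second- to third-trimester
# appearance and F maps back. These invertible intensity maps are
# hypothetical, purely to demonstrate the loss term.
G = lambda x: x ** 0.8
F = lambda x: x ** (1 / 0.8)

def cycle_loss(x, y):
    """L1 cycle-consistency: x -> G -> F should recover x, and
    y -> F -> G should recover y; this is the term that keeps unpaired
    translation anatomically faithful."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

rng = np.random.default_rng(2)
x = rng.uniform(0.1, 1.0, size=(4, 32, 32))  # "second-trimester" batch
y = rng.uniform(0.1, 1.0, size=(4, 32, 32))  # "third-trimester" batch
print(cycle_loss(x, y))  # near zero for perfectly inverse generators
```

During training this loss is added to the usual adversarial losses of both generators, which is what allows learning from unpaired second- and third-trimester scans.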

10.
J Med Imaging (Bellingham) ; 7(1): 014501, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31956665

ABSTRACT

Obstetric ultrasound is a fundamental ingredient of modern prenatal care with many applications including accurate dating of a pregnancy, identifying pregnancy-related complications, and diagnosis of fetal abnormalities. However, despite its many benefits, two factors currently prevent wide-scale uptake of this technology for point-of-care clinical decision-making in low- and middle-income country (LMIC) settings. First, there is a steep learning curve for scan proficiency, and second, there has been a lack of easy-to-use, affordable, and portable ultrasound devices. We introduce a framework toward addressing these barriers, enabled by recent advances in machine learning applied to medical imaging. The framework is designed to be realizable as a point-of-care ultrasound (POCUS) solution with an affordable wireless ultrasound probe, a smartphone or tablet, and automated machine-learning-based image processing. Specifically, we propose a machine-learning-based algorithm pipeline designed to automatically estimate the gestational age of a fetus from a short fetal ultrasound scan. We present proof-of-concept evaluation of accuracy of the key image analysis algorithms for automatic head transcerebellar plane detection, automatic transcerebellar diameter measurement, and estimation of gestational age on conventional ultrasound data simulating the POCUS task and discuss next steps toward translation via a first application on clinical ultrasound video from a low-cost ultrasound probe.

11.
Phys Med Biol ; 64(18): 185010, 2019 09 17.
Article in English | MEDLINE | ID: mdl-31408850

ABSTRACT

The first trimester fetal ultrasound scan is important to confirm fetal viability, to estimate the gestational age of the fetus, and to detect fetal anomalies early in pregnancy. First trimester ultrasound images have a different appearance from second trimester images, reflecting the different stage of fetal development. There is limited literature on automation of image-based assessment for this earlier trimester, and most of it focuses on one specific fetal anatomy. In this paper, we consider automation to support first trimester fetal assessment of multiple fetal anatomies, including both visualization and measurements, from a single 3D ultrasound scan. We present a deep learning and image processing solution (i) to perform semantic segmentation of the whole fetus, (ii) to estimate plane orientation for standard biometry views, (iii) to localize and automatically estimate biometry, and (iv) to detect fetal limbs from a 3D first trimester volume. Computational analysis methods were built using a real-world dataset (n = 44 volumes). An evaluation on a further independent clinical dataset (n = 21 volumes) showed that the automated methods approached human expert assessment of a 3D volume.


Subject(s)
Fetal Development , Fetus/diagnostic imaging , Gestational Age , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Ultrasonography, Prenatal/methods , Abdomen/diagnostic imaging , Algorithms , Female , Head/diagnostic imaging , Humans , Pregnancy , Pregnancy Trimester, First
12.
Med Image Anal ; 46: 1-14, 2018 05.
Article in English | MEDLINE | ID: mdl-29499436

ABSTRACT

Methods for aligning 3D fetal neurosonography images must be robust to (i) intensity variations, (ii) anatomical and age-specific differences within the fetal population, and (iii) the variations in fetal position. To this end, we propose a multi-task fully convolutional neural network (FCN) architecture to address the problem of 3D fetal brain localization, structural segmentation, and alignment to a referential coordinate system. Instead of treating these tasks as independent problems, we optimize the network by simultaneously learning features shared within the input data pertaining to the correlated tasks, and later branching out into task-specific output streams. Brain alignment is achieved by defining a parametric coordinate system based on skull boundaries, location of the eye sockets, and head pose, as predicted from intracranial structures. This information is used to estimate an affine transformation to align a volumetric image to the skull-based coordinate system. Co-alignment of 140 fetal ultrasound volumes (age range: 26.0 ± 4.4 weeks) was achieved with high brain overlap and low eye localization error, regardless of gestational age or head size. The automatically co-aligned volumes show good structural correspondence between fetal anatomies.
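The final alignment step described above amounts to estimating an affine transform from predicted anatomical landmarks to a reference coordinate frame. A generic least-squares sketch of that step (not the paper's network; the landmark values are synthetic) is:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 3-D affine transform mapping landmark set src onto
    dst (e.g. predicted skull/eye-socket points onto the reference
    frame). Returns a 3x4 matrix [A | t]."""
    n = len(src)
    src_h = np.hstack([src, np.ones((n, 1))])        # homogeneous coords
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # solves src_h @ M ~ dst
    return M.T                                       # shape (3, 4)

rng = np.random.default_rng(3)
pts = rng.normal(size=(10, 3))                       # synthetic landmarks
A_true = np.array([[0.9, 0.1, 0.0],
                   [-0.1, 0.9, 0.0],
                   [0.0, 0.0, 1.1]])
t_true = np.array([5.0, -2.0, 0.5])
target = pts @ A_true.T + t_true                     # landmarks in reference frame

M = fit_affine(pts, target)
aligned = np.hstack([pts, np.ones((10, 1))]) @ M.T
print(np.allclose(aligned, target))
```

The recovered 3x4 matrix can then be applied to every voxel coordinate to resample the volume into the skull-based coordinate system.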


Subject(s)
Brain/diagnostic imaging , Brain/embryology , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Neuroimaging/methods , Ultrasonography, Prenatal/methods , Adult , Algorithms , Female , Gestational Age , Humans , Image Processing, Computer-Assisted/methods , Pregnancy
13.
Eur J Prev Cardiol ; 24(17): 1799-1806, 2017 11.
Article in English | MEDLINE | ID: mdl-28925747

ABSTRACT

Background Ultrasound imaging is able to quantify carotid arterial wall structure for the assessment of cerebral and cardiovascular disease risks. We describe a protocol and quality assurance process to enable carotid imaging at large scale, developed for the UK Biobank Imaging Enhancement Study of 100,000 individuals. Design An imaging protocol was developed to allow measurement of carotid intima-media thickness from the far wall of both common carotid arteries. Six quality assurance criteria were defined, and a web-based interface (Intelligent Ultrasound) was developed to facilitate rapid assessment of images against each criterion. Results and conclusions Excellent inter- and intra-observer agreement was obtained for image quality evaluations on a test dataset from 100 individuals. The image quality criteria were then applied in the UK Biobank Imaging Enhancement Study. Data from 2560 participants were evaluated. Feedback of results to the imaging team led to improvement in quality assurance, with quality assurance failures falling from 16.2% in the first two-month period examined to 6.4% in the last. Eighty per cent of participants had all carotid intima-media thickness images graded as of acceptable quality, with at least one image acceptable for 98% of participants. Carotid intima-media thickness measures showed the expected associations with increasing age and with gender. Carotid imaging can be performed consistently, with semi-automated quality assurance of all scans, in a limited timeframe within a large-scale multimodality imaging assessment. Routine feedback of quality control metrics to operators can improve the quality of the data collection.


Subject(s)
Carotid Arteries/diagnostic imaging , Carotid Artery Diseases/diagnostic imaging , Carotid Intima-Media Thickness/standards , Clinical Protocols/standards , Quality Assurance, Health Care/standards , Quality Improvement/standards , Quality Indicators, Health Care/standards , Aged , Data Collection/standards , Female , Humans , Male , Middle Aged , Observer Variation , Predictive Value of Tests , Prognosis , Program Development , Program Evaluation , Reproducibility of Results , United Kingdom
14.
Ultrasound Med Biol ; 43(12): 2925-2933, 2017 12.
Article in English | MEDLINE | ID: mdl-28958729

ABSTRACT

During routine ultrasound assessment of the fetal brain for biometry estimation and detection of fetal abnormalities, accurate imaging planes must be found by sonologists following a well-defined imaging protocol or clinical standard, which can be difficult for non-experts to do well. We describe a machine-learning method to assess automatically whether transventricular ultrasound images of the fetal brain have been correctly acquired and meet the required clinical standard. We propose a deep learning solution, which breaks the problem down into three stages: (i) accurate localization of the fetal brain, (ii) detection of regions that contain structures of interest and (iii) learning the acoustic patterns in the regions that enable plane verification. We evaluate the developed methodology on a large real-world clinical data set of 2-D mid-gestation fetal images. We show that the automatic verification method approaches human expert assessment.


Subject(s)
Brain/diagnostic imaging , Brain/embryology , Image Processing, Computer-Assisted/methods , Machine Learning , Neural Networks, Computer , Ultrasonography, Prenatal/methods , Female , Humans , Pregnancy
15.
Avicenna J Med ; 7(1): 23-27, 2017.
Article in English | MEDLINE | ID: mdl-28182034

ABSTRACT

AIM OF THE STUDY: Coronary artery bypass graft surgery is the gold standard for the treatment of multivessel and left main coronary artery disease. However, there is considerable debate over whether the left internal mammary artery (IMA) should be taken as pedicled or skeletonized. This study was conducted to assess the difference in blood flow after the application of a topical vasodilator in skeletonized and pedicled IMA. MATERIALS AND METHODS: In this study, each patient underwent either skeletonized (n = 25) or pedicled (n = 25) IMA harvesting. The type of graft for each individual patient was decided randomly. Intraoperative variables such as conduit length and blood flow were measured by the surgeon himself. The length of the grafted IMA was carefully determined in vivo, with the proximal and distal ends attached, from the first rib to IMA divergence. The IMA flow was measured on two separate occasions, before and after application of the topical vasodilator. Known cases of subclavian artery stenosis and previous sternal radiation were excluded from the study. RESULTS: The blood flow before the application of the topical vasodilator was similar in both groups (P = 0.227). However, the flow was significantly lower in pedicled than in skeletonized IMA after application of the vasodilator (P < 0.0001). Similarly, the length of the skeletonized graft was significantly greater than that of the pedicled graft (P < 0.0001). CONCLUSION: Our study indicates that skeletonization of the IMA results in increased graft length and blood flow after the application of a topical vasodilator. However, we recommend that long-term clinical trials be conducted to fully determine the long-term patency rates of skeletonized IMA.

16.
IEEE J Biomed Health Inform ; 20(4): 1120-8, 2016 07.
Article in English | MEDLINE | ID: mdl-26011873

ABSTRACT

The parasagittal (PS) plane is a 2-D diagnostic plane used routinely in cranial ultrasonography of the neonatal brain. This paper develops a novel approach to find the PS plane in a 3-D fetal ultrasound scan to allow image-based biomarkers to be tracked from prebirth through the first weeks of postbirth life. We propose an accurate plane-finding solution based on regression forests (RF). The method initially localizes the fetal brain and its midline automatically. The midline on several axial slices is used to detect the midsagittal plane, which is used as a constraint in the proposed RF framework to detect the PS plane. The proposed learning algorithm guides the RF learning method in a novel way by: 1) using informative voxels and voxel informative strength as a weighting within the training stage objective function, and 2) introducing regularization of the RF by proposing a geometrical feature within the training stage. Results on clinical data indicate that the new automated method is more reproducible than manual plane finding obtained by two clinicians.


Subject(s)
Brain/diagnostic imaging , Fetus/diagnostic imaging , Imaging, Three-Dimensional/methods , Ultrasonography, Prenatal/methods , Female , Humans , Pregnancy , Regression Analysis , Signal Processing, Computer-Assisted
17.
Med Image Anal ; 21(1): 72-86, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25624045

ABSTRACT

We propose an automated framework for predicting gestational age (GA) and neurodevelopmental maturation of a fetus based on 3D ultrasound (US) brain image appearance. Our method capitalizes on age-related sonographic image patterns in conjunction with clinical measurements to develop, for the first time, a predictive age model which improves on the GA-prediction potential of US images. The framework benefits from a manifold surface representation of the fetal head which delineates the inner skull boundary and serves as a common coordinate system based on cranial position. This allows for fast and efficient sampling of anatomically-corresponding brain regions to achieve like-for-like structural comparison of different developmental stages. We develop bespoke features which capture neurosonographic patterns in 3D images, and using a regression forest classifier, we characterize structural brain development both spatially and temporally to capture the natural variation existing in a healthy population (N = 447) over an age range of active brain maturation (18-34 weeks). On a routine clinical dataset (N = 187) our age prediction results strongly correlate with true GA (r = 0.98, accurate within ±6.10 days), confirming the link between maturational progression and neurosonographic activity observable across gestation. Our model also outperforms current clinical methods by ±4.57 days in the third trimester, a period complicated by biological variations in the fetal population. Through feature selection, the model successfully identified the most age-discriminating anatomies over this age range as being the Sylvian fissure, cingulate, and callosal sulci.


Subject(s)
Artificial Intelligence , Brain/embryology , Echoencephalography/methods , Gestational Age , Image Interpretation, Computer-Assisted/methods , Ultrasonography, Prenatal/methods , Algorithms , Crown-Rump Length , Female , Humans , Image Enhancement/methods , Male , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity
18.
Echocardiography ; 32(2): 302-9, 2015 Feb.
Article in English | MEDLINE | ID: mdl-24924997

ABSTRACT

BACKGROUND: Three-dimensional fusion echocardiography (3DFE) is a novel postprocessing approach that utilizes imaging data acquired from multiple 3D acquisitions. We assessed image quality, endocardial border definition, and cardiac wall motion in patients using 3DFE compared to standard 3D images (3D) and results obtained with contrast echocardiography (2DC). METHODS: Twenty-four patients (mean age 66.9 ± 13 years; 17 males, 7 females) undergoing 2DC had three noncontrast 3D apical volumes acquired at rest. Images were fused using an automated image fusion approach. Quality of the 3DFE was compared to both 3D and 2DC based on contrast-to-noise ratio (CNR) and endocardial border definition. We then compared the clinical wall-motion score index (WMSI) calculated from 3DFE and 3D to that obtained from 2DC images. RESULTS: Fused 3D volumes had significantly improved CNR (8.92 ± 1.35 vs. 6.59 ± 1.19, P < 0.0005) and segmental image quality (2.42 ± 0.99 vs. 1.93 ± 1.18, P < 0.005) compared to unfused 3D acquisitions. Levels achieved were closer to scores for 2D contrast images (CNR: 9.04 ± 2.21, P = 0.6; segmental image quality: 2.91 ± 0.37, P < 0.005). WMSI calculated from fused 3D volumes did not differ significantly from that obtained from 2D contrast echocardiography (1.06 ± 0.09 vs. 1.07 ± 0.15, P = 0.69), whereas unfused images produced significantly more variable results (1.19 ± 0.30). This was confirmed by a better intraclass correlation coefficient (ICC 0.72; 95% CI 0.32-0.88) relative to comparisons with unfused images (ICC 0.56; 95% CI 0.02-0.81). CONCLUSION: 3DFE significantly improves left ventricular image quality compared to unfused 3D in a patient population and allows noncontrast assessment of wall motion that approaches that achieved with 2D contrast echocardiography.
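The study's primary image-quality measure is the contrast-to-noise ratio. CNR definitions vary between studies; one common form (not necessarily the exact formula used here) is the absolute mean-intensity difference between two regions over their pooled standard deviation, sketched on hypothetical pixel samples:

```python
import numpy as np

def cnr(region_a, region_b):
    """One common contrast-to-noise ratio definition: absolute mean
    intensity difference divided by the pooled standard deviation."""
    sigma = np.sqrt((region_a.var() + region_b.var()) / 2)
    return abs(region_a.mean() - region_b.mean()) / sigma

# Hypothetical pixel intensities: myocardium vs. blood pool.
rng = np.random.default_rng(4)
tissue = rng.normal(100, 10, 10_000)
cavity = rng.normal(40, 10, 10_000)
print(cnr(tissue, cavity))
```

A higher CNR means the endocardial border between the two regions is easier to delineate, which is why fusing multiple 3D acquisitions (reducing noise without changing mean contrast) raises the score.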


Subject(s)
Contrast Media , Echocardiography, Three-Dimensional/methods , Heart Ventricles/diagnostic imaging , Image Processing, Computer-Assisted/methods , Ventricular Dysfunction, Left/diagnostic imaging , Aged , Echocardiography/methods , Female , Humans , Image Enhancement , Male , Observer Variation , Phospholipids , Reproducibility of Results , Sulfur Hexafluoride
19.
Article in English | MEDLINE | ID: mdl-25485387

ABSTRACT

We propose an automated framework for predicting age and neurodevelopmental maturation of a fetus based on 3D ultrasound (US) brain image appearance. A topology-preserving manifold representation of the fetal skull enabled design of bespoke scale-invariant image features. Our regression forest model used these features to learn a mapping from age-related sonographic image patterns to fetal age and development. The Sylvian Fissure was identified as a critical region for accurate age estimation, and restricting the search space to this anatomy improved prediction accuracy on a set of 130 healthy fetuses (error ± 3.8 days; r = 0.98), outperforming the best current clinical method. Our framework remained robust when applied to a routine clinical population.
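The core idea of a regression forest mapping image features to gestational age can be sketched with scikit-learn. Everything below is a stand-in: the synthetic feature vectors merely simulate values that drift with age, and bear no relation to the paper's actual scale-invariant skull features.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-in for per-fetus image features: each of 130
# fetuses gets an 8-dimensional vector whose values drift with
# gestational age, plus noise.
n_fetuses, n_features = 130, 8
age_days = rng.uniform(126, 252, n_fetuses)  # roughly 18-36 weeks
features = (age_days[:, None] * rng.uniform(0.5, 1.5, n_features)
            + rng.normal(0.0, 5.0, (n_fetuses, n_features)))

# A regression forest learns the mapping features -> age (days).
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(features, age_days)

pred = forest.predict(features)
r = float(np.corrcoef(pred, age_days)[0, 1])
print(f"correlation on training data: {r:.2f}")
```

In practice the paper reports r = 0.98 on held-out evaluation, whereas the toy correlation above is on training data; a real pipeline would use cross-validation to estimate generalization error.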


Subject(s)
Aging/physiology , Brain/growth & development , Echoencephalography/methods , Gestational Age , Image Interpretation, Computer-Assisted/methods , Multimodal Imaging/methods , Ultrasonography, Prenatal/methods , Algorithms , Female , Humans , Imaging, Three-Dimensional/methods , Male , Reproducibility of Results , Sensitivity and Specificity
20.
IEEE Trans Med Imaging ; 33(4): 797-813, 2014 Apr.
Article in English | MEDLINE | ID: mdl-23934664

ABSTRACT

This paper presents the evaluation results of the methods submitted to Challenge US: Biometric Measurements from Fetal Ultrasound Images, a segmentation challenge held at the IEEE International Symposium on Biomedical Imaging 2012. The challenge was set to compare and evaluate current fetal ultrasound image segmentation methods. It consisted of automatically segmenting fetal anatomical structures to measure standard obstetric biometric parameters, from 2D fetal ultrasound images taken on fetuses at different gestational ages (21 weeks, 28 weeks, and 33 weeks) and with varying image quality to reflect data encountered in real clinical environments. Four independent sub-challenges were proposed, according to the objects of interest measured in clinical practice: abdomen, head, femur, and whole fetus. Five teams participated in the head sub-challenge and two teams in the femur sub-challenge, including one team who tackled both. No team attempted the abdomen and whole fetus sub-challenges. The challenge goals were twofold, and the participants were asked to submit the segmentation results as well as the measurements derived from the segmented objects. Extensive quantitative (region-based, distance-based, and Bland-Altman measurements) and qualitative evaluation was performed to compare the results from a representative selection of current methods submitted to the challenge. Several experts (three for the head sub-challenge and two for the femur sub-challenge), with different degrees of expertise, manually delineated the objects of interest to define the ground truth used within the evaluation framework. For the head sub-challenge, several groups produced results that could potentially be used in clinical settings, with performance comparable to manual delineations. The femur sub-challenge had inferior performance to the head sub-challenge because femur segmentation is a harder problem and the submitted techniques relied more heavily on the femur's appearance.
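A typical region-based measurement used in segmentation challenges of this kind is the Dice overlap between an automated segmentation and an expert delineation. The sketch below assumes the standard Dice definition; the challenge's exact evaluation protocol is not detailed in the abstract, and the toy masks are illustrative.

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Region-based overlap between a binary segmentation and a manual
    ground-truth delineation: 2|A ∩ B| / (|A| + |B|).
    1.0 means perfect agreement, 0.0 means no overlap."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = seg.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(seg, gt).sum() / denom

# Toy 4x4 masks: an automated head segmentation vs. an expert outline
# that includes one extra pixel.
auto = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
expert = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 1],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0]])
print(round(dice_coefficient(auto, expert), 3))  # → 0.923
```

Distance-based metrics (e.g. boundary distances) and Bland-Altman analysis of the derived biometric measurements would complement this overlap score, since a high Dice value alone does not guarantee clinically accurate measurements.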


Subject(s)
Biometry/methods , Image Processing, Computer-Assisted/methods , Ultrasonography, Prenatal/methods , Female , Gestational Age , Humans , Pregnancy