Results 1 - 4 of 4
1.
Med Image Anal ; 94: 103147, 2024 May.
Article in English | MEDLINE | ID: mdl-38547665

ABSTRACT

Three-dimensional (3D) ultrasound imaging has contributed to our understanding of fetal developmental processes by providing rich contextual information of the inherently 3D anatomies. However, its use is limited in clinical settings, due to the high purchasing costs and limited diagnostic practicality. Freehand 2D ultrasound imaging, in contrast, is routinely used in standard obstetric exams, but inherently lacks a 3D representation of the anatomies, which limits its potential for more advanced assessment. Such full representations are challenging to recover even with external tracking devices due to internal fetal movement, which is independent of the operator-led trajectory of the probe. Capitalizing on the flexibility offered by freehand 2D ultrasound acquisition, we propose ImplicitVol to reconstruct 3D volumes from non-sensor-tracked 2D ultrasound sweeps. Conventionally, reconstructions are performed on a discrete voxel grid. We, however, employ a deep neural network to represent, for the first time, the reconstructed volume as an implicit function. Specifically, ImplicitVol takes a set of 2D images as input, predicts their locations in 3D space, jointly refines the inferred locations, and learns a full volumetric reconstruction. When tested on natively acquired and volume-sampled 2D ultrasound video sequences collected from different manufacturers, the 3D volumes reconstructed by ImplicitVol show significantly better visual and semantic quality than the existing interpolation-based reconstruction approaches. The inherent continuity of implicit representation also enables ImplicitVol to reconstruct the volume to arbitrarily high resolutions. As formulated, ImplicitVol has the potential to integrate seamlessly into the clinical workflow, while providing richer information for diagnosis and evaluation of the developing brain.


Subject(s)
Algorithms , Imaging, Three-Dimensional , Humans , Female , Pregnancy , Imaging, Three-Dimensional/methods , Ultrasonography/methods , Ultrasonography, Prenatal , Brain/diagnostic imaging
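The central idea of the abstract above — representing a volume as a learned function of 3D coordinates rather than as a voxel grid — can be illustrated with a minimal NumPy sketch. This is not the authors' ImplicitVol network; the weights below are random placeholders (ImplicitVol would fit them to the acquired 2D slices), and the sketch only shows why an implicit representation can be sampled at any resolution without re-training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy coordinate-based network: maps a 3D location to one intensity value.
# Weights are random placeholders standing in for a trained model.
W1 = rng.standard_normal((3, 64))
b1 = rng.standard_normal(64)
W2 = rng.standard_normal((64, 1))
b2 = rng.standard_normal(1)

def implicit_volume(coords):
    """coords: (N, 3) array of (x, y, z) in [0, 1]^3 -> (N,) intensities."""
    h = np.tanh(coords @ W1 + b1)   # hidden layer
    return (h @ W2 + b2).ravel()    # predicted intensity per point

def sample_grid(n):
    """Evaluate the implicit volume on an n x n x n grid."""
    axes = np.linspace(0.0, 1.0, n)
    xx, yy, zz = np.meshgrid(axes, axes, axes, indexing="ij")
    pts = np.stack([xx, yy, zz], axis=-1).reshape(-1, 3)
    return implicit_volume(pts).reshape(n, n, n)

low = sample_grid(8)    # coarse reconstruction
high = sample_grid(32)  # same function, finer grid -- no re-training needed
```

Because the volume is a continuous function of position, the grid density at sampling time is a free choice, which is what the abstract means by reconstruction "to arbitrarily high resolutions".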
2.
Nature ; 623(7985): 106-114, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37880365

ABSTRACT

Maturation of the human fetal brain should follow precisely scheduled structural growth and folding of the cerebral cortex for optimal postnatal function [1]. We present a normative digital atlas of fetal brain maturation based on a prospective international cohort of healthy pregnant women [2], selected using World Health Organization recommendations for growth standards [3]. Their fetuses were accurately dated in the first trimester, with satisfactory growth and neurodevelopment from early pregnancy to 2 years of age [4,5]. The atlas was produced using 1,059 optimal quality, three-dimensional ultrasound brain volumes from 899 of the fetuses and an automated analysis pipeline [6-8]. The atlas corresponds structurally to published magnetic resonance images [9], but with finer anatomical details in deep grey matter. The between-study site variability represented less than 8.0% of the total variance of all brain measures, supporting pooling data from the eight study sites to produce patterns of normative maturation. We have thereby generated an average representation of each cerebral hemisphere between 14 and 31 weeks' gestation with quantification of intracranial volume variability and growth patterns. Emergent asymmetries were detectable from as early as 14 weeks, with peak asymmetries in regions associated with language development and functional lateralization between 20 and 26 weeks' gestation. These patterns were validated in 1,487 three-dimensional brain volumes from 1,295 different fetuses in the same cohort. We provide a unique spatiotemporal benchmark of fetal brain maturation from a large cohort with normative postnatal growth and neurodevelopment.


Subject(s)
Brain , Fetal Development , Fetus , Child, Preschool , Female , Humans , Pregnancy , Brain/anatomy & histology , Brain/embryology , Brain/growth & development , Fetus/embryology , Gestational Age , Gray Matter/anatomy & histology , Gray Matter/embryology , Gray Matter/growth & development , Healthy Volunteers , Internationality , Magnetic Resonance Imaging , Organ Size , Prospective Studies , World Health Organization , Imaging, Three-Dimensional , Ultrasonography
3.
Neuroimage ; 254: 119117, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35331871

ABSTRACT

The quantification of subcortical volume development from 3D fetal ultrasound can provide important diagnostic information during pregnancy monitoring. However, manual segmentation of subcortical structures in ultrasound volumes is time-consuming and challenging due to low soft tissue contrast, speckle and shadowing artifacts. For this reason, we developed a convolutional neural network (CNN) for the automated segmentation of the choroid plexus (CP), lateral posterior ventricle horns (LPVH), cavum septum pellucidum et vergae (CSPV), and cerebellum (CB) from 3D ultrasound. As ground-truth labels are scarce and expensive to obtain, we applied few-shot learning, in which only a small number of manual annotations (n = 9) are used to train a CNN. We compared training a CNN with only a few individually annotated volumes versus many weakly labelled volumes obtained from atlas-based segmentations. This showed that segmentation performance close to intra-observer variability can be obtained with only a handful of manual annotations. Finally, the trained models were applied to a large number (n = 278) of ultrasound image volumes of a diverse, healthy population, obtaining novel US-specific growth curves of the respective structures during the second trimester of gestation.


Subject(s)
Deep Learning , Brain/diagnostic imaging , Female , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer , Observer Variation , Pregnancy , Ultrasonography
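The abstract above compares automated segmentations against manual annotations and against intra-observer variability. A standard way to quantify such agreement between two binary masks is the Dice overlap coefficient; the sketch below is a generic implementation of that metric (the toy masks are illustrative, not data from the paper).

```python
import numpy as np

def dice_score(seg_a, seg_b):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy example: 3 voxels overlap out of 4 labelled in each mask.
auto   = np.array([1, 1, 1, 1, 0, 0])  # hypothetical CNN output
manual = np.array([0, 1, 1, 1, 1, 0])  # hypothetical manual annotation
print(dice_score(auto, manual))  # 2*3 / (4+4) = 0.75
```

Comparing the automated-vs-manual Dice to the manual-vs-manual (intra-observer) Dice is how one judges that a model has reached "performance close to intra-observer variability".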
4.
IEEE Trans Biomed Eng ; 68(3): 759-770, 2021 03.
Article in English | MEDLINE | ID: mdl-32790624

ABSTRACT

OBJECTIVE: The segmentation of the breast from the chest wall is an important first step in the analysis of breast magnetic resonance images. 3D U-Nets have been shown to obtain high segmentation accuracy and appear to generalize well when trained on one scanner type and tested on another scanner, provided that a very similar MR protocol is used. There has, however, been little work addressing the problem of domain adaptation when image intensities or patient orientation differ markedly between the training set and an unseen test set. In this work we aim to address this domain shift problem. METHOD: We propose to apply extensive intensity augmentation in addition to geometric augmentation during training. We explored both style transfer and a novel intensity remapping approach as intensity augmentation strategies. For our experiments, we trained a 3D U-Net on T1-weighted scans. We tested our network on T2-weighted scans from the same dataset as well as on an additional independent test set acquired with a T1-weighted TWIST sequence and a different coil configuration. RESULTS: By applying intensity augmentation we increased segmentation performance for the T2-weighted scans from a Dice of 0.71 to 0.88. This performance is very close to the baseline performance of training with T2-weighted scans (0.92). On the T1-weighted dataset we obtained a performance increase from 0.77 to 0.85. CONCLUSION: Our results show that the proposed intensity augmentation increases segmentation performance across different datasets. SIGNIFICANCE: The proposed method can improve whole breast segmentation of clinical MR scans acquired with different protocols.


Subject(s)
Breast , Magnetic Resonance Imaging , Breast/diagnostic imaging , Humans , Image Processing, Computer-Assisted
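The abstract above describes an intensity-remapping augmentation for bridging intensity differences between MR protocols. The paper's exact formulation is not given here, so the sketch below is a hypothetical stand-in: a random monotonic piecewise-linear remapping of intensities, which alters contrast and brightness while preserving the ordering of intensity values.

```python
import numpy as np

def random_intensity_remap(image, n_knots=5, rng=None):
    """Apply a random monotonic piecewise-linear remapping of intensities.

    Intensities in [0, 1] are redistributed through a random increasing
    curve; sorting the random output knots guarantees monotonicity, so
    relative intensity ordering (and hence anatomy) is preserved.
    """
    rng = rng or np.random.default_rng()
    knots_in = np.linspace(0.0, 1.0, n_knots)
    knots_out = np.sort(rng.uniform(0.0, 1.0, n_knots))
    return np.interp(image, knots_in, knots_out)

rng = np.random.default_rng(42)
img = rng.uniform(0.0, 1.0, size=(4, 4))  # stand-in for a normalized MR slice
aug = random_intensity_remap(img, rng=rng)

# Ordering of intensities is preserved even though the values change.
order = np.argsort(img.ravel())
assert np.all(np.diff(aug.ravel()[order]) >= 0)
```

Training with many such random remappings exposes the network to a wide range of intensity distributions, which is the mechanism by which this kind of augmentation improves generalization to scans acquired with different sequences or coils.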