Results 1 - 7 of 7
2.
JID Innov; 3(5): 100213, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37719662

ABSTRACT

Assessing the severity of eczema in clinical research requires face-to-face skin examination by trained staff. Such approaches are resource-intensive for participants and staff, challenging during pandemics, and prone to inter- and intra-observer variation. Computer vision algorithms have been proposed to automate the assessment of eczema severity using digital camera images. However, they often require human intervention to detect eczema lesions and cannot automatically assess eczema severity from real-world images in an end-to-end pipeline. We developed a model to detect eczema lesions from images using data augmentation and pixel-level segmentation of eczema lesions, trained on 1,345 images provided by dermatologists. We evaluated the quality of the obtained segmentation compared with that of the clinicians, the robustness to varying imaging conditions encountered in real-life images (such as lighting, focus, and blur), and the performance of downstream severity prediction when using the detected eczema lesions. The quality and robustness of eczema lesion detection increased by approximately 25% and 40%, respectively, compared with those of our previous eczema detection model, while the performance of the downstream severity prediction remained unchanged. Using skin segmentation as an alternative to eczema segmentation, which requires specialist labeling, yielded performance on par with eczema segmentation.
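Below is a minimal, illustrative sketch (not the authors' code) of such an end-to-end pipeline: a pixel-level segmentation network detects lesion pixels, and a downstream head predicts severity from the masked image. The class and function names (TinySegmenter, TinySeverityHead, predict_severity) and the tiny architectures are assumptions for illustration only.

```python
# Illustrative sketch of an end-to-end "segment, then score severity" pipeline.
# Not the published model; the networks below are stand-ins.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Stand-in for a pixel-level segmentation network (e.g. a U-Net-like model)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one logit per pixel: lesion vs. background
        )

    def forward(self, x):
        return self.net(x)

class TinySeverityHead(nn.Module):
    """Stand-in for a downstream severity regressor on lesion-masked images."""
    def __init__(self):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(3, 1)

    def forward(self, masked_img):
        return self.fc(self.pool(masked_img).flatten(1))

def predict_severity(img, segmenter, severity_head, thr=0.5):
    """img: (B, 3, H, W) float tensor in [0, 1]; returns one severity score per image."""
    with torch.no_grad():
        mask = (torch.sigmoid(segmenter(img)) > thr).float()  # (B, 1, H, W)
        masked = img * mask           # keep only detected lesion pixels
        return severity_head(masked)

img = torch.rand(2, 3, 128, 128)
print(predict_severity(img, TinySegmenter(), TinySeverityHead()).shape)  # torch.Size([2, 1])
```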

3.
Med Image Anal; 80: 102498, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35665663

ABSTRACT

Accurate 3D modelling of cardiac chambers is essential for clinical assessment of cardiac volume and function, including structural and motion analysis. Furthermore, to study the correlation between cardiac morphology and other patient information within a large population, it is necessary to automatically generate cardiac mesh models for each subject in the population. In this study, we introduce MCSI-Net (Multi-Cue Shape Inference Network), in which we embed a statistical shape model inside a convolutional neural network and leverage both phenotypic and demographic information from the cohort to infer subject-specific reconstructions of all four cardiac chambers in 3D. In this way, we leverage the ability of the network to learn the appearance of cardiac chambers in cine cardiac magnetic resonance (CMR) images and generate plausible 3D cardiac shapes by constraining the prediction with a shape prior, in the form of the statistical modes of shape variation learned a priori from a subset of the population. This, in turn, enables the network to generalise to samples across the entire population. To the best of our knowledge, this is the first work to use such an approach for patient-specific cardiac shape generation. MCSI-Net is capable of producing accurate 3D shapes using just a fraction (about 23% to 46%) of the available image data, which is of significant importance to the community as it supports the acceleration of CMR scan acquisitions. Cardiac MR images from the UK Biobank were used to train and validate the proposed method. We also present results from analysing 40,000 subjects of the UK Biobank at 50 time frames each, totalling two million image volumes. Our model generates more globally consistent heart shapes than manual annotations in the presence of inter-slice motion and shows strong agreement with the reference ranges for cardiac structure and function across cardiac ventricles and atria.
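The core idea of embedding a statistical shape prior in a network can be sketched as follows: the network predicts bounded PCA coefficients, and the shape is decoded as the mean shape plus a weighted sum of the learned modes of variation. This is an illustrative sketch, not MCSI-Net itself; the class name ShapePriorHead, the three-standard-deviation bound on the coefficients, and the toy dimensions are assumptions.

```python
# Sketch of a shape-prior decoding head: predict PCA coefficients, decode a shape.
# Not the published architecture; dimensions and bounds are illustrative.
import torch
import torch.nn as nn

class ShapePriorHead(nn.Module):
    """Predict PCA coefficients from image features and decode a 3D shape as
    mean + modes @ b, with coefficients bounded to +/- 3 standard deviations."""
    def __init__(self, feat_dim, mean_shape, modes, eigvals):
        super().__init__()
        self.fc = nn.Linear(feat_dim, modes.shape[1])
        self.register_buffer("mean_shape", mean_shape)  # (3N,) flattened vertices
        self.register_buffer("modes", modes)            # (3N, K) modes of variation
        self.register_buffer("std", eigvals.sqrt())     # (K,) per-mode std dev

    def forward(self, feat):
        b = torch.tanh(self.fc(feat)) * 3.0 * self.std  # bounded shape coefficients
        verts = self.mean_shape + b @ self.modes.T      # (B, 3N)
        return verts.view(feat.shape[0], -1, 3)         # (B, N, 3)

# toy prior with N = 100 vertices and K = 10 modes
N, K = 100, 10
head = ShapePriorHead(64, torch.zeros(3 * N), torch.randn(3 * N, K), torch.rand(K))
print(head(torch.randn(4, 64)).shape)  # torch.Size([4, 100, 3])
```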


Subject(s)
Biological Specimen Banks , Image Interpretation, Computer-Assisted , Heart Atria , Heart Ventricles/diagnostic imaging , Humans , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging , Magnetic Resonance Imaging, Cine/methods , United Kingdom
4.
Med Image Anal; 74: 102228, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34563860

ABSTRACT

Shape reconstruction from sparse point clouds/images is a challenging and relevant task required for a variety of applications in computer vision and medical image analysis (e.g. surgical navigation, cardiac motion analysis, augmented/virtual reality systems). A subset of such methods, viz. 3D shape reconstruction from 2D contours, is especially relevant for computer-aided diagnosis and intervention applications involving meshes derived from multiple 2D image slices, views, or projections. We propose a deep learning architecture, coined Mesh Reconstruction Network (MR-Net), which tackles this problem. MR-Net enables accurate 3D mesh reconstruction in real time despite missing data and sparse annotations. Using 3D cardiac shape reconstruction from 2D contours defined on short-axis cardiac magnetic resonance image slices as an exemplar, we demonstrate that our approach consistently outperforms state-of-the-art techniques for shape reconstruction from unstructured point clouds. Our approach reconstructs 3D cardiac meshes to within a 2.5-mm point-to-point error with respect to the ground-truth data (the original image spatial resolution is ∼1.8×1.8×10 mm³). We further evaluate the robustness of the proposed approach to incomplete data and to contours estimated by an automatic segmentation algorithm. MR-Net is generic and could reconstruct shapes of other organs, making it compelling as a tool for various applications in medical image analysis.
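A hedged sketch of the general approach, reconstructing a mesh from sparse contour points by predicting offsets of a fixed template, together with the point-to-point error metric quoted above, might look like the following. It is not MR-Net; ContourToMesh, mean_p2p_error, and the pooling/decoder choices are illustrative assumptions.

```python
# Sketch: sparse 2D/3D contour points -> deformed template mesh, plus the
# point-to-point error metric. Not MR-Net; everything here is a stand-in.
import torch
import torch.nn as nn

class ContourToMesh(nn.Module):
    """Encode a sparse contour point cloud and predict per-vertex offsets
    of a fixed template mesh (a simplified template-deformation decoder)."""
    def __init__(self, template_verts, hidden=128):
        super().__init__()
        self.register_buffer("template", template_verts)            # (V, 3)
        self.point_mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                       nn.Linear(hidden, hidden))
        self.decoder = nn.Linear(hidden, template_verts.numel())

    def forward(self, contours):                                    # (B, P, 3)
        feat = self.point_mlp(contours).max(dim=1).values           # order-invariant pooling
        offsets = self.decoder(feat).view(-1, *self.template.shape)
        return self.template + offsets                              # (B, V, 3)

def mean_p2p_error(pred, gt):
    """Mean point-to-point Euclidean error, assuming vertex correspondence."""
    return (pred - gt).norm(dim=-1).mean()

template = torch.rand(500, 3)
model = ContourToMesh(template)
pred = model(torch.rand(2, 300, 3))
print(mean_p2p_error(pred, torch.rand(2, 500, 3)))
```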


Subject(s)
Algorithms , Imaging, Three-Dimensional , Heart , Humans , Magnetic Resonance Imaging
5.
Med Image Anal; 67: 101812, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33129140

ABSTRACT

Accurate ventricular volume measurements are the primary indicators of normal/abnormal cardiac function and depend on the Cardiac Magnetic Resonance (CMR) volumes being complete. However, missing or unusable slices, owing to image artefacts such as respiratory or motion ghosting, aliasing, ringing, and signal loss in CMR sequences, significantly hinder the accuracy of anatomical and functional cardiac quantification, and recovering from them is insufficiently addressed in population imaging. In this work, we propose a new robust approach, coined Image Imputation Generative Adversarial Network (I2-GAN), to learn key features of cardiac short-axis (SAX) slices near missing information and use them as conditional variables to infer missing slices in the query volumes. In I2-GAN, the slices are first mapped to latent vectors with position features through a regression net. The latent vector corresponding to the desired position is then projected onto the slice manifold, conditioned on intensity features, through a generator net. The generator comprises residual blocks with normalisation layers that are modulated with auxiliary slice information, enabling propagation of fine details through the network. In addition, a multi-scale discriminator was implemented, along with a discriminator-based feature matching loss, to further enhance performance and encourage the synthesis of visually realistic slices. Experimental results show that our method achieves significant improvements over the state of the art in missing slice imputation for CMR, with an average SSIM of 0.872. Linear regression analysis yields good agreement between reference and imputed CMR images for all cardiac measurements, with correlation coefficients of 0.991 for left ventricular volume, 0.977 for left ventricular mass, and 0.961 for right ventricular volume.
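The "normalisation layers modulated with auxiliary slice information" can be illustrated with a FiLM-style residual block in which per-channel scale and shift are predicted from a conditioning vector. This is a generic sketch under that assumption, not the I2-GAN architecture; ModulatedResBlock and its dimensions are hypothetical.

```python
# Sketch of a residual block with conditionally modulated normalisation.
# Generic FiLM-style conditioning, not the published I2-GAN generator.
import torch
import torch.nn as nn

class ModulatedResBlock(nn.Module):
    """Residual block whose normalisation is scaled/shifted by an auxiliary
    conditioning vector (e.g. encoding slice position and intensity cues)."""
    def __init__(self, ch, aux_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(ch, affine=False)
        self.to_scale = nn.Linear(aux_dim, ch)
        self.to_shift = nn.Linear(aux_dim, ch)
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x, aux):
        gamma = self.to_scale(aux).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        beta = self.to_shift(aux).unsqueeze(-1).unsqueeze(-1)
        h = self.conv(torch.relu(self.norm(x) * (1 + gamma) + beta))
        return x + h

block = ModulatedResBlock(ch=32, aux_dim=8)
x = torch.randn(1, 32, 64, 64)   # feature map of the slice being synthesised
aux = torch.randn(1, 8)          # auxiliary slice information
print(block(x, aux).shape)       # torch.Size([1, 32, 64, 64])
```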


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Artifacts , Humans
6.
Med Image Anal; 56: 26-42, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31154149

ABSTRACT

Population imaging studies generate data for developing and implementing personalised health strategies to prevent, or more effectively treat, disease. Large prospective epidemiological studies acquire imaging for pre-symptomatic populations. These studies enable the early discovery of alterations due to impending disease and early identification of individuals at risk. Such studies pose new challenges requiring automatic image analysis. To date, few large-scale population-level cardiac imaging studies have been conducted. One such study stands out for its sheer size, careful implementation, and availability of top-quality expert annotation: the UK Biobank (UKB). The resulting massive imaging datasets (targeting ca. 100,000 subjects) have put published approaches for cardiac image quantification to the test. In this paper, we present and evaluate a cardiac magnetic resonance (CMR) image analysis pipeline that properly scales up and can provide a fully automatic analysis of the UKB CMR study. Without manual user interactions, our pipeline performs end-to-end image analytics from multi-view cine CMR images all the way to anatomical and functional bi-ventricular quantification, while maintaining relevant quality controls on the CMR input images and the resulting image segmentations. To the best of our knowledge, this is the first published attempt to fully automate the extraction of global and regional reference ranges of all key functional cardiovascular indexes, from both left and right cardiac ventricles, for a population of 20,000 subjects imaged at 50 time frames per subject, for a total of one million CMR volumes. In addition, our pipeline provides 3D anatomical bi-ventricular models of the heart. These models enable the extraction of detailed information on the morphodynamics of the two ventricles for subsequent association with genetic, omics, lifestyle, exposure, and other information provided in population imaging studies. We validated our proposed CMR analytics pipeline against manual expert readings on a reference cohort of 4620 subjects with contour delineations and corresponding clinical indexes. Our results show broad, significant agreement between the manually obtained reference indexes and those automatically computed via our framework: 80.67% of subjects were processed with a mean contour distance of less than 1 pixel, and 17.50% with a mean contour distance between 1 and 2 pixels. Finally, we compare our pipeline with a recently published deep learning approach reporting on UKB data; the comparison shows similar segmentation accuracy with respect to human experts.
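For orientation, the two kinds of quantities reported above (ventricular volume from short-axis segmentations and mean contour distance against manual contours) can be computed roughly as in the sketch below. This is illustrative, not the published pipeline; the function names, pixel spacing, and toy mask are assumptions.

```python
# Sketch of slice-summation volume and a mean contour distance metric.
# Not the published pipeline; numbers below are toy values.
import numpy as np

def ventricular_volume(mask_stack, pixel_spacing_mm, slice_thickness_mm):
    """Volume (mL) from a stack of binary short-axis segmentation masks,
    summing slice area x slice thickness (Simpson's slice summation)."""
    pixel_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    volume_mm3 = mask_stack.sum() * pixel_area_mm2 * slice_thickness_mm
    return volume_mm3 / 1000.0  # mm^3 -> mL

def mean_contour_distance(auto_pts, manual_pts):
    """Mean distance (in pixels) from each automatic contour point to the
    nearest manual contour point."""
    d = np.linalg.norm(auto_pts[:, None, :] - manual_pts[None, :, :], axis=-1)
    return d.min(axis=1).mean()

masks = np.zeros((10, 192, 192), dtype=np.uint8)
masks[:, 80:110, 80:110] = 1  # toy blood-pool segmentation
print(ventricular_volume(masks, (1.8, 1.8), 10.0))  # ~291.6 mL with these toy numbers
print(mean_contour_distance(np.random.rand(50, 2) * 10,
                            np.random.rand(60, 2) * 10))
```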


Subject(s)
Heart Ventricles/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging, Cine/methods , Models, Statistical , Neural Networks, Computer , Biological Specimen Banks , Female , Humans , Imaging, Three-Dimensional , Male , Pattern Recognition, Automated , United Kingdom
7.
Comput Biol Med; 105: 54-63, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30583250

ABSTRACT

Examining and interpreting a large number of wireless endoscopic images from the gastrointestinal tract is a tiresome task for physicians. A practical solution is to automatically construct a two-dimensional representation of the gastrointestinal tract for easy inspection. However, little has been done on wireless endoscopic image stitching, let alone its systematic investigation. The proposed wireless endoscopic image stitching method consists of two main steps to improve the accuracy and efficiency of image registration. First, keypoints are extracted by the Principal Component Analysis and Scale Invariant Feature Transform (PCA-SIFT) algorithm and refined with Maximum Likelihood Estimation SAmple Consensus (MLESAC) outlier removal to retain the most reliable keypoints. Second, the optimal transformation parameters obtained from the first step are fed to the Normalised Mutual Information (NMI) algorithm as an initial solution. With a modified Levenberg-Marquardt search strategy in a multiscale framework, the NMI can find the optimal transformation parameters in the shortest time. The proposed methodology has been tested on two different datasets: one with real wireless endoscopic images and another with images obtained from Micro-Ball (a new wireless cubic endoscopy system with six image sensors). The results demonstrate the accuracy and robustness of the proposed methodology, both visually and quantitatively: a registration residual error of 0.93±0.33 pixels on 2,500 real endoscopy image pairs, and a residual error accumulation of 16.59 pixels without affecting the visual registration quality when stitching 152 Micro-Ball images.
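A rough sketch of the two-stage idea is given below, using off-the-shelf standard SIFT and RANSAC in place of the paper's PCA-SIFT and MLESAC, followed by a histogram-based normalised mutual information measure that a Levenberg-Marquardt-style refinement could then optimise. The function names and parameter values are assumptions, and OpenCV/NumPy stand in for the original implementation.

```python
# Two-stage registration sketch: keypoint-based initialisation, then an NMI
# similarity measure for intensity-based refinement. Not the published method.
import cv2
import numpy as np

def estimate_homography(img1, img2):
    """Initial registration from keypoints: standard SIFT + RANSAC here,
    standing in for the paper's PCA-SIFT + MLESAC stage."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def normalised_mutual_information(a, b, bins=32):
    """NMI between two equally sized greyscale images, from a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return (hx + hy) / hxy

# Toy NMI usage on random arrays; estimate_homography needs two real greyscale images.
print(normalised_mutual_information(np.random.rand(64, 64), np.random.rand(64, 64)))
```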


Subject(s)
Algorithms , Duodenoscopy , Image Interpretation, Computer-Assisted , Imaging, Three-Dimensional , Intestine, Small , Pattern Recognition, Automated , Humans