1.
Med Image Anal; 91: 103033, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38000256

ABSTRACT

Large medical imaging data sets are becoming increasingly available. A common challenge in these data sets is to ensure that each sample meets minimum quality requirements and is free of significant artefacts. Although a wide range of automatic methods have been developed to identify imperfections and artefacts in medical imaging, most rely on data-hungry training. In particular, the scarcity of artefact-containing scans available for training has been a major obstacle to the development and deployment of machine learning in clinical research. To tackle this problem, we propose a novel framework with four main components: (1) a set of artefact generators inspired by magnetic resonance physics to corrupt brain MRI scans and augment a training dataset, (2) a set of abstract and engineered features to represent images compactly, (3) a feature selection process that depends on the class of artefact to improve classification performance, and (4) a set of Support Vector Machine (SVM) classifiers trained to identify artefacts. Our contributions are threefold: first, we use the novel physics-based artefact generators to produce synthetic brain MRI scans with controlled artefacts as a data augmentation technique, avoiding the labour-intensive collection and labelling of scans with rare artefacts. Second, we propose a large pool of abstract and engineered image features developed to identify nine different artefacts in structural MRI. Finally, we use an artefact-based feature selection block that, for each class of artefact, finds the set of features providing the best classification performance. We performed validation experiments on a large data set of scans with artificially generated artefacts, and in a multiple sclerosis clinical trial in which real artefacts were identified by experts, showing that the proposed pipeline outperforms traditional methods.
In particular, our data augmentation increases performance by up to 12.5 percentage points in accuracy, F1, F2, precision, and recall. At the same time, the computational cost of our pipeline remains low (less than a second to process a single scan), with the potential for real-time deployment. Our artefact simulators, obtained using adversarial learning, enable the training of a quality control system for brain MRI that would otherwise have required a much larger number of scans in both supervised and unsupervised settings. We believe that such quality control systems will enable a wide range of high-throughput clinical applications based on automatic image-processing pipelines.
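The physics-inspired corruption idea can be illustrated with a minimal sketch: rigid motion during acquisition manifests as phase errors in k-space, so applying random phase ramps to a subset of k-space rows produces a ghosting-like artefact. This is a generic stand-in for one artefact class, not the paper's actual generators; the function name and parameters are hypothetical.

```python
import numpy as np

def add_motion_artifact(image, corrupted_fraction=0.3, max_shift=5.0, seed=0):
    """Corrupt a 2D slice with a motion-like ghosting artefact by applying
    random phase ramps to a subset of k-space rows (illustrative only)."""
    rng = np.random.default_rng(seed)
    k = np.fft.fft2(image)
    n_rows = k.shape[0]
    rows = rng.choice(n_rows, size=int(corrupted_fraction * n_rows), replace=False)
    freqs = np.fft.fftfreq(k.shape[1])
    for r in rows:
        shift = rng.uniform(-max_shift, max_shift)   # simulated in-plane translation
        k[r] *= np.exp(-2j * np.pi * freqs * shift)  # a phase ramp shifts that row's contribution
    return np.real(np.fft.ifft2(k))
```

Applied to a clean slice, this yields a corrupted-but-labelled training sample at negligible cost, which is the point of the augmentation strategy.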


Subject(s)
Artifacts , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Neuroimaging , Machine Learning
2.
Med Image Anal; 75: 102257, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34731771

ABSTRACT

Accurate and realistic simulation of high-dimensional medical images has become an important research area relevant to many AI-enabled healthcare applications. However, current state-of-the-art approaches lack the ability to produce satisfactory high-resolution and accurate subject-specific images. In this work, we present a deep learning framework, the 4D-Degenerative Adversarial NeuroImage Net (4D-DANI-Net), to generate high-resolution, longitudinal MRI scans that mimic subject-specific neurodegeneration in ageing and dementia. 4D-DANI-Net is a modular framework based on adversarial training and a set of novel spatiotemporal, biologically-informed constraints. To ensure efficient training and overcome the memory limitations affecting such high-dimensional problems, we rely on three key technological advances: i) a new 3D training consistency mechanism called Profile Weight Functions (PWFs), ii) a 3D super-resolution module, and iii) a transfer learning strategy to fine-tune the system for a given individual. To evaluate our approach, we trained the framework on 9852 T1-weighted MRI scans from 876 participants in the Alzheimer's Disease Neuroimaging Initiative dataset and held out a separate test set of 1283 MRI scans from 170 participants for quantitative and qualitative assessment of the personalised time series of synthetic images. We performed three evaluations: i) image quality assessment; ii) quantifying the accuracy of regional brain volumes relative to benchmark models; and iii) quantifying medical experts' visual perception of the synthetic images. Overall, both quantitative and qualitative results show that 4D-DANI-Net produces realistic, low-artefact, personalised time series of synthetic T1 MRI that outperform benchmark models.


Subject(s)
Alzheimer Disease , Neuroimaging , Aging , Alzheimer Disease/diagnostic imaging , Brain/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging
3.
IEEE J Biomed Health Inform; 24(11): 3066-3075, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32749977

ABSTRACT

Eye-tracking technology is an innovative tool that holds promise for enhancing dementia screening. In this work, we introduce a novel way of extracting salient features directly from the raw eye-tracking data of a mixed sample of dementia patients during a novel instruction-less cognitive test. Our approach is based on self-supervised representation learning: by first training a deep neural network to solve a pretext task using well-defined available labels (e.g. recognising distinct cognitive activities in healthy individuals), the network encodes high-level semantic information that is useful for solving other problems of interest (e.g. dementia classification). Inspired by previous work in explainable AI, we use the Layer-wise Relevance Propagation (LRP) technique to describe our network's decisions in differentiating between the distinct cognitive activities. We then explore the extent to which the eye-tracking features of dementia patients deviate from healthy behaviour, and compare self-supervised and handcrafted representations on discriminating between participants with and without dementia. Our findings not only reveal novel self-supervised learning features that are more sensitive than handcrafted features in detecting performance differences between participants with and without dementia across a variety of tasks, but also validate that instruction-less eye-tracking tests can detect oculomotor biomarkers of dementia-related cognitive dysfunction. This work highlights the contribution of self-supervised representation learning techniques in biomedical applications where the small number of patients, the non-homogeneous presentations of the disease, and the complexity of the setting can challenge state-of-the-art feature extraction methods.
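The LRP attribution step can be sketched for a toy fully-connected ReLU network using the epsilon rule, which redistributes the winning output's score backwards through the layers in proportion to each unit's contribution. This is a generic illustration of LRP, not the authors' network; all names are hypothetical.

```python
import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """LRP (epsilon rule) for a small fully-connected ReLU network with a
    linear output layer: returns per-input relevance for the winning class."""
    # forward pass, storing the input to every layer
    activations, a = [x], x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b
        a = z if i == len(weights) - 1 else np.maximum(z, 0.0)
        activations.append(a)
    relevance = np.zeros_like(a)
    k = int(np.argmax(a))
    relevance[k] = a[k]                      # start from the winning output score
    # backward pass: redistribute relevance towards the inputs
    for i in range(len(weights) - 1, -1, -1):
        W, b, a_prev = weights[i], biases[i], activations[i]
        z = W @ a_prev + b
        s = relevance / (z + eps * np.where(z >= 0, 1.0, -1.0))
        relevance = a_prev * (W.T @ s)
    return relevance
```

A useful sanity check of the epsilon rule is conservation: the input relevances sum (approximately) to the output score being explained.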


Subject(s)
Cognitive Dysfunction , Dementia , Cognition , Dementia/diagnosis , Eye-Tracking Technology , Humans , Neuropsychological Tests
4.
Int J Comput Assist Radiol Surg; 15(7): 1167-1175, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32415459

ABSTRACT

PURPOSE: Probe-based confocal laser endomicroscopy (pCLE) enables performing an optical biopsy via a probe. pCLE probes consist of multiple optical fibres arranged in a bundle, which together generate signals in an irregularly sampled pattern. Current pCLE reconstruction is based on interpolating irregular signals onto an over-sampled Cartesian grid using naive linear interpolation. It has been shown that convolutional neural networks (CNNs) can improve pCLE image quality, yet classical CNNs may be suboptimal with regard to irregular data. METHODS: We compare pCLE reconstruction and super-resolution (SR) methods taking irregularly sampled or reconstructed pCLE images as input. We also propose to embed Nadaraya-Watson (NW) kernel regression into the CNN framework as a novel trainable CNN layer. We design deep learning architectures that reconstruct high-quality pCLE images directly from the irregularly sampled input data, and we created synthetic sparse pCLE images to evaluate our methodology. RESULTS: The results were validated through an image quality assessment based on two metrics: peak signal-to-noise ratio and the structural similarity index. Our analysis indicates that both dense and sparse CNNs outperform the reconstruction method currently used in the clinic. CONCLUSION: The main contributions of our study are a comparison of the sparse and dense approaches to pCLE image reconstruction and the implementation of trainable generalised NW kernel regression as a novel sparse approach. We also generated synthetic data for training pCLE SR.
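The Nadaraya-Watson idea — interpolating irregular fibre signals onto a Cartesian grid as a kernel-weighted average of the samples — can be sketched with a fixed Gaussian kernel. In the paper the generalised kernel is a trainable CNN layer; this plain-numpy version only shows the underlying regression, and the function name and parameters are hypothetical.

```python
import numpy as np

def nw_reconstruct(sample_xy, sample_vals, grid_shape, bandwidth=1.0):
    """Reconstruct a regular image from irregularly sampled signals with
    Nadaraya-Watson kernel regression: each grid pixel is the kernel-weighted
    average of all sample values (fixed Gaussian kernel, illustrative only)."""
    ys, xs = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)  # (P, 2)
    # pairwise squared distances between grid pixels and sample sites
    d2 = ((grid[:, None, :] - sample_xy[None, :, :]) ** 2).sum(-1)   # (P, N)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    img = (w @ sample_vals) / np.maximum(w.sum(axis=1), 1e-12)
    return img.reshape(grid_shape)
```

With a small bandwidth each pixel is dominated by its nearest fibre sample; making the kernel learnable is what turns this into the trainable layer the abstract describes.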


Subject(s)
Endoscopy/methods , Microscopy, Confocal/methods , Microsurgery/methods , Neural Networks, Computer , Humans , Signal-To-Noise Ratio
5.
Med Image Anal; 53: 123-131, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30769327

ABSTRACT

In recent years, endomicroscopy has become increasingly used for diagnostic purposes and interventional guidance. It can provide intraoperative aids for real-time tissue characterization and can support visual investigations aimed, for example, at discovering epithelial cancers. Due to physical constraints on the acquisition process, endomicroscopy images still have a low number of informative pixels, which hampers their quality. Post-processing techniques, such as Super-Resolution (SR), are a potential solution to increase the quality of these images. SR techniques are often supervised, requiring aligned pairs of low-resolution (LR) and high-resolution (HR) image patches to train a model. However, in our domain, the lack of HR images hinders the collection of such pairs and makes supervised training unsuitable. For this reason, we propose an unsupervised SR framework based on an adversarial deep neural network with a physically-inspired cycle consistency, designed to impose some acquisition properties on the super-resolved images. Our framework can exploit HR images, regardless of the domain they come from, to transfer their quality to the initial LR images. This property can be particularly useful in all situations where LR/HR pairs are not available during training. Our quantitative analysis, validated using a database of 238 endomicroscopy video sequences from 143 patients, shows the ability of the pipeline to produce convincing super-resolved images. A Mean Opinion Score (MOS) study also confirms this quantitative image quality assessment.


Subject(s)
Microscopy, Confocal , Unsupervised Machine Learning , Algorithms , Colon , Datasets as Topic , Esophagus , Humans , Video Recording
6.
Int J Comput Assist Radiol Surg; 13(6): 917-924, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29687176

ABSTRACT

PURPOSE: Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows performing in vivo optical biopsies. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits the image quality: a few tens of thousands of fibres, each acting as the equivalent of a single-pixel detector, are assembled into a single fibre bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. METHODS: In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. The performance of three state-of-the-art DNN techniques was analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). RESULTS: The results indicate that the proposed solution produces an effective improvement in the quality of the reconstructed images. CONCLUSION: The proposed training strategy and associated DNNs allow us to perform convincing super-resolution of pCLE images.
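The idea of pairing registration-estimated HR images with simulated LR counterparts can be illustrated with a crude degradation model (block averaging plus additive noise). The paper's simulation of the fibre-bundle sampling is more faithful; this function is only a hypothetical sketch of how such LR training data might be synthesised.

```python
import numpy as np

def synthetic_lr(hr, factor=2, noise_sigma=0.01, seed=0):
    """Generate a synthetic low-resolution counterpart of an HR image:
    block-average (a crude stand-in for the probe's point spread function),
    subsample by `factor`, then add Gaussian noise. Illustrative only."""
    rng = np.random.default_rng(seed)
    h, w = hr.shape
    h2, w2 = h - h % factor, w - w % factor  # crop to a multiple of the factor
    lr = hr[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))
    return lr + rng.normal(0.0, noise_sigma, lr.shape)
```

Each (HR, synthetic LR) pair then serves as one supervised training example for an exemplar-based SR network.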


Subject(s)
Algorithms , Endoscopy/education , General Surgery/education , Image-Guided Biopsy/methods , Machine Learning , Microscopy, Confocal/methods , Microsurgery/education , Humans
7.
PLoS One; 13(3): e0193721, 2018.
Article in English | MEDLINE | ID: mdl-29554126

ABSTRACT

Identifying tumor boundaries is a major problem in surgery for brain cancer. These tumors infiltrate diffusely into the surrounding normal brain, making their accurate identification by the naked eye difficult. Since surgery is the most common treatment for brain cancer, an accurate radical resection of the tumor leads to improved survival rates for patients; however, identifying the tumor boundaries during surgery remains challenging. Hyperspectral imaging is a non-contact, non-ionizing and non-invasive technique suitable for medical diagnosis. This study presents a novel classification method that takes into account the spatial and spectral characteristics of hyperspectral images to help neurosurgeons accurately determine the tumor boundaries during resection, avoiding excessive excision of normal tissue or unintentionally leaving residual tumor. The proposed algorithm is a hybrid framework that combines supervised and unsupervised machine learning methods. First, a supervised pixel-wise classification is performed using a Support Vector Machine classifier. The generated classification map is spatially homogenized using a one-band representation of the hyperspectral cube, obtained with the Fixed Reference t-Stochastic Neighbors Embedding dimensionality reduction algorithm, followed by K-Nearest Neighbors filtering. The information generated by the supervised stage is then combined with a segmentation map obtained via unsupervised clustering with a Hierarchical K-Means algorithm. The fusion is performed using a majority voting approach that associates each cluster with a certain class. To evaluate the proposed approach, five in vivo hyperspectral images of the brain surface affected by glioblastoma, from five different patients, were used. The final classification maps were analyzed and validated by specialists. These preliminary results are promising, providing an accurate delineation of the tumor area.
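The fusion step described in the abstract — associating each unsupervised cluster with the majority class from the supervised per-pixel map — is simple enough to sketch directly. The function name is hypothetical, and the upstream SVM, embedding, and clustering stages are omitted.

```python
import numpy as np

def fuse_by_majority(class_map, cluster_map):
    """Fuse a supervised per-pixel class map with an unsupervised segmentation:
    every pixel in a cluster receives the majority class of that cluster."""
    fused = np.empty_like(class_map)
    for c in np.unique(cluster_map):
        mask = cluster_map == c
        vals, counts = np.unique(class_map[mask], return_counts=True)
        fused[mask] = vals[np.argmax(counts)]  # majority class wins the cluster
    return fused
```

The effect is spatial regularisation: isolated misclassified pixels inside a coherent cluster are overruled by the cluster's dominant label.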


Subject(s)
Brain Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Neurosurgical Procedures , Brain Neoplasms/surgery , Cluster Analysis , Humans , Intraoperative Period , Supervised Machine Learning , Unsupervised Machine Learning
8.
Sensors (Basel); 18(2), 2018 Feb 01.
Article in English | MEDLINE | ID: mdl-29389893

ABSTRACT

Hyperspectral imaging (HSI) allows for the acquisition of large numbers of spectral bands throughout the electromagnetic spectrum (within and beyond the visual range) from the surfaces of scenes captured by sensors. Using this information and a set of complex classification algorithms, it is possible to determine which material or substance is located in each pixel. The work presented in this paper aims to exploit the characteristics of HSI to develop a demonstrator capable of delineating tumor tissue from normal brain tissue during neurosurgical operations; improved delineation of tumor boundaries is expected to improve the results of surgery. The developed demonstrator is composed of two hyperspectral cameras covering a spectral range of 400-1700 nm. Furthermore, a hardware accelerator connected to a control unit is used to speed up the hyperspectral brain cancer detection algorithm to achieve intraoperative processing. A labeled dataset comprising more than 300,000 spectral signatures is used to train the supervised stage of the classification algorithm. In this preliminary study, thematic maps obtained from a validation database of seven hyperspectral images of in vivo brain tissue, captured and processed during neurosurgical operations, demonstrate that the system is able to discriminate between normal and tumor tissue in the brain. The results can be provided during the surgical procedure (~1 min), making it a practical system for neurosurgeons to use in the near future to improve excision and potentially improve patient outcomes.


Subject(s)
Brain Neoplasms/diagnostic imaging , Brain Neoplasms/surgery , Monitoring, Intraoperative/methods , Optical Imaging , Spectrum Analysis , Algorithms , Databases, Factual , Humans
9.
IEEE Trans Med Imaging; 36(9): 1845-1857, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28436854

ABSTRACT

Recent advances in hyperspectral imaging have made it a promising solution for intra-operative tissue characterization, with the advantages of being non-contact, non-ionizing, and non-invasive. Working with hyperspectral images in vivo, however, is not straightforward, as the high dimensionality of the data makes real-time processing challenging. Moreover, existing approaches to dimensionality reduction based on manifold embedding can be time-consuming and may not guarantee a consistent result, hindering final tissue classification. In this paper, a novel dimensionality reduction scheme and a new processing pipeline are introduced to obtain a detailed tumor classification map for intra-operative margin definition during brain surgery. The proposed framework overcomes these problems in two steps: dimensionality reduction based on an extension of t-distributed stochastic neighbor embedding (t-SNE) is performed first, and a semantic segmentation technique based on a Semantic Texton Forest is then applied to the embedded results for tissue classification. Detailed in vivo validation of the proposed method has been performed to demonstrate the potential clinical value of the system.
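The two-stage structure (embed the spectra, then classify in the embedded space) can be sketched with simple stand-ins: PCA via SVD replaces the t-SNE extension, and a nearest-centroid rule replaces the Semantic Texton Forest. Everything here, including the assumption that class centroids are available in the embedded space, is illustrative rather than the paper's method.

```python
import numpy as np

def reduce_then_classify(cube, centroids, n_components=3):
    """Two-stage pipeline sketch for a hyperspectral cube (H, W, bands):
    project each pixel's spectrum onto a few principal components, then
    assign the nearest centroid in the embedded space."""
    h, w, bands = cube.shape
    X = cube.reshape(-1, bands)
    Xc = X - X.mean(axis=0)
    # principal axes from the SVD of the centred data (stand-in for t-SNE)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    emb = Xc @ Vt[:n_components].T                    # (pixels, n_components)
    # nearest-centroid classification (stand-in for the Semantic Texton Forest)
    d = ((emb[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return np.argmin(d, axis=1).reshape(h, w)         # per-pixel class labels
```

The split matters for the real-time goal: the expensive embedding is computed once per cube, and the per-pixel classifier then operates in a space of only a few dimensions.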


Subject(s)
Brain , Semantics , Algorithms , Humans , Neuroimaging , Pattern Recognition, Automated
10.
IEEE J Biomed Health Inform; 21(1): 4-21, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28055930

ABSTRACT

With a massive influx of multimodality data, the role of data analytics in health informatics has grown rapidly in the last decade. This has also prompted increasing interest in the generation of analytical, data-driven models based on machine learning in health informatics. Deep learning, a technique with its foundation in artificial neural networks, has emerged in recent years as a powerful tool for machine learning, promising to reshape the future of artificial intelligence. Rapid improvements in computational power, fast data storage, and parallelization have also contributed to the rapid uptake of the technology, in addition to its predictive power and its ability to generate automatically optimized high-level features and semantic interpretation from the input data. This article presents a comprehensive, up-to-date review of research employing deep learning in health informatics, providing a critical analysis of the relative merits and potential pitfalls of the technique as well as its future outlook. The paper focuses on key applications of deep learning in the fields of translational bioinformatics, medical imaging, pervasive sensing, medical informatics, and public health.


Subject(s)
Computational Biology/methods , Machine Learning , Medical Informatics/methods , Humans , Monitoring, Ambulatory , Public Health
11.
IEEE J Biomed Health Inform; 21(1): 56-64, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28026792

ABSTRACT

The increasing popularity of wearable devices in recent years means that a diverse range of physiological and functional data can now be captured continuously for applications in sports, wellbeing, and healthcare. This wealth of information requires efficient methods of classification and analysis, and deep learning is a promising technique for large-scale data analytics. While deep learning has been successful in implementations that utilize high-performance computing platforms, its use on low-power wearable devices is limited by resource constraints. In this paper, we propose a deep learning methodology which combines features learned from inertial sensor data with complementary information from a set of shallow features to enable accurate and real-time activity classification. The design of this combined method aims to overcome some of the limitations of a typical deep learning framework when on-node computation is required. To optimize the proposed method for real-time on-node computation, spectral-domain preprocessing is applied before the data are passed to the deep learning framework. The classification accuracy of our proposed deep learning approach is evaluated against state-of-the-art methods using both laboratory and real-world activity datasets. Our results show the validity of the approach on different human activity datasets, outperforming other methods, including the two methods used within our combined pipeline. We also demonstrate that the computation times for the proposed method are consistent with the constraints of real-time on-node processing on smartphones and a wearable sensor platform.
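The spectral-domain preprocessing step — transforming each inertial-sensor window to the frequency domain before it reaches the network — might look like the following, where the sampling rate, bin count, and DC removal are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def spectral_features(window, n_bins=16):
    """Spectral-domain preprocessing of an inertial-sensor window of shape
    (samples, axes): per-axis magnitude spectrum, truncated to the lowest
    n_bins frequency bins and flattened into a feature vector."""
    # remove the per-axis mean to suppress the gravity/DC component
    x = window - window.mean(axis=0)
    spec = np.abs(np.fft.rfft(x, axis=0))[:n_bins]  # (n_bins, axes)
    return spec.ravel()
```

Truncating to the low-frequency bins keeps the features compact, which is what makes on-node computation on a wearable feasible; most human activity energy sits well below the sensor's Nyquist frequency.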


Subject(s)
Human Activities/classification , Machine Learning , Monitoring, Ambulatory , Neural Networks, Computer , Signal Processing, Computer-Assisted , Humans , Monitoring, Ambulatory/instrumentation , Monitoring, Ambulatory/methods
12.
IEEE Trans Image Process; 23(5): 2081-95, 2014 May.
Article in English | MEDLINE | ID: mdl-24723572

ABSTRACT

Content-aware image resizing techniques allow the visual content of images to be taken into account during the resizing process. The basic idea behind these algorithms is the removal of vertical and/or horizontal paths of pixels (i.e., seams) containing low-saliency information. In this paper, we present a method which exploits the gradient vector flow (GVF) of the image to establish the paths to be considered during resizing. The relevance of each GVF path is derived directly from an energy map related to the magnitude of the GVF associated with the image to be resized. To give more weight to the visual content of the images during content-aware resizing, we also propose to select the generated GVF paths based on their visual saliency properties. In this way, visually important image regions are better preserved in the final resized image. The proposed technique has been tested, both qualitatively and quantitatively, on a representative data set of 1000 images labeled with corresponding salient objects (i.e., ground-truth maps). Experimental results demonstrate that our method preserves crucial salient regions better than other state-of-the-art algorithms.
