Results 1 - 20 of 25
1.
Invest Ophthalmol Vis Sci ; 65(6): 6, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38833259

ABSTRACT

Purpose: To develop Choroidalyzer, an open-source, end-to-end pipeline for segmenting the choroid region, vessels, and fovea, and deriving choroidal thickness, area, and vascular index.
Methods: We used 5600 OCT B-scans (233 subjects, six systemic disease cohorts, three device types, two manufacturers). To generate region and vessel ground-truths, we used state-of-the-art automatic methods following manual correction of inaccurate segmentations, with foveal positions manually annotated. We trained a U-Net deep learning model to detect the region, vessels, and fovea, and to calculate choroid thickness, area, and vascular index in a fovea-centered region of interest. We analyzed segmentation agreement (AUC, Dice) and choroid metrics agreement (Pearson, Spearman, mean absolute error [MAE]) in internal and external test sets. We compared Choroidalyzer to two manual graders on a small subset of external test images and examined cases of high error.
Results: Choroidalyzer took 0.299 seconds per image on a standard laptop and achieved excellent region segmentation (Dice: internal 0.9789, external 0.9749), very good vessel segmentation (Dice: internal 0.8817, external 0.8703), and excellent fovea location prediction (MAE: internal 3.9 pixels, external 3.4 pixels). For thickness, area, and vascular index, Pearson correlations were 0.9754, 0.9815, and 0.8285 (internal) / 0.9831, 0.9779, and 0.7948 (external), respectively (all P < 0.0001). Choroidalyzer's agreement with graders was comparable to the intergrader agreement across all metrics.
Conclusions: Choroidalyzer is an open-source, end-to-end pipeline that accurately segments the choroid and reliably extracts thickness, area, and vascular index. Choroidal vessel segmentation in particular is a difficult and subjective task, and fully automatic methods like Choroidalyzer could provide objectivity and standardization.
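
As a rough illustration of the two headline metrics above, here is a minimal sketch of the Dice coefficient and the choroidal vascular index computed from binary pixel masks; the function names and mask representation are illustrative assumptions, not Choroidalyzer's actual API.

import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    # Dice similarity: twice the overlap divided by the total mask sizes.
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

def vascular_index(vessel_mask: np.ndarray, region_mask: np.ndarray) -> float:
    # Choroidal vascular index: vessel area as a fraction of the choroid region area.
    vessel_mask, region_mask = vessel_mask.astype(bool), region_mask.astype(bool)
    return np.logical_and(vessel_mask, region_mask).sum() / region_mask.sum()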


Subject(s)
Choroid , Tomography, Optical Coherence , Humans , Choroid/blood supply , Choroid/diagnostic imaging , Tomography, Optical Coherence/methods , Male , Female , Middle Aged , Aged , Deep Learning , Retinal Vessels/diagnostic imaging , Fovea Centralis/diagnostic imaging , Fovea Centralis/blood supply , Adult , Reproducibility of Results
2.
Br J Ophthalmol ; 108(6): 833-839, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38316534

ABSTRACT

BACKGROUND/AIMS: National guidelines of many countries set screening intervals for diabetic retinopathy (DR) based on the grading of the most recent screening retinal images. We explore the potential of deep learning (DL) on images to predict progression to referable DR beyond DR grading, and the potential impact on assigned screening intervals, within the Scottish screening programme.
METHODS: We consider 21 346 and 247 233 people with type 1 diabetes mellitus (T1DM) and type 2 diabetes mellitus (T2DM), respectively, contributing on average 4.8 and 4.4 screening intervals each, of which 1339 and 4675 intervals concluded with a referable screening episode. Information extracted from fundus images using DL was used to predict referable status at the end of each interval, and its predictive value was assessed in comparison to the screening-assigned DR grade.
RESULTS: The DL predictor increased the area under the receiver operating characteristic curve from 0.809 to 0.87 for T1DM and from 0.825 to 0.87 for T2DM, compared with a predictor using current DR grades. Expected sojourn time (the time from becoming referable to being rescreened) was found to be 3.4 (T1DM) and 2.7 (T2DM) weeks less for a DL-derived policy compared with the current recall policy.
CONCLUSIONS: We showed that, compared with using the current retinopathy grade, DL on fundus images significantly improves the prediction of incident referable retinopathy before the next screening episode. This can impact screening recall interval policy positively, for example by reducing the expected time with referable disease for a fixed workload, which we show as an exemplar. Additionally, it could be used to optimise workload for a fixed sojourn time.
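
The headline comparison (AUC of a DL-derived score versus the current grade) can be sketched as follows; the arrays here are fabricated stand-ins, not the Scottish screening data, and the effect sizes are arbitrary.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)                # 1 = interval ended with referable DR
grade = y * 0.8 + rng.normal(0, 1, size=1000)    # stand-in for the current DR grade signal
dl = y * 1.2 + rng.normal(0, 1, size=1000)       # stand-in for the DL image score

print("grade-based AUC:", roc_auc_score(y, grade))
print("DL-based AUC:   ", roc_auc_score(y, dl))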


Subject(s)
Deep Learning , Diabetic Retinopathy , Disease Progression , Humans , Diabetic Retinopathy/diagnosis , Diabetic Retinopathy/diagnostic imaging , Scotland , Female , Male , Middle Aged , ROC Curve , Mass Screening/methods , Diabetes Mellitus, Type 2 , Adult , Diabetes Mellitus, Type 1/complications , Predictive Value of Tests , Aged , Retina/diagnostic imaging , Retina/pathology
3.
Transl Vis Sci Technol ; 12(11): 27, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37988073

ABSTRACT

Purpose: To develop an open-source, fully automatic deep learning algorithm, DeepGPET, for choroid region segmentation in optical coherence tomography (OCT) data.
Methods: We used a dataset of 715 OCT B-scans (82 subjects, 115 eyes) from three clinical studies related to systemic disease. Ground-truth segmentations were generated using a clinically validated, semiautomatic choroid segmentation method, Gaussian Process Edge Tracing (GPET). We finetuned a U-Net with the MobileNetV3 backbone pretrained on ImageNet. Standard segmentation agreement metrics, as well as derived measures of choroidal thickness and area, were used to evaluate DeepGPET, alongside qualitative evaluation from a clinical ophthalmologist.
Results: DeepGPET achieved excellent agreement with GPET on data from three clinical studies (AUC = 0.9994, Dice = 0.9664; Pearson correlation = 0.8908 for choroidal thickness and 0.9082 for choroidal area), while reducing the mean processing time per image on a standard laptop CPU from 34.49 ± 15.09 seconds using GPET to 1.25 ± 0.10 seconds using DeepGPET. Both methods performed similarly according to a clinical ophthalmologist who qualitatively judged a subset of segmentations by GPET and DeepGPET, based on smoothness and accuracy of segmentations.
Conclusions: DeepGPET, a fully automatic, open-source algorithm for choroidal segmentation, will enable researchers to efficiently extract choroidal measurements, even for large datasets. As no manual interventions are required, DeepGPET is less subjective than semiautomatic methods and could be deployed in clinical practice without requiring a trained operator.
Translational Relevance: DeepGPET addresses the lack of open-source, fully automatic, and clinically relevant choroid segmentation algorithms, and its subsequent public release will facilitate future choroidal research in both ophthalmology and wider systemic health.
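
To make the derived measures concrete, here is a rough sketch of computing choroidal thickness and area from a binary segmentation mask. The pixel dimensions are placeholder values (device-specific in practice), the per-column vertical count is a simplification of how thickness is usually measured, and the function name is illustrative.

import numpy as np

def choroid_thickness_area(mask, px_h_mm=0.0039, px_w_mm=0.0116):
    # mask: 2D binary array, True where pixels belong to the choroid region.
    mask = mask.astype(bool)
    col_counts = mask.sum(axis=0)           # choroid pixels in each A-scan column
    thickness_mm = col_counts * px_h_mm     # vertical extent per column (a simplification)
    area_mm2 = mask.sum() * px_h_mm * px_w_mm
    return thickness_mm, area_mm2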


Subject(s)
Deep Learning , Ophthalmologists , Humans , Tomography, Optical Coherence , Choroid/diagnostic imaging , Algorithms
4.
Br J Ophthalmol ; 2023 Sep 13.
Article in English | MEDLINE | ID: mdl-37704266

ABSTRACT

BACKGROUND/AIMS: Support vector machine-based automated grading (known as iGradingM) has been shown to be safe, cost-effective and robust in the diabetic retinopathy (DR) screening (DES) programme in Scotland. It triages screening episodes as gradable with no DR versus requiring manual grading. The study aim was to develop a deep learning-based autograder using images and gradings from DES and to compare its performance with that of iGradingM.
METHODS: Retinal images, quality assurance (QA) data and routine DR grades were obtained from national datasets covering 179 944 patients for the years 2006-2016. QA grades were available for 744 images. We developed a deep learning-based algorithm to detect whether either eye contained ungradable images or any DR. Sensitivity and specificity were evaluated against consensus QA grades and routine grades.
RESULTS: Images used in QA that were ungradable or contained DR were detected by deep learning with better specificity than manual graders (p<0.001) and iGradingM (p<0.001) at the same sensitivities. Any DR according to the DES final grade was detected with 89.19% (270 392/303 154) sensitivity and 77.41% (500 945/647 158) specificity. Observable disease and referable disease were detected with sensitivities of 96.58% (16 613/17 201) and 98.48% (22 600/22 948), respectively. Overall, 43.84% of screening episodes would require manual grading.
CONCLUSION: A deep learning-based system for DR grading was evaluated on QA data and on images spanning 11 years from 50% of people attending a national DR screening programme. The system could reduce the manual grading workload at the same sensitivity compared with the current automated grading system.
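
The quoted rates can be reproduced directly from the counts given in the RESULTS above; a minimal check in plain Python (the helper name is ours):

def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Counts taken verbatim from the abstract (any-DR detection):
sens, spec = sensitivity_specificity(tp=270_392, fn=303_154 - 270_392,
                                     tn=500_945, fp=647_158 - 500_945)
print(f"sensitivity {sens:.2%}, specificity {spec:.2%}")  # ~89.19%, ~77.41%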

5.
Int J Med Inform ; 175: 105072, 2023 07.
Article in English | MEDLINE | ID: mdl-37167840

ABSTRACT

AIMS: This study's objective was to evaluate whether deep learning (DL) on retinal photographs from a diabetic retinopathy screening programme improves prediction of incident cardiovascular disease (CVD).
METHODS: DL models were trained to jointly predict future CVD risk and CVD risk factors, and their output was used to derive a DL score. Poisson regression models including clinical risk factors, with and without a DL score, were fitted to study cohorts with 2,072 and 38,730 incident CVD events in type 1 (T1DM) and type 2 diabetes (T2DM), respectively.
RESULTS: DL scores were independently associated with incident CVD, with adjusted standardised incidence rate ratios of 1.14 (95% CI 1.06-1.23; P = 3 × 10⁻⁴) and 1.16 (95% CI 1.13-1.18; P = 4 × 10⁻³³) in the T1DM and T2DM cohorts, respectively. The differences in predictive performance between models with and without a DL score were statistically significant (differences in test log-likelihood of 6.7 and 51.1 natural log units), but the increments in C-statistics, from 0.820 to 0.822 and from 0.709 to 0.711 for T1DM and T2DM respectively, were small.
CONCLUSIONS: These results show that, in people with diabetes, retinal photographs contain information on future CVD risk. However, for this to contribute appreciably to clinical prediction of CVD, further approaches, including exploitation of serial images, need to be evaluated.
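
The modelling step can be sketched with statsmodels: a Poisson GLM with a log person-time offset, fitted with and without a standardised DL score, exponentiating the coefficient to obtain an incidence rate ratio and comparing log-likelihoods. All data below are simulated and the covariates and effect sizes are arbitrary; this is not the study's model specification.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
age = rng.normal(60, 10, n)
dl_score = rng.normal(0, 1, n)                 # standardised DL score
exposure = rng.uniform(1, 10, n)               # person-years at risk
events = rng.poisson(np.exp(-6 + 0.04 * age + 0.15 * dl_score) * exposure)

X_base = sm.add_constant(np.column_stack([age]))
X_full = sm.add_constant(np.column_stack([age, dl_score]))
m_base = sm.GLM(events, X_base, family=sm.families.Poisson(), offset=np.log(exposure)).fit()
m_full = sm.GLM(events, X_full, family=sm.families.Poisson(), offset=np.log(exposure)).fit()

print("IRR per SD of DL score:", np.exp(m_full.params[-1]))
print("log-likelihood gain:", m_full.llf - m_base.llf)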


Subject(s)
Cardiovascular Diseases , Deep Learning , Diabetes Mellitus, Type 1 , Diabetes Mellitus, Type 2 , Humans , Diabetes Mellitus, Type 2/diagnosis , Diabetes Mellitus, Type 2/epidemiology , Diabetes Mellitus, Type 2/complications , Diabetes Mellitus, Type 1/complications , Diabetes Mellitus, Type 1/diagnosis , Diabetes Mellitus, Type 1/epidemiology , Prospective Studies , Cardiovascular Diseases/diagnosis , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/etiology , Risk Factors , Scotland/epidemiology , Heart Disease Risk Factors
6.
Cell Rep Methods ; 3(1): 100374, 2023 01 23.
Article in English | MEDLINE | ID: mdl-36814835

ABSTRACT

Antibodies are multimeric proteins capable of highly specific molecular recognition. The complementarity determining region 3 of the antibody variable heavy chain (CDRH3) often dominates antigen-binding specificity. Hence, designing optimal antigen-specific CDRH3 sequences is a priority for developing therapeutic antibodies. The combinatorial structure of CDRH3 sequences makes it impossible to query binding-affinity oracles exhaustively. Moreover, antibodies are expected to have high target specificity and developability. Here, we present AntBO, a combinatorial Bayesian optimization framework utilizing a CDRH3 trust region for the in silico design of antibodies with favorable developability scores. In silico experiments on 159 antigens demonstrate that AntBO is a step toward practically viable in vitro antibody design. In under 200 calls to the oracle, AntBO suggests antibodies outperforming the best binding sequence from 6.9 million experimentally obtained CDRH3s. Additionally, AntBO finds very-high-affinity CDRH3 sequences in only 38 protein designs while requiring no domain knowledge.
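
AntBO's surrogate model and acquisition step are beyond a short sketch, but the trust-region idea (proposing candidates only within a small Hamming ball around the incumbent, expanding on success and shrinking on failure) can be shown with a toy oracle. Everything below (the oracle, the target motif, the 8-residue length) is fabricated for illustration, and this is greedy local search standing in for the surrogate-guided acquisition AntBO actually uses.

import random

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def oracle(seq):
    # Hypothetical binding-affinity oracle: closeness to an arbitrary toy motif.
    return -sum(abs(ord(a) - ord(t)) for a, t in zip(seq, "ARDYWGQG"))

def mutate(seq, radius):
    # Propose a sequence within Hamming distance `radius` of `seq`.
    seq = list(seq)
    for i in random.sample(range(len(seq)), k=random.randint(1, radius)):
        seq[i] = random.choice(AMINO)
    return "".join(seq)

best = "".join(random.choices(AMINO, k=8))
best_f, radius = oracle(best), 3
for _ in range(200):                  # budget of 200 oracle calls, as in the abstract
    cand = mutate(best, radius)
    f = oracle(cand)
    if f > best_f:
        best, best_f = cand, f
        radius = min(radius + 1, 8)   # expand trust region on success
    else:
        radius = max(1, radius - 1)   # shrink trust region on failure
print(best, best_f)

AntBO replaces the random proposal and greedy acceptance here with a surrogate model and an acquisition function optimised within the trust region.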


Subject(s)
Antibodies , Complementarity Determining Regions , Bayes Theorem , Antibodies/therapeutic use , Complementarity Determining Regions/genetics , Immunoglobulin Heavy Chains/chemistry , Antigens
7.
Front Comput Neurosci ; 16: 887633, 2022.
Article in English | MEDLINE | ID: mdl-36093418

ABSTRACT

Vast quantities of Magnetic Resonance Images (MRI) are routinely acquired in clinical practice but, to speed up acquisition, these scans are typically of a quality that is sufficient for clinical diagnosis but sub-optimal for large-scale precision medicine, computational diagnostics, and large-scale neuroimaging collaborative research. Here, we present a critic-guided framework to upsample low-resolution (often 2D) MRI full scans to help overcome these limitations. We incorporate feature-importance and self-attention methods into our model to improve its interpretability. We evaluate our framework on paired low- and high-resolution brain MRI structural full scans (i.e., T1-, T2-weighted, and FLAIR sequences are simultaneously input) obtained in clinical and research settings from scanners manufactured by Siemens, Philips, and GE. We show that the upsampled MRIs are qualitatively faithful to the ground-truth high-quality scans (PSNR = 35.39; MAE = 3.78E-3; NMSE = 4.32E-10; SSIM = 0.9852; mean normal-appearing gray/white matter ratio intensity differences ranging from 0.0363 to 0.0784 for FLAIR, from 0.0010 to 0.0138 for T1-weighted and from 0.0156 to 0.074 for T2-weighted sequences). The automatic raw segmentation of tissues and lesions using the super-resolved images has fewer false positives and higher accuracy than that obtained from interpolated images in protocols represented with more than three sets in the training sample, making our approach a strong candidate for practical application in clinical and collaborative research.
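
The fidelity metrics quoted above are standard and straightforward to reproduce; a minimal sketch with scikit-image on synthetic arrays (the 128×128 slices are fabricated stand-ins for real MRI data):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
hr = rng.random((128, 128))                              # stand-in ground-truth slice
sr = np.clip(hr + rng.normal(0, 0.02, hr.shape), 0, 1)   # stand-in super-resolved output

print("PSNR:", peak_signal_noise_ratio(hr, sr, data_range=1.0))
print("SSIM:", structural_similarity(hr, sr, data_range=1.0))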

8.
Stroke ; 53(7): 2393-2403, 2022 07.
Article in English | MEDLINE | ID: mdl-35440170

ABSTRACT

There is increasing interest in computer applications that use artificial intelligence methodologies to perform health care tasks previously performed by humans, particularly in medical imaging for diagnosis. In stroke, there is now commercial artificial intelligence software for use with computed tomography or MR imaging to identify acute ischemic brain tissue pathology, arterial obstruction (on computed tomography angiography, or as hyperattenuated arteries on computed tomography), brain hemorrhage, or the size of perfusion defects. A rapid, accurate diagnosis may aid treatment decisions for individual patients and could improve outcome if it leads to effective and safe treatment; or conversely, to disaster if a delayed or incorrect diagnosis results in inappropriate treatment. Despite this potential clinical impact, diagnostic tools, including artificial intelligence methods, are not subjected to the same clinical evaluation standards as are mandatory for drugs. Here, we provide an evidence-based review of the pros and cons of commercially available automated methods for medical imaging diagnosis, including those based on artificial intelligence, to diagnose acute brain pathology on computed tomography or magnetic resonance imaging in patients with stroke.


Subject(s)
Brain Ischemia , Stroke , Artificial Intelligence , Brain Ischemia/therapy , Computers , Diagnosis, Computer-Assisted , Humans , Stroke/therapy
9.
IEEE Trans Pattern Anal Mach Intell ; 44(9): 5149-5169, 2022 09.
Article in English | MEDLINE | ID: mdl-33974543

ABSTRACT

The field of meta-learning, or learning-to-learn, has seen a dramatic rise in interest in recent years. Contrary to conventional approaches to AI where tasks are solved from scratch using a fixed learning algorithm, meta-learning aims to improve the learning algorithm itself, given the experience of multiple learning episodes. This paradigm provides an opportunity to tackle many conventional challenges of deep learning, including data and computation bottlenecks, as well as generalization. This survey describes the contemporary meta-learning landscape. We first discuss definitions of meta-learning and position it with respect to related fields, such as transfer learning and hyperparameter optimization. We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods today. We survey promising applications and successes of meta-learning such as few-shot learning and reinforcement learning. Finally, we discuss outstanding challenges and promising areas for future research.
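
To make the episodic paradigm concrete, here is a purely illustrative first-order meta-learning loop in the style of Reptile: an inner loop adapts to each sampled task, and the outer loop nudges a shared initialisation toward the adapted weights. The linear-regression tasks and all hyperparameters are fabricated for the example.

import numpy as np

rng = np.random.default_rng(0)
meta_w = np.zeros(2)                 # shared initialisation learned across episodes

def sample_task():
    # Each episode is a small linear-regression task with its own ground truth.
    true_w = rng.normal(size=2)
    X = rng.normal(size=(20, 2))
    return X, X @ true_w

for _ in range(2000):                # outer loop over learning episodes
    X, y = sample_task()
    w = meta_w.copy()
    for _ in range(5):               # inner loop: plain gradient descent on one task
        w -= 0.1 * (2 / len(y)) * X.T @ (X @ w - y)
    meta_w += 0.1 * (w - meta_w)     # meta-update: move the init toward adapted weights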


Subject(s)
Algorithms , Neural Networks, Computer
10.
Schizophr Res ; 214: 18-23, 2019 12.
Article in English | MEDLINE | ID: mdl-28935170

ABSTRACT

Early intervention strategies in psychosis would benefit significantly from the identification of reliable prognostic biomarkers. Pattern classification methods have shown the feasibility of an early diagnosis of psychosis onset in both clinical and familial high-risk populations. Here, we were interested in replicating our previous classification findings in an independent cohort at clinical high risk for psychosis, drawn from the prospective FePsy (Früherkennung von Psychosen; early detection of psychoses) study. The same neuroanatomy-based pattern classification pipeline, consisting of a linear Support Vector Machine (SVM) with Recursive Feature Elimination (RFE), achieved 74% accuracy in predicting later onset of psychosis. The discriminative neuroanatomical pattern underlying this finding comprised many brain areas across all four lobes and the cerebellum. These results provide proof of concept that the early diagnosis of psychosis is feasible using neuroanatomy-based pattern recognition.
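
The pipeline structure (linear SVM with recursive feature elimination, evaluated by cross-validation) can be sketched with scikit-learn. The data below are random stand-ins, so accuracy will sit near chance; the feature count, number of selected features and step size are arbitrary assumptions, not the study's settings.

import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(34, 500))       # stand-in per-subject neuroanatomical features
y = rng.integers(0, 2, size=34)      # stand-in labels: 1 = later onset of psychosis

pipe = Pipeline([
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=50, step=0.1)),
    ("svm", SVC(kernel="linear")),
])
# Feature selection runs inside each CV fold, avoiding selection bias.
print("accuracy:", cross_val_score(pipe, X, y, cv=5).mean())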


Subject(s)
Brain/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Psychotic Disorders/diagnostic imaging , Support Vector Machine , Adult , Early Diagnosis , Family , Female , Genetic Predisposition to Disease , Humans , Male , Pattern Recognition, Automated/methods , Proof of Concept Study , Prospective Studies , Psychotic Disorders/drug therapy , Psychotic Disorders/genetics , Risk , Young Adult
11.
Schizophr Res ; 181: 6-12, 2017 03.
Article in English | MEDLINE | ID: mdl-27613509

ABSTRACT

To date, there are no reliable markers for predicting the onset of schizophrenia in individuals at high risk (HR). Substantial promise is, however, shown by a variety of pattern classification approaches to neuroimaging data. Here, we examined the predictive accuracy of a support vector machine (SVM) in later diagnosing schizophrenia, at a single-subject level, using a cohort of HR individuals drawn from multiply affected families and a combination of neuroanatomical, schizotypal and neurocognitive variables. Baseline structural magnetic resonance imaging (MRI), schizotypal and neurocognitive data from 17 HR subjects who subsequently developed schizophrenia, and from a matched group of 17 HR subjects who did not make the transition yet had psychotic symptoms, were included in the analysis. We employed recursive feature elimination (RFE) in a nested cross-validation scheme to identify the most significant predictors of disease transition and enhance diagnostic performance. Classification accuracy was 94% when a self-completed measure of schizotypy, a declarative memory test and structural MRI data were combined into a single learning algorithm; higher than when any quantitative measure was used alone. The discriminative neuroanatomical pattern involved gray matter volume differences in frontal, orbito-frontal and occipital lobe regions bilaterally, as well as parts of the superior and medial temporal lobes and cerebellar regions. Our findings suggest that an early SVM-based prediction of schizophrenia is possible and can be improved by combining schizotypal and neurocognitive features with neuroanatomical variables. However, our predictive model needs to be tested by classifying a new, independent HR cohort in order to estimate its validity.


Subject(s)
Brain/diagnostic imaging , Diagnosis, Computer-Assisted , Memory , Schizophrenia/diagnosis , Schizophrenic Psychology , Schizotypal Personality Disorder/psychology , Adolescent , Adult , Cognition , Family , Feasibility Studies , Female , Follow-Up Studies , Genetic Predisposition to Disease , Humans , Longitudinal Studies , Magnetic Resonance Imaging , Male , Multivariate Analysis , Neuropsychological Tests , Schizophrenia/classification , Schizophrenia/genetics , Support Vector Machine , Young Adult
12.
Int J Med Inform ; 86: 37-42, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26725693

ABSTRACT

PURPOSE: To present and assess clinical protocols and an associated automated workflow for pre-surgical functional magnetic resonance imaging (fMRI) in brain tumor patients.
METHODS: Protocols were validated using a single-subject reliability approach based on 10 healthy control subjects. Results from the automated workflow were evaluated in 9 patients with brain tumors by comparing fMRI results to direct electrical stimulation (DES) of the cortex.
RESULTS: Using a new approach to compute single-subject fMRI reliability in controls, we show that not all tasks are suitable in the clinical context, even if they show meaningful results at the group level. Comparison of the patients' fMRI results to DES showed good correspondence between the techniques (odds ratio 36).
CONCLUSION: Provided that validated and reliable fMRI protocols are used, fMRI can accurately delineate eloquent areas, thus providing an aid to medical decision-making regarding brain tumor surgery.


Subject(s)
Brain Mapping/methods , Brain Neoplasms/physiopathology , Brain/physiology , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Workflow , Adult , Aged , Brain Mapping/instrumentation , Brain Neoplasms/surgery , Case-Control Studies , Electric Stimulation , Female , Humans , Male
13.
Brain Struct Funct ; 221(6): 3223-35, 2016 07.
Article in English | MEDLINE | ID: mdl-26254904

ABSTRACT

Cognitive decline, especially the slowing of information processing speed, is associated with normal ageing. This decline may be due to brain cortico-cortical disconnection caused by age-related white matter deterioration. We present results from a large, narrow-age-range cohort of generally healthy, community-dwelling subjects in their seventies who also had their cognitive ability tested in youth (at age 11 years). We investigate associations between older-age brain white matter structure, several measures of information processing speed and childhood cognitive ability in 581 subjects. Analysis of diffusion tensor MRI data using Tract-based Spatial Statistics (TBSS) showed that all measures of information processing speed, as well as a general speed factor composed from these tests (g speed), were significantly associated with fractional anisotropy (FA) across the white matter skeleton rather than in specific tracts. Cognitive ability measured at age 11 years was not associated with older-age white matter FA, except for the g speed-independent components of several individual processing speed tests. These results indicate that quicker and more efficient information processing requires global connectivity in older age, and that associations between white matter FA and information processing speed (both individual test scores and g speed), unlike some other aspects of later-life brain structure, are generally not accounted for by cognitive ability measured in youth.


Subject(s)
Aging , Brain/anatomy & histology , Brain/physiology , Cognition/physiology , White Matter/anatomy & histology , White Matter/physiology , Aged , Diffusion Magnetic Resonance Imaging , Female , Humans , Intelligence/physiology , Male , Neuropsychological Tests
14.
IEEE Trans Pattern Anal Mach Intell ; 37(2): 243-55, 2015 Feb.
Article in English | MEDLINE | ID: mdl-26353239

ABSTRACT

We propose the supervised hierarchical Dirichlet process (sHDP), a nonparametric generative model for the joint distribution of a group of observations and a response variable directly associated with that whole group. We compare the sHDP with another leading method for regression on grouped data, the supervised latent Dirichlet allocation (sLDA) model, and evaluate our method on two real-world classification problems and two real-world regression problems. Bayesian nonparametric regression models based on the Dirichlet process, such as Dirichlet process-generalised linear models (DP-GLM), have previously been explored; these models allow flexibility in modelling nonlinear relationships. However, until now, hierarchical Dirichlet process (HDP) mixtures have not seen significant use in supervised problems with grouped data, since a straightforward application of the HDP to grouped data results in learnt clusters that are not predictive of the responses. The sHDP solves this problem by allowing clusters to be learnt jointly from the group structure and from the label assigned to each group.

15.
J Magn Reson Imaging ; 41(5): 1342-52, 2015 May.
Article in English | MEDLINE | ID: mdl-25044733

ABSTRACT

BACKGROUND: To investigate white matter structural connectivity changes associated with amyotrophic lateral sclerosis (ALS) using network analysis, and to compare the results with those obtained using standard voxel-based methods, specifically Tract-based Spatial Statistics (TBSS).
METHODS: MRI data were acquired from 30 patients with ALS and 30 age-matched healthy controls. For each subject, 85 grey matter regions (network nodes) were identified from high-resolution structural MRI, and network connections were formed from the white matter tracts generated by diffusion MRI and probabilistic tractography. Whole-brain networks were constructed using strong constraints on anatomical plausibility and a weighting reflecting tract-averaged fractional anisotropy (FA).
RESULTS: Analysis using Network-based Statistics (NBS), without a priori selected regions, identified an impaired motor-frontal-subcortical subnetwork (10 nodes and 12 bidirectional connections), consistent with upper motor neuron pathology, in the ALS group compared with the controls (P = 0.020). Reduced FA in three of the impaired network connections, which involved fibers of the corticospinal tract, correlated with the rate of disease progression (P ≤ 0.024). A novel network-tract comparison revealed that the connections involved in the affected network had a strong correspondence (mean overlap of 86.2%) with the white matter tracts identified as having reduced FA compared with the control group using TBSS.
CONCLUSION: These findings suggest that white matter degeneration in ALS is strongly linked to the motor cortex, and that impaired structural networks identified using NBS have a strong correspondence to affected white matter tracts identified using more conventional voxel-based methods.


Subject(s)
Amyotrophic Lateral Sclerosis/pathology , Diffusion Tensor Imaging/methods , Motor Cortex/pathology , Nerve Net/pathology , Prefrontal Cortex/pathology , Connectome/methods , Female , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Male , Middle Aged , Neural Pathways/pathology , Reproducibility of Results , Sensitivity and Specificity , White Matter/pathology
16.
Neuroimage ; 86: 231-43, 2014 Feb 01.
Article in English | MEDLINE | ID: mdl-24096127

ABSTRACT

Structural brain networks constructed from diffusion MRI (dMRI) and tractography have been demonstrated in healthy volunteers and more recently in various disorders affecting brain connectivity. However, few studies have addressed the reproducibility of the resulting networks. We measured the test-retest properties of such networks by varying several factors affecting network construction using ten healthy volunteers who underwent a dMRI protocol at 1.5T on two separate occasions. Each T1-weighted brain was parcellated into 84 regions-of-interest and network connections were identified using dMRI and two alternative tractography algorithms, two alternative seeding strategies, a white matter waypoint constraint and three alternative network weightings. In each case, four common graph-theoretic measures were obtained. Network properties were assessed both node-wise and per network in terms of the intraclass correlation coefficient (ICC) and by comparing within- and between-subject differences. Our findings suggest that test-retest performance was improved when: 1) seeding from white matter, rather than grey; and 2) using probabilistic tractography with a two-fibre model and sufficient streamlines, rather than deterministic tensor tractography. In terms of network weighting, a measure of streamline density produced better test-retest performance than tract-averaged diffusion anisotropy, although it remains unclear which is a more accurate representation of the underlying connectivity. For the best performing configuration, the global within-subject differences were between 3.2% and 11.9% with ICCs between 0.62 and 0.76. The mean nodal within-subject differences were between 5.2% and 24.2% with mean ICCs between 0.46 and 0.62. For 83.3% (70/84) of nodes, the within-subject differences were smaller than between-subject differences. Overall, these findings suggest that whilst current techniques produce networks capable of characterising the genuine between-subject differences in connectivity, future work must be undertaken to improve network reliability.
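
For the ICC values reported here, a minimal implementation of the two-way random-effects, absolute-agreement form ICC(2,1), a common choice for test-retest data, is sketched below. Which ICC variant the study used is an assumption on our part; the simulated data are illustrative only.

import numpy as np

def icc_2_1(data):
    # data: (n_subjects, k_sessions) array of a network measure, e.g. global efficiency.
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ms_r = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-subject MS
    ms_c = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-session MS
    resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0, keepdims=True) + grand
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))                 # residual MS
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Ten subjects, two sessions: repeatable subject signal plus session noise.
rng = np.random.default_rng(0)
sessions = rng.normal(0, 1, (10, 1)) + rng.normal(0, 0.5, (10, 2))
print("ICC(2,1):", icc_2_1(sessions))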


Subject(s)
Brain/cytology , Diffusion Tensor Imaging/methods , Image Interpretation, Computer-Assisted/methods , Nerve Fibers, Myelinated/ultrastructure , Nerve Net/cytology , Neurons/cytology , Female , Humans , Image Enhancement/methods , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity
17.
PLoS Comput Biol ; 9(7): e1003134, 2013.
Article in English | MEDLINE | ID: mdl-23874177

ABSTRACT

Several theories propose that the cortex implements an internal model to explain, predict, and learn about sensory data, but the nature of this model is unclear. One condition that could be highly informative here is Charles Bonnet syndrome (CBS), where loss of vision leads to complex, vivid visual hallucinations of objects, people, and whole scenes. CBS could be taken as indication that there is a generative model in the brain, specifically one that can synthesise rich, consistent visual representations even in the absence of actual visual input. The processes that lead to CBS are poorly understood. Here, we argue that a model recently introduced in machine learning, the deep Boltzmann machine (DBM), could capture the relevant aspects of (hypothetical) generative processing in the cortex. The DBM carries both the semantics of a probabilistic generative model and of a neural network. The latter allows us to model a concrete neural mechanism that could underlie CBS, namely, homeostatic regulation of neuronal activity. We show that homeostatic plasticity could serve to make the learnt internal model robust against e.g. degradation of sensory input, but overcompensate in the case of CBS, leading to hallucinations. We demonstrate how a wide range of features of CBS can be explained in the model and suggest a potential role for the neuromodulator acetylcholine. This work constitutes the first concrete computational model of CBS and the first application of the DBM as a model in computational neuroscience. Our results lend further credence to the hypothesis of a generative model in the brain.


Subject(s)
Hallucinations/physiopathology , Models, Biological , Homeostasis , Humans , Nerve Net , Probability , Syndrome
18.
Gigascience ; 2(1): 6, 2013 Apr 29.
Article in English | MEDLINE | ID: mdl-23628139

ABSTRACT

BACKGROUND: Since its inception over twenty years ago, functional magnetic resonance imaging (fMRI) has been used in numerous studies probing the neural underpinnings of human cognition. However, the between-session variance of many tasks used in fMRI remains understudied. Such information is especially important in the context of clinical applications. A test-retest dataset was acquired to validate fMRI tasks used in pre-surgical planning. In particular, five task-related fMRI time series (finger, foot and lip movement; overt verb generation; covert verb generation; overt word repetition; and landmark tasks) were used to investigate which protocols gave reliable single-subject results. Ten healthy participants in their fifties were scanned twice using an identical protocol 2-3 days apart. In addition to the fMRI sessions, high-angular-resolution diffusion tensor MRI (DTI) and high-resolution 3D T1-weighted volume scans were acquired.
FINDINGS: Reliability analyses of the fMRI data showed that the motor and language tasks were reliable at the subject level while the landmark task was not, despite all paradigms showing the expected activations at the group level. In addition, differences in reliability were found to be mostly related to the tasks themselves, while task-by-motion interaction was the major confounding factor.
CONCLUSIONS: Together, this dataset provides a unique opportunity to investigate the reliability of different fMRI tasks, as well as methods and algorithms used to analyze, de-noise and combine fMRI, DTI and structural T1-weighted volume data.

19.
Neuroimage ; 69: 231-43, 2013 Apr 01.
Article in English | MEDLINE | ID: mdl-23153967

ABSTRACT

While fMRI test-retest reliability has mainly been investigated from the point of view of group-level studies, here we present analyses and results for single-subject test-retest reliability. One important aspect of group-level reliability is that it depends not only on between-session (test-retest) variance but also on between-subject variance. This has partly led to a debate regarding which reliability metric to use and how different sources of noise contribute to between-session variance. Focusing on single-subject reliability allows between-session variance to be considered on its own. In this study, we measured test-retest reliability in four behavioural tasks (motor mapping, covert verb generation, overt word repetition, and a landmark identification task) to ensure generalisation of the results, and at three levels of data processing (time-series correlation, t-value variance, and overlap of thresholded maps) to understand how each step influences the others and how confounding factors influence reliability at each of these steps. The contributions of confounding factors (scanner noise, subject motion, and coregistration) were investigated using multiple regression and relative importance analyses at each step. Finally, to achieve a fuller picture of what constitutes a reliable task, we introduced a bootstrap technique of within- vs. between-subject variance. Our results show that (i) scanner noise and coregistration errors contribute little to between-session variance; (ii) subject motion (especially when correlated with the stimuli) can have detrimental effects on reliability; and (iii) different tasks lead to different reliability results. This suggests that between-session variance in fMRI is mostly caused by the variability of underlying cognitive processes and by motion correlated with the stimuli, rather than by technical limitations of data processing.


Subject(s)
Brain Mapping/methods , Brain/physiology , Magnetic Resonance Imaging/methods , Reproducibility of Results , Female , Humans , Image Interpretation, Computer-Assisted , Male , Middle Aged
20.
Front Hum Neurosci ; 6: 245, 2012.
Article in English | MEDLINE | ID: mdl-22936908

ABSTRACT

Single-subject fMRI has proved to be a useful tool for mapping functional areas in clinical procedures such as tumor resection. Using fMRI data, clinicians assess risk, and plan and execute such procedures based on thresholded statistical maps. However, because current thresholding methods were developed mainly in the context of cognitive neuroscience group studies, most single-subject fMRI maps are thresholded manually to satisfy specific criteria related to single-subject analyses. Here, we propose a new adaptive thresholding method which combines Gamma-Gaussian mixture modeling with topological thresholding to improve cluster delineation. In a series of simulations, we show that by adapting to the signal and noise properties, the new method performs well in terms of the total number of errors but also in terms of the trade-off between false-negative and false-positive cluster error rates. Similarly, simulations show that adaptive thresholding performs better than fixed thresholding in terms of over- and underestimation of the true activation border (i.e., it has higher spatial accuracy). Finally, through simulations and a motor test-retest study on 10 volunteer subjects, we show that adaptive thresholding improves reliability, mainly by accounting for the global signal variance. This in turn increases the likelihood that the true activation pattern can be determined, offering an automatic yet flexible way to threshold single-subject fMRI maps.
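
The first stage of such a method can be sketched as follows: fit a two-component mixture to the map's t-values, with a Gaussian capturing noise and a Gamma capturing activation, then pass voxels with high activation responsibility to the topological (cluster-level) step. This is a rough EM sketch under simplifying assumptions (moment-matched Gamma updates, no negative-activation component); it is not the authors' implementation.

import numpy as np
from scipy import stats

def fit_gamma_gaussian(t, n_iter=100):
    # EM for a two-component mixture on voxel t-values:
    # Gaussian for noise, Gamma (support t > 0) for activation.
    t = np.asarray(t, dtype=float)
    mu, sd, shape, scale, pi = 0.0, 1.0, 4.0, 1.0, 0.1
    for _ in range(n_iter):
        p_act = np.where(t > 0, stats.gamma.pdf(t, a=shape, scale=scale), 0.0)
        p_noise = stats.norm.pdf(t, mu, sd)
        r = pi * p_act / (pi * p_act + (1 - pi) * p_noise + 1e-12)  # E-step
        pi = r.mean()
        w = 1 - r
        mu = (w * t).sum() / w.sum()                                # M-step: Gaussian
        sd = np.sqrt((w * (t - mu) ** 2).sum() / w.sum())
        m = max((r * t).sum() / (r.sum() + 1e-12), 1e-3)            # M-step: Gamma,
        v = max((r * (t - m) ** 2).sum() / (r.sum() + 1e-12), 1e-6) # by moment matching
        shape, scale = m * m / v, v / m
    return r  # per-voxel probability of belonging to the activation component

Voxels whose responsibility r exceeds a chosen cut-off (e.g. 0.5) would then form the candidate clusters passed to the topological thresholding step.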
