1.
Article in English | MEDLINE | ID: mdl-38465203

ABSTRACT

Whole-head segmentation from magnetic resonance images (MRI) establishes the foundation for individualized computational models built with the finite element method (FEM), paving the way for computer-aided solutions in fields such as non-invasive brain stimulation. Most current automatic head segmentation tools are developed using data from healthy young adults and may therefore overlook the older population, which is more prone to age-related structural decline such as brain atrophy. In this work, we present a new deep learning method called GRACE, which stands for General, Rapid, And Comprehensive whole-hEad tissue segmentation. GRACE is trained and validated on a novel dataset of 177 meticulously reviewed, manually corrected MRI-derived reference segmentations. Each T1-weighted MRI volume is segmented into 11 tissue types: white matter, grey matter, eyes, cerebrospinal fluid, air, blood vessel, cancellous bone, cortical bone, skin, fat, and muscle. To the best of our knowledge, this is the largest manually corrected dataset to date in terms of the number of MRIs and segmented tissues. GRACE outperforms five freely available software tools and a traditional 3D U-Net on a five-tissue segmentation task, achieving an average Hausdorff distance of 0.21 versus 0.36 for the runner-up (lower is better). GRACE segments a whole-head MRI in about 3 seconds, while the fastest competing software tool takes about 3 minutes. In summary, GRACE segments a spectrum of tissue types from older adults' T1-weighted MRI scans with favorable accuracy and speed, and the trained model is optimized on older adult heads to enable high-precision modeling in age-related brain disorders. To support open science, the GRACE code and trained weights are openly available to the research community at https://github.com/lab-smile/GRACE.
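
As a concrete illustration of the evaluation metric cited above, the following minimal sketch computes a symmetric Hausdorff distance between two binary 3D segmentation masks with NumPy and SciPy. It is not the authors' evaluation pipeline; the mask shapes, the voxel-unit distances, and the random example data are assumptions for demonstration only.

```python
# Minimal sketch (not the GRACE evaluation code): symmetric Hausdorff distance
# between two binary 3D segmentation masks, the metric reported in the abstract.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two binary masks, in voxel units."""
    # Coordinates of foreground voxels in each mask
    pts_a = np.argwhere(mask_a > 0)
    pts_b = np.argwhere(mask_b > 0)
    # directed_hausdorff returns (distance, index_a, index_b); keep the distance
    d_ab = directed_hausdorff(pts_a, pts_b)[0]
    d_ba = directed_hausdorff(pts_b, pts_a)[0]
    return max(d_ab, d_ba)

if __name__ == "__main__":
    # Hypothetical masks: two small random binary volumes, for demonstration only
    rng = np.random.default_rng(0)
    pred = rng.integers(0, 2, size=(32, 32, 32))
    ref = rng.integers(0, 2, size=(32, 32, 32))
    print(f"Hausdorff distance: {hausdorff_distance(pred, ref):.2f} voxels")
```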

2.
Front Hum Neurosci ; 17: 1274114, 2023.
Article in English | MEDLINE | ID: mdl-38077189

ABSTRACT

Background: Person-specific computational models can estimate the transcranial direct current stimulation (tDCS) current dose delivered to the brain and predict treatment response. Artificially created electrode models derived from virtual 10-20 EEG measurements are typically included in these models as the current injection and removal sites. The present study directly compares current flow models generated with artificially placed electrodes ("artificial" electrode models) against those generated from real electrodes captured in structural MRI scans ("real" electrode models) of older adults.

Methods: A total of 16 individualized head models were derived from cognitively healthy older adults (mean age = 71.8 years) who participated in an in-scanner tDCS study with an F3-F4 montage. Visible tDCS electrodes captured within the MRI scans were segmented to create the "real" electrode models, whereas the "artificial" electrodes were generated in ROAST. Percentage differences in current density were computed in selected regions of interest (ROIs) chosen as examples of stimulation targets within an F3-F4 montage.

Main results: We found significant inverse correlations (p < 0.001) between median current density values and brain atrophy in both electrode pipelines, with slightly larger correlations in the artificial pipeline. The percent difference (PD) in electrode distance between the two models predicted the median current density values computed in the ROIs, gray matter, and white matter, with a significant correlation between electrode-distance PD and current density. The correlation between the PD of the contact areas and the computed median current densities in the brain was not significant.

Conclusions: This study demonstrates potential discrepancies between current density models generated with real versus artificial electrode placement when applying tDCS to an older adult cohort. Our findings strongly suggest that future clinical tDCS work should closely monitor and rigorously document electrode location during stimulation so that modeled montages match actual placement as closely as possible. Detailed physical electrode location data may provide more precise information and thus produce more robust tDCS modeling results.
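
To make the percent-difference and correlation analyses described above concrete, the sketch below computes a PD between paired measures from the two electrode models and correlates it with median current density. The PD formula (difference relative to the mean of the two values), the use of Pearson correlation, and all example numbers are assumptions for illustration, not the authors' exact analysis.

```python
# Minimal sketch (assumed definitions, not the study's pipeline): percent difference
# between "real" and "artificial" electrode-model measures, correlated with median
# current density across participants.
import numpy as np
from scipy.stats import pearsonr

def percent_difference(real: np.ndarray, artificial: np.ndarray) -> np.ndarray:
    """PD relative to the mean of the two values, in percent (assumed definition)."""
    return np.abs(real - artificial) / ((real + artificial) / 2.0) * 100.0

# Hypothetical per-participant data for 16 head models (values invented for illustration)
rng = np.random.default_rng(1)
dist_real = rng.uniform(50, 70, size=16)                  # electrode distance, real model (mm)
dist_artificial = dist_real + rng.normal(0, 3, size=16)   # artificial-model counterpart (mm)
median_current_density = rng.uniform(0.1, 0.3, size=16)   # median current density in an ROI (A/m^2)

pd_distance = percent_difference(dist_real, dist_artificial)
r, p = pearsonr(pd_distance, median_current_density)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```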

4.
Multisens Res ; 36(3): 289-311, 2023 02 23.
Article in English | MEDLINE | ID: mdl-37080555

ABSTRACT

In multisensory environments, our brains perform causal inference to estimate which sources produce specific sensory signals. Decades of research have revealed the dynamics that underlie this process of causal inference for multisensory (audiovisual) signals, including how temporal, spatial, and semantic relationships between stimuli influence the brain's decision to integrate or segregate. However, very little is presently known about the relationship between metacognition and multisensory integration, or about the characteristics of perceptual confidence for audiovisual signals. In this investigation, we ask two questions about the relationship between metacognition and multisensory causal inference: are observers' confidence ratings for judgments about Congruent, McGurk, and Rarely Integrated speech similar or different? And do confidence judgments distinguish between these three scenarios when the perceived syllable is identical? To answer these questions, 92 online participants completed experiments in which, on each trial, they reported which syllable they perceived and rated their confidence in that judgment. Results from Experiment 1 showed that confidence ratings were quite similar across Congruent, McGurk, and Rarely Integrated speech. In Experiment 2, when the perceived syllable for congruent and McGurk videos was matched, confidence scores were higher for congruent stimuli than for McGurk stimuli. In Experiment 3, when the perceived syllable was matched between McGurk and Rarely Integrated stimuli, confidence judgments were similar between the two conditions. Together, these results provide evidence of the capacities and limitations of metacognition in distinguishing between different sources of multisensory information.
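
As a rough illustration of the kind of comparison described in Experiment 2, the sketch below contrasts per-participant confidence between matched congruent and McGurk trials with a paired t-test. The data layout, rating scale, simulated values, and choice of test are assumptions; the original study's analysis may differ.

```python
# Minimal sketch (illustrative only): paired comparison of mean confidence ratings
# between matched congruent and McGurk conditions.
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-participant mean confidence ratings (1-5 scale), 92 participants
rng = np.random.default_rng(2)
confidence_congruent = rng.uniform(3.5, 4.5, size=92)
confidence_mcgurk = confidence_congruent - rng.uniform(0.0, 0.8, size=92)

# Paired t-test across participants for the two matched conditions
t_stat, p_value = ttest_rel(confidence_congruent, confidence_mcgurk)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```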


Subject(s)
Metacognition, Speech Perception, Humans, Visual Perception, Speech, Auditory Perception, Acoustic Stimulation/methods, Photic Stimulation/methods