1.
Front Neurosci ; 18: 1396101, 2024.
Article in English | MEDLINE | ID: mdl-38745932

ABSTRACT

In the context of aging and age-associated neurodegenerative disorders, the brain's extracellular matrix (ECM) serves as a critical regulator of neuronal health and cognitive function. Within the extracellular space, proteoglycans and their glycosaminoglycan attachments play essential roles in forming, stabilizing, and protecting neural circuits throughout neurodevelopment and adulthood. Recent studies in rodents reveal that chondroitin sulfate glycosaminoglycan (CS-GAG)-containing perineuronal nets (PNNs) exhibit both structural and compositional differences throughout the brain. While animal studies are illuminating, additional research is required to translate these interregional PNN/CS-GAG variations to human brain tissue. In this perspective article, we first investigate the translational potential of interregional CS-GAG variation across species as a source of novel targets for region-specific therapeutic development. We specifically focus on the observation that alterations in brain PNN-associated CS-GAGs have been linked with the progression of Alzheimer's disease (AD) neuropathology in humans, but that these changes have not been fully recapitulated in rodent models of the disease. Second, we investigate whether AD-associated shifts in CS-GAGs in humans may depend on region-specific baseline differences in CS-GAG sulfation patterning. The current findings begin to disentangle the intricate relationships among the interregional differences in brain PNN/CS-GAG matrices across species, while emphasizing the need to better understand the close relationship between dementia and changes in brain CS-GAG sulfation patterns in patients with AD and related dementias.

2.
bioRxiv ; 2023 May 12.
Article in English | MEDLINE | ID: mdl-37214905

ABSTRACT

Local field potentials (LFPs) are low-frequency extracellular voltage fluctuations thought to arise primarily from synaptic activity. However, unlike highly localized neuronal spiking, the LFP is spatially less specific: the LFP measured at one location is not entirely generated there, because far-field contributions are passively conducted across volumes of neural tissue. We sought to quantify how much information within the locally generated, near-field low-frequency activity (nfLFP) is masked by volume-conducted far-field signals. To do so, we measured laminar neural activity in the primary visual cortex (V1) of monkeys viewing sequences of multifeatured stimuli. We compared the information content of the regular LFP with that of the nfLFP, which was mathematically stripped of volume-conducted far-field contributions. Information content was estimated by decoding stimulus properties from neural responses via spatiotemporal multivariate pattern analysis. Volume-conducted information differed from locally generated information in two important ways: (1) for stimulus features relevant to V1 processing (orientation and eye-of-origin), the nfLFP contained more information; (2) in contrast, the volume-conducted signal was more informative regarding temporal context (relative stimulus position in a sequence), a signal likely to be coming from elsewhere. Moreover, the LFP and nfLFP differed both spectrally and spatially, urging caution in interpreting individual frequency bands and/or laminar patterns of the LFP. Most importantly, we found that the population spiking of local neurons was less informative than either the LFP or the nfLFP, with the nfLFP containing most of the relevant information regarding local stimulus processing. These findings suggest that the optimal way to read out local computational processing from neural activity is to decode the local contributions to the LFP, with significant information loss hampering both the regular LFP and local spiking.
Author's Contributions: Conceptualization, D.A.T., J.A.W., and A.M.; Data Collection, J.A.W., M.A.C., K.D.; Formal Analysis, D.A.T. and J.A.W.; Data Visualization, D.A.T. and J.A.W.; Original Draft, D.A.T., J.A.W., and A.M.; Revisions and Final Draft, D.A.T., J.A.W., M.A.C., K.D., M.T.W., A.M.B., and A.M.
Competing Interests: The authors declare no conflicts of interest.
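The decoding logic described in this abstract lends itself to a compact illustration. The abstract does not specify how the far-field contribution was removed; the sketch below assumes a current-source-density (CSD)-style second spatial derivative as a stand-in for the nfLFP derivation, and uses synthetic laminar data with scikit-learn. It is a minimal sketch under those assumptions, not the authors' pipeline.

```python
# Minimal sketch: derive a near-field proxy (nfLFP) from laminar LFP via a
# CSD-like second spatial derivative (an assumption; the abstract only says
# "mathematically stripped"), then compare decodability of LFP vs. nfLFP
# with spatiotemporal multivariate pattern analysis.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic laminar data: trials x channels x time samples.
n_trials, n_chan, n_time = 200, 24, 100
lfp = rng.standard_normal((n_trials, n_chan, n_time))
labels = rng.integers(0, 2, n_trials)          # e.g., eye-of-origin
lfp[labels == 1, 8:16, 40:60] += 0.5           # injected "local" signal

def csd(lfp, spacing_mm=0.1):
    """Discrete second spatial derivative along the channel axis
    (interior contacts only), a standard near-field estimate."""
    return (lfp[:, :-2, :] - 2 * lfp[:, 1:-1, :] + lfp[:, 2:, :]) / spacing_mm**2

nf = csd(lfp)  # proxy for locally generated, near-field activity

def decode(data, labels):
    """Cross-validated accuracy from spatiotemporal response patterns."""
    X = data.reshape(len(data), -1)            # flatten channels x time
    return cross_val_score(LinearSVC(dual=False), X, labels, cv=5).mean()

print("LFP decoding accuracy:  ", decode(lfp, labels))
print("nfLFP decoding accuracy:", decode(nf, labels))
```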

3.
J Assoc Res Otolaryngol ; 22(4): 365-386, 2021 07.
Article in English | MEDLINE | ID: mdl-34014416

ABSTRACT

In a naturalistic environment, auditory cues are often accompanied by information from other senses, which can be redundant with or complementary to the auditory information. Although multisensory interactions that derive from this combination of information and shape auditory function are seen across all sensory modalities, our greatest body of knowledge to date centers on how vision influences audition. In this review, we attempt to capture the current state of our understanding of this topic. Following a general introduction, the review is divided into five sections. The first section reviews the psychophysical evidence in humans regarding vision's influence on audition, distinguishing between vision's ability to enhance versus alter auditory performance and perception. Three examples are then described that highlight vision's ability to modulate auditory processes: spatial ventriloquism, cross-modal dynamic capture, and the McGurk effect. The final part of this section discusses models that have been built from the available psychophysical data and that seek to provide greater mechanistic insight into how vision can impact audition; a generic example of such a model appears after this abstract. The second section reviews the extant neuroimaging and far-field imaging work on this topic, with a strong emphasis on the roles of feedforward and feedback processes, on imaging insights into the causal nature of audiovisual interactions, and on the limitations of current imaging-based approaches. These limitations point to a greater need for machine-learning-based decoding approaches to understand how auditory representations are shaped by vision. The third section reviews the wealth of neuroanatomical and neurophysiological data from animal models that highlights audiovisual interactions at the neuronal and circuit level in both subcortical and cortical structures, and speaks to the functional significance of audiovisual interactions for two critically important facets of auditory perception: scene analysis and communication. The fourth section presents current evidence for alterations in audiovisual processes in three clinical conditions: autism, schizophrenia, and sensorineural hearing loss. These changes in audiovisual interactions are postulated to have cascading effects on higher-order domains of dysfunction in these conditions. The final section highlights ongoing work that seeks to leverage our knowledge of audiovisual interactions to develop better remediation approaches for these sensory-based disorders, founded on concepts of perceptual plasticity in which vision has been shown to have the capacity to facilitate auditory learning.
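As a generic illustration of the kind of psychophysical model referenced above (not necessarily any specific model covered in the review), the following sketch implements maximum-likelihood cue combination, a canonical account of spatial ventriloquism in which the fused estimate weights each cue by its reliability. All numbers are hypothetical.

```python
# Hedged sketch of maximum-likelihood (reliability-weighted) cue combination.
# Vision is usually more precise in space, so it dominates the fused
# location estimate, reproducing the ventriloquist effect.
import numpy as np

def ml_fusion(x_a, sigma_a, x_v, sigma_v):
    """Reliability-weighted average of auditory and visual location cues."""
    w_v = sigma_a**2 / (sigma_a**2 + sigma_v**2)   # visual weight
    x_av = w_v * x_v + (1 - w_v) * x_a             # fused location estimate
    sigma_av = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return x_av, sigma_av

# Illustrative values: sound at 10 deg (noisy), light at 0 deg (precise).
x_av, sigma_av = ml_fusion(x_a=10.0, sigma_a=8.0, x_v=0.0, sigma_v=2.0)
print(f"fused location: {x_av:.1f} deg, fused SD: {sigma_av:.1f} deg")
```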


Subject(s)
Auditory Perception , Visual Perception , Acoustic Stimulation , Animals , Hearing , Humans , Photic Stimulation
4.
Front Syst Neurosci ; 14: 600601, 2020.
Article in English | MEDLINE | ID: mdl-33328912

ABSTRACT

Most of the mammalian neocortex exhibits a highly similar anatomical structure, consisting of a granular cell layer between superficial and deep layers. Even so, different cortical areas process different information. Taken together, this suggests that cortex features a canonical functional microcircuit that supports region-specific information processing. For example, the primate primary visual cortex (V1) combines the two eyes' signals, extracts stimulus orientation, and integrates contextual information such as visual stimulation history. These processes co-occur during the same laminar activation sequence triggered by the onset of visual stimuli. Yet we still know little about the laminar processing differences specific to each of these types of stimulus information. Univariate analysis techniques have provided great insight by examining one electrode at a time or by studying average responses across multiple electrodes. Here we focus instead on multivariate statistics to examine response patterns across electrodes. Specifically, we applied multivariate pattern analysis (MVPA) to linear multielectrode array recordings of laminar spiking responses to decode information regarding the eye-of-origin, stimulus orientation, and stimulus repetition. MVPA differs from conventional univariate approaches in that it examines patterns of neural activity across simultaneously recorded electrode sites. We asked whether this added dimensionality could reveal population-level neural processes that are challenging to detect when measuring brain activity without the context of neighboring recording sites. We found that eye-of-origin information was decodable for the entire duration of stimulus presentation but diminished in the deepest layers of V1. Conversely, orientation information was transient and equally pronounced across all layers. More importantly, using time-resolved MVPA, we were able to evaluate laminar response properties beyond those yielded by univariate analyses. Specifically, we performed a time generalization analysis by training a classifier at one point of the neural response and testing its performance throughout the remaining period of stimulation. Using this technique, we demonstrate repeating (reverberating) patterns of neural activity that have not previously been observed using standard univariate approaches.
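For readers unfamiliar with time generalization, a minimal sketch of the analysis follows: a classifier trained at one time point is tested at every other time point, so that off-diagonal structure in the resulting matrix reveals recurring activity patterns. The data are synthetic stand-ins for laminar spiking; the channel counts, timings, and injected effects are illustrative assumptions, not the authors' parameters.

```python
# Hedged sketch of time-generalization MVPA on synthetic laminar data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(1)
n_trials, n_chan, n_time = 160, 24, 60
X = rng.standard_normal((n_trials, n_chan, n_time))
y = rng.integers(0, 2, n_trials)                   # e.g., stimulus orientation
X[y == 1, :12, 10:20] += 0.8                       # pattern in one epoch...
X[y == 1, :12, 40:50] += 0.8                       # ...recurring later

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
gen = np.zeros((n_time, n_time))                   # train-time x test-time

for train_idx, test_idx in cv.split(X[:, :, 0], y):
    for t_train in range(n_time):
        clf = LinearDiscriminantAnalysis()
        clf.fit(X[train_idx, :, t_train], y[train_idx])
        for t_test in range(n_time):
            gen[t_train, t_test] += clf.score(X[test_idx, :, t_test], y[test_idx])

gen /= cv.get_n_splits()
print("mean accuracy on the diagonal (matched times):", gen.diagonal().mean())
print("off-diagonal generalization (recurring pattern):", gen[15, 45])
```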

5.
J Neurosci ; 40(29): 5604-5615, 2020 07 15.
Article in English | MEDLINE | ID: mdl-32499378

ABSTRACT

Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction among objects is between those that are animate and those that are inanimate. In addition, many objects are specified by more than a single sense, yet how multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of male and female human EEG signals, we show enhanced encoding of audiovisual objects when compared with their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantages for animate objects were not evident under multisensory conditions. This was due to a greater neural enhancement of inanimate objects, which are more weakly encoded under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that the enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between the neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction time differences between multisensory and unisensory presentations during a Go/No-Go animate categorization task. Links between neural activity and behavioral measures were most evident at 100-200 ms and 350-500 ms after stimulus presentation, time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.
SIGNIFICANCE STATEMENT: Our world is filled with ever-changing sensory information that we are able to seamlessly transform into a coherent and meaningful perceptual experience. We accomplish this feat by combining different stimulus features into objects. However, despite the fact that these features span multiple senses, little is known about how the brain combines the various forms of sensory information into object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that nonliving (i.e., inanimate) objects, which are more difficult to process with one sense alone, benefited the most from engaging multiple senses.
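A minimal sketch of the representational similarity analysis (RSA) logic follows: a neural representational dissimilarity matrix (RDM) is built from pairwise exemplar decoding of EEG patterns and compared against a model RDM coding animacy. The exemplar counts, channel counts, and effect sizes are hypothetical, and the sketch covers only one time point rather than the full time-resolved analysis.

```python
# Hedged RSA sketch on synthetic EEG patterns at one post-stimulus time point.
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_exemplars, n_trials_per, n_chan = 8, 40, 32
animate = np.array([1, 1, 1, 1, 0, 0, 0, 0])         # first half animate

proto = rng.standard_normal((n_exemplars, n_chan))   # exemplar prototypes
proto[animate == 1] += rng.standard_normal(n_chan)   # shared animacy component
trials = proto[:, None, :] + 0.9 * rng.standard_normal(
    (n_exemplars, n_trials_per, n_chan))

# Neural RDM: pairwise cross-validated decoding accuracy.
rdm = np.zeros((n_exemplars, n_exemplars))
for i in range(n_exemplars):
    for j in range(i + 1, n_exemplars):
        X = np.vstack([trials[i], trials[j]])
        y = np.repeat([0, 1], n_trials_per)
        acc = cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean()
        rdm[i, j] = rdm[j, i] = acc

# Model RDM: 1 where exemplars differ in animacy, 0 otherwise.
model = (animate[:, None] != animate[None, :]).astype(float)

iu = np.triu_indices(n_exemplars, k=1)               # upper-triangle entries
rho, p = spearmanr(rdm[iu], model[iu])
print(f"neural-model RDM correlation: rho={rho:.2f}, p={p:.3f}")
```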


Subject(s)
Auditory Perception/physiology , Brain/physiology , Recognition, Psychology/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography , Female , Humans , Male , Photic Stimulation , Young Adult
6.
J Vis ; 19(14): 1, 2019 12 02.
Article in English | MEDLINE | ID: mdl-31790554

ABSTRACT

Continuous flash suppression (CFS) entails presentation of a stationary target to one eye and an animated sequence of arrays of geometric figures, the mask, to the other eye. The prototypical CFS sequence comprises different-sized rectangles of various colors, dubbed Mondrians. Presented as a rapid, changing sequence to one eye, Mondrians or other similarly constructed textured arrays can abolish awareness of the target viewed by the other eye for many seconds at a time, producing target suppression durations much longer than those associated with conventional binocular rivalry. We have devised an animation technique that replaces meaningless Mondrian figures with recognizable visual objects and scenes as inducers of CFS, allowing explicit manipulation of the visual semantic content of those masks. By converting each image of these CFS sequences into successively presented objects or scenes, each composed of many small, circular patches of color, we create pointillist CFS sequences closely matched in their spatio-temporal power spectra. Randomly rearranging the positions of the pointillist patches scrambles the images so that they are no longer recognizable. CFS sequences comprising a stream of different objects produce more robust interocular suppression than do sequences comprising a stream of different scenes, even when the two categories of CFS are matched in root-mean-square contrast and spatial frequency content. Factors promoting these differences in CFS potency could range from low-level, image-based features to high-level factors including attention and recognizability. At the same time, object- and scene-based CFS sequences, when themselves suppressed from awareness, do not differ in their durations of suppression, implying that the semantic content of the images comprising CFS sequences is not registered during suppression. The pointillist technique itself offers a potentially useful means for examining the impact of high-level image meaning on aspects of visual perception other than interocular suppression.
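A rough sketch of a pointillist conversion of this kind follows, assuming PIL and NumPy. The patch count, patch radius, and color-sampling scheme are illustrative guesses rather than the authors' exact procedure, and the scrambled control is produced by shuffling which sampled color lands at which patch location.

```python
# Hedged sketch: re-render an image as small circular color patches, with an
# optional scrambled (unrecognizable) control that preserves patch statistics.
import numpy as np
from PIL import Image, ImageDraw

def pointillize(img, n_patches=3000, radius=4, scramble=False, seed=0):
    """Render an image as circular color patches; optionally shuffle which
    color is assigned to which patch location (scrambled control)."""
    rng = np.random.default_rng(seed)
    arr = np.asarray(img.convert("RGB"))
    h, w = arr.shape[:2]
    xs = rng.integers(radius, w - radius, n_patches)
    ys = rng.integers(radius, h - radius, n_patches)
    # Mean color of a small neighborhood around each patch center.
    colors = [tuple(int(v) for v in
                    arr[y - radius:y + radius, x - radius:x + radius]
                    .reshape(-1, 3).mean(axis=0))
              for x, y in zip(xs, ys)]
    if scramble:  # same colors, randomly reassigned to patch locations
        rng.shuffle(colors)
    out = Image.new("RGB", (w, h), (128, 128, 128))
    draw = ImageDraw.Draw(out)
    for x, y, c in zip(xs, ys, colors):
        draw.ellipse((x - radius, y - radius, x + radius, y + radius), fill=c)
    return out

# Usage (hypothetical file name):
# intact = pointillize(Image.open("scene.jpg"))
# scrambled = pointillize(Image.open("scene.jpg"), scramble=True)
```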


Subject(s)
Photic Stimulation/methods , Vision Disparity/physiology , Visual Perception/physiology , Adaptation, Physiological , Adult , Female , Humans , Male , Sensory Thresholds
7.
PLoS One ; 10(12): e0143172, 2015.
Article in English | MEDLINE | ID: mdl-26625264

ABSTRACT

PURPOSE: A novel phantom for image quality testing of functional magnetic resonance imaging (fMRI) scans is described. METHODS: The cylindrical, rotatable, ~4.5 L phantom, with eight wedge-shaped compartments, is used to simulate rest and activated states. The compartments contain NiCl2-doped agar gel with alternating agar concentrations (1.4%, 1.6%) to produce T1 and T2 values approximating those of brain grey matter. The Jaccard index was used to compare image distortions for echo planar imaging (EPI) and gradient recalled echo (GRE) scans. Contrast-to-noise ratio (CNR) was compared across the imaging volume for GRE and EPI. RESULTS: The mean T2 values for the two agar concentrations were 106.5±4.8 and 94.5±4.7 ms, and the mean T1 values were 1500±40 and 1485±30 ms, respectively. The Jaccard index for GRE was generally higher than for EPI (0.95 versus 0.8). The CNR varied from 20 to 50 across the slices and echo times used for the EPI scans, and from 20 to 40 across the slices for the GRE scans. The phantom provided a reproducible CNR over 25 days. CONCLUSIONS: The phantom provides a quantifiable signal change over a head-size imaging volume with EPI and GRE sequences, which was used for image quality assessment.
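Both metrics named in this abstract are straightforward to compute. The sketch below shows hedged implementations of the Jaccard index (intersection over union of binarized masks, e.g., distorted EPI geometry versus a GRE reference) and the CNR between the two compartment types; the threshold, the noise definition, and all image values are illustrative assumptions.

```python
# Hedged sketch of the two image-quality metrics on synthetic data.
import numpy as np

def jaccard(mask_a, mask_b):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union

def cnr(roi_a, roi_b, noise_sd):
    """Contrast-to-noise ratio between two compartment ROIs."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_sd

rng = np.random.default_rng(3)
gre = rng.normal(100, 5, (64, 64))
gre[16:48, 16:48] += 200                      # "phantom" region
epi = np.roll(gre, 2, axis=1)                 # crude stand-in for distortion

print("Jaccard(GRE, EPI):", round(jaccard(gre > 150, epi > 150), 3))
roi_a = rng.normal(300, 5, 500)               # e.g., 1.4 % agar compartment
roi_b = rng.normal(260, 5, 500)               # e.g., 1.6 % agar compartment
print("CNR:", round(cnr(roi_a, roi_b, noise_sd=5.0), 1))
```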


Subject(s)
Magnetic Resonance Imaging/instrumentation , Phantoms, Imaging , Rotation , Artifacts , Quality Control , Signal-To-Noise Ratio , Time Factors
8.
PLoS Comput Biol ; 11(6): e1004316, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26107634

ABSTRACT

Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or "brain decoding", methods to magnetoencephalography (MEG) data has allowed researchers to characterize, at high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain's transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time-varying neural signal, the optimal time to "read out" category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary in activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behavior, and they support the hypothesis that the structure of the representation of objects in the visual system is partially constitutive of the decision process in recognition.
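A minimal sketch of a distance-to-bound analysis of this kind follows: at the time of peak decodability, each exemplar's distance from a linear decision boundary in activation space is correlated with categorization reaction time, with the hypothesis predicting a negative correlation. The classifier choice, the per-exemplar averaging, and the simulated reaction times are assumptions for illustration only.

```python
# Hedged sketch of the distance-to-bound analysis on synthetic MEG patterns.
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_exemplars, n_trials_per, n_sensors = 24, 30, 40
y_exemplar = np.repeat(np.arange(n_exemplars), n_trials_per)
y_category = (y_exemplar < 12).astype(int)       # e.g., animate vs. inanimate

# Synthetic sensor patterns at the peak-decoding time point: exemplars
# differ in how far they sit from the category boundary ("typicality").
offsets = rng.uniform(0.2, 1.5, n_exemplars)
X = rng.standard_normal((len(y_exemplar), n_sensors))
X[:, 0] += np.where(y_category == 1, 1, -1) * offsets[y_exemplar]

clf = LinearSVC(dual=False).fit(X, y_category)
dist = clf.decision_function(X)                  # signed distance to boundary

# Mean absolute distance per exemplar vs. simulated RTs (farther = faster).
mean_dist = np.array([np.abs(dist[y_exemplar == e]).mean()
                      for e in range(n_exemplars)])
rts = 600 - 80 * offsets + rng.normal(0, 10, n_exemplars)  # hypothetical RTs

rho, p = spearmanr(mean_dist, rts)
print(f"distance-to-bound vs RT: rho={rho:.2f} (expect negative), p={p:.3g}")
```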


Subject(s)
Attention/physiology , Reaction Time/physiology , Visual Cortex/physiology , Adult , Computational Biology , Decision Making/physiology , Humans , Magnetoencephalography , Male , Young Adult
9.
J Vis ; 13(10), 2013 Aug 01.
Article in English | MEDLINE | ID: mdl-23908380

ABSTRACT

Human object recognition is remarkably efficient. In recent years, significant advances have been made in our understanding of how the brain represents visual objects and organizes them into categories. Recent studies using pattern analysis methods have characterized a representational space of objects in human and primate inferior temporal cortex in which object exemplars are discriminable and cluster according to category (e.g., faces and bodies). In the present study we examined how category structure in object representations emerges within the first 1000 ms of visual processing. Participants viewed 24 object exemplars with a planned categorical structure comprising four levels, ranging from highly specific (individual exemplars) to highly abstract (animate vs. inanimate), while their brain activity was recorded with magnetoencephalography (MEG). We used a sliding time window decoding approach to decode, on a moment-to-moment basis, which exemplar and which of the exemplar's categories participants were viewing. We found that exemplar and category membership could be decoded from the neuromagnetic recordings shortly after stimulus onset (<100 ms), with peak decodability following thereafter. Latencies for peak decodability varied systematically with the level of category abstraction, with more abstract categories emerging later, indicating that the brain constructs category representations hierarchically. In addition, we examined the stationarity of the activity patterns in the brain that encode object category information and show that these patterns vary over time, suggesting that the brain might use flexible, time-varying codes to represent visual object categories.
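A minimal sketch of sliding-time-window decoding at two levels of category abstraction follows, using synthetic data: accuracy is computed in each short window, and the latency of peak decodability is compared across levels. Timings, window length, and the placement of the injected effects are illustrative assumptions, not the study's parameters.

```python
# Hedged sketch of sliding-window decoding at two abstraction levels.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_sensors, n_time = 240, 30, 80          # samples after stimulus onset
X = rng.standard_normal((n_trials, n_sensors, n_time))
exemplar = np.repeat(np.arange(24), 10)            # 24 exemplars, 10 trials each
rng.shuffle(exemplar)
animacy = (exemplar < 12).astype(int)              # most abstract level

X[:, :5, 15:30] += exemplar[:, None, None] * 0.05  # earlier exemplar-level signal
X[animacy == 1, 5:10, 30:50] += 0.6                # later, more abstract signal

def sliding_decode(X, y, win=5):
    """Cross-validated decoding accuracy in a window sliding over time."""
    accs = []
    for t in range(X.shape[2] - win):
        feats = X[:, :, t:t + win].reshape(len(X), -1)
        accs.append(cross_val_score(GaussianNB(), feats, y, cv=5).mean())
    return np.array(accs)

# Peak latency should be earlier for exemplar identity than for animacy here.
for name, y in [("exemplar", exemplar), ("animacy", animacy)]:
    acc = sliding_decode(X, y)
    print(f"{name}: peak accuracy {acc.max():.2f} at window {acc.argmax()}")
```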


Subject(s)
Cerebral Cortex/physiology , Pattern Recognition, Visual/physiology , Vision, Ocular/physiology , Adult , Brain Mapping , Female , Humans , Magnetoencephalography , Male , Photic Stimulation/methods , Time Factors , Visual Pathways/physiology , Young Adult