Results 1 - 9 of 9
1.
Light Sci Appl ; 12(1): 270, 2023 Nov 13.
Article in English | MEDLINE | ID: mdl-37953294

ABSTRACT

The resolution and contrast of microscope imaging are often affected by aberrations introduced by imperfect optical systems and inhomogeneous refractive structures in specimens. Adaptive optics (AO) compensates for these aberrations and restores diffraction-limited performance. A wide range of AO solutions have been introduced, often tailored to a specific microscope type or application. Until now, a universal AO solution - one that can be readily transferred between microscope modalities - has not been deployed. We propose versatile and fast aberration correction using a physics-based, machine-learning-assisted wavefront-sensorless AO control (MLAO) method. Unlike previous ML methods, we used a specially constructed neural network (NN) architecture, designed using physical understanding of general microscope image formation, that was embedded in the control loop of different microscope systems. As a result, not only is the NN orders of magnitude simpler than previous NN methods, but the concept is translatable across microscope modalities. We demonstrated the method on a two-photon, a three-photon and a widefield three-dimensional (3D) structured illumination microscope. Results showed that the method outperformed commonly used modal-based sensorless AO methods. We also showed that our ML-based method was robust under a range of challenging imaging conditions, such as 3D sample structures, specimen motion, low signal-to-noise ratio and activity-induced fluorescence fluctuations. Moreover, because the bespoke architecture encapsulated physical understanding of the imaging process, the internal NN configuration was no longer a "black box" but provided physical insights into its internal workings, which could influence future designs.
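The modal-based sensorless AO baseline that the MLAO method is compared against can be sketched as follows. This is a minimal illustration of the generic approach (probe each aberration mode with a few bias amplitudes, score an image-quality metric, and set the mode to the parabolic optimum), not the authors' code; function and parameter names are assumptions.

```python
import numpy as np

def quadratic_peak(biases, metrics):
    """Fit a parabola to (bias, metric) samples and return the bias
    that maximizes the fitted curve."""
    a, b, _ = np.polyfit(biases, metrics, 2)
    return -b / (2 * a)

def modal_sensorless_ao(measure_metric, n_modes=5, biases=(-1.0, 0.0, 1.0)):
    """One pass of conventional modal sensorless AO. For each mode,
    apply a few trial biases, measure image quality, and correct to
    the estimated optimum. `measure_metric(coeffs)` is assumed to
    return a scalar image-quality metric for a coefficient vector."""
    coeffs = np.zeros(n_modes)
    for m in range(n_modes):
        metrics = []
        for b in biases:
            trial = coeffs.copy()
            trial[m] += b          # bias one mode at a time
            metrics.append(measure_metric(trial))
        coeffs[m] += quadratic_peak(np.array(biases), np.array(metrics))
    return coeffs
```

Note that each mode costs several extra exposures, which is one reason the abstract emphasizes speed: an ML controller can estimate many modes from far fewer images.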

2.
Brain ; 145(5): 1610-1623, 2022 06 03.
Article in English | MEDLINE | ID: mdl-35348621

ABSTRACT

The claustrum is the most densely interconnected region in the human brain. Despite accumulating data from clinical and experimental studies, the functional role of the claustrum remains unknown. Here, we systematically review claustrum lesion studies and discuss their functional implications. Claustral lesions are associated with an array of signs and symptoms, including changes in cognitive, perceptual and motor abilities; electrical activity; mental state; and sleep. The wide range of symptoms observed following claustral lesions does not provide compelling evidence for prominent current theories of claustrum function such as multisensory integration or salience computation. Conversely, the lesion studies support the hypothesis that the claustrum regulates cortical excitability. We argue that the claustrum is connected to, or part of, multiple brain networks that perform both fundamental and higher cognitive functions. Its role as a multifunctional node in numerous networks may explain the manifold effects of claustrum damage on brain and behaviour.


Subject(s)
Claustrum , Animals , Basal Ganglia , Humans , Pain , Perception , Sleep
3.
eNeuro ; 9(2)2022.
Article in English | MEDLINE | ID: mdl-35168951

ABSTRACT

In a competitive game involving an animal and an opponent, the outcome is contingent on the choices of both players. To succeed, the animal must continually adapt to competitive pressure, or else risk being exploited and losing out on rewards. In this study, we demonstrate that head-fixed male mice can be trained to play the iterative competitive game "matching pennies" against a virtual computer opponent. We find that the animals' performance is well described by a hybrid computational model that includes Q-learning and choice kernels. Comparing matching pennies with a non-competitive two-armed bandit task, we show that the tasks encourage animals to operate in different regimes of reinforcement learning. To understand the involvement of neuromodulatory mechanisms, we measure fluctuations in pupil size and use multiple linear regression to relate the trial-by-trial transient pupil responses to decision-related variables. The analysis reveals that pupil responses are modulated by observable variables, including choice and outcome, as well as latent variables for value updating, but not action selection. Collectively, these results establish a paradigm for studying competitive decision-making in head-fixed mice and provide insights into the role of arousal-linked neuromodulation in the decision process.


Subject(s)
Decision Making , Pupil , Animals , Decision Making/physiology , Learning/physiology , Male , Mice , Pupil/physiology , Reinforcement, Psychology , Reward
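The hybrid model named in the abstract combines value learning with a tendency to repeat (or avoid) recent choices. A minimal simulation of such an agent on a two-alternative task might look like the following; the parameter values and function name are illustrative assumptions, not the fitted values from the study.

```python
import numpy as np

def simulate_hybrid_agent(rewards, alpha_q=0.4, alpha_ck=0.3,
                          beta_q=3.0, beta_ck=1.0, seed=0):
    """Simulate a hybrid Q-learning + choice-kernel agent.
    `rewards[t, a]` is the payoff for action a on trial t."""
    rng = np.random.default_rng(seed)
    n_trials = rewards.shape[0]
    q = np.zeros(2)    # learned action values
    ck = np.zeros(2)   # choice kernel: recency-weighted choice history
    choices = np.empty(n_trials, dtype=int)
    for t in range(n_trials):
        # softmax over a weighted sum of values and choice history
        logits = beta_q * q + beta_ck * ck
        p = np.exp(logits - logits.max())
        p /= p.sum()
        a = rng.choice(2, p=p)
        choices[t] = a
        # value update toward the obtained reward
        q[a] += alpha_q * (rewards[t, a] - q[a])
        # kernel decays toward a one-hot of the chosen action
        ck += alpha_ck * (np.eye(2)[a] - ck)
    return choices
```

A pure Q-learner is the special case beta_ck = 0; the choice kernel captures outcome-independent perseveration, which matters in matching pennies because an exploitable opponent can punish predictable choice sequences.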
4.
Cognition ; 208: 104529, 2021 03.
Article in English | MEDLINE | ID: mdl-33373937

ABSTRACT

The ability to use temporal relationships between cross-modal cues facilitates perception and behavior. Previously we observed that temporally correlated changes in the size of a visual stimulus and the intensity of an auditory stimulus influenced the ability of listeners to perform an auditory selective attention task (Maddox, Atilgan, Bizley, & Lee, 2015). Participants detected timbral changes in a target sound while ignoring those in a simultaneously presented masker. When the visual stimulus was temporally coherent with the target sound, performance was significantly better than when the visual stimulus was temporally coherent with the masker, despite the visual stimulus conveying no task-relevant information. Here, we trained observers to detect audiovisual temporal coherence and asked whether this changed the way in which they were able to exploit visual information in the auditory selective attention task. We observed that after training, participants were able to benefit from temporal coherence between the visual stimulus and both the target and masker streams, relative to the condition in which the visual stimulus was coherent with neither sound. However, we did not observe such changes in a second group that was trained to discriminate modulation-rate differences between temporally coherent audiovisual streams, although they did show an improvement in their overall performance. A control group did not change their performance between pretest and post-test and did not change how they exploited visual information. These results provide insights into how crossmodal experience may optimize multisensory integration.


Subject(s)
Auditory Perception , Visual Perception , Acoustic Stimulation , Attention , Humans , Photic Stimulation
5.
Nat Neurosci ; 21(12): 1648-1650, 2018 12.
Article in English | MEDLINE | ID: mdl-30420733
6.
Neuron ; 97(3): 640-655.e4, 2018 02 07.
Article in English | MEDLINE | ID: mdl-29395914

ABSTRACT

How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis.


Subject(s)
Auditory Perception/physiology , Neurons/physiology , Visual Cortex/physiology , Visual Perception/physiology , Acoustic Stimulation , Action Potentials , Animals , Female , Ferrets , Photic Stimulation
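The temporal coherence at the heart of this entry and the related behavioral studies above can be made concrete with a toy computation: given a visual luminance timecourse and the amplitude envelopes of several sounds in a mixture, identify the envelope that covaries with the luminance. Pearson correlation stands in here for the notion of coherence; the study's stimuli and analyses were more elaborate, and the function name is an assumption.

```python
import numpy as np

def most_coherent_stream(luminance, envelopes):
    """Return the index of the auditory amplitude envelope whose
    timecourse correlates most strongly with the visual luminance
    timecourse (a simple proxy for audio-visual temporal coherence)."""
    rs = [np.corrcoef(luminance, env)[0, 1] for env in envelopes]
    return int(np.argmax(rs))
```

In the binding hypothesis described above, this kind of envelope-matching is what would allow a coherent visual signal to tag one sound in a mixture for enhanced cortical representation.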
7.
PLoS One ; 12(1): e0170264, 2017.
Article in English | MEDLINE | ID: mdl-28099489

ABSTRACT

The objective of this study was to demonstrate the efficacy of acute inactivation of brain areas by cooling in the behaving ferret and to demonstrate that cooling auditory cortex produced a localisation deficit that was specific to auditory stimuli. The effect of cooling on neural activity was measured in anesthetized ferret cortex. The behavioural effect of cooling was determined in a benchmark sound localisation task in which inactivation of primary auditory cortex (A1) is known to impair performance. Cooling strongly suppressed the spontaneous and stimulus-evoked firing rates of cortical neurons when the cooling loop was held at temperatures below 10°C, and this suppression was reversed when the cortical temperature recovered. Cooling of ferret auditory cortex during behavioural testing impaired sound localisation performance, with unilateral cooling producing selective deficits in the hemifield contralateral to cooling, and bilateral cooling producing deficits on both sides of space. The deficit in sound localisation induced by inactivation of A1 was not caused by motivational or locomotor changes since inactivation of A1 did not affect localisation of visual stimuli in the same context.


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Ferrets/physiology , Functional Laterality/physiology , Sound Localization/physiology , Animals , Brain Mapping , Cold Temperature , Photic Stimulation
8.
J Acoust Soc Am ; 137(5): 2870-83, 2015 May.
Article in English | MEDLINE | ID: mdl-25994714

ABSTRACT

Timbre distinguishes sounds of equal loudness, pitch, and duration; however, little is known about the neural mechanisms underlying timbre perception. Such understanding requires animal models such as the ferret, in which neuronal and behavioral observation can be combined. The current study asked what spectral cues ferrets use to discriminate between synthetic vowels. Ferrets were trained to discriminate vowels differing in the position of the first (F1) and second (F2) formants, inter-formant distance, and spectral centroid. In experiment 1, ferrets responded to probe trials containing novel vowels in which the spectral cues of trained vowels were mismatched. Regression models fitted to behavioral responses determined that F2 and spectral centroid were stronger predictors of ferrets' behavior than either F1 or inter-formant distance. Experiment 2 examined responses to single-formant vowels and found that individual spectral peaks failed to account for multi-formant vowel perception. Experiment 3 measured responses to unvoiced vowels and showed that ferrets could generalize vowel identity across voicing conditions. Experiment 4 employed the same design as experiment 1 but with human participants. Their responses were also predicted by F2 and spectral centroid. Together these findings further support the ferret as a model for studying the neural processes underlying timbre perception.


Subject(s)
Behavior, Animal , Cues , Discrimination, Psychological , Ferrets/psychology , Loudness Perception , Pitch Discrimination , Acoustic Stimulation , Acoustics , Adult , Animals , Auditory Pathways/physiology , Female , Ferrets/physiology , Humans , Male , Psychoacoustics , Sound Spectrography , Species Specificity , Young Adult
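Of the cues compared in this entry, the spectral centroid has a standard signal-processing definition: the amplitude-weighted mean frequency of the magnitude spectrum. A generic implementation is sketched below; this is the textbook definition, not necessarily the paper's exact analysis pipeline.

```python
import numpy as np

def spectral_centroid(signal, fs):
    """Amplitude-weighted mean frequency (Hz) of a signal's
    magnitude spectrum, computed from the real FFT."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(np.sum(freqs * mag) / np.sum(mag))
```

Unlike a formant position, which tracks a single spectral peak, the centroid summarizes the whole spectrum, which is one reason it can predict behavior even when individual peaks (experiment 2) do not.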
9.
Elife ; 4, 2015 Feb 05.
Article in English | MEDLINE | ID: mdl-25654748

ABSTRACT

In noisy settings, listening is aided by correlated dynamic visual cues gleaned from a talker's face, an improvement often attributed to visually reinforced linguistic information. In this study, we aimed to test the effect of audio-visual temporal coherence alone on selective listening, free of linguistic confounds. We presented listeners with competing auditory streams whose amplitude varied independently and a visual stimulus with varying radius, while manipulating the cross-modal temporal relationships. Performance improved when the auditory target's timecourse matched that of the visual stimulus. The fact that the coherence was between task-irrelevant stimulus features suggests that the observed improvement stemmed from the integration of auditory and visual streams into cross-modal objects, enabling listeners to better attend the target. These findings suggest that in everyday conditions, where listeners can often see the source of a sound, temporal cues provided by vision can help listeners to select one sound source from a mixture.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Photic Stimulation , Task Performance and Analysis , Acoustic Stimulation , Adolescent , Adult , Behavior , Female , Humans , Male , Models, Biological , Pitch Discrimination , Time Factors , Young Adult