1.
Hum Brain Mapp ; 44(17): 5906-5918, 2023 12 01.
Article in English | MEDLINE | ID: mdl-37800366

ABSTRACT

Age-related variations in many regions and/or networks of the human brain have been uncovered using resting-state functional magnetic resonance imaging. However, these findings did not account for the dynamic effect that the brain's global activity (global signal [GS]) exerts on local characteristics, which is measured by GS topography. To address this gap, we tested GS topography, including its correlation with age, using a large-scale cross-sectional adult lifespan dataset (n = 492). Both GS topography and its variation with age showed frequency-specific patterns, reflecting the spatiotemporal characteristics of the dynamic change of GS topography with age. A general trend toward dedifferentiation of GS topography with age was observed in both the spatial (i.e., smaller differences in GS between regions) and temporal (i.e., smaller differences in GS between frequencies) dimensions. Further, methodological control analyses suggested that although most age-related dedifferentiation effects persisted across different preprocessing strategies, some were driven by neurovascular coupling and physiological noise. Together, these results provide the first evidence for age-related effects on global brain activity and its topographic-dynamic representation in terms of spatiotemporal dedifferentiation.


Subject(s)
Brain Mapping , Longevity , Humans , Adult , Brain Mapping/methods , Cross-Sectional Studies , Magnetic Resonance Imaging/methods , Brain/physiology
2.
Sci Rep ; 13(1): 4696, 2023 Mar 22.
Article in English | MEDLINE | ID: mdl-36949180

ABSTRACT

Continuous flash suppression (CFS) has become one of the most popular tools in the study of visual processing in the absence of conscious awareness. Studies use different kinds of masks, such as colorful Mondrians or random noise. Even though the use of CFS is widespread, little is known about some of the underlying neuronal mechanisms, such as the interactions between masks and stimuli. We designed a b-CFS experiment with feature-reduced targets and masks in order to investigate possible effects of feature similarity or orthogonality between masks and targets. Masks were pink-noise patterns filtered with an orientation band pass to give them a strong directionality. Target stimuli were Gabors varying systematically in their orientational alignment with the masks. We found that stimuli whose orientation was more similar to that of the masks were suppressed significantly longer. This feature-similarity-based (here: orientation) enhancement of suppression duration can be overcome by orthogonality in another feature dimension (here: color). We conclude that mask-target interactions exist in continuous flash suppression, and that the human visual system can use orthogonality within a feature dimension or across feature dimensions to facilitate breaking CFS.

3.
Cereb Cortex ; 32(23): 5455-5466, 2022 11 21.
Article in English | MEDLINE | ID: mdl-35137008

ABSTRACT

Although sensory input is continuous, information must be combined over time to guide action and cognition, leading to the proposal of temporal sampling windows. A number of studies have suggested that a 10-Hz sampling window might be involved in the "frame rate" of visual processing. To investigate this, we tested the ability of participants to localize and enumerate 1 or 2 visual flashes presented either at near-threshold or full-contrast intensities, while recording magnetoencephalography. The inter-stimulus interval (ISI) between the 2 flashes was varied across trials. Performance in distinguishing between 1 and 2 flashes was linked to the alpha frequency, both at the individual level and trial-by-trial. Participants with a higher resting-state alpha peak frequency showed the greatest improvement in performance as a function of ISI within a 100-ms time window, while those with slower alpha improved more when ISI exceeded 100 ms. On each trial, correct enumeration (1 vs. 2) was associated with a faster pre-stimulus instantaneous alpha frequency. Our results suggest that visual sampling/processing speed, linked to peak alpha frequency, is an individual trait that can also vary in a state-dependent manner.
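Estimating each participant's resting-state alpha peak frequency, on which the individual-differences analysis above hinges, reduces to locating the largest spectral peak in the alpha band. A minimal numpy-only sketch on a synthetic signal; the sampling rate, band limits, and data are illustrative assumptions, not the study's MEG pipeline:

```python
import numpy as np

fs = 250.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 4.0, 1.0 / fs)     # 4 s of simulated resting-state data
rng = np.random.default_rng(1)
# A 10.5 Hz alpha rhythm plus broadband noise -- synthetic, for illustration
signal = np.sin(2 * np.pi * 10.5 * t) + 0.5 * rng.standard_normal(t.size)

def alpha_peak(x, fs, band=(8.0, 13.0)):
    """Individual alpha peak frequency: the largest periodogram bin
    inside the alpha band."""
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[in_band][np.argmax(power[in_band])]

print(f"alpha peak: {alpha_peak(signal, fs):.2f} Hz")  # prints: alpha peak: 10.50 Hz
```

In practice one would average periodograms over segments and sensors; the bin-picking step is the same.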


Subject(s)
Time Perception , Visual Perception , Humans , Magnetoencephalography , Time
4.
Front Neurosci ; 15: 656913, 2021.
Article in English | MEDLINE | ID: mdl-34108857

ABSTRACT

How vision guides gaze in realistic settings has been researched for decades. Human gaze behavior is typically measured in laboratory settings that are well controlled but feature-reduced and movement-constrained, in sharp contrast to real-life gaze control that combines eye, head, and body movements. Previous real-world research has shown environmental factors such as terrain difficulty to affect gaze; however, real-world settings are difficult to control or replicate. Virtual reality (VR) offers the experimental control of a laboratory, yet approximates the freedom and visual complexity of the real world (RW). We measured gaze in 8 healthy young adults during walking in the RW and during simulated locomotion in VR. Participants walked along a pre-defined path inside an office building, which included different terrains such as long corridors and flights of stairs. In VR, participants followed the same path in a detailed virtual reconstruction of the building. We devised a novel hybrid control strategy for movement in VR: participants did not actually translate; forward movements were controlled by a hand-held device, while rotational movements were executed physically and transferred to the VR. We found significant effects of terrain type (flat corridor, staircase up, and staircase down) on gaze direction, on the spatial spread of gaze direction, and on the angular distribution of gaze-direction changes. The factor world (RW and VR) affected the angular distribution of gaze-direction changes, saccade frequency, and head-centered vertical gaze direction. The latter effect vanished when referencing gaze to a world-fixed coordinate system, and was likely due to specifics of headset placement, which cannot confound any of the other analyzed measures. Importantly, we did not observe a significant interaction between the factors world and terrain for any of the tested measures. This indicates that differences between terrain types are not modulated by the world. The overall dwell time on navigational markers did not differ between worlds. The similar dependence of gaze behavior on terrain in the RW and in VR indicates that our VR captures real-world constraints remarkably well. High-fidelity VR combined with naturalistic movement control therefore has the potential to narrow the gap between the experimental control of a lab and ecologically valid settings.

5.
Sci Rep ; 10(1): 6943, 2020 04 24.
Article in English | MEDLINE | ID: mdl-32332984

ABSTRACT

A basic question in cognitive neuroscience is how sensory stimuli are processed within and outside of conscious awareness. In the past decade, continuous flash suppression (CFS) has become the most popular tool for investigating unconscious visual processing, although the exact nature of some of the underlying mechanisms remains unclear. Here, we investigate which kind of random noise is optimal for CFS masking, and whether the addition of visible edges to noise patterns affects suppression duration. We tested noise patterns of various densities as well as composite patterns with added edges, along with classic Mondrian masks and phase-scrambled (edgeless) Mondrian masks for comparison. We find that spatial pink noise (1/f noise) achieves the longest suppression among the tested random noises; however, classic Mondrian masks are still significantly more effective in terms of suppression duration. Further analysis reveals that global contrast and general spectral similarity between target and mask cannot account for this difference in effectiveness.
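The spatial pink noise (1/f) patterns compared here can be generated by attenuating the Fourier amplitude spectrum of white noise. A minimal sketch; the size, exponent, and normalization are illustrative choices, not the study's stimulus-generation code:

```python
import numpy as np

def pink_noise_mask(size=256, exponent=1.0, seed=0):
    """2-D spatial pink-noise (1/f) pattern: white noise whose
    amplitude spectrum is attenuated as 1/f^exponent."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((size, size))
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                      # avoid division by zero at DC
    spectrum = np.fft.fft2(white) / f ** exponent
    spectrum[0, 0] = 0.0               # zero-mean pattern
    pattern = np.real(np.fft.ifft2(spectrum))
    pattern -= pattern.min()           # rescale to [0, 1] for display
    pattern /= pattern.max()
    return pattern

mask = pink_noise_mask()
print(mask.shape)                      # prints: (256, 256)
```

Setting `exponent=0` gives white noise, and larger exponents give progressively "denser", more low-frequency-dominated patterns.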


Subject(s)
Cognition/physiology , Consciousness/physiology , Noise , Perception/physiology , Adult , Female , Humans , Male , Young Adult
6.
J Vis ; 18(1): 12, 2018 01 01.
Article in English | MEDLINE | ID: mdl-29362805

ABSTRACT

The study of how visual processing functions in the absence of visual awareness has become a major research interest in the vision-science community. One of the main sources of evidence that stimuli that do not reach conscious awareness (and are thus "invisible") are still processed to some degree by the visual system comes from studies using continuous flash suppression (CFS). Why and how CFS works may provide more general insight into how stimuli access awareness. As spatial and temporal properties of stimuli are major determinants of visual perception, we hypothesized that these properties of the CFS masks would be of significant importance to the achieved suppression depth. In previous studies, however, the spatial and temporal properties of the masks themselves have received little attention, and masking parameters vary widely across studies, making a meta-comparison difficult. To investigate the factors that determine the effectiveness of CFS, we varied both the temporal frequency and the spatial density of Mondrian-style masks. We consistently found the longest suppression duration for a mask temporal frequency of around 6 Hz. In trials using masks with reduced spatial density, suppression was weaker and frequency tuning was less precise. In contrast, removing color reduced mask effectiveness but did not change the pattern of suppression strength as a function of frequency. Overall, this pattern of results stresses the importance of CFS mask parameters and is consistent with the idea that CFS works by disrupting the spatiotemporal mechanisms that underlie conscious access to visual input.


Subject(s)
Awareness , Perceptual Masking/physiology , Retina/radiation effects , Spatial Processing/physiology , Visual Perception/physiology , Adult , Color , Female , Humans , Male , Masks , Photic Stimulation/methods , Time Factors , Visual Pathways/physiology , Young Adult
7.
Glia ; 65(6): 990-1004, 2017 06.
Article in English | MEDLINE | ID: mdl-28317180

ABSTRACT

Astrocytes are the most abundant cell type of the central nervous system and cover a broad range of functionalities. We report here the generation of a novel monoclonal antibody, anti-astrocyte cell surface antigen-2 (Anti-ACSA-2). Flow cytometry, immunohistochemistry, and immunocytochemistry revealed that Anti-ACSA-2 reacted specifically with an as-yet-unidentified glycosylated surface molecule of murine astrocytes at all developmental stages. It did not show any labeling of non-astroglial cells such as neurons, oligodendrocytes, NG2+ cells, microglia, endothelial cells, leukocytes, or erythrocytes. Co-labeling studies of GLAST and ACSA-2 showed largely overlapping expression. However, there were also notable differences in protein expression levels and frequencies of single-positive subpopulations of cells in some regions of the CNS, such as the cerebellum, most prominently at early postnatal stages. In the neurogenic niches, the dentate gyrus of the hippocampus and the subventricular zone (SVZ), a general overlap with slight differences in expression levels was again observed. Unlike GLAST, ACSA-2 was not sensitive to papain-based tissue dissociation and allowed for a highly effective, acute, specific, and prospective purification of viable astrocytes based on a new rapid sorting procedure using Anti-ACSA-2 directly coupled to superparamagnetic MicroBeads. In conclusion, ACSA-2 appears to be a new surface marker for astrocytes, radial glia, neural stem cells, and bipotent glial progenitor cells, which opens up the possibility of further dissecting the characteristics of astroglial subpopulations and lineages.


Subject(s)
Antibodies, Monoclonal/immunology , Antigens, Surface/analysis , Antigens, Surface/immunology , Astrocytes/cytology , Astrocytes/immunology , Immunomagnetic Separation/methods , Animals , Animals, Newborn , Antibody Specificity , Antigens, Surface/metabolism , Brain/cytology , Brain/growth & development , Cells, Cultured , Endothelial Cells/cytology , Endothelial Cells/immunology , Erythrocytes/cytology , Erythrocytes/metabolism , Excitatory Amino Acid Transporter 1/analysis , Leukocytes/cytology , Leukocytes/immunology , Mice, Inbred BALB C , Mice, Inbred C57BL , Mice, Transgenic , Microglia/cytology , Microglia/immunology , Neural Stem Cells/immunology , Neurons/cytology , Neurons/metabolism , Oligodendroglia/cytology , Oligodendroglia/immunology , Rats, Wistar
8.
Front Hum Neurosci ; 10: 513, 2016.
Article in English | MEDLINE | ID: mdl-27790106

ABSTRACT

The human visual system can quickly and efficiently extract categorical information from a complex natural scene. The rapid detection of animals in a scene is one compelling example of this phenomenon, and it suggests the automatic processing of at least some types of categories with little or no attentional requirements (Li et al., 2002, 2005). The aim of this study is to investigate whether the remarkable capability to categorize complex natural scenes exists in the absence of awareness, based on recent reports that "invisible" stimuli, which do not reach conscious awareness, can still be processed by the human visual system (Pasley et al., 2004; Williams et al., 2004; Fang and He, 2005; Jiang et al., 2006, 2007; Kaunitz et al., 2011a). In two experiments, we recorded event-related potentials (ERPs) in response to animal and non-animal/vehicle stimuli in both aware and unaware conditions in a continuous flash suppression (CFS) paradigm. Our results indicate that even in the "unseen" condition, the brain responds differently to animal and non-animal/vehicle images, consistent with rapid activation of animal-selective feature detectors prior to, or outside of, suppression by the CFS mask.

9.
J Vis ; 16(10): 3, 2016 08 01.
Article in English | MEDLINE | ID: mdl-27494545

ABSTRACT

Early, feed-forward visual processing is organized in a retinotopic reference frame. In contrast, visual feature integration on longer time scales can involve object-based or spatiotopic coordinates. For example, in the Ternus-Pikler (T-P) apparent motion display, object identity is mapped across the object motion path. Here, we report evidence from three experiments supporting nonretinotopic feature integration even for the most paradigmatic example of retinotopically-defined features: orientation. We presented observers with a repeated series of T-P displays in which the perceived rotation of Gabor gratings indicates processing in either retinotopic or object-based coordinates. In Experiment 1, the frequency of perceived retinotopic rotations decreased exponentially for longer interstimulus intervals (ISIs) between T-P display frames, with object-based percepts dominating after about 150-250 ms. In a second experiment, we show that motion and rotation judgments depend on the perception of a moving object during the T-P display ISIs rather than only on temporal factors. In Experiment 3, we cued the observers' attentional state either toward a retinotopic or object motion-based reference frame and then tracked both the observers' eye position and the time course of the perceptual bias while viewing identical T-P display sequences. Overall, we report novel evidence for spatiotemporal integration of even basic visual features such as orientation in nonretinotopic coordinates, in order to support perceptual constancy across self- and object motion.


Subject(s)
Attention/physiology , Motion Perception/physiology , Orientation/physiology , Space Perception/physiology , Visual Cortex/physiology , Female , Humans , Male , Motion , Retina/physiology , Visual Fields/physiology , Young Adult
10.
PLoS One ; 11(7): e0159206, 2016.
Article in English | MEDLINE | ID: mdl-27416317

ABSTRACT

Visual processing is not instantaneous, but instead our conscious perception depends on the integration of sensory input over time. In the case of Continuous Flash Suppression (CFS), masks are flashed to one eye, suppressing awareness of stimuli presented to the other eye. One potential explanation of CFS is that it depends, at least in part, on the flashing mask continually interrupting visual processing before the stimulus reaches awareness. We investigated the temporal features of masks in two ways. First, we measured the suppression effectiveness of a wide range of masking frequencies (0-32 Hz), using both complex (faces/houses) and simple (closed/open geometric shapes) stimuli. Second, we varied whether the different frequencies were interleaved within blocks or separated in homogeneous blocks, in order to see if suppression was stronger or weaker when the frequency remained constant across trials. We found that break-through contrast differed dramatically between masking frequencies, with mask effectiveness following a skewed-normal curve peaking around 6 Hz, and little or no masking at low and high temporal frequencies. Peak frequency was similar for trial-randomized and block-randomized conditions. In terms of stimulus type, we found no significant difference in peak frequency between the stimulus groups (complex/simple, face/house, closed/open). These findings suggest that temporal factors play a critical role in perceptual awareness, perhaps due to interactions between mask frequency and the time frame of visual processing.
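One simple way to quantify a tuning peak "around 6 Hz" from sampled thresholds is parabolic interpolation through the maximum and its two neighbors. A sketch with made-up breakthrough-contrast values (not the paper's data):

```python
import numpy as np

# Hypothetical breakthrough-contrast thresholds (arbitrary units) per
# mask flicker frequency -- illustrative values only.
freqs = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0, 16.0, 32.0])
thresholds = np.array([0.05, 0.10, 0.22, 0.45, 0.60, 0.50, 0.30, 0.15, 0.06])

def peak_frequency(freqs, thresholds):
    """Locate the tuning peak with a parabolic fit through the maximum
    sample and its two neighbours (assumes an interior maximum)."""
    i = int(np.argmax(thresholds))
    x = freqs[i - 1:i + 2]
    y = thresholds[i - 1:i + 2]
    a, b, _ = np.polyfit(x, y, 2)   # y = a x^2 + b x + c
    return -b / (2 * a)             # vertex of the parabola

print(f"estimated peak: {peak_frequency(freqs, thresholds):.1f} Hz")  # prints: estimated peak: 6.2 Hz
```

Fitting a full skewed-normal curve, as described in the abstract, refines this estimate but needs an iterative optimizer; the vertex of the local parabola is the zero-dependency version.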


Subject(s)
Perceptual Masking/physiology , Visual Perception/physiology , Adult , Awareness/physiology , Female , Humans , Male , Photic Stimulation , Time Factors , Time Perception/physiology , Young Adult
11.
Phys Rev Lett ; 116(17): 175301, 2016 Apr 29.
Article in English | MEDLINE | ID: mdl-27176527

ABSTRACT

The subtle interplay between kinetic energy, interactions, and dimensionality challenges our comprehension of strongly correlated physics observed, for example, in the solid state. In this quest, the Hubbard model has emerged as a conceptually simple, yet rich model describing such physics. Here we present an experimental determination of the equation of state of the repulsive two-dimensional Hubbard model over a broad range of interactions 0≲U/t≲20 and temperatures, down to k_{B}T/t=0.63(2) using high-resolution imaging of ultracold fermionic atoms in optical lattices. We show density profiles, compressibilities, and double occupancies over the whole doping range, and, hence, our results constitute benchmarks for state-of-the-art theoretical approaches.

12.
J Neurosci ; 36(1): 185-92, 2016 Jan 06.
Article in English | MEDLINE | ID: mdl-26740660

ABSTRACT

The human visual system must extract reliable object information from cluttered visual scenes several times per second, and this temporal constraint has been taken as evidence that the underlying cortical processing must be strictly feedforward. Here we use a novel rapid reinforcement paradigm to probe the temporal dynamics of the neural circuit underlying rapid object shape perception and thus test this feedforward assumption. Our results show that two shape stimuli are optimally reinforcing when separated in time by ∼60 ms, suggesting an underlying recurrent circuit with a time constant (feedforward + feedback) of 60 ms. A control experiment demonstrates that this is not an attentional cueing effect. Instead, it appears to reflect the time course of feedback processing underlying the rapid perceptual organization of shape. SIGNIFICANCE STATEMENT: Human and nonhuman primates can spot an animal shape in complex natural scenes with striking speed, and this has been taken as evidence that the underlying cortical mechanisms are strictly feedforward. Using a novel paradigm to probe the dynamics of shape perception, we find that two shape stimuli are optimally reinforcing when separated in time by 60 ms, suggesting a fast but recurrent neural circuit. This work (1) introduces a novel method for probing the temporal dynamics of cortical circuits underlying perception, (2) provides direct evidence against the feedforward assumption for rapid shape perception, and (3) yields insight into the role of feedback connections in the object pathway.


Subject(s)
Feedback, Physiological/physiology , Form Perception/physiology , Neuronal Plasticity/physiology , Pattern Recognition, Visual/physiology , Reaction Time/physiology , Visual Cortex/physiology , Adult , Female , Humans , Male , Middle Aged , Young Adult
13.
Sci Rep ; 5: 16290, 2015 Nov 06.
Article in English | MEDLINE | ID: mdl-26542183

ABSTRACT

Perceptual systems must create discrete objects and events out of a continuous flow of sensory information. Previous studies have demonstrated oscillatory effects in the behavioral outcome of low-level visual tasks, suggesting that cyclic visual processing may provide the solution. To investigate whether these effects extend to more complex tasks, a stream of "neutral" photographic images (not containing targets) was rapidly presented (20 ms/image). Embedded in the stream were one or two presentations of a randomly selected target image (vehicles and animals). Subjects reported the perceived target category. On dual-presentation trials, the inter-stimulus interval (ISI) varied systematically from 0 to 600 ms. At a randomized time before the first target presentation, the screen was flashed with the intent of creating a phase reset in the visual system. Sorting trials by the temporal distance between flash and first target presentation revealed strong oscillations in behavioral performance, peaking at 5 Hz. On dual-target trials, longer ISIs led to reduced performance, implying a temporal integration window for object category discrimination. The "animal" trials exhibited a significant oscillatory component around 5 Hz. Our results indicate that oscillatory effects are not mere fringe effects relevant only with simple stimuli, but result from the core mechanisms of visual processing and may well extend into real-life scenarios.
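The key analysis step described here, sorting trials by flash-to-target delay and testing the resulting accuracy time course for rhythmicity, can be sketched with a simple spectral analysis. The 5 Hz modulation below is simulated for illustration; it is not the study's data:

```python
import numpy as np

# Simulated behavioral time course: accuracy as a function of the
# flash-to-target delay, oscillating at 5 Hz (illustrative values).
dt = 0.02                          # 20 ms bins, matching the RSVP rate
t = np.arange(0, 0.6, dt)          # 0-600 ms after the flash
accuracy = 0.7 + 0.1 * np.sin(2 * np.pi * 5 * t)

def dominant_frequency(signal, dt):
    """Return the frequency (Hz) of the largest non-DC spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    return freqs[1:][np.argmax(spectrum[1:])]

print(f"behavioral oscillation: {dominant_frequency(accuracy, dt):.1f} Hz")  # prints: behavioral oscillation: 5.0 Hz
```

With real trial data, the accuracy curve would first be built by averaging correct/incorrect outcomes within delay bins; significance is then typically assessed against shuffled delays.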


Subject(s)
Behavior , Visual Perception , Humans
14.
PLoS One ; 9(10): e111197, 2014.
Article in English | MEDLINE | ID: mdl-25338168

ABSTRACT

Camera-based eye trackers are the mainstay of eye movement research and countless practical applications of eye tracking. Recently, a significant impact of changes in pupil size on gaze position as measured by camera-based eye trackers has been reported. In an attempt to improve the understanding of the magnitude and population-wise distribution of the pupil-size-dependent shift in reported gaze position, we present the first collection of binocular pupil drift measurements, recorded from 39 subjects. The pupil-size-dependent shift varied greatly between subjects (from 0.3 to 5.2 deg of deviation, mean 2.6 deg), but also between the eyes of individual subjects (0.1 to 3.0 deg difference, mean difference 1.0 deg). We observed a wide range of drift directions, mostly downward and nasal. We demonstrate two methods to partially compensate for the pupil-size-dependent shift using separate calibrations in pupil-constricted and pupil-dilated conditions, and evaluate an improved method of compensation based on individual look-up tables, achieving up to 74% compensation.
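A look-up-table compensation of the kind evaluated here can be sketched as interpolation in a per-subject calibration table mapping pupil size to gaze error. The table values and the linear interpolation are illustrative assumptions, not the paper's measurements or exact method:

```python
import numpy as np

# Hypothetical calibration: reported gaze error (deg) measured at several
# pupil sizes (mm) while the subject fixates a known target.
pupil_cal = np.array([2.0, 3.0, 4.0, 5.0, 6.0])       # pupil diameter, mm
error_cal = np.array([0.0, -0.4, -0.9, -1.5, -2.2])   # vertical gaze error, deg

def compensate(gaze_deg, pupil_mm):
    """Subtract the pupil-size-dependent shift, linearly interpolating
    in the per-subject look-up table."""
    shift = np.interp(pupil_mm, pupil_cal, error_cal)
    return gaze_deg - shift

# A fixation reported 1.5 deg too low at a 5 mm pupil is restored:
print(compensate(-1.5, 5.0))  # prints: 0.0
```

In practice the table would be built per eye and per gaze direction, since the paper reports large differences between eyes and drift directions.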


Subject(s)
Eye Movements , Pupil , Adult , Female , Humans , Male , Young Adult
15.
Vision Res ; 105: 21-8, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25220538

ABSTRACT

The visual system constructs a percept of the world across multiple spatial and temporal scales. This raises the questions of whether different scales involve separate integration mechanisms and whether spatial and temporal factors are linked via spatio-temporal reference frames. We investigated this using Vernier fusion, a phenomenon in which the features of two Vernier stimuli presented in close spatio-temporal proximity are fused into a single percept. With increasing spatial offset, perception changes dramatically from a single percept into apparent motion and later, at larger offsets, into two separately perceived stimuli. We tested the link between spatial and temporal integration by presenting two successive Vernier stimuli at varying spatial and temporal offsets. The second Vernier either had the same or the opposite offset as the first. We found that the type of percept depended not only on spatial offset, as reported previously, but interacted with the temporal parameter as well. At temporal separations around 30-40 ms the majority of trials were perceived as motion, while above 70 ms predominantly two separate stimuli were reported. The dominance of the second Vernier varied systematically with temporal offset, peaking around 40 ms ISI. Same-offset conditions showed increasing amounts of perceived separation at large ISIs, but little dependence on spatial offset. As subjects did not always completely fuse stimuli, we separated trials by reported percept (single/fusion, motion, double/segregation). We found systematic indications of spatial fusion even on trials in which subjects perceived temporal segregation. These findings imply that spatial integration/fusion may occur even when the stimuli are perceived as temporally separate entities, suggesting that the mechanisms responsible for temporal segregation and spatial integration may not be mutually exclusive.


Subject(s)
Space Perception/physiology , Time Perception/physiology , Analysis of Variance , Humans , Motion Perception/physiology , Photic Stimulation , Vision, Ocular
16.
PLoS One ; 8(10): e75816, 2013.
Article in English | MEDLINE | ID: mdl-24130744

ABSTRACT

The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In both experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced-choice tasks were similar to those reported earlier. The non-animal stimuli induced a bigger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for investigating the processing of natural scenes than other commonly used databases.


Subject(s)
Color , Saccades/physiology , Adolescent , Adult , Animals , Evoked Potentials/physiology , Female , Humans , Male , Pattern Recognition, Visual/physiology , Reaction Time/physiology , Young Adult
17.
J Vis ; 11(2)2011 Feb 25.
Article in English | MEDLINE | ID: mdl-21367757

ABSTRACT

Human observers are capable of detecting animals within novel natural scenes with remarkable speed and accuracy. Recent studies found human response times to be as fast as 120 ms in a dual-presentation (2-AFC) setup (H. Kirchner & S. J. Thorpe, 2005). In most previous experiments, pairs of randomly chosen images were presented, frequently from very different contexts (e.g., a zebra in Africa vs. the New York Skyline). Here, we tested the effect of background size and contiguity on human performance by using a new, contiguous background image set. Individual images contained a single animal surrounded by a large, animal-free image area. The image could be positioned and cropped in such a manner that the animal could occur in one of eight evenly spaced positions on an imaginary circle (radius 10-deg visual angle). In the first (8-Choice) experiment, all eight positions were used, whereas in the second (2-Choice) and third (2-Image) experiments, the animals were only presented on the two positions to the left and right of the screen center. In the third experiment, additional rectangular frames were used to mimic the conditions of earlier studies. Average latencies on successful trials differed only slightly between conditions, indicating that the number of possible animal locations within the display does not affect decision latency. Detailed analysis of saccade targets revealed a preference toward both the head and the center of gravity of the target animal, affecting hit ratio, latency, and the number of saccades required to reach the target. These results illustrate that rapid animal detection operates scene-wide and is fast and efficient even when the animals are embedded in their natural backgrounds.


Subject(s)
Form Perception/physiology , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Saccades/physiology , Adult , Animals , Female , Field Dependence-Independence , Humans , Male , Reaction Time/physiology , Young Adult
18.
J Neurosci ; 31(12): 4698-708, 2011 Mar 23.
Article in English | MEDLINE | ID: mdl-21430168

ABSTRACT

Motor reaction times in humans are highly variable from one trial to the next, even for simple and automatic tasks, such as shifting your gaze to a suddenly appearing target. Although classic models of reaction time generation consider this variability to reflect intrinsic noise, some portion of it could also be attributed to ongoing neuronal processes. For example, variations of alpha rhythm frequency (8-12 Hz) across individuals, or alpha amplitude across trials, have been related previously to manual reaction time variability. Here we investigate the trial-by-trial influence of oscillatory phase, a dynamic marker of ongoing activity, on saccadic reaction time in three paradigms of increasing cognitive demand (simple reaction time, choice reaction time, and visual discrimination tasks). The phase of ongoing prestimulus activity in the high alpha/low beta range (11-17 Hz) at frontocentral locations was strongly associated with saccadic response latencies. This relation, present in all three paradigms, peaked for phases recorded ∼50 ms before fixation point offset and 250 ms before target onset. Reaction times in the most demanding discrimination task fell into two distinct modes reflecting a fast but inaccurate strategy or a slow and efficient one. The phase effect was markedly stronger in the group of subjects using the faster strategy. We conclude that periodic fluctuations of electrical activity attributable to neuronal oscillations can modulate the efficiency of the oculomotor system on a rapid timescale; however, this relation may be obscured when cognitive load also adds a significant contribution to response time variability.
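The analysis described here, relating prestimulus oscillatory phase to saccadic latency, boils down to binning single-trial reaction times by phase and comparing bin means. A numpy-only sketch on synthetic data; the modulation depth, bin count, and latency values are assumptions, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
phase = rng.uniform(-np.pi, np.pi, n)      # prestimulus phase per trial
# Simulated saccadic latencies (ms) with a cosine phase modulation --
# synthetic values for illustration only.
rt = 180 + 15 * np.cos(phase) + 10 * rng.standard_normal(n)

def rt_by_phase(phase, rt, n_bins=8):
    """Mean reaction time in equal-width phase bins."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.digitize(phase, edges[1:-1])  # bin index 0..n_bins-1
    return np.array([rt[idx == b].mean() for b in range(n_bins)])

means = rt_by_phase(phase, rt)
print(f"phase modulation depth: {means.max() - means.min():.0f} ms")
```

With real EEG, the phase per trial would come from a time-frequency decomposition of the prestimulus window, and circular statistics (rather than bin means) are typically used for significance testing.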


Subject(s)
Electroencephalography , Ocular Physiological Phenomena , Reaction Time/physiology , Saccades/physiology , Adult , Algorithms , Cognition/physiology , Data Interpretation, Statistical , Discrimination, Psychological/physiology , Female , Fixation, Ocular , Humans , Male , Photic Stimulation , Psychomotor Performance/physiology , Visual Perception/physiology , Young Adult
19.
J Vis ; 10(4): 6.1-27, 2010 Apr 15.
Article in English | MEDLINE | ID: mdl-20465326

ABSTRACT

S. J. Thorpe, D. Fize, and C. Marlot (1996) showed how rapidly observers can detect animals in images of natural scenes, but it is still unclear which image features support this rapid detection. A. B. Torralba and A. Oliva (2003) suggested that a simple image statistic based on the power spectrum allows the absence or presence of objects in natural scenes to be predicted. We tested whether human observers make use of power spectral differences between image categories when detecting animals in natural scenes. In Experiments 1 and 2 we found performance to be essentially independent of the power spectrum. Computational analysis revealed that the ease of classification correlates with the proposed spectral cue without being caused by it. This result is consistent with the hypothesis that in commercial stock photo databases a majority of animal images are pre-segmented from the background by the photographers and this pre-segmentation causes the power spectral differences between image categories and may, furthermore, help rapid animal detection. Data from a third experiment are consistent with this hypothesis. Together, our results make it exceedingly unlikely that human observers make use of power spectral differences between animal- and no-animal images during rapid animal detection. In addition, our results point to potential confounds in the commercially available "natural image" databases whose statistics may be less natural than commonly presumed.
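The "simple image statistic based on the power spectrum" can be illustrated by the slope of the radially averaged power spectrum in log-log coordinates. This is a sketch of that family of statistics, not Torralba and Oliva's exact measure:

```python
import numpy as np

def spectral_slope(image):
    """Slope of the radially averaged power spectrum in log-log
    coordinates (for a square image)."""
    size = image.shape[0]
    ps = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    y, x = np.indices(ps.shape)
    r = np.hypot(x - size // 2, y - size // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=ps.ravel())
    counts = np.bincount(r.ravel())
    radial = sums[:size // 2] / counts[:size // 2]   # every r < size/2 occurs
    k = np.arange(1, size // 2)                      # skip the DC bin
    slope, _ = np.polyfit(np.log(k), np.log(radial[k]), 1)
    return slope

# White noise has a flat spectrum (slope near 0); natural images
# typically fall off roughly as 1/f^2 (slope near -2).
rng = np.random.default_rng(0)
print(f"white-noise slope: {spectral_slope(rng.standard_normal((128, 128))):.2f}")
```

Category prediction from such statistics amounts to feeding these spectral features to a simple classifier; the abstract's point is that humans apparently do not rely on this cue.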


Subject(s)
Field Dependence-Independence , Form Perception/physiology , Pattern Recognition, Visual/physiology , Psychophysics , Algorithms , Animals , Contrast Sensitivity/physiology , Cues , Humans , Lighting , Photic Stimulation/methods , Reaction Time/physiology
20.
Toxicol In Vitro ; 20(5): 736-747, 2005 Dec 27.
Article in English | MEDLINE | ID: mdl-16384686

ABSTRACT

This article has been retracted consistent with Articles in Press Policy. Please see . The Publisher apologises for any inconvenience this may cause.
