Results 1 - 13 of 13
1.
Sci Rep ; 7(1): 13954, 2017 10 24.
Article in English | MEDLINE | ID: mdl-29066760

ABSTRACT

Advances in Virtual Reality (VR) technologies allow the investigation of simulated moral actions in visually immersive environments. Using a robotic manipulandum and an interactive sculpture, we now also incorporate realistic haptic feedback into virtual moral simulations. In two experiments, we found that participants responded with greater utilitarian actions in virtual and haptic environments when compared to traditional questionnaire assessments of moral judgments. In experiment one, when incorporating a robotic manipulandum, we found that the physical power of simulated utilitarian responses (calculated as the product of force and speed) was predicted by individual levels of psychopathy. In experiment two, which integrated an interactive and life-like sculpture of a human into a VR simulation, greater utilitarian actions continued to be observed. Together, these results support a disparity between simulated moral action and moral judgment. Overall, this research combines state-of-the-art virtual reality, robotic movement simulations, and realistic human sculptures to enhance moral paradigms that are often contextually impoverished. As such, this combination provides a better assessment of simulated moral action and illustrates the embodied nature of morally relevant actions.
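The power measure above is simply force multiplied by speed. As a minimal sketch (not the authors' analysis code), the snippet below computes it from per-sample force and position recordings such as a robotic manipulandum might provide; the sampling rate, example values, and function names are illustrative assumptions.

```python
# Minimal sketch: "physical power" of a simulated response as force x speed,
# given per-sample force (N) and hand position (m). Illustrative only.
import numpy as np

def response_power(force_n: np.ndarray, position_m: np.ndarray, dt: float) -> np.ndarray:
    """Return instantaneous power (W) per sample: P = F * v."""
    speed = np.gradient(position_m, dt)   # finite-difference speed (m/s)
    return force_n * np.abs(speed)        # power = force x speed

# Example: 1 s of simulated samples at 1 kHz (purely illustrative values)
dt = 0.001
t = np.arange(0.0, 1.0, dt)
force = 5.0 + 0.5 * np.sin(2 * np.pi * t)   # N
position = 0.1 * t ** 2                     # m
print(response_power(force, position, dt).mean())
```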


Subject(s)
Models, Theoretical , Morals , Adolescent , Adult , Female , Humans , Judgment , Male , Personality , Surveys and Questionnaires , Young Adult
2.
Cereb Cortex ; 13(8): 830-6, 2003 Aug.
Article in English | MEDLINE | ID: mdl-12853369

ABSTRACT

Deception is a complex cognitive activity, and different types of lies could arise from different neural systems. We investigated this possibility by first classifying lies according to two dimensions, whether they fit into a coherent story and whether they were previously memorized. fMRI revealed that well-rehearsed lies that fit into a coherent story elicit more activation in right anterior frontal cortices than spontaneous lies that do not fit into a story, whereas the opposite pattern occurs in the anterior cingulate and in posterior visual cortex. Furthermore, both types of lies elicited more activation than telling the truth in anterior prefrontal cortices (bilaterally), the parahippocampal gyrus (bilaterally), the right precuneus, and the left cerebellum. At least in part, distinct neural networks support different types of deception.


Subject(s)
Brain/physiology , Deception , Magnetic Resonance Imaging/methods , Nerve Net/physiology , Adult , Analysis of Variance , Female , Humans , Male
4.
Neuropsychologia ; 38(7): 1047-53, 2000.
Article in English | MEDLINE | ID: mdl-10775715

ABSTRACT

Evidence has indicated that the right frontal cortex is preferentially involved in self-face recognition. To test this further, we employed a face identification task and examined hand response differences (N=10). Pictures of famous faces were combined with pictures of the participants' faces (self) and their co-workers' faces (familiar). These images were presented as a 'movie' in which one face transformed into another. Under the first instruction set, the movies began with either the participant's face or a co-worker's face, and the sequences gradually morphed into a famous face. When told to stop the movie when the face in the sequence became famous, a significantly later 'frame' was identified when the movies were composed of self-faces and the participants responded with their left hand. When the movies started with the famous faces and participants had to stop the movie when it became their own or their familiar co-worker's image (Instruction set 2), a significantly earlier frame was identified in the 'Self: Left hand' condition. The data suggest that participants are inclined to identify images as their own when the right hemisphere is preferentially accessed.


Subject(s)
Cognition/physiology , Face , Functional Laterality/physiology , Self Concept , Adult , Computer Graphics , Female , Hand/physiology , Humans , Male
5.
Cereb Cortex ; 10(2): 175-80, 2000 Feb.
Article in English | MEDLINE | ID: mdl-10667985

ABSTRACT

Neuroimaging studies have shown that motor structures are activated not only during overt motor behavior but also during tasks that require no overt motor behavior, such as motor imagery and mental rotation. We tested the hypothesis that activation of the primary motor cortex is needed for mental rotation by using single-pulse transcranial magnetic stimulation (TMS). Single-pulse TMS was delivered to the representation of the hand in left primary motor cortex while participants performed mental rotation of pictures of hands and feet. Relative to a peripheral magnetic stimulation control condition, response times (RTs) were slower when TMS was delivered at 650 ms but not at 400 ms after stimulus onset. The magnetic stimulation effect at 650 ms was larger for hands than for feet. These findings demonstrate that (i) activation of the left primary motor cortex has a causal role in the mental rotation of pictures of hands; (ii) this role is stimulus-specific because disruption of neural activity in the hand area slowed RTs for pictures of hands more than feet; and (iii) left primary motor cortex is involved relatively late in the mental rotation process.


Subject(s)
Mental Processes/physiology , Motor Cortex/physiology , Transcranial Magnetic Stimulation , Adult , Female , Foot , Functional Laterality , Hand , Humans , Male , Motor Cortex/radiation effects , Movement , Rotation
6.
Laterality ; 5(3): 259-68, 2000 Jul.
Article in English | MEDLINE | ID: mdl-15513146

ABSTRACT

Evidence suggests that autobiographical memory, self-related semantic category judgements, and self-identification tasks may be lateralised, with preferential activity in the right anterior temporal and prefrontal cortex. To test this hypothesis, participants (N=10) were presented with morphed images of themselves (self) combined with a famous face. A further set of images was generated in which the face of one of the participant's co-workers (familiar) was combined with a famous face. When compared to morphed images composed of a familiar face, the participants identified images less often as being famous if the images were composed of self, but only when responding with their left hands. This greater "self-effect" found in left-hand responses may imply that when the right hemisphere is preferentially active, participants have a tendency to refer images to self. These data provide further support for a preferential role of the right hemisphere in processing self-related material.

7.
Science ; 284(5411): 167-70, 1999 Apr 02.
Article in English | MEDLINE | ID: mdl-10102821

ABSTRACT

Visual imagery is used in a wide range of mental activities, ranging from memory to reasoning, and also plays a role in perception proper. The contribution of early visual cortex, specifically Area 17, to visual mental imagery was examined by the use of two convergent techniques. In one, subjects closed their eyes during positron emission tomography (PET) while they visualized and compared properties (for example, relative length) of sets of stripes. The results showed that when people perform this task, Area 17 is activated. In the other, repetitive transcranial magnetic stimulation (rTMS) was applied to medial occipital cortex before presentation of the same task. Performance was impaired after rTMS compared with a sham control condition; similar results were obtained when the subjects performed the task by actually looking at the stimuli. In sum, the PET results showed that when patterns of stripes are visualized, Area 17 is activated, and the rTMS results showed that such activation underlies information processing.


Subject(s)
Brain Mapping , Imagination/physiology , Visual Cortex/physiology , Adult , Humans , Magnetics , Male , Memory/physiology , Tomography, Emission-Computed , Visual Cortex/diagnostic imaging , Visual Perception/physiology
8.
Perception ; 28(1): 89-108, 1999.
Article in English | MEDLINE | ID: mdl-10627855

ABSTRACT

A series of experiments was conducted to determine whether apparent motion tends to follow the similarity rule (i.e. is attribute-specific) and to investigate the underlying mechanism. Stimulus duration thresholds were measured during a two-alternative forced-choice task in which observers detected either the location or the motion direction of target groups defined by the conjunction of size and orientation. Target element positions were randomly chosen within a nominally defined rectangular subregion of the display (target region). The target region was presented either statically (followed by a 250 ms duration mask) or dynamically, displaced by a small distance (18 min of arc) from frame to frame. In the motion display, the position of both target and background elements was changed randomly from frame to frame within the respective areas to abolish spatial correspondence over time. Stimulus duration thresholds were lower in the motion than in the static task, indicating that target detection in the dynamic condition does not rely on the explicit identification of target elements in each static frame. Increasing the distractor-to-target ratio was found to reduce detectability in the static, but not in the motion task. This indicates that the perceptual segregation of the target is effortless and parallel with motion but not with static displays. The pattern of results holds regardless of the task or search paradigm employed. The detectability in the motion condition can be improved by increasing the number of frames and/or by reducing the width of the target area. Furthermore, parallel search in the dynamic condition can be conducted with both short-range and long-range motion stimuli. Finally, apparent motion of conjunctions is insufficient on its own to support location decision and is disrupted by random visual noise. Overall, these findings show that (i) the mechanism underlying apparent motion is attribute-specific; (ii) the motion system mediates temporal integration of feature conjunctions before they are identified by the static system; and (iii) target detectability in these stimuli relies upon a nonattentive, cooperative, directionally selective motion mechanism that responds to high-level attributes (conjunction of size and orientation).
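The duration thresholds reported above come from a two-alternative forced-choice procedure. The sketch below shows one generic way such a threshold can be estimated with an adaptive staircase; the stepping rules, step sizes, and simulated observer are assumptions for illustration, not the procedure used in the study.

```python
# Generic 2AFC duration-threshold staircase (illustration only, not the paper's method).
import random

def run_staircase(p_correct_at, start_ms=300.0, step_ms=20.0, n_trials=60):
    """2-down/1-up staircase on stimulus duration; converges near ~71% correct."""
    duration, correct_streak, reversals, last_dir = start_ms, 0, [], 0
    for _ in range(n_trials):
        correct = random.random() < p_correct_at(duration)  # simulated observer response
        if correct:
            correct_streak += 1
            if correct_streak < 2:
                continue                                    # need 2 correct before stepping down
            correct_streak, direction = 0, -1               # 2 correct -> shorter (harder) duration
            duration = max(10.0, duration - step_ms)
        else:
            correct_streak, direction = 0, +1               # 1 error -> longer (easier) duration
            duration += step_ms
        if last_dir and direction != last_dir:              # direction change = reversal
            reversals.append(duration)
        last_dir = direction
    last_reversals = reversals[-6:]
    return sum(last_reversals) / len(last_reversals) if last_reversals else duration

# Toy observer whose accuracy grows with stimulus duration (illustrative only)
estimate = run_staircase(lambda d: 0.5 + 0.5 * min(1.0, d / 400.0))
print(f"estimated duration threshold: {estimate:.0f} ms")
```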


Subject(s)
Motion Perception , Optical Illusions , Adult , Analysis of Variance , Contrast Sensitivity , Eye Movements , Humans , Psychophysiology , Sensory Thresholds
9.
Psychophysiology ; 35(3): 240-51, 1998 May.
Article in English | MEDLINE | ID: mdl-9564744

ABSTRACT

The nature and early time course of the initial processing differences between visually matched linguistic and nonlinguistic images were studied with event-related potentials (ERPs). The first effect began at 90 ms when ERPs to written words diverged from other objects, including faces. By 125 ms, ERPs to words and faces were more positive than those to other objects, effects identified with the P150. The amplitude and scalp distribution of P150s to words and faces were similar. The P150 seemed to be elicited selectively by images resembling any well-learned category of visual patterns. We propose that (a) visual perceptual categorization based on long-term experience begins by 125 ms, (b) P150 amplitude varies with the cumulative experience people have discriminating among instances of specific categories of visual objects (e.g., words, faces), and (c) the P150 is a scalp reflection of letterstring and face intracranial ERPs in posterior fusiform gyrus.


Subject(s)
Reading , Social Perception , Visual Perception/physiology , Adolescent , Adult , Electroencephalography , Evoked Potentials , Face , Humans
10.
J Cogn Neurosci ; 8(2): 89-106, 1996.
Article in English | MEDLINE | ID: mdl-23971417

ABSTRACT

Event-related brain potentials (ERPs) from 26 scalp sites were used to investigate whether or not and, if so, the extent to which the brain processes subserving the understanding of imageable written words and line drawings are identical. Sentences were presented one word at a time to 28 undergraduates for comprehension. Each sentence ended with either a written word (regular sentences) or with a line drawing (rebus sentences) that rendered it semantically congruous or semantically incongruous. For half of the subjects regular and rebus sentences were randomly intermixed whereas for the remaining half the regular and rebus sentences were presented in separate blocks (affording within-subject comparisons in both cases). In both presentation formats, words and line drawings generated greater negativity between 325 and 475 msec post-stimulus in ERPs to incongruous relative to congruous sentence endings (i.e., an N400-like effect). While the time course of this negativity was remarkably similar for words and pictures, there were notable differences in their scalp distributions; specifically, the classic N400 effect for words was larger posteriorly than it was for pictures. The congruity effect for pictures but not for words was also associated with a longer duration (lower frequency) negativity over frontal sites. In addition, under the mixed presentation mode, the N400 effect peaked about 30 msec earlier for pictures than for words. All in all, the data suggest that written words and pictures when they terminate sentences are processed similarly, but by at least partially nonoverlapping brain areas.

11.
Psychol Res ; 55(1): 1-9, 1993.
Article in English | MEDLINE | ID: mdl-8480001

ABSTRACT

To examine the conditions in which human observers fail to recover the rigid structure of a three-dimensional object in motion, we used simulations of discrete helices with various pitches undergoing either pure rotation in depth (rigid stimuli) or rotation plus stretching (non-rigid stimuli). Subjects had either to rate stimuli on a rigidity scale (Experiments 1 and 2) or to judge the amount of rotation of the helices (Experiments 3 and 4). We found that perceived rigidity depended on the pitch of the helix rather than on objective non-rigidity. Furthermore, we found that helices with a large pitch/radius ratio were perceived as highly non-rigid and that their rotation was underestimated. Experiment 5 showed that the detection of a pair of rigidly related dots (located on the helix) against a background of randomly moving dots is easier at small phases, at which the change of orientation across frames is also small. We suggest that this is because at small phases the dots are grouped into virtual lines, and that this grouping may be an important factor in the perceived non-rigidity of the helices.


Subject(s)
Attention , Depth Perception , Optical Illusions , Pattern Recognition, Visual , Adult , Female , Humans , Male , Orientation
12.
Perception ; 22(1): 23-34, 1993.
Article in English | MEDLINE | ID: mdl-8474832

ABSTRACT

Stroboscopic simulations of three-dimensional rotating rigid structures can be perceived as highly nonrigid. To investigate this nonrigidity effect, a sequence of either three frames (experiments 2 and 3) or thirty-six frames (experiment 4) was used, each frame consisting of a set of dots whose locations on the horizontal axis corresponded to the parallel projection of a nominally defined helix. Observers were asked to judge the angle of rotation of eighty helices defined by the factorial combination of eight phase (phi) values (i.e. the difference between the sinusoidal path of one dot and that of its neighbours) and ten different angular displacement values (alpha). When the dots in each static frame can be organized into curved dotted lines (small values of phi), the perceived 3-D helices are highly nonrigid. But when shape information is not available in each static frame (high values of phi), the helices are perceived as rigid and rotation judgement is possible provided that alpha < 15 degrees. It appears that at small values of phi observers fail to recover the rigid structure of the helices, since the input to the structure-from-motion process may be distorted.
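The helix stimuli described above admit a compact parametric description: dot i sits at a fixed height on the helix axis, and its horizontal position in each frame is the parallel projection of a point whose azimuth advances by alpha per frame and differs by phi between neighbouring dots. The sketch below is a minimal reconstruction under those stated parameters; the particular values, spacing, and function names are illustrative, not taken from the paper.

```python
# Minimal sketch of the helix displays: x_i(frame) = r * sin(i*phi + frame*alpha),
# so neighbouring dots follow sinusoidal paths offset by the phase phi and every
# frame advances each dot by the angular displacement alpha. Illustrative only.
import numpy as np

def helix_frames(n_dots=20, n_frames=36, phi_deg=30.0, alpha_deg=10.0,
                 radius=1.0, dot_spacing=0.1):
    """Return x positions with shape (n_frames, n_dots) and fixed y positions (n_dots,)."""
    phi, alpha = np.radians(phi_deg), np.radians(alpha_deg)
    i = np.arange(n_dots)
    y = i * dot_spacing                               # height along the helix axis
    frames = np.arange(n_frames)[:, None]
    x = radius * np.sin(i * phi + frames * alpha)     # parallel projection onto the horizontal axis
    return x, y

x, y = helix_frames(phi_deg=15.0, alpha_deg=10.0)
print(x.shape, y.shape)   # (36, 20) (20,)
```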


Subject(s)
Form Perception , Rotation , Visual Perception , Adult , Depth Perception , Female , Humans , Male , Motion
13.
Perception ; 22(2): 215-28, 1993.
Article in English | MEDLINE | ID: mdl-8474846

ABSTRACT

When a plaid pattern composed of a stationary vertical grating and a horizontally drifting diagonal grating is shown behind a circular aperture, the pattern appears to move coherently in a vertical direction. When the bars of the stationary grating are narrower than those of the moving grating, only the latter is seen to move, in a direction orthogonal to its orientation (ie diagonal); but when the bars of the stationary grating are wider than those of the moving grating, vertical motion of the whole plaid predominates. It is argued that, in the absence of occlusion information, the motion of a plaid within an aperture depends on the unambiguous displacement of inner line terminators at the crossings of the two gratings. Relative motion and differences in bar width between the two gratings provide information about which set of bars is in front of the other. When these sources of information are consistent with each other, separation of the two gratings in depth occurs: inner line terminators no longer perceptually exist and the direction of motion becomes determined only by terminators at the edges, which causes a shift from vertical to orthogonal motion. Differences in luminance also provide (asymmetrical) information about depth relationships: when darker bars occlude lighter bars, the probability of orthogonal motion increases as a function of the difference in luminance, whereas when lighter bars are over darker bars, vertical motion prevails.


Subject(s)
Depth Perception , Visual Perception , Female , Humans , Male , Movement , Vertical Dimension