Results 1 - 10 of 10
1.
Behav Res Methods ; 52(2): 544-560, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31161427

ABSTRACT

Lexical-gustatory (LG) synesthesia is an intriguing neurological condition in which individuals experience phantom tastes when hearing, speaking, reading, or thinking about words. For example, the word "society" might flood the mouth of an LG synesthete with the flavor of fried onion. The condition is usually verified in individuals by obtaining verbal descriptions of their word-flavor associations on more than one occasion, separated by several months. Their flavor associations are significantly more consistent over time than are those of controls (who are asked to invent associations by intuition and to recall them from memory). Although this test reliably dissociates synesthetes from nonsynesthetes, it suffers from practical and methodological limitations. Here we present a novel, automated, online consistency test, which can be administered in just 30 min in order to instantly and objectively verify LG synesthesia. We present data from two versions of our diagnostic test, in which synesthetes report their synesthetic flavors either from a hierarchical set of food categories (Exp. 1) or by specifying their basic component tastes (sweet, salty, bitter, etc.; Exp. 2). We tested the largest sample of self-declared LG synesthetes studied to date and used receiver operating characteristic analysis to assess the discriminant power of our tests. Although both our methods discriminated synesthetes from controls, our second test (Exp. 2) had greater discriminatory power with a threshold cutoff. We suggest that our novel diagnostic for LG synesthesia has unprecedented benefits in its automated and objective scoring, its ease of use for participants and researchers, its short testing time, and its online platform.
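The receiver operating characteristic analysis mentioned in the abstract can be illustrated with a minimal sketch. The scores below are invented for illustration (the paper's actual consistency metric is not reproduced here); the sketch only shows how an AUC summarises how well a consistency score separates the two groups.

```python
# Hedged sketch: ROC AUC for a consistency score separating synaesthetes
# from controls. All scores are hypothetical, not the paper's data.

def roc_auc(positives, negatives):
    """Probability that a randomly chosen positive outscores a randomly
    chosen negative (equivalent to the area under the ROC curve)."""
    wins = ties = 0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(positives) * len(negatives))

# Hypothetical consistency scores (higher = more consistent over time).
synaesthetes = [0.92, 0.88, 0.95, 0.81, 0.90]
controls = [0.45, 0.60, 0.52, 0.70, 0.48]

print(roc_auc(synaesthetes, controls))  # prints 1.0: complete separation
```

An AUC of 0.5 would mean the score carries no diagnostic information; a threshold cutoff, as in Exp. 2, turns the continuous score into a binary diagnosis.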


Subject(s)
Synesthesia , Automation , Color Perception , Computers , Humans , Reading , Taste
2.
J Exp Psychol Hum Percept Perform ; 44(8): 1283-1293, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29733674

ABSTRACT

Sight and sound are out of synch in different people by different amounts for different tasks. But surprisingly, different concurrent measures of perceptual asynchrony correlate negatively (Freeman et al., 2013). Thus, if vision subjectively leads audition in one individual, the same individual might show a visual lag in other measures of audiovisual integration (e.g., McGurk illusion, Stream-Bounce illusion). This curious negative correlation was first observed between explicit temporal order judgments and implicit phoneme identification tasks, performed concurrently as a dual task, using incongruent McGurk stimuli. Here we used a new set of explicit and implicit tasks and congruent stimuli, to test whether this negative correlation persists across testing sessions, and whether it might be an artifact of using specific incongruent stimuli. None of these manipulations eliminated the negative correlation between explicit and implicit measures. This supports the generalizability and validity of the phenomenon, and offers new theoretical insights into its explanation. Our previously proposed "temporal renormalization" theory assumes that the timings of sensory events registered within the brain's different multimodal subnetworks are each perceived relative to a representation of the typical average timing of such events across the wider network. Our new data suggest that this representation is stable and generic, rather than dependent on specific stimuli or task contexts, and that it may be acquired through experience with a variety of simultaneous stimuli. Our results also add further evidence that speech comprehension may be improved in some individuals by artificially delaying voices relative to lip-movements.


Subject(s)
Individuality , Pattern Recognition, Visual/physiology , Speech Perception/physiology , Adult , Female , Humans , Male , Psychological Theory , Time Factors , Young Adult
3.
Conscious Cogn ; 61: 79-93, 2018 May.
Article in English | MEDLINE | ID: mdl-29673773

ABSTRACT

People with sequence-space synaesthesia visualize sequential concepts such as numbers and time as an ordered pattern extending through space. Unlike other types of synaesthesia, there is no generally agreed objective method for diagnosing this variant or separating it from potentially related aspects of cognition. We use a recently developed spatial consistency test together with a novel questionnaire on naïve samples and estimate the prevalence of sequence-space synaesthesia to be around 8.1% (Study 1) to 12.8% (Study 2). We validate our test by showing that participants classified as having sequence-space synaesthesia perform differently on lab-based tasks: they show spatial Stroop-like interference, enhanced detection of low-visibility Gabor stimuli, greater reported use of visual imagery, and improved memory for certain types of public events. We suggest that sequence-space synaesthesia develops from a particular neurocognitive profile linked to both greater visual imagery and enhanced visual perception.


Subject(s)
Imagination/physiology , Perceptual Disorders/diagnosis , Space Perception/physiology , Visual Perception/physiology , Adolescent , Adult , Female , Humans , Male , Middle Aged , Perceptual Disorders/diet therapy , Perceptual Disorders/epidemiology , Prevalence , Synesthesia , Young Adult
4.
Cortex ; 105: 74-82, 2018 Aug.
Article in English | MEDLINE | ID: mdl-28732750

ABSTRACT

In this study we show that personality traits predict the physical qualities of mentally generated colours, using the case of synaesthesia. Developmental grapheme-colour synaesthetes have the automatic lifelong association of colours paired to letters or digits. Although these colours are internal mental constructs, they can be measured along physical dimensions such as saturation and luminance. The personality of synaesthetes can also be quantified using self-report questionnaires relating, for example, to the five major traits of Conscientiousness, Extraversion, Agreeableness, Neuroticism, and Openness to Experience. In this paper, we bring together both types of quality by examining whether the personality of individual synaesthetes predicts their synaesthetic colours. Twenty grapheme-colour synaesthetes were tested with the Big Five Inventory (BFI) personality questionnaire. Their synaesthesia was also tested in terms of consistency and average colour saturation and luminance. Two major results were found: although personality did not influence the overall robustness (i.e., consistency) of synaesthesia, it predicted the nature of synaesthetes' colours: the trait of Openness was positively correlated with the saturation of synaesthetic colours. Our study provides evidence that personality and internal perception are intertwined, and suggests future avenues of research for investigating the associations between the two.


Subject(s)
Color Perception/physiology , Color , Imagery, Psychotherapy , Personality/physiology , Adolescent , Adult , Female , Humans , Imagery, Psychotherapy/methods , Male , Middle Aged , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Synesthesia/physiopathology , Young Adult
5.
Neuropsychologia ; 106: 407-416, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28919244

ABSTRACT

Developmental grapheme-colour synaesthesia is a rare condition in which colours become automatically paired with letters or digits in the minds of certain individuals during childhood, and remain paired into adulthood. Although synaesthesia is well understood in younger adults, almost nothing is known about synaesthesia in aging. We present the first evidence that aging desaturates synaesthetic colours in the minds of older synaesthetes, and we show for the first time that aging affects the key diagnostic measure of synaesthesia (consistency of colours over time). We screened ~4000 members of the general population to identify grapheme-colour synaesthetes, targeting both younger and older adults. We found proportionally fewer older than younger synaesthetes, not only because fewer older people self-reported the condition, but because fewer also passed the objective diagnostic test. We examined the roots of this apparent decline in grapheme-colour synaesthesia, finding that the internal mental colours of synaesthetes become less saturated in older subjects, and importantly, that low-saturated colours are linked with test failure. We discuss what these findings mean for a novel field of aging and synaesthesia research, in terms of the lifespan development of synaesthesia and how best to diagnose synaesthesia in later life.


Subject(s)
Aging , Perceptual Disorders/psychology , Adolescent , Adult , Aged , Aged, 80 and over , Color Perception , Female , Humans , Male , Middle Aged , Pattern Recognition, Visual , Self Report , Synesthesia , Young Adult
6.
Sci Rep ; 7: 46413, 2017 Apr 21.
Article in English | MEDLINE | ID: mdl-28429784

ABSTRACT

Are sight and sound out of synch? Signs that they are have been dismissed for over two centuries as an artefact of attentional and response bias, to which traditional subjective methods are prone. To avoid such biases, we measured performance on objective tasks that depend implicitly on achieving good lip-synch. We measured the McGurk effect (in which incongruent lip-voice pairs evoke illusory phonemes), and also identification of degraded speech, while manipulating audiovisual asynchrony. Peak performance was found at an average auditory lag of ~100 ms, but this varied widely between individuals. Participants' individual optimal asynchronies showed trait-like stability when the same task was re-tested one week later, but measures based on different tasks did not correlate. This discounts the possible influence of common biasing factors, suggesting instead that our different tasks probe different brain networks, each subject to their own intrinsic auditory and visual processing latencies. Our findings call for renewed interest in the biological causes and cognitive consequences of individual sensory asynchronies, leading potentially to fresh insights into the neural representation of sensory timing. A concrete implication is that speech comprehension might be enhanced, by first measuring each individual's optimal asynchrony and then applying a compensatory auditory delay.
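The closing suggestion above — measure an individual's optimal asynchrony, then apply a compensatory auditory delay — can be sketched in a few lines. The performance values below are invented for illustration and are not the study's data; only the logic of picking the best-performing lag is shown.

```python
# Hedged sketch: find an individual's optimal audiovisual asynchrony from
# performance measured at several auditory lags, then use that lag as the
# compensatory delay. All numbers are hypothetical.

performance = {
    # auditory lag (ms) -> proportion of degraded speech correctly identified
    0: 0.55,
    50: 0.62,
    100: 0.71,  # peak near ~100 ms, as the abstract reports on average
    150: 0.66,
    200: 0.58,
}

# The lag with the best performance is this observer's optimal asynchrony.
optimal_lag_ms = max(performance, key=performance.get)
print(optimal_lag_ms)  # prints 100
```

In practice the curve would be estimated per individual, since the abstract stresses that optimal asynchronies vary widely between people and between tasks.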


Subject(s)
Attention/physiology , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adolescent , Female , Humans , Individuality , Male , Photic Stimulation , Voice , Young Adult
7.
Neuropsychologia ; 85: 169-76, 2016 May.
Article in English | MEDLINE | ID: mdl-27001029

ABSTRACT

Considerable research has addressed whether the cognitive and neural representations recruited by faces are similar to those engaged by other types of visual stimuli. For example, research has examined the extent to which objects of expertise recruit holistic representation and engage the fusiform face area. Little is known, however, about the domain-specificity of the exemplar pooling processes thought to underlie the acquisition of familiarity with particular facial identities. In the present study we sought to compare observers' ability to learn facial identities and handwriting styles from exposure to multiple exemplars. Crucially, while handwritten words and faces differ considerably in their topographic form, both learning tasks share a common exemplar pooling component. In our first experiment, we find that typical observers' ability to learn facial identities and handwriting styles from exposure to multiple exemplars correlates closely. In our second experiment, we show that observers with Autism Spectrum Disorder (ASD) are impaired at both learning tasks. Our findings suggest that similar exemplar pooling processes are recruited when learning facial identities and handwriting styles. Models of exemplar pooling originally developed to explain face learning, may therefore offer valuable insights into exemplar pooling across a range of domains, extending beyond faces. Aberrant exemplar pooling, possibly resulting from structural differences in the inferior longitudinal fasciculus, may underlie difficulties recognising familiar faces often experienced by individuals with ASD, and leave observers overly reliant on local details present in particular exemplars.


Subject(s)
Autistic Disorder/physiopathology , Face , Handwriting , Pattern Recognition, Visual/physiology , Recognition, Psychology/physiology , Adult , Brain Mapping , Female , Humans , Learning/physiology , Male , Middle Aged , Photic Stimulation , Reaction Time/physiology , Young Adult
8.
J Exp Psychol Hum Percept Perform ; 42(5): 706-18, 2016 May.
Article in English | MEDLINE | ID: mdl-26618622

ABSTRACT

Motor theories of expression perception posit that observers simulate facial expressions within their own motor system, aiding perception and interpretation. Consistent with this view, reports have suggested that blocking facial mimicry induces expression labeling errors and alters patterns of ratings. Crucially, however, it is unclear whether changes in labeling and rating behavior reflect genuine perceptual phenomena (e.g., greater internal noise associated with expression perception or interpretation) or are products of response bias. In an effort to advance this literature, the present study introduces a new psychophysical paradigm for investigating motor contributions to expression perception that overcomes some of the limitations inherent in simple labeling and rating tasks. Observers were asked to judge whether smiles drawn from a morph continuum were sincere or insincere, in the presence or absence of a motor load induced by the concurrent production of vowel sounds. Having confirmed that smile sincerity judgments depend on cues from both eye and mouth regions (Experiment 1), we demonstrated that vowel production reduces the precision with which smiles are categorized (Experiment 2). In Experiment 3, we replicated this effect when observers were required to produce vowels, but not when they passively listened to the same vowel sounds. In Experiments 4 and 5, we found that gender categorizations, equated for difficulty, were unaffected by vowel production, irrespective of the presence of a smiling expression. These findings greatly advance our understanding of motor contributions to expression perception and represent a timely contribution in light of recent high-profile challenges to the existing evidence base.


Subject(s)
Facial Expression , Facial Recognition/physiology , Social Perception , Speech/physiology , Adult , Female , Humans , Male , Smiling/physiology
9.
J Exp Psychol Hum Percept Perform ; 41(3): 577-81, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25867504

ABSTRACT

Differences in the visual processing of familiar and unfamiliar faces have prompted considerable interest in face learning, the process by which unfamiliar faces become familiar. Previous work indicates that face learning is determined in part by exposure duration; unsurprisingly, viewing faces for longer affords superior performance on subsequent recognition tests. However, there has been further speculation that exemplar variation, experience of different exemplars of the same facial identity, contributes to face learning independently of viewing time. Several leading accounts of face learning, including the averaging and pictorial coding models, predict an exemplar variation advantage. Nevertheless, the exemplar variation hypothesis currently lacks empirical support. The present study therefore sought to test this prediction by comparing the effects of unique exemplar face learning (a condition rich in exemplar variation) and repeated exemplar face learning (a condition that equates viewing time but constrains exemplar variation). Crucially, observers who received unique exemplar learning displayed better recognition of novel exemplars of the learned identities at test than observers in the repeated exemplar condition. These results have important theoretical and substantive implications for models of face learning and for approaches to face training in applied contexts.


Subject(s)
Facial Recognition , Recognition, Psychology , Adult , Female , Humans , Learning , Male , Photic Stimulation
10.
Cortex ; 49(10): 2875-87, 2013.
Article in English | MEDLINE | ID: mdl-23664001

ABSTRACT

The sight and sound of a person speaking or a ball bouncing may seem simultaneous, but their corresponding neural signals are spread out over time as they arrive at different multisensory brain sites. How subjective timing relates to such neural timing remains a fundamental neuroscientific and philosophical puzzle. A dominant assumption is that temporal coherence is achieved by sensory resynchronisation or recalibration across asynchronous brain events. This assumption is easily confirmed by estimating subjective audiovisual timing for groups of subjects, which is on average similar across different measures and stimuli, and approximately veridical. But few studies have examined normal and pathological individual differences in such measures. Case PH, with lesions in pons and basal ganglia, hears people speak before seeing their lips move. Temporal order judgements (TOJs) confirmed this: voices had to lag lip-movements (by ∼200 msec) to seem synchronous to PH. Curiously, voices had to lead lips (also by ∼200 msec) to maximise the McGurk illusion (a measure of audiovisual speech integration). On average across these measures, PH's timing was therefore still veridical. Age-matched control participants showed similar discrepancies. Indeed, normal individual differences in TOJ and McGurk timing correlated negatively: subjects needing an auditory lag for subjective simultaneity needed an auditory lead for maximal McGurk, and vice versa. This generalised to the Stream-Bounce illusion. Such surprising antagonism seems opposed to good sensory resynchronisation, yet average timing across tasks was still near-veridical. Our findings reveal remarkable disunity of audiovisual timing within and between subjects. To explain this we propose that the timing of audiovisual signals within different brain mechanisms is perceived relative to the average timing across mechanisms. Such renormalisation fully explains the curious antagonistic relationship between disparate timing estimates in PH and healthy participants, and how they can still perceive the timing of external events correctly, on average.
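The renormalisation proposal above can be illustrated with a toy calculation (values invented, not the paper's data): if each mechanism's audiovisual delay is perceived relative to the mean delay across mechanisms, individual measures can disagree in sign while their average remains veridical.

```python
# Hedged sketch of the "temporal renormalisation" idea: each mechanism's
# intrinsic audiovisual delay is judged relative to the across-mechanism
# mean. All delay values are hypothetical.

def renormalise(delays_ms):
    """Subtract the mean across mechanisms from each mechanism's delay."""
    mean = sum(delays_ms.values()) / len(delays_ms)
    return {task: d - mean for task, d in delays_ms.items()}

# Hypothetical intrinsic auditory delays (ms) for one observer's mechanisms.
intrinsic = {"TOJ": 150, "McGurk": -50, "StreamBounce": 20}

perceived = renormalise(intrinsic)
print(perceived)  # {'TOJ': 110.0, 'McGurk': -90.0, 'StreamBounce': -20.0}
print(sum(perceived.values()))  # 0.0: veridical on average
```

After renormalisation, mechanisms slower than the average appear to lag and faster ones appear to lead (the antagonism observed between TOJ and McGurk estimates), yet the perceived delays sum to zero, matching the near-veridical average timing reported for PH and controls.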


Subject(s)
Auditory Perception/physiology , Cognition Disorders/psychology , Illusions/psychology , Visual Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Aged , Aging/psychology , Algorithms , Attention/physiology , Basal Ganglia/pathology , Cognition Disorders/pathology , Computer Simulation , Diffusion Tensor Imaging , Female , Humans , Image Processing, Computer-Assisted , Intelligence Tests , Magnetic Resonance Imaging , Male , Middle Aged , Myasthenia Gravis/complications , Myasthenia Gravis/psychology , Photic Stimulation , Pons/pathology , Psychometrics , Space Perception/physiology , Speech Perception/physiology , Young Adult