Results 1 - 20 of 72
1.
Schizophr Bull ; 2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38824450

ABSTRACT

BACKGROUND: Sensory suppression occurs when hearing one's self-generated voice, as opposed to passively listening to one's own voice. Quality changes in sensory feedback to the self-generated voice can increase attentional control. These changes affect the self-other voice distinction and might lead to hearing voices in the absence of an external source (ie, auditory verbal hallucinations). However, it is unclear how changes in sensory feedback processing and attention allocation interact and how this interaction might relate to hallucination proneness (HP). STUDY DESIGN: Participants varying in HP self-generated (via a button press) and passively listened to their own voice, which varied in emotional quality and certainty of recognition (100% neutral, 60%-40% neutral-angry, 50%-50% neutral-angry, 40%-60% neutral-angry, 100% angry), during electroencephalography (EEG) recordings. STUDY RESULTS: The N1 auditory evoked potential was more suppressed for self-generated than externally generated voices. Increased HP was associated with (1) an increased N1 response to the self- compared with externally generated voices, (2) a reduced N1 response for angry compared with neutral voices, and (3) a reduced N2 response to unexpected voice quality in sensory feedback (60%-40% neutral-angry) compared with neutral voices. CONCLUSIONS: The current study highlights an association between increased HP and systematic changes in the emotional quality and certainty in sensory feedback processing (N1) and attentional control (N2) in self-voice production in a nonclinical population. Considering that voice hearers also display these changes, these findings support the continuum hypothesis.

3.
J Neurosci Methods ; 407: 110138, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38648892

ABSTRACT

BACKGROUND: Resting state (RS) brain activity is inherently non-stationary. Hidden semi-Markov Models (HsMM) can characterize continuous RS data as a sequence of recurring and distinct brain states along with their spatio-temporal dynamics. NEW METHOD: Recent explorations suggest that HsMM state dynamics in the alpha frequency band link to auditory hallucination proneness (HP) in non-clinical individuals. The present study aimed to replicate these findings to elucidate robust neural correlates of hallucinatory vulnerability. Specifically, we aimed to investigate the reproducibility of HsMM states across different data sets and within-data set variants as well as the replicability of the association between alpha brain state dynamics and HP. RESULTS: We found that most brain states are reproducible in different data sets, confirming that the HsMM characterized robust and generalizable EEG RS dynamics on a sub-second timescale. Brain state topographies and temporal dynamics of different within-data set variants showed substantial similarities and were robust against reduced data length and number of electrodes. However, the association with HP was not directly reproducible across data sets. COMPARISON WITH EXISTING METHODS: The HsMM optimally leverages the high temporal resolution of EEG data and overcomes time-domain restrictions of other state allocation methods. CONCLUSION: The results indicate that the sensitivity of brain state dynamics to capture individual variability in HP may depend on the data recording characteristics and individual variability in RS cognition, such as mind wandering. Future studies should consider that the order in which eyes-open and eyes-closed RS data are acquired directly influences an individual's attentional state and generation of spontaneous thoughts, and thereby might mediate the link to hallucinatory vulnerability.
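The duration modeling that distinguishes a hidden semi-Markov model from a plain HMM can be illustrated with a short sketch. This is not the authors' pipeline: it is a toy generator with hypothetical names and a uniform dwell-time distribution chosen purely for illustration, showing how an HsMM draws an explicit duration for each state visit instead of the geometric dwell times implied by per-sample self-transitions.

```python
import random

def sample_state_sequence(n_samples, dwell_range, n_states=4, seed=0):
    """Generate a toy HsMM-style state sequence.

    Unlike a plain HMM, where dwell times arise implicitly from
    self-transition probabilities (and are therefore geometric), an
    HsMM samples an explicit duration for each state visit. Here the
    duration distribution is uniform over `dwell_range`, purely for
    illustration.
    """
    rng = random.Random(seed)
    seq = []
    state = rng.randrange(n_states)
    while len(seq) < n_samples:
        dwell = rng.randint(*dwell_range)  # explicit state duration
        seq.extend([state] * dwell)
        # transition to a *different* state (no self-transitions)
        state = rng.choice([s for s in range(n_states) if s != state])
    return seq[:n_samples]

seq = sample_state_sequence(1000, dwell_range=(5, 40))
```

Fitting real EEG data additionally requires state-specific observation models and an EM-style inference procedure, which dedicated HsMM toolboxes provide; the sketch only shows the generative idea behind sub-second state dwell times.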


Subject(s)
Alpha Rhythm , Hallucinations , Humans , Alpha Rhythm/physiology , Hallucinations/physiopathology , Adult , Male , Female , Electroencephalography/methods , Young Adult , Brain/physiology , Rest/physiology , Reproducibility of Results
4.
Emotion ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38512197

ABSTRACT

Although emotional mimicry is ubiquitous in social interactions, its mechanisms and roles remain disputed. A prevalent view is that imitating others' expressions facilitates emotional understanding, but the evidence is mixed and almost entirely based on facial emotions. In a preregistered study, we asked whether inhibiting orofacial mimicry affects authenticity perception in vocal emotions. Participants listened to authentic and posed laughs and cries, while holding a pen between the teeth and lips to inhibit orofacial responses (n = 75), or while responding freely without a pen (n = 75). They made authenticity judgments and rated how much they felt the conveyed emotions (emotional contagion). Mimicry inhibition decreased the accuracy of authenticity perception in laughter and crying, and in posed and authentic vocalizations. It did not affect contagion ratings, however, nor performance in a cognitive control task, ruling out the effort of holding the pen as an explanation for the decrements in authenticity perception. Laughter was more contagious than crying, and authentic vocalizations were more contagious than posed ones, regardless of whether mimicry was inhibited or not. These findings confirm the role of mimicry in emotional understanding and extend it to auditory emotions. They also imply that perceived emotional contagion can be unrelated to mimicry. (PsycInfo Database Record (c) 2024 APA, all rights reserved).

6.
Cortex ; 172: 254-270, 2024 03.
Article in English | MEDLINE | ID: mdl-38123404

ABSTRACT

The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-blindness onset. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.


Subject(s)
Laughter , Voice , Humans , Emotions/physiology , Blindness , Laughter/physiology , Social Perception , Electroencephalography , Evoked Potentials/physiology
7.
Q J Exp Psychol (Hove) ; 76(7): 1585-1598, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36114609

ABSTRACT

Good musical abilities are typically considered to be a consequence of music training, such that they are studied in samples of formally trained individuals. Here, we asked what predicts musical abilities in the absence of music training. Participants with no formal music training (N = 190) completed the Goldsmiths Musical Sophistication Index, measures of personality and cognitive ability, and the Musical Ear Test (MET). The MET is an objective test of musical abilities that provides a Total score and separate scores for its two subtests (Melody and Rhythm), which require listeners to determine whether standard and comparison auditory sequences are identical. MET scores had no associations with personality traits. They correlated positively, however, with informal musical experience and cognitive abilities. Informal musical experience was a better predictor of Melody than of Rhythm scores. Some participants (12%) had Total scores higher than the mean from a sample of musically trained individuals (⩾6 years of formal training), tested previously by Correia et al. Untrained participants with particularly good musical abilities (top 25%, n = 51) scored higher than trained participants on the Rhythm subtest and similarly on the Melody subtest. High-ability untrained participants were also similar to trained ones in cognitive ability, but lower in the personality trait openness-to-experience. These results imply that formal music training is not required to achieve musician-like performance on tests of musical and cognitive abilities. They also suggest that informal music practice and music-related predispositions should be considered in studies of musical expertise.


Subject(s)
Music , Humans , Adult , Music/psychology , Individuality , Cognition , Personality , Aptitude , Auditory Perception
8.
Emotion ; 23(6): 1740-1763, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36480404

ABSTRACT

The current meta-analysis examined the effects of valence and arousal on source memory accuracy, including the identification of variables that moderate the magnitude and direction of those effects. Fifty-three studies, comprising 85 individual experiments (N = 3,040 participants), were selected. Three separate analyses focusing on valence effects (valence-based: negative-neutral; positive-neutral; negative-positive) and another three focusing exclusively on arousal (arousal-based: high-low; medium-low; high-medium) were considered. Effect sizes varied from very small to medium. For the valence-based analyses, source memory accuracy was impaired for emotional compared with neutral stimuli (dunb = -.14 for negative-neutral; dunb = -.11 for positive-neutral), with a similar performance found for the negative-positive comparison (dunb = -.04). In the case of arousal-based analyses, source memory was improved for stimuli with high and medium arousal versus low arousal (dunb = .27, dunb = .49, respectively), with no statistically significant difference between high and medium arousal stimuli (dunb = -.12). Emotion effects on source memory were modulated by methodological factors. These factors may account for the variety of findings typically reported in emotion-related source memory research and could be systematically addressed in future studies. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
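The dunb values reported above are small-sample-corrected standardized mean differences (often called Hedges' g). A minimal sketch of that computation for a two-group design with a pooled standard deviation (function names and inputs are illustrative, not taken from the meta-analysis):

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference with a pooled SD (Cohen's d)."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                       / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

def d_unbiased(d, n1, n2):
    """Small-sample correction of d (Hedges' g, often written d_unb)."""
    df = n1 + n2 - 2
    return d * (1 - 3 / (4 * df - 1))

# e.g., two groups of 20 with a raw mean difference of half a pooled SD
g = d_unbiased(cohens_d(0.6, 0.5, 0.2, 0.2, 20, 20), 20, 20)
```

The correction factor shrinks d slightly toward zero, which matters most for the small samples typical of individual source-memory experiments.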


Subject(s)
Arousal , Emotions , Humans
9.
Cortex ; 158: 83-95, 2023 01.
Article in English | MEDLINE | ID: mdl-36473276

ABSTRACT

Both self-voice and emotional speech are salient signals that are prioritized in perception. Surprisingly, self-voice perception has been investigated to a lesser extent than the self-face. Therefore, it remains to be clarified whether self-voice prioritization is boosted by emotion, and whether self-relevance and emotion interact differently when attention is focused on who is speaking vs. what is being said. Thirty participants listened to 210 prerecorded words spoken in one's own or an unfamiliar voice and differing in emotional valence in two tasks, manipulating the attention focus on either speaker identity or speech emotion. Event-related potentials (ERP) of the electroencephalogram (EEG) informed on the temporal dynamics of self-relevance, emotion, and attention effects. Words spoken in one's own voice elicited a larger N1 and Late Positive Potential (LPP), but smaller N400. Identity and emotion interactively modulated the P2 (self-positivity bias) and LPP (self-negativity bias). Attention to speaker identity modulated more strongly ERP responses within 600 ms post-word onset (N1, P2, N400), whereas attention to speech emotion altered the late component (LPP). However, attention did not modulate the interaction of self-relevance and emotion. These findings suggest that the self-voice is prioritized for neural processing at early sensory stages, and that both emotion and attention shape self-voice prioritization in speech processing. They also confirm involuntary processing of salient signals (self-relevance and emotion) even in situations in which attention is deliberately directed away from those cues. These findings have important implications for a better understanding of symptoms thought to arise from aberrant self-voice monitoring such as auditory verbal hallucinations.


Subject(s)
Speech Perception , Voice , Humans , Male , Female , Speech , Electroencephalography , Evoked Potentials/physiology , Voice/physiology , Emotions/physiology , Hallucinations/psychology , Speech Perception/physiology
10.
Behav Res Methods ; 55(7): 3504-3512, 2023 10.
Article in English | MEDLINE | ID: mdl-36131196

ABSTRACT

The study of action observation and imagery, separately and combined, is expanding in diverse research areas (e.g., sports psychology, neurosciences), making clear the need for action-related stimuli (i.e., action statements, videos, and pictures). Although several databases of object and action pictures are available, norms on action videos are scarce. In this study, we validated a set of 60 object-related everyday actions in three different formats: action-statements, and corresponding dynamic (action videos) and static (object photos) stimuli. In Study 1, ratings of imageability, image agreement, action familiarity, action frequency, and action valence were collected from 161 participants. In Study 2, a different sample of 115 participants rated object familiarity, object valence, and object-action prototypicality. Most actions were rated as easy to imagine, familiar, and neutral or positive in valence. However, there was variation in the frequency with which participants perform these actions on a daily basis. High agreement between participants' mental image and action videos was also found, showing that the videos depict a conventional way of performing the actions. Objects were considered familiar and positive in valence. High ratings on object-action prototypicality indicate that the actions correspond to prototypical actions for most objects. 3ActStimuli is a comprehensive set of stimuli that can be useful in several research areas, allowing the combined study of action observation and imagery.


Subject(s)
Recognition, Psychology , Humans
11.
Front Hum Neurosci ; 16: 859731, 2022.
Article in English | MEDLINE | ID: mdl-35966990

ABSTRACT

Voices are a complex and rich acoustic signal processed in an extensive cortical brain network. Specialized regions within this network support voice perception and production and may be differentially affected in pathological voice processing. For example, the experience of hallucinating voices has been linked to hyperactivity in temporal and extra-temporal voice areas, possibly extending into regions associated with vocalization. Predominant self-monitoring hypotheses ascribe a primary role of voice production regions to auditory verbal hallucinations (AVH). Alternative postulations view a generalized perceptual salience bias as causal to AVH. These theories are not mutually exclusive as both ascribe the emergence and phenomenology of AVH to unbalanced top-down and bottom-up signal processing. The focus of the current study was to investigate the neurocognitive mechanisms underlying predisposition brain states for emergent hallucinations, detached from the effects of inner speech. Using the temporal voice area (TVA) localizer task, we explored putative hypersalient responses to passively presented sounds in relation to hallucination proneness (HP). Furthermore, to avoid confounds commonly found in clinical samples, we employed the Launay-Slade Hallucination Scale (LSHS) for the quantification of HP levels in healthy people across an experiential continuum spanning the general population. We report increased activation in the right posterior superior temporal gyrus (pSTG) during the perception of voice features that positively correlates with increased HP scores. In line with prior results, we propose that this right-lateralized pSTG activation might indicate early hypersensitivity to acoustic features coding speaker identity that extends beyond own voice production to perception in healthy participants prone to experience AVH.

12.
Cogn Affect Behav Neurosci ; 22(5): 1044-1062, 2022 10.
Article in English | MEDLINE | ID: mdl-35501427

ABSTRACT

Music training has been linked to facilitated processing of emotional sounds. However, most studies have focused on speech, and less is known about musicians' brain responses to other emotional sounds and in relation to instrument-specific experience. The current study combined behavioral and EEG methods to address two novel questions related to the perception of auditory emotional cues: whether and how long-term music training relates to a distinct emotional processing of nonverbal vocalizations and music; and whether distinct training profiles (vocal vs. instrumental) modulate brain responses to emotional sounds from early to late processing stages. Fifty-eight participants completed an EEG implicit emotional processing task, in which musical and vocal sounds differing in valence were presented as nontarget stimuli. After this task, participants explicitly evaluated the same sounds regarding the emotion being expressed, their valence, and arousal. Compared with nonmusicians, musicians displayed enhanced salience detection (P2), attention orienting (P3), and elaborative processing (Late Positive Potential) of musical (vs. vocal) sounds in event-related potential (ERP) data. The explicit evaluation of musical sounds also was distinct in musicians: accuracy in the emotional recognition of musical sounds was similar across valence types in musicians, who also judged musical sounds to be more pleasant and more arousing than nonmusicians. Specific profiles of music training (singers vs. instrumentalists) did not relate to differences in the processing of vocal vs. musical sounds. Together, these findings reveal that music has a privileged status in the auditory system of long-term musically trained listeners, irrespective of their instrument-specific experience.


Subject(s)
Music , Singing , Voice , Acoustic Stimulation , Auditory Perception/physiology , Electroencephalography , Humans
13.
Cortex ; 151: 116-132, 2022 06.
Article in English | MEDLINE | ID: mdl-35405538

ABSTRACT

Previous research has documented perceptual and brain differences between spontaneous and volitional emotional vocalizations. However, the time course of emotional authenticity processing remains unclear. We used event-related potentials (ERPs) to address this question, and we focused on the processing of laughter and crying. We additionally tested whether the neural encoding of authenticity is influenced by attention, by manipulating task focus (authenticity versus emotional category) and visual condition (with versus without visual deprivation). ERPs were recorded from 43 participants while they listened to vocalizations and evaluated their authenticity (volitional versus spontaneous) or emotional meaning (sad versus amused). Twenty-two of the participants were blindfolded and tested in a dark room, and 21 were tested in standard visual conditions. As compared to volitional vocalizations, spontaneous ones were associated with reduced N1 amplitude in the case of laughter, and increased P2 in the case of crying. At later cognitive processing stages, more positive amplitudes were observed for spontaneous (versus volitional) laughs and cries (1000-1400 msec), with earlier effects for laughs (700-1000 msec). Visual condition affected brain responses to emotional authenticity at early (P2 range) and late processing stages (middle and late LPP ranges). Task focus did not influence neural responses to authenticity. Our findings suggest that authenticity information is encoded early and automatically during vocal emotional processing. They also point to a potentially faster encoding of authenticity in laughter compared to crying.


Subject(s)
Laughter , Voice , Auditory Perception/physiology , Emotions/physiology , Evoked Potentials , Humans , Laughter/physiology
14.
Behav Res Methods ; 54(2): 955-969, 2022 04.
Article in English | MEDLINE | ID: mdl-34382202

ABSTRACT

We sought to determine whether an objective test of musical ability could be successfully administered online. A sample of 754 participants was tested with an online version of the Musical Ear Test (MET), which had Melody and Rhythm subtests. Both subtests had 52 trials, each of which required participants to determine whether standard and comparison auditory sequences were identical. The testing session also included the Goldsmiths Musical Sophistication Index (Gold-MSI), a test of general cognitive ability, and self-report questionnaires that measured basic demographics (age, education, gender), mind-wandering, and personality. Approximately 20% of the participants were excluded for incomplete responding or failing to finish the testing session. For the final sample (N = 608), findings were similar to those from in-person testing in many respects: (1) the internal reliability of the MET was maintained, (2) construct validity was confirmed by strong associations with Gold-MSI scores, (3) correlations with other measures (e.g., openness to experience, cognitive ability, mind-wandering) were as predicted, (4) mean levels of performance were similar for individuals with no music training, and (5) musical sophistication was a better predictor of performance on the Melody than on the Rhythm subtest. In sum, online administration of the MET proved to be a reliable and valid way to measure musical ability.
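Internal reliability of the kind reported for the MET is commonly estimated with Cronbach's alpha. The abstract does not state which reliability coefficient was used, so the following stdlib-only sketch is an assumption for illustration, with hypothetical input data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a test.

    `items` is a list of per-item score lists, aligned by participant
    (row = item, column = participant). Alpha compares the sum of the
    item variances against the variance of participants' total scores.
    """
    k = len(items)

    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(scores) for scores in zip(*items)]  # per-participant totals
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))

# hypothetical data: 3 items scored by 4 participants
alpha = cronbach_alpha([[1, 2, 2, 3], [1, 3, 2, 3], [2, 2, 1, 3]])
```

Alpha approaches 1 as items covary strongly (consistent measurement) and can even go negative when items are unrelated.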


Subject(s)
Music , Cognition , Humans , Music/psychology , Personality , Reproducibility of Results
15.
J ECT ; 38(1): 39-44, 2022 03 01.
Article in English | MEDLINE | ID: mdl-34739421

ABSTRACT

OBJECTIVES: Dementia with Lewy bodies (DLB) is a debilitating disorder associated with a number of distressing neuropsychiatric symptoms. There is currently limited guidance regarding the most effective strategies of managing these symptoms, and both pharmacologic and nonpharmacologic strategies are often used. Electroconvulsive therapy (ECT) has been reported as a potential nonpharmacologic method to alleviate some of these debilitating neuropsychiatric symptoms. However, there remains a paucity of evidence in current literature. This report aims to add to existing literature regarding ECT in DLB by highlighting successful treatment in seven cases. METHODS: Our study is a retrospective case series of 7 patients with DLB who received treatment with ultrabrief (UB) right unilateral (RUL) ECT for the treatment of agitation and depressive symptoms. Participants included patients with a diagnosis of DLB who were admitted to Emory University Hospital at Wesley Woods from 2011 to 2020 presenting with agitation and/or depressive symptoms after failing pharmacologic intervention. Patients underwent UB RUL ECT administered by a board-certified psychiatrist. After treatment, Pittsburgh Agitation Scale and Clinical Global Impression-Improvement scales were recorded as measures of agitation and clinical improvement, respectively. RESULTS: All 7 patients responded to UB RUL ECT with marked improvement in their presenting symptoms of agitation and/or depression without significant adverse effects from treatment. CONCLUSIONS: Ultrabrief RUL ECT seems to be a safe and effective treatment of the agitative and depressive features of DLB.


Subject(s)
Electroconvulsive Therapy , Lewy Body Disease , Electroconvulsive Therapy/methods , Humans , Lewy Body Disease/therapy , Retrospective Studies , Treatment Outcome
16.
Cogn Neuropsychiatry ; 27(2-3): 169-182, 2022.
Article in English | MEDLINE | ID: mdl-34261424

ABSTRACT

Introduction: Auditory verbal hallucinations (AVH) are a cardinal symptom of schizophrenia but are also reported in the general population without need for psychiatric care. Previous evidence suggests that AVH may reflect an imbalance of prior expectation and sensory information, and that altered salience processing is characteristic of both psychotic and non-clinical voice hearers. However, it remains to be shown how such an imbalance affects the categorisation of vocal emotions in perceptual ambiguity. Methods: Neutral and emotional nonverbal vocalisations were morphed along two continua differing in valence (anger; pleasure), each including 11 morphing steps at intervals of 10%. College students (N = 234) differing in AVH proneness (measured with the Launay-Slade Hallucination Scale) evaluated the emotional quality of the vocalisations. Results: Increased AVH proneness was associated with more frequent categorisation of ambiguous vocalisations as 'neutral', irrespective of valence. Similarly, the perceptual boundary for emotional classification was shifted by AVH proneness: participants needed more emotional information to categorise a voice as emotional. Conclusions: These findings suggest that emotional salience in vocalisations is dampened as a function of increased AVH proneness. This could be related to changes in the acoustic representations of emotions or reflect top-down expectations of less salient information in the social environment.
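The morphing design above can be sketched numerically. Note this is a simplification: voice-morphing studies typically interpolate acoustic parameters with dedicated tools rather than mixing raw waveforms, so the hypothetical linear mix below only illustrates the 11-step, 10%-interval weighting scheme between a neutral and an emotional endpoint.

```python
def morph_continuum(neutral, emotional, n_steps=11):
    """Linear morphs between two equal-length signals.

    Step k mixes (1 - w) * neutral + w * emotional, with w running
    from 0.0 (100% neutral) to 1.0 (100% emotional) in 10% steps
    when n_steps = 11.
    """
    assert len(neutral) == len(emotional)
    continuum = []
    for k in range(n_steps):
        w = k / (n_steps - 1)  # morph weight: 0.0, 0.1, ..., 1.0
        continuum.append([(1 - w) * a + w * b
                          for a, b in zip(neutral, emotional)])
    return continuum
```

The middle of such a continuum (w around 0.5) is where categorisation is maximally ambiguous, which is the region the perceptual-boundary analysis targets.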


Subject(s)
Schizophrenia , Voice , Anger , Emotions , Hallucinations/psychology , Humans
17.
J ECT ; 38(1): 2-9, 2022 03 01.
Article in English | MEDLINE | ID: mdl-34699395

ABSTRACT

Electroconvulsive therapy (ECT) remains stigmatized in the broader medical community because of misunderstandings about treatment procedures, mortality rates, and cardiovascular complications. Electroconvulsive therapy causes periprocedural hemodynamic variability because of the surges in parasympathetic and sympathetic nervous systems after the administration of the electrical charge. Patients experience an increase in cardiac workload, which is potentially dangerous for patients with preexisting heart disease. Several findings suggest that cardiac complications occur most frequently in patients with underlying cardiovascular disease. We describe the cardiovascular complications that may result from ECT treatment and offer insight on how to mitigate these concerns if they occur. PubMed was queried using terms "electroconvulsive therapy" and "cardiovascular adverse effects." A table is provided with the common cardiovascular side effects of ECT and the most recent evidence-based treatment strategies to manage them. Generally, ECT is a safe procedure in which complications are minor and manageable. Most major complications caused by ECT are related to the cardiovascular system; however, with an appropriate pre-ECT evaluation and a comprehensive multidisciplinary team approach, the cardiovascular complications can be well managed and minimized. Providing proper cardiac clearance can prevent cardiac complications and provide timely care to treatment-resistant populations who are at risk for excessive morbidity and suicide.


Subject(s)
Cardiovascular Diseases , Cardiovascular System , Electroconvulsive Therapy , Cardiovascular Diseases/etiology , Electroconvulsive Therapy/adverse effects , Hemodynamics , Humans
18.
Philos Trans R Soc Lond B Biol Sci ; 376(1840): 20200402, 2021 12 20.
Article in English | MEDLINE | ID: mdl-34719249

ABSTRACT

The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they can detect authenticity in other vocalizations, and whether authenticity determines the affective and social impressions that we form about others. Here, 137 participants listened to laughs and cries that could be spontaneous or volitional and rated them on authenticity, valence, arousal, trustworthiness and dominance. Bayesian mixed models indicated that listeners detect authenticity similarly well in laughter and crying. Speakers were also perceived to be more trustworthy, and in a higher arousal state, when their laughs and cries were spontaneous. Moreover, spontaneous laughs were evaluated as more positive than volitional ones, and we found that the same acoustic features predicted perceived authenticity and trustworthiness in laughter: high pitch, spectral variability and less voicing. For crying, associations between acoustic features and ratings were less reliable. These findings indicate that emotional authenticity shapes affective and social trait inferences from voices, and that the ability to detect authenticity in vocalizations is not limited to laughter. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.


Subject(s)
Laughter , Voice , Bayes Theorem , Emotions , Humans , Laughter/psychology , Sociological Factors
19.
Neuropsychologia ; 162: 108030, 2021 11 12.
Article in English | MEDLINE | ID: mdl-34563552

ABSTRACT

Alterations in the processing of vocal emotions have been associated with both clinical and non-clinical auditory verbal hallucinations (AVH), suggesting that changes in the mechanisms underpinning voice perception contribute to AVH. These alterations seem to be more pronounced in psychotic patients with AVH when attention demands increase. However, it remains to be clarified how attention modulates the processing of vocal emotions in individuals without clinical diagnoses who report hearing voices but no related distress. Using an active auditory oddball task, the current study clarified how emotion and attention interact during voice processing as a function of AVH proneness, and examined the contributions of stimulus valence and intensity. Participants with vs. without non-clinical AVH were presented with target vocalizations differing in valence (neutral; positive; negative) and intensity (55 decibels (dB); 75 dB). The P3b amplitude was larger in response to louder (vs. softer) vocal targets irrespective of valence, and in response to negative (vs. neutral) vocal targets irrespective of intensity. Of note, the P3b amplitude was globally increased in response to vocal targets in participants reporting AVH, and failed to be modulated by valence and intensity in these participants. These findings suggest enhanced voluntary attention to changes in vocal expressions but reduced discrimination of salient and non-salient cues. A decreased sensitivity to salience cues of vocalizations could contribute to increased cognitive control demands, setting the stage for an AVH.


Subject(s)
Hallucinations , Voice , Cues , Emotions , Humans
20.
Oncologist ; 26(11): 934-940, 2021 11.
Article in English | MEDLINE | ID: mdl-34369626

ABSTRACT

BACKGROUND: The use of molecular testing in oncology is rapidly expanding. The aim of this study was to determine how oncologists describe molecular testing and whether patients understand the terminology being used. MATERIALS AND METHODS: Sixty conversations between oncologists and patients about molecular testing were observed, and the used technical terms were noted by the researcher. Patients were interviewed post-conversation to assess their understanding of the noted technical terms. A patient understanding score was calculated for each participant. Comparisons of the terms were conducted using χ2 tests, Fisher's exact tests, or ANOVA when appropriate. RESULTS: Sixty-one unique technical terms were used by oncologists to describe seven topics. "Mutation" was a challenging term for patients to understand, with 48.8% (21/43 mentions) of participants correctly defining the term. "Genetic testing" and "Gene" were understood a little more than half the time (53.3% [8/15] and 56.4% [22/39], respectively). "DNA" was well understood (80% [12/15]). There was no correlation between the terms being defined by the oncologist in the conversation, and the likelihood of the patient providing a correct definition. White participants were significantly more likely to understand both "mutation" and "genetic testing" than non-White participants. Forty-two percent (n = 25) of participants had an understanding score below 50%, and a higher family income was significantly correlated with a higher score. CONCLUSION: Our results show that oncologists use variable terminology to describe molecular testing, which is often not understood. Because oncologists defining the terms did not correlate with understanding, it is imperative to develop new, improved methods to explain molecular testing.
IMPLICATIONS FOR PRACTICE: The use of molecular testing is expanding in oncology, yet little is known about how effectively clinicians are communicating information about molecular testing and whether patients understand the terminology used. The results of this study indicate that patients do not understand some of the terminology used by their clinicians and that clinicians tend to use highly variable terminology to describe molecular testing. These results highlight the need to develop and implement effective methods to explain molecular testing terminology to patients to ensure that patients have the tools to make autonomous and informed decisions about their treatment.
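Comparisons like the term-understanding contrasts above are often run as Pearson chi-square tests on 2x2 tables. A stdlib-only sketch follows; the counts reuse figures quoted in the abstract, but pairing them into a single table is illustrative, not the authors' actual analysis.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]],
    without continuity correction, plus its p-value (1 df)."""
    n = a + b + c + d
    stat = (n * (a * d - b * c) ** 2
            / ((a + b) * (c + d) * (a + c) * (b + d)))
    # For a chi-square variable with 1 df, P(X > x) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# illustrative table: "mutation" understood in 21 of 43 mentions,
# "DNA" understood in 12 of 15 mentions
stat, p = chi2_2x2(21, 43 - 21, 12, 15 - 12)
```

With small expected counts, Fisher's exact test (which the study also used) is the more appropriate choice; the chi-square shortcut above is shown only because it is the simpler closed-form computation.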


Subject(s)
Communication , Physicians , Humans , Molecular Diagnostic Techniques