Results 1 - 6 of 6
1.
Autism ; 21(4): 412-422, 2017 May.
Article in English | MEDLINE | ID: mdl-27178994

ABSTRACT

Recent studies have examined non-suicidal self-injury in community and clinical samples, but there is no published research on non-suicidal self-injury in individuals with autism spectrum disorder. This lack of research is surprising, since individuals with autism spectrum disorder have high rates of risk factors for non-suicidal self-injury, including depression and poor emotion regulation skills. Using an online survey, we examined non-suicidal self-injury methods, frequency, severity, functions, and initial motivations in adults with autism spectrum disorder (n = 42). We also compared their non-suicidal self-injury characteristics to those of a gender-matched group of adults without autism spectrum disorder (n = 42). Of the participants with autism spectrum disorder, 50% reported a history of non-suicidal self-injury. This proportion is higher than non-suicidal self-injury rates previously reported for college students, adult community samples, and adolescents with autism spectrum disorder, which suggests that adults with autism spectrum disorder have increased risk for engaging in non-suicidal self-injury. Women with autism spectrum disorder were significantly more likely to endorse non-suicidal self-injury, relative to men with autism spectrum disorder. A history of non-suicidal self-injury was not related to current depression or emotion dysregulation for the participants with autism spectrum disorder. Non-suicidal self-injury characteristics among the adults with autism spectrum disorder were similar to non-suicidal self-injury in adults without autism spectrum disorder. These preliminary findings highlight the need for increased awareness and further research about non-suicidal self-injury within autism spectrum disorder.


Subject(s)
Autism Spectrum Disorder/psychology; Self-Injurious Behavior/epidemiology; Adolescent; Adult; Cross-Sectional Studies; Depression/epidemiology; Depression/psychology; Female; Humans; Male; Middle Aged; Psychiatric Status Rating Scales; Self-Injurious Behavior/psychology; Surveys and Questionnaires; Young Adult
2.
Curr Res Psychol ; 6(2): 22-30, 2016.
Article in English | MEDLINE | ID: mdl-28105290

ABSTRACT

Impairment in the ability to detect certain emotions, such as fear, is linked to multiple disorders and follows a pattern of inter-individual variability and intra-individual stability over time. Deficits in fear recognition are often related to social and interpersonal difficulties, but the mechanisms by which this processing deficit might occur are not well understood. One potential mechanism through which impaired fear detection may influence social competency is through diminished perspective-taking, the ability to perceive and consider the point of view of another individual. In the current study, we hypothesized that intra-individual variability in the accuracy of facial emotion recognition (FER) is linked to perspective-taking abilities in a well-characterized, non-clinical adult sample. Results indicated that the ability to accurately detect fear in the faces of others was positively correlated with perspective-taking, consistent with initial hypotheses. This relationship appeared to be unique to recognition of fear, as perspective-taking was not significantly associated with recognition of the other basic emotions. Results from this study represent an initial step towards establishing a potential mechanism between some processes of FER and perspective-taking difficulties. It is important to establish the relationship between these processes in a non-clinical adult sample so that we can consider the possibility of a developmental or pathological influence of impoverished perspective-taking on fear perception.

3.
Clin Psychol Sci ; 3(5): 797-815, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26504676

ABSTRACT

Neurotechnology is broadly defined as a set of devices used to understand neural processes and applications that can potentially facilitate the brain's ability to repair itself. In the past decade, an increasingly explicit understanding of basic biological mechanisms of brain-related illnesses has produced applications that allow a direct yet noninvasive method to index and manipulate the functioning of the human nervous system. Clinical scientists are poised to apply this technology to assess, treat, and better understand complex socioemotional processes that underlie many forms of psychopathology. In this review, we describe the potential benefits and hurdles, both technical and methodological, of neurotechnology in the context of clinical dysfunction. We also offer a framework for developing and evaluating neurotechnologies that is intended to expedite progress at the nexus of clinical science and neural interface designs by providing a comprehensive vocabulary to describe the necessary features of neurotechnology in the clinic.

4.
Int J Methods Psychiatr Res ; 24(4): 275-86, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26359940

ABSTRACT

Attention to faces is a fundamental psychological process in humans, with atypical attention to faces noted across several clinical disorders. Although many clinical disorders onset in adolescence, there is a lack of well-validated stimulus sets containing adolescent faces available for experimental use. Further, the images comprising most available sets are not controlled for high- and low-level visual properties. Here, we present a cross-site validation of the National Institute of Mental Health Child Emotional Faces Picture Set (NIMH-ChEFS), comprising 257 photographs of adolescent faces displaying angry, fearful, happy, sad, and neutral expressions. All of the direct facial images from the NIMH-ChEFS set were adjusted in terms of location of facial features and standardized for luminance, size, and smoothness. Although overall agreement between raters in this study and the original development-site raters was high (89.52%), this differed by group such that agreement was lower for adolescents relative to mental health professionals in the current study. These results suggest that future research using this face set or others of adolescent/child faces should base comparisons on similarly aged validation data.


Subject(s)
Emotions/physiology; Face; Health Occupations; Parents/psychology; Pattern Recognition, Visual/physiology; Adolescent; Age Factors; Analysis of Variance; Child; Female; Humans; Male; Middle Aged; National Institute of Mental Health (U.S.); Photic Stimulation/methods; Reproducibility of Results; Sex Characteristics; United States
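The 89.52% cross-site agreement reported above is a simple percent-agreement statistic between rater groups. As an illustration only (the rater labels below are hypothetical examples using the set's emotion categories, not data from the NIMH-ChEFS validation), percent agreement can be computed as:

```python
def percent_agreement(labels_a, labels_b):
    """Share of items (as a percentage) that two raters labeled identically."""
    if len(labels_a) != len(labels_b):
        raise ValueError("rater label lists must be the same length")
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return 100.0 * matches / len(labels_a)

# Hypothetical labels for six face images (categories from the picture set).
site_labels = ["angry", "fearful", "happy", "sad", "neutral", "happy"]
rater_labels = ["angry", "sad", "happy", "sad", "neutral", "happy"]

agreement = percent_agreement(site_labels, rater_labels)  # 5/6, about 83.3
```

Note that percent agreement does not correct for chance; studies often report Cohen's kappa alongside it for that reason.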
5.
PLoS Comput Biol ; 7(9): e1002165, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21998576

ABSTRACT

Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a "race" model failed to account for their behavior patterns. Conversely, a "superposition model", positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.


Subject(s)
Macaca fascicularis/physiology; Macaca fascicularis/psychology; Speech Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Animals; Computational Biology; Face; Female; Humans; Male; Models, Neurological; Models, Psychological; Photic Stimulation; Reaction Time/physiology; Species Specificity; Vocalization, Animal/physiology
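The superposition model named in the abstract posits that multisensory activity is the linear sum of the visual and auditory activity patterns, so a combined face-plus-voice signal reaches a detection threshold sooner than either component alone. A minimal sketch, assuming hypothetical constant unisensory drives and a fixed threshold (not the paper's fitted parameters):

```python
import numpy as np

def detection_time(drive, threshold):
    """First time step at which accumulated activity crosses the threshold
    (None if it never does)."""
    crossings = np.where(np.cumsum(drive) >= threshold)[0]
    return int(crossings[0]) if crossings.size else None

n_steps = 200
visual = np.full(n_steps, 0.4)    # hypothetical visual (face) drive
auditory = np.full(n_steps, 0.5)  # hypothetical auditory (voice) drive
threshold = 20.0

rt_v = detection_time(visual, threshold)              # face alone
rt_a = detection_time(auditory, threshold)            # voice alone
rt_av = detection_time(visual + auditory, threshold)  # linear summation
# The summed drive crosses threshold earlier than either unisensory drive.
# A race model, by contrast, takes the faster of two independent unisensory
# detections and so cannot produce this degree of facilitation.
```

This toy accumulator only illustrates why linear summation predicts faster multisensory detection; the published model is fit to behavioral response distributions.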
6.
PLoS Comput Biol ; 5(7): e1000436, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19609344

ABSTRACT

Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time-varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it has been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both the area of the mouth opening and the voice envelope are temporally modulated in the 2-7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.


Subject(s)
Image Processing, Computer-Assisted/methods; Mouth/physiology; Speech/physiology; Voice/physiology; Databases, Factual; Fourier Analysis; Humans; Models, Statistical; Signal Processing, Computer-Assisted; Speech Perception/physiology; Time Factors; Visual Perception/physiology
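The kinds of measurements described above, correlation between mouth-opening area and the acoustic envelope, and modulation in the 2-7 Hz band, can be illustrated with synthetic signals. The sampling rate, modulation frequency, and noise level below are hypothetical choices for the sketch, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                    # sampling rate in Hz (hypothetical)
t = np.arange(0, 10, 1 / fs)  # 10 s of synthetic signal

# Mouth-opening area modulated at 4 Hz, inside the 2-7 Hz band the study
# reports for both the mouth area and the voice envelope.
mouth_area = 1.0 + 0.5 * np.sin(2 * np.pi * 4.0 * t)

# Acoustic envelope modeled as a noisy scaled copy of the mouth signal.
envelope = 0.8 * mouth_area + 0.05 * rng.standard_normal(t.size)

# Pearson correlation between the two time series.
r = np.corrcoef(mouth_area, envelope)[0, 1]

# Dominant modulation frequency of the mouth signal via an FFT.
spectrum = np.abs(np.fft.rfft(mouth_area - mouth_area.mean()))
freqs = np.fft.rfftfreq(mouth_area.size, d=1 / fs)
peak_freq = freqs[np.argmax(spectrum)]  # recovers the 4 Hz modulation
```

Real mouth-area traces would come from video tracking and the envelope from the rectified, low-pass-filtered audio; the recipe (correlate, then locate the spectral peak) is the same.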