Results 1 - 20 of 62
1.
Front Psychol ; 15: 1350631, 2024.
Article in English | MEDLINE | ID: mdl-38966733

ABSTRACT

Core to understanding emotion are subjective experiences and their expression in facial behavior. Past studies have largely focused on six emotions and prototypical facial poses, reflecting limitations in scale and narrow assumptions about the variety of emotions and their patterns of expression. We examine 45,231 facial reactions to 2,185 evocative videos, largely in North America, Europe, and Japan, collecting participants' self-reported experiences in English or Japanese and manual and automated annotations of facial movement. Guided by Semantic Space Theory, we uncover 21 dimensions of emotion in the self-reported experiences of participants in Japan, the United States, and Western Europe, and considerable cross-cultural similarities in experience. Facial expressions predict at least 12 dimensions of experience, despite massive individual differences in experience. We find considerable cross-cultural convergence in the facial actions involved in the expression of emotion, alongside culture-specific display tendencies: many facial movements differ in intensity in Japan compared to the U.S./Canada and Europe but represent similar experiences. These results quantitatively detail how people in dramatically different cultures experience and express emotion in a high-dimensional, categorical, and similar but complex fashion.

2.
Cogn Emot ; : 1-17, 2024 Jul 07.
Article in English | MEDLINE | ID: mdl-38973174

ABSTRACT

Previous research has demonstrated that individuals from Western cultures exhibit categorical perception (CP) in their judgments of emotional faces. However, the extent to which this phenomenon characterises the judgments of facial expressions among East Asians remains relatively unexplored. Building upon recent findings showing that East Asians are more likely than Westerners to see a mixture of emotions in facial expressions of anger and disgust, the present research aimed to investigate whether East Asians also display CP for angry and disgusted faces. To address this question, participants from Canada and China were recruited to discriminate pairs of faces along the anger-disgust continuum. The results revealed the presence of CP in both cultural groups, as participants consistently exhibited higher accuracy and faster response latencies when discriminating between-category pairs of expressions compared to within-category pairs. Moreover, the magnitude of CP did not vary significantly across cultures. These findings provide novel evidence supporting the existence of CP for facial expressions in both East Asian and Western cultures, suggesting that CP is a perceptual phenomenon that transcends cultural boundaries. This research contributes to the growing literature on cross-cultural perceptions of facial expressions by deepening our understanding of how facial expressions are perceived categorically across cultures.

3.
Proc Biol Sci ; 291(2027): 20240958, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39013420

ABSTRACT

Darwin proposed that blushing, the reddening of the face owing to heightened self-awareness, is 'the most human of all expressions'. Yet, relatively little is known about the underlying mechanisms of blushing. Theories diverge on whether it is a rapid, spontaneous emotional response that does not involve reflection upon the self or whether it results from higher-order socio-cognitive processes. Investigating the neural substrates of blushing can shed light on the mental processes underlying blushing and the mechanisms involved in self-awareness. To reveal neural activity associated with blushing, 16- to 20-year-old participants (n = 40) watched pre-recorded videos of themselves (versus other people as a control condition) singing karaoke in a magnetic resonance imaging scanner. We measured participants' cheek temperature increase (an indicator of blushing) and their brain activity. The results showed that blushing was higher when participants watched themselves sing than when they watched others. Those who blushed more while watching themselves sing had, on average, higher activation in the cerebellum (lobule V) and the left paracentral lobe, and exhibited more time-locked processing of the videos in early visual cortices. These findings show that blushing is associated with the activation of brain areas involved in emotional arousal, suggesting that it may occur independently of higher-order socio-cognitive processes. Our results provide new avenues for future research on self-awareness in infants and non-human animals.


Subject(s)
Cheek , Emotions , Magnetic Resonance Imaging , Humans , Male , Young Adult , Adolescent , Female , Cheek/physiology , Brain/physiology , Singing
4.
Emotion ; 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38884970

ABSTRACT

When in distress, people often seek help in regulating their emotions by sharing them with others. Paradoxically, although people perceive such social sharing as beneficial, it often fails to promote emotional recovery. This may be explained by people seeking, and eliciting, emotional support, which offers only momentary relief. We hypothesized that (1) the type of support sharers seek shapes the corresponding support provided by listeners, (2) the intensity of sharers' emotions increases their desire for emotional support and decreases their desire for cognitive support, and (3) listeners' empathic accuracy promotes support provision that matches sharers' desires. In 8-min interactions, participants (N = 208; data collected in 2016-2017) were randomly assigned to the role of sharer (asked to discuss an upsetting situation) or listener (instructed to respond naturally). Next, participants watched their video-recorded interaction in 20-s fragments. Sharers rated their emotional intensity and support desires, and listeners rated the sharer's emotional intensity and their own support provision. First, we found that the desire for support predicted corresponding support provision. Second, the intensity of sharers' emotions was associated with an increased desire for both emotional and cognitive support. Third, the more accurately listeners judged sharers' emotional intensity, the more they fulfilled sharers' emotional (but not cognitive) support desire. These findings suggest that people have partial control over the success of their social sharing in bringing about effective interpersonal emotion regulation. People elicit the support they desire at that moment, which explains why they perceive sharing as beneficial even though it may not engender emotional recovery. (PsycInfo Database Record (c) 2024 APA, all rights reserved).

5.
Cogn Emot ; : 1-19, 2023 Nov 24.
Article in English | MEDLINE | ID: mdl-37997898

ABSTRACT

When we hear another person laugh or scream, can we tell the kind of situation they are in, for example, whether they are playing or fighting? Nonverbal expressions are theorised to vary systematically across behavioural contexts. Perceivers might be sensitive to these putative systematic mappings and thereby correctly infer contexts from others' vocalisations. Here, in two pre-registered experiments, we test the prediction that listeners can accurately deduce production contexts (e.g. being tickled, discovering threat) from spontaneous nonverbal vocalisations, like sighs and grunts. In Experiment 1, listeners (total n = 3120) matched 200 nonverbal vocalisations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners were accurate at matching vocalisations to nine of the contexts. In Experiment 2, listeners (n = 337) categorised the production contexts by selecting from 10 response options in a forced-choice task. By analysing unbiased hit rates, we show that participants categorised all 10 contexts at better-than-chance levels. Together, these results demonstrate that perceivers can infer contexts from nonverbal vocalisations at rates exceeding chance, suggesting that listeners are sensitive to systematic mappings between acoustic structures in vocalisations and behavioural contexts.
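The two accuracy measures this abstract names, signal detection analysis (for the yes/no matching task) and unbiased hit rates (for the forced-choice task), can be sketched in a few lines. The following is an illustrative Python sketch, not the authors' analysis code: the hit and false-alarm figures are invented for the example, and the unbiased hit rate follows Wagner's (1993) formula for category-judgment studies.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

def unbiased_hit_rate(hits, stimulus_total, response_total):
    """Wagner's Hu: hits^2 / (stimulus total * response total).
    Corrects the raw hit rate for how often a response category is used."""
    return hits ** 2 / (stimulus_total * response_total)

# Invented example: a listener says "yes" to 80% of true tickling clips
# but also to 20% of clips from other contexts:
print(round(d_prime(0.8, 0.2), 2))  # 1.68

# Invented confusion-matrix counts: 8 of 10 tickling clips labelled
# "tickling", with "tickling" chosen 16 times in total across all clips:
print(unbiased_hit_rate(8, 10, 16))  # 0.4
```

A d' of 0 corresponds to chance responding, so testing whether d' exceeds 0 (or whether Hu exceeds its chance-level counterpart) is one standard way to establish better-than-chance matching.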

6.
Proc Natl Acad Sci U S A ; 120(37): e2218593120, 2023 09 12.
Article in English | MEDLINE | ID: mdl-37676911

ABSTRACT

Despite the variability of music across cultures, some types of human songs share acoustic characteristics. For example, dance songs tend to be loud and rhythmic, and lullabies tend to be quiet and melodious. Human perceptual sensitivity to the behavioral contexts of songs, based on these musical features, suggests that basic properties of music are mutually intelligible, independent of linguistic or cultural content. Whether these effects reflect universal interpretations of vocal music, however, is unclear because prior studies focus almost exclusively on English-speaking participants, a group that is not representative of humans. Here, we report shared intuitions concerning the behavioral contexts of unfamiliar songs produced in unfamiliar languages, in participants living in Internet-connected industrialized societies (n = 5,516 native speakers of 28 languages) or smaller-scale societies with limited access to global media (n = 116 native speakers of three non-English languages). Participants listened to songs randomly selected from a representative sample of human vocal music, originally used in four behavioral contexts, and rated the degree to which they believed the song was used for each context. Listeners in both industrialized and smaller-scale societies inferred the contexts of dance songs, lullabies, and healing songs, but not love songs. Within and across cohorts, inferences were mutually consistent. Further, increased linguistic or geographical proximity between listeners and singers only minimally increased the accuracy of the inferences. These results demonstrate that the behavioral contexts of three common forms of music are mutually intelligible cross-culturally and imply that musical diversity, shaped by cultural evolution, is nonetheless grounded in some universal perceptual phenomena.


Subject(s)
Cultural Evolution , Music , Humans , Language , Linguistics , Acoustics
7.
Emotion ; 23(1): 243-260, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35266776

ABSTRACT

People do not always show how they feel; norms often dictate when to display emotions and to whom. Norms about emotional expressions, known as display rules, are weaker for happiness than for negative emotions, suggesting that expressing positive emotions is generally seen as acceptable. But does it follow that all positive emotions can always be shown to everyone? To answer this question, we mapped out context-specific display rules for 8 positive emotions: gratitude, admiration, interest, relief, amusement, feeling moved, sensory pleasure, and triumph. In four studies with participants from five countries (n = 1,181), two consistent findings emerged. First, display rules differed between positive emotions. Weaker display rules were found for gratitude, interest, and amusement, whereas stronger display rules were found for sensory pleasure, feeling moved, and to some degree triumph. Second, contextual features, such as expresser location and perceiver relationship, substantially influenced display rules for positive emotions, with perceiver relationship having a greater impact than expresser location. Our findings demonstrate that some positive emotions are less acceptable to express than others and highlight the central role of context in shaping display rules even for emotions that feel good. In so doing, we provide the first map of expression norms for specific positive emotions. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Emotions , Happiness , Humans , Pleasure , Data Management
8.
BMC Psychol ; 10(1): 257, 2022 Nov 08.
Article in English | MEDLINE | ID: mdl-36348466

ABSTRACT

BACKGROUND: Syrian refugees comprise the vast majority of refugees in the Netherlands. Although some research has been carried out on factors promoting refugee resilience, there have been few empirical studies on the resilience of Syrian refugees. METHOD: We used a qualitative method to understand adversity, emotion, and the factors contributing to resilience in Syrian refugees. We interviewed eighteen adult Syrian refugees residing in the Netherlands and used thematic analysis to identify themes. RESULTS: We organized the identified themes into three main parts describing the challenges (pre- and post-resettlement), key emotions pertaining to those experiences, and resilience factors. We found six primary protective factors internally and externally promoting participants' resilience: future orientation, coping strategies, social support, opportunities, religiosity, and cultural identity. In addition, positive emotions constituted a key feature of refugees' resilience. CONCLUSION: The results highlight the challenges and emotions at each stage of the Syrian refugees' journey and the multitude of factors affecting their resilience. Our findings on religiosity and maintaining cultural identity suggest that resilience can be enhanced at a cultural level, so these aspects are worth considering when designing prevention or intervention programs for Syrian refugees.


Subject(s)
Refugees , Adult , Humans , Refugees/psychology , Syria , Netherlands , Emotions , Adaptation, Psychological
9.
Cogn Emot ; 36(3): 388-401, 2022 05.
Article in English | MEDLINE | ID: mdl-35639090

ABSTRACT

Social Functionalist Theory (SFT) emerged 20 years ago to orient emotion science to the social nature of emotion. Here we expand upon SFT and make the case for how emotions, relationships, and culture constitute one another. First, we posit that emotions enable the individual to meet six "relational needs" within social interactions: security, commitment, status, trust, fairness, and belongingness. Building upon this new theorising, we detail four principles concerning emotional experience, cognition, expression, and the cultural archiving of emotion. We conclude by considering the bidirectional influences between culture, relationships, and emotion, outlining areas of future inquiry.


Subject(s)
Cognition , Emotions , Humans
10.
Philos Trans R Soc Lond B Biol Sci ; 377(1841): 20200404, 2022 01 03.
Article in English | MEDLINE | ID: mdl-34775822

ABSTRACT

Laughter is a ubiquitous social signal. Recent work has highlighted distinctions between spontaneous and volitional laughter, which differ in terms of both production mechanisms and perceptual features. Here, we test listeners' ability to infer group identity from volitional and spontaneous laughter, as well as the perceived positivity of these laughs across cultures. Dutch (n = 273) and Japanese (n = 131) participants listened to decontextualized laughter clips and judged (i) whether the laughing person was from their cultural in-group or an out-group; and (ii) whether they thought the laughter was produced spontaneously or volitionally. They also rated the positivity of each laughter clip. Using frequentist and Bayesian analyses, we show that listeners were able to infer group membership from both spontaneous and volitional laughter, and that performance was equivalent for both types of laughter. Spontaneous laughter was rated as more positive than volitional laughter across the two cultures, and in-group laughs were perceived as more positive than out-group laughs by Dutch but not Japanese listeners. Our results demonstrate that both spontaneous and volitional laughter can be used by listeners to infer laughers' cultural group identity. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.


Subject(s)
Laughter , Auditory Perception , Bayes Theorem , Emotions , Group Processes , Humans
11.
Philos Trans R Soc Lond B Biol Sci ; 376(1840): 20200386, 2021 12 20.
Article in English | MEDLINE | ID: mdl-34719255

ABSTRACT

Research on within-individual modulation of vocal cues is surprisingly scarce outside of human speech. Yet, voice modulation serves diverse functions in human and nonhuman nonverbal communication, from dynamically signalling motivation and emotion, to exaggerating physical traits such as body size and masculinity, to enabling song and musicality. The diversity of anatomical, neural, cognitive and behavioural adaptations necessary for the production and perception of voice modulation make it a critical target for research on the origins and functions of acoustic communication. This diversity also implicates voice modulation in numerous disciplines and technological applications. In this two-part theme issue comprising 21 articles from leading and emerging international researchers, we highlight the multidisciplinary nature of the voice sciences. Every article addresses at least two, if not several, critical topics: (i) development and mechanisms driving vocal control and modulation; (ii) cultural and other environmental factors affecting voice modulation; (iii) evolutionary origins and adaptive functions of vocal control including cross-species comparisons; (iv) social functions and real-world consequences of voice modulation; and (v) state-of-the-art in multidisciplinary methodologies and technologies in voice modulation research. With this collection of works, we aim to facilitate cross-talk across disciplines to further stimulate the burgeoning field of voice modulation. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.


Subject(s)
Social Change , Voice , Emotions , Humans , Male , Nonverbal Communication , Speech
12.
J Nonverbal Behav ; 45(4): 419-454, 2021.
Article in English | MEDLINE | ID: mdl-34744232

ABSTRACT

The human voice communicates emotion through two different types of vocalizations: nonverbal vocalizations (brief non-linguistic sounds like laughs) and speech prosody (tone of voice). Research examining recognizability of emotions from the voice has mostly focused on either nonverbal vocalizations or speech prosody, and included few categories of positive emotions. In two preregistered experiments, we compare human listeners' (total n = 400) recognition performance for 22 positive emotions from nonverbal vocalizations (n = 880) to that from speech prosody (n = 880). The results show that listeners were more accurate in recognizing most positive emotions from nonverbal vocalizations compared to prosodic expressions. Furthermore, acoustic classification experiments with machine learning models demonstrated that positive emotions are expressed with more distinctive acoustic patterns for nonverbal vocalizations as compared to speech prosody. Overall, the results suggest that vocal expressions of positive emotions are communicated more successfully when expressed as nonverbal vocalizations compared to speech prosody. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s10919-021-00375-1.
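The acoustic classification experiments this abstract mentions test whether emotion categories form distinct clusters in acoustic feature space. As a hedged illustration of that general idea (not the authors' model or feature set), the sketch below trains a minimal nearest-centroid classifier on invented two-dimensional acoustic features; real studies of this kind typically use many more features and cross-validated models.

```python
import math

def nearest_centroid_fit(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, value in enumerate(vec):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def nearest_centroid_predict(centroids, vec):
    """Assign vec to the label whose centroid is closest (Euclidean)."""
    return min(centroids, key=lambda lab: math.dist(vec, centroids[lab]))

# Toy features (mean pitch in Hz, relative intensity); values invented:
train = [([220.0, 0.9], "amusement"), ([240.0, 0.8], "amusement"),
         ([140.0, 0.3], "relief"), ([150.0, 0.4], "relief")]
model = nearest_centroid_fit(train)
print(nearest_centroid_predict(model, [230.0, 0.85]))  # amusement
```

If a held-out vocalisation is classified correctly above chance from its acoustic features alone, the emotion categories are, in this sense, acoustically distinctive, which is the logic behind the abstract's machine-learning comparison of vocalisations and prosody.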

13.
Psychol Sci ; 32(12): 2035-2041, 2021 12.
Article in English | MEDLINE | ID: mdl-34788164

ABSTRACT

Older age is characterized by more positive and less negative emotional experience. Recent work by Carstensen et al. (2020) demonstrated that the age advantages in emotional experience have persisted during the COVID-19 pandemic. In two studies, we replicated and extended this work. In Study 1, we conducted a large-scale test of the robustness of Carstensen and colleagues' findings using data from 23,350 participants in 63 countries. Our results confirm that age advantages in emotions have persisted during the COVID-19 pandemic. In Study 2, we directly compared the age advantages before and during the COVID-19 pandemic in a within-participants study (N = 4,370). We found that the age advantages in emotions decreased during the pandemic. These findings are consistent with theoretical proposals that the age advantages reflect older adults' ability to avoid situations that are likely to cause negative emotions, which is challenging under conditions of sustained unavoidable stress.


Subject(s)
COVID-19 , Pandemics , Aged , Aging , Emotions , Humans , SARS-CoV-2
14.
Biol Lett ; 17(9): 20210319, 2021 09.
Article in English | MEDLINE | ID: mdl-34464539

ABSTRACT

Human adult laughter is characterized by vocal bursts produced predominantly during exhalation, yet apes laugh while exhaling and inhaling. The current study investigated our hypothesis that laughter of human infants changes from laughter similar to that of apes to increasingly resemble that of human adults over early development. We further hypothesized that the more laughter is produced on the exhale, the more positively it is perceived. To test these predictions, novice (n = 102) and expert (phonetician, n = 15) listeners judged the extent to which human infant laughter (n = 44) was produced during inhalation or exhalation, and the extent to which they found the laughs pleasant and contagious. Support was found for both hypotheses, which were further confirmed in two pre-registered replication studies. Likely through social learning and the anatomical development of the vocal production system, infants' initial ape-like laughter transforms into laughter similar to that of adult humans over the course of ontogeny.


Subject(s)
Hominidae , Laughter , Voice , Adult , Animals , Emotions , Humans , Infant
15.
J Intell ; 9(2)2021 May 07.
Article in English | MEDLINE | ID: mdl-34067013

ABSTRACT

Individual differences in understanding other people's emotions have typically been studied with recognition tests using prototypical emotional expressions. These tests have been criticized for the use of posed, prototypical displays, raising the question of whether such tests tell us anything about the ability to understand spontaneous, non-prototypical emotional expressions. Here, we employ the Emotional Accuracy Test (EAT), which uses natural emotional expressions and defines recognition as the match between the emotion ratings of a target and a perceiver. In two preregistered studies (total N = 231), we compared performance on the EAT with two well-established tests of emotion recognition ability: the Geneva Emotion Recognition Test (GERT) and the Reading the Mind in the Eyes Test (RMET). We found significant overlap (r > 0.20) between individuals' performance in recognizing spontaneous emotions in naturalistic settings (EAT) and on posed (or enacted) non-verbal measures of emotion recognition (GERT, RMET), even when controlling for individual differences in verbal IQ. On average, however, participants reported enjoying the EAT more than the other tasks. Thus, the current research provides a proof-of-concept validation of the EAT as a useful measure for testing the understanding of others' emotions, a crucial feature of emotional intelligence. Further, our findings indicate that emotion recognition tests using prototypical expressions are valid proxies for measuring the understanding of others' emotions in more realistic everyday contexts.
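The EAT's core idea, scoring recognition as the match between a target's own emotion ratings and a perceiver's ratings, can be made concrete with a small sketch. One common way to quantify such a profile match is a Pearson correlation across the emotion scales; this is an illustration of the concept, not the study's actual scoring procedure, and the scale names and ratings below are invented.

```python
from math import sqrt

def profile_match(target_ratings, perceiver_ratings):
    """Pearson correlation between a target's and a perceiver's
    rating profiles across the same set of emotion scales."""
    n = len(target_ratings)
    mt = sum(target_ratings) / n
    mp = sum(perceiver_ratings) / n
    cov = sum((t - mt) * (p - mp)
              for t, p in zip(target_ratings, perceiver_ratings))
    st = sqrt(sum((t - mt) ** 2 for t in target_ratings))
    sp = sqrt(sum((p - mp) ** 2 for p in perceiver_ratings))
    return cov / (st * sp)

# Invented ratings on [anger, sadness, fear, shame] scales:
target = [5, 2, 1, 4]      # how the target said they felt
perceiver = [4, 2, 2, 5]   # how the perceiver judged the target felt
print(round(profile_match(target, perceiver), 2))  # 0.85
```

A perceiver whose profile correlates highly with the target's across scales "understands" that target's emotional state well, regardless of whether any single rating matches exactly.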

16.
Front Psychol ; 12: 579474, 2021.
Article in English | MEDLINE | ID: mdl-34122207

ABSTRACT

Positive emotions are linked to numerous benefits, but not everyone appreciates the same kinds of positive emotional experiences. We examine how distinct positive emotions are perceived and whether individuals' perceptions are linked to how societies evaluate those emotions. Participants from Hong Kong and the Netherlands rated 23 positive emotions on three aspects of individual perception (positivity, arousal, and social engagement) and three aspects of societal evaluation (appropriate, valued, and approved of). We found that (1) there were cultural differences in judgments about all six aspects of positive emotions; (2) positivity, arousal, and social engagement predicted emotions being positively regarded at the societal level in both cultures; and (3) positivity mattered more for the Dutch participants, whereas arousal and social engagement mattered more in Hong Kong for societal evaluations. These findings provide a granular map of the perception and evaluation of distinct positive emotions in two cultures and highlight the role of culture in understanding how positive emotions are perceived and evaluated.

17.
Cogn Emot ; 35(6): 1175-1186, 2021 09.
Article in English | MEDLINE | ID: mdl-34000966

ABSTRACT

The perception of multisensory emotion cues is affected by culture. For example, East Asians rely more on vocal, as compared to facial, affective cues compared to Westerners. However, it is unknown whether these cultural differences exist in childhood, and if not, which processing style is exhibited in children. The present study tested East Asian and Western children, as well as adults from both cultural backgrounds, to probe cross-cultural similarities and differences at different ages, and to establish the weighting of each modality at different ages. Participants were simultaneously shown a face and a voice expressing either congruent or incongruent emotions, and were asked to judge whether the person was happy or angry. Replicating previous research, East Asian adults relied more on vocal cues than did Western adults. Young children from both cultural groups, however, behaved like Western adults, relying primarily on visual information. The proportion of responses based on vocal cues increased with age in East Asian, but not Western, participants. These results suggest that culture is an important factor in developmental changes in the perception of facial and vocal affective information.


Subject(s)
Facial Expression , Voice , Adult , Anger , Child , Child, Preschool , Emotions , Humans , Perception
18.
Proc Biol Sci ; 287(1929): 20201148, 2020 06 24.
Article in English | MEDLINE | ID: mdl-32546102

ABSTRACT

Vocalizations linked to emotional states are partly conserved among phylogenetically related species. This continuity may allow humans to accurately infer affective information from vocalizations produced by chimpanzees. In two pre-registered experiments, we examine human listeners' ability to infer behavioural contexts (e.g. discovering food) and core affect dimensions (arousal and valence) from 155 vocalizations produced by 66 chimpanzees in 10 different positive and negative contexts at high, medium or low arousal levels. In experiment 1, listeners (n = 310) categorized the vocalizations in a forced-choice task with 10 response options, and rated arousal and valence. In experiment 2, participants (n = 3120) matched vocalizations to production contexts using yes/no response options. The results show that listeners were accurate at matching vocalizations of most contexts, in addition to inferring arousal and valence. Judgments were more accurate for negative as compared to positive vocalizations. An acoustic analysis demonstrated that listeners made use of brightness and duration cues, relied on noisiness in making context judgements, and used pitch to infer core affect dimensions. Overall, the results suggest that human listeners can infer affective information from chimpanzee vocalizations beyond core affect, indicating phylogenetic continuity in the mapping of vocalizations to behavioural contexts.


Subject(s)
Auditory Perception , Pan troglodytes , Acoustics , Affect , Animals , Cues , Emotions , Female , Humans , Male , Noise
19.
J Exp Soc Psychol ; 87: 103912, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32127724

ABSTRACT

Empathizing with others is widely presumed to increase our understanding of their emotions. Little is known, however, about which empathic processes actually help people recognize others' feelings more accurately. Here, we probed the relationship between emotion recognition and two empathic processes: spontaneously felt similarity (having had a similar experience) and deliberate perspective taking (focusing on the other vs. oneself). We report four studies in which participants (total N = 803) watched videos of targets sharing genuine negative emotional experiences. Participants' multi-scalar ratings of the targets' emotions were compared with the targets' own emotion ratings. In Study 1 we found that having had an experience similar to the one the target was sharing was associated with lower recognition of the target's emotions. Study 2 replicated the same pattern and additionally showed that making participants' own imagined reaction to the described event salient further reduced accuracy. Studies 3 and 4 were preregistered replications and extensions of Studies 1 and 2, in which we observed the same outcome using a different stimulus set, indicating the robustness of the finding. Moreover, Study 4 directly investigated the mechanism underlying the observed effect. Findings showed that perceivers who had had a negative life experience similar to the emotional event described in the video felt greater personal distress after watching the video, which in part explained their reduced accuracy. These results provide the first demonstration that spontaneous empathy, evoked by similarity in negative experiences, may inhibit rather than increase our understanding of others' emotions.

20.
Cogn Emot ; 34(6): 1112-1122, 2020 09.
Article in English | MEDLINE | ID: mdl-32046586

ABSTRACT

Theories on empathy have argued that feeling empathy for others is related to accurate recognition of their emotions. Previous research that tested this assumption, however, has reported inconsistent findings. We suggest that this inconsistency may be due to a lack of consideration of the fact that empathy has two facets: empathic concern, namely compassion for unfortunate others, and personal distress, the experience of discomfort in response to others' distress. We test the hypothesis that empathic concern is positively related to emotion recognition, whereas personal distress is negatively related to emotion recognition. Individual tendencies to respond with concern or distress were measured with the standard IRI (Interpersonal Reactivity Index) self-report questionnaire. Emotion recognition performance was assessed with three standard tests of nonverbal emotion recognition. Across two studies (total N = 431) and different emotion recognition tests, we found that these two facets of affective empathy have opposite relations to the recognition of facial expressions of emotions: empathic concern was positively related, while personal distress was negatively related, to accurate emotion recognition. These findings fit with existing motivational models of empathy, suggesting that empathic concern and personal distress have opposing impacts on the likelihood that empathy makes one a better emotion observer.


Subject(s)
Emotions , Empathy , Recognition, Psychology , Adult , Female , Humans , Male , Self Report , Surveys and Questionnaires , Young Adult