Results 1 - 20 of 22
1.
Water Res ; 257: 121702, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38749337

ABSTRACT

While online monitoring of physicochemical parameters has widely been incorporated into drinking water treatment systems, online microbial monitoring has lagged behind, resulting in the use of surrogate parameters (disinfectant residual, applied dose, concentration × time, CT) to assess disinfection system performance. Online flow cytometry (online FCM) allows for automated quantification of total and intact microbial cells. This study sought to investigate the feasibility of online FCM for full-scale drinking water ozone disinfection system performance monitoring. A water treatment plant with high lime solids turbidity in the ozone contactor influent was selected to evaluate the online FCM in challenging conditions. Total and intact cell counts were monitored for 40 days and compared to surrogate parameters (ozone residual, ozone dose, and CT) and grab sample assay results for cellular adenosine triphosphate (cATP), heterotrophic plate counts (HPC), impedance flow cytometry, and 16S rRNA gene sequencing. Online FCM provided insight into the dynamics of the full-scale ozone system, including offering early warning of increased contactor effluent cell concentrations, which was not observed using surrogate measures. Positive correlations were observed between online FCM intact cell counts and cATP levels (Kendall's tau=0.40), HPC (Kendall's tau=0.20), and impedance flow cytometry results (Kendall's tau=0.30). Though a strong correlation between log intact cell removal and CT was not observed, 16S rRNA gene sequencing results showed that passage through the ozone contactor significantly changed the microbial community (p < 0.05). Potential causes of the low overall cell inactivation in the contactor and the significant changes in the microbial community after ozonation include regrowth in the later chambers of the contactor and varied ozone resistance of drinking water microorganisms. 
This study demonstrates the suitability of direct, online microbial analysis for monitoring full-scale disinfection systems.
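The correlations reported above use Kendall's tau, a rank correlation based on concordant versus discordant pairs. As a minimal sketch (pure Python, illustrative data only, not the study's measurements), the no-ties form of the statistic can be computed as:

```python
def kendall_tau(x, y):
    """Kendall's tau (no-ties form): (concordant - discordant) / total pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1   # pair ordered the same way in both series
            elif s < 0:
                discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical paired measurements (e.g., intact cell counts vs. cATP):
print(kendall_tau([1, 2, 3, 4, 5], [1, 3, 2, 4, 5]))  # → 0.8
```

In practice, `scipy.stats.kendalltau` would be used instead, since it handles ties (the tau-b variant) and returns a p-value.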


Subject(s)
Disinfection , Drinking Water , Flow Cytometry , Ozone , Water Purification , Flow Cytometry/methods , Disinfection/methods , Drinking Water/microbiology , Water Purification/methods
2.
Aggress Behav ; 50(3): e22148, 2024 May.
Article in English | MEDLINE | ID: mdl-38747497

ABSTRACT

Although there is a large research base on the psychological impacts of violent and prosocial visual media, there is little research addressing the impacts of violent and prosocial music, and which facets of the music have the greatest impact. Four experiments tested the impact of lyrics and/or musical tone on aggressive and prosocial behavior, and on underlying psychological processes, using purpose-built songs to avoid the effect of music-related confounds. In Study 1, where mildly aggressive, overtly aggressive and violent lyrics were compared to neutral lyrics, any level of lyrical aggression caused an increase in behavioral aggression, which plateaued for all three aggression conditions. Violent lyrics were better recalled than other lyrics one week later. In Studies 2 and 3, no significant effects of lyrics, or of aggressive versus nonaggressive musical tone, were found on aggressive or prosocial behavior. In terms of internal states, violent lyrics increased hostility/hostile cognitions in all studies, and negatively impacted affective state in three studies. Prosocial lyrics decreased hostility/hostile cognitions in three studies, but always in tandem with another factor. Aggressive musical tone increased physiological arousal in two studies and increased negative affect in one. In Study 4, those who listened to violent lyrics drove more aggressively on a simulated drive that included triggers for aggression. Overall, violent lyrics consistently elicited hostility/hostile cognitions and negative affect, but these did not always translate to aggressive behavior. Violent music seems more likely to elicit behavioral aggression when there are aggression triggers and a clear way to aggress. Implications are discussed.


Subject(s)
Aggression , Music , Humans , Music/psychology , Aggression/psychology , Male , Female , Adult , Young Adult , Violence/psychology , Hostility , Social Behavior , Adolescent , Emotions/physiology , Thinking/physiology
3.
Article in English | MEDLINE | ID: mdl-36982066

ABSTRACT

Many people listen to music that conveys challenging emotions such as sadness and anger, despite the commonly assumed purpose of media being to elicit pleasure. We propose that eudaimonic motivation, the desire to engage with aesthetic experiences to be challenged and facilitate meaningful experiences, can explain why people listen to music containing such emotions. However, it is unknown whether music containing violent themes can facilitate such meaningful experiences. In this investigation, three studies were conducted to determine the implications of eudaimonic and hedonic (pleasure-seeking) motivations for fans of music with violent themes. In Study 1, we developed and tested a new scale and showed that fans exhibit high levels of both types of motivation. Study 2 further validated the new scale and provided evidence that the two types of motivations are associated with different affective outcomes. Study 3 revealed that fans of violently themed music exhibited higher levels of eudaimonic motivation and lower levels of hedonic motivation than fans of non-violently themed music. Taken together, the findings support the notion that fans of music with violent themes are driven to engage with this music to be challenged and to pursue meaning, as well as to experience pleasure. Implications for fans' well-being and future applications of the new measure are discussed.


Subject(s)
Music , Pleasure , Humans , Motivation , Music/psychology , Emotions , Anger
4.
Article in English | MEDLINE | ID: mdl-36767286

ABSTRACT

Rich intercultural music engagement (RIME) is an embodied form of engagement whereby individuals immerse themselves in foreign musical practice, for example, by learning a traditional instrument from that culture. The present investigation evaluated whether RIME with Chinese or Middle Eastern music can nurture intercultural understanding. White Australian participants were randomly assigned to one of two plucked-string groups: Chinese pipa (n = 29) or Middle Eastern oud (n = 29). Before and after the RIME intervention, participants completed measures of ethnocultural empathy, tolerance, social connectedness, explicit and implicit attitudes towards ethnocultural groups, and open-ended questions about their experience. Following RIME, White Australian participants reported a significant increase in ethnocultural empathy, tolerance, feelings of social connection, and improved explicit and implicit attitudes towards Chinese and Middle Eastern people. However, these benefits differed between groups. Participants who learned Chinese pipa reported reduced bias and increased social connectedness towards Chinese people, but not towards Middle Eastern people. Conversely, participants who learned Middle Eastern oud reported a significant increase in social connectedness towards Middle Eastern people, but not towards Chinese people. This is the first experimental evidence that participatory RIME is an effective tool for understanding a culture other than one's own, with the added potential to reduce cultural bias.


Subject(s)
Culture , Music , Humans , Australia , Empathy , Learning
5.
Appl Ergon ; 108: 103954, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36566527

ABSTRACT

BACKGROUND: Ensuring that pool lifeguards develop the skills necessary to detect drowning victims is challenging given that these situations are relatively rare, unpredictable and are difficult to simulate accurately and safely. Virtual reality potentially provides a safe and ecologically valid approach to training since it offers a near-to-real visual experience, together with the opportunity to practice task-related skills and receive feedback. As a prelude to the development of a training intervention, the aim of this research was to establish the construct validity of virtual reality drowning detection tasks. METHOD: Using a repeated measures design, a total of 38 qualified lifeguards and 33 non-lifeguards completed 13 min and 23 min simulated drowning detection tasks that were intended to reflect different levels of sustained attention. During the simulated tasks, participants were asked to monitor a virtual pool and identify any drowning targets with accuracy, response latency, and dwell time recorded. RESULTS: During the simulated scenarios, pool lifeguards detected drowning targets more frequently and spent less time than non-lifeguards fixating on the drowning target prior to the drowning onset. No significant differences in response latency were evident between lifeguards and non-lifeguards nor for first fixations on the drowning target. CONCLUSION: The results provide support for the construct validity of virtual reality lifeguarding scenarios, thereby providing the basis for their development and introduction as a potential training approach for developing and maintaining performance in lifeguarding and drowning detection. APPLICATION: This research provides support for the construct validity of virtual reality simulations as a potential training tool, enabling improvements in the fidelity of training solutions to improve pool lifeguard competency in drowning detection.


Subject(s)
Drowning , Humans , Drowning/diagnosis , Drowning/prevention & control , Attention , Reaction Time
6.
Behav Sci (Basel) ; 12(12)2022 Nov 30.
Article in English | MEDLINE | ID: mdl-36546969

ABSTRACT

While the benefits to mood and well-being from passionate engagement with music are well-established, far less is known about the relationship between passion for explicitly violently themed music and psychological well-being. The present study employed the Dualistic Model of Passion to investigate whether harmonious passion (i.e., passionate engagement that is healthily balanced with other life activities) predicts positive music listening experiences and/or psychological well-being in fans of violently themed music. We also investigated whether obsessive passion (i.e., uncontrollable passionate engagement with an activity) predicts negative music listening experiences and/or psychological ill-being. Fans of violently themed music (N = 177) completed the passion scale, scale of positive and negative affective experiences, and various psychological well- and ill-being measures. As hypothesised, harmonious passion for violently themed music significantly predicted positive affective experiences which, in turn, predicted psychological well-being. Obsessive passion for violently themed music significantly predicted negative affective experiences which, in turn, predicted ill-being. Findings support the Dualistic Model of Passion, and suggest that even when music engagement includes violent content, adaptive outcomes are often experienced. We propose that the nature of one's passion for music is more influential in predicting well-being than the content or valence of the lyrical themes.

7.
Environ Microbiol ; 23(3): 1422-1435, 2021 03.
Article in English | MEDLINE | ID: mdl-33264477

ABSTRACT

Diatoms are among the few eukaryotes known to store nitrate (NO₃⁻) and to use it as an electron acceptor for respiration in the absence of light and O₂. Using microscopy and ¹⁵N stable isotope incubations, we studied the relationship between dissimilatory nitrate/nitrite reduction to ammonium (DNRA) and diel vertical migration of diatoms in phototrophic microbial mats and the underlying sediment of a sinkhole in Lake Huron (USA). We found that the diatoms rapidly accumulated NO₃⁻ at the mat-water interface in the afternoon and 40% of the population migrated deep into the sediment, where they were exposed to dark and anoxic conditions for ~75% of the day. The vertical distribution of DNRA rates and diatom abundance maxima coincided, suggesting that DNRA was the main energy generating metabolism of the diatom population. We conclude that the illuminated redox-dynamic ecosystem selects for migratory diatoms that can store nitrate for respiration in the absence of light. A major implication of this study is that the dominance of DNRA over denitrification is not explained by kinetics or thermodynamics. Rather, the dynamic conditions select for migratory diatoms that perform DNRA and can outcompete sessile denitrifiers.


Subject(s)
Ammonium Compounds , Diatoms , Denitrification , Diatoms/metabolism , Ecosystem , Geologic Sediments , Nitrates/analysis , Nitrogen , Respiration
8.
Front Psychol ; 9: 1758, 2018.
Article in English | MEDLINE | ID: mdl-30327622

ABSTRACT

Neuroscientific research has revealed interconnected brain networks implicated in musical creativity, such as the executive control network, the default mode network, and premotor cortices. The present study employed brain stimulation to evaluate the role of the primary motor cortex (M1) in creative and technically fluent jazz piano improvisations. We implemented transcranial direct current stimulation (tDCS) to alter the neural activation patterns of the left hemispheric M1 whilst pianists performed improvisations with their right hand. Two groups of expert jazz pianists (n = 8 per group) performed five improvisations in each of two blocks. In Block 1, they improvised in the absence of brain stimulation. In Block 2, one group received inhibitory tDCS and the second group received excitatory tDCS while performing five new improvisations. Three independent expert-musicians judged the 160 performances on creativity and technical fluency using a 10-point Likert scale. As the M1 is involved in the acquisition and consolidation of motor skills and the control of hand orientation and velocity, we predicted that excitatory tDCS would increase the quality of improvisations relative to inhibitory tDCS. Indeed, improvisations under conditions of excitatory tDCS were rated as significantly more creative than those under conditions of inhibitory tDCS. A music analysis indicated that excitatory tDCS elicited improvisations with greater pitch range and number/variety of notes. Ratings of technical fluency did not differ significantly between tDCS groups. We discuss plausible mechanisms by which the M1 region contributes to musical creativity.

9.
Q J Exp Psychol (Hove) ; 71(6): 1367-1381, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29808767

ABSTRACT

In a continuous recognition paradigm, most stimuli elicit superior recognition performance when the item to be recognized is the most recent stimulus (a recency-in-memory effect). Furthermore, increasing the number of intervening items cumulatively disrupts memory in most domains. Memory for melodies composed in familiar tuning systems also shows superior recognition for the most recent melody, but no disruptive effects from the number of intervening melodies. A possible explanation has been offered in a novel regenerative multiple representations (RMR) conjecture. The RMR assumes that prior knowledge informs perception and perception influences memory representations. It postulates that melodies are perceived, thus also represented, simultaneously as integrated entities and also as their components (such as pitches, pitch intervals, short phrases and rhythm). Multiple representations of the melody components and melody as a whole can restore one another, thus providing resilience against disruptive effects from intervening items. The conjecture predicts that melodies in an unfamiliar tuning system are not perceived as integrated melodies and should (a) disrupt recency-in-memory advantages and (b) facilitate disruptive effects from the number of intervening items. We test these two predictions in three experiments. Experiments 1 and 2 show that no recency-in-memory effects emerge for melodies in an unfamiliar tuning system. In Experiment 3, disruptive effects occurred as the number of intervening items and unfamiliarity of the stimuli increased. Overall, results are coherent with the predictions of the RMR conjecture. Further investigation of the conjecture's predictions may lead to greater understanding of the fundamental relationships between memory, perception and behavior.


Subject(s)
Auditory Perception/physiology , Memory/physiology , Music , Recognition, Psychology/physiology , Acoustic Stimulation , Adolescent , Adult , Female , Humans , Male , Psychoacoustics , Young Adult
11.
Q J Exp Psychol (Hove) ; 71(5): 1150-1171, 2018 May.
Article in English | MEDLINE | ID: mdl-28403694

ABSTRACT

In many memory domains, a decrease in recognition performance between the first and second presentation of an object is observed as the number of intervening items increases. However, this effect is not universal. Within the auditory domain, this form of interference has been demonstrated in word and single-note recognition, but has yet to be substantiated using relatively complex musical material such as a melody. Indeed, it is becoming clear that music shows intriguing properties when it comes to memory. This study investigated how the number of intervening items influences memory for melodies. In Experiments 1, 2 and 3, one melody was presented per trial in a continuous recognition paradigm. After each melody, participants indicated whether they had heard the melody in the experiment before by responding "old" or "new." In Experiment 4, participants rated perceived familiarity for every melody without being told that melodies reoccur. In four experiments using two corpora of music, two different memory tasks, transposed and untransposed melodies and up to 195 intervening melodies, no sign of a disruptive effect from the number of intervening melodies beyond the first was observed. We propose a new "regenerative multiple representations" conjecture to explain why intervening items increase interference in recognition memory for most domains but not music. This conjecture makes several testable predictions and has the potential to strengthen our understanding of domain specificity in human memory, while moving one step closer to explaining the "paradox" that is memory for melody.


Subject(s)
Memory/physiology , Music , Pitch Perception/physiology , Recognition, Psychology/physiology , Acoustic Stimulation , Adolescent , Adult , Awareness/physiology , Female , Humans , Male , Young Adult
13.
Atten Percept Psychophys ; 79(1): 352-362, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27631632

ABSTRACT

Continuous increases of acoustic intensity (up-ramps) can indicate a looming (approaching) sound source in the environment, whereas continuous decreases of intensity (down-ramps) can indicate a receding sound source. From psychoacoustic experiments, an "adaptive perceptual bias" for up-ramp looming tonal stimuli has been proposed (Neuhoff, 1998). This theory postulates that (1) up-ramps are perceptually salient because of their association with looming and potentially threatening stimuli in the environment; (2) tonal stimuli are perceptually salient because of an association with single and potentially threatening biological sound sources in the environment, relative to white noise, which is more likely to arise from dispersed signals and nonthreatening/nonbiological sources (wind/ocean). In the present study, we extrapolated the "adaptive perceptual bias" theory and investigated its assumptions by measuring sound source localization in response to acoustic stimuli presented in azimuth to imply looming, stationary, and receding motion in depth. Participants (N = 26) heard three directions of intensity change (up-ramps, down-ramps, and steady state, associated with looming, receding, and stationary motion, respectively) and three levels of acoustic spectrum (a 1-kHz pure tone, the tonal vowel /ә/, and white noise) in a within-subjects design. We first hypothesized that if up-ramps are "perceptually salient" and capable of eliciting adaptive responses, then they would be localized faster and more accurately than down-ramps. This hypothesis was supported. However, the results did not support the second hypothesis. Rather, the white-noise and vowel conditions were localized faster and more accurately than the pure-tone conditions. These results are discussed in the context of auditory and visual theories of motion perception, auditory attentional capture, and the spectral causes of spatial ambiguity.


Subject(s)
Attention/physiology , Motion Perception/physiology , Sound Localization/physiology , Space Perception/physiology , Adolescent , Adult , Female , Humans , Male , Psychoacoustics , Young Adult
14.
PLoS One ; 11(12): e0167643, 2016.
Article in English | MEDLINE | ID: mdl-27997625

ABSTRACT

Phrasing facilitates the organization of auditory information and is central to speech and music. Not surprisingly, aspects of changing intensity, rhythm, and pitch are key determinants of musical phrases and their boundaries in instrumental note-based music. Different kinds of speech (such as tone- vs. stress-languages) share these features in different proportions and form an instructive comparison. However, little is known about whether or how musical phrasing is perceived in sound-based music, where the basic musical unit from which a piece is created is commonly non-instrumental continuous sounds, rather than instrumental discontinuous notes. This issue forms the target of the present paper. Twenty participants (17 untrained in music) were presented with six stimuli derived from sound-based music, note-based music, and environmental sound. Their task was to indicate each occurrence of a perceived phrase and qualitatively describe key characteristics of the stimulus associated with each phrase response. It was hypothesized that sound-based music does elicit phrase perception, and that this is primarily associated with temporal changes in intensity and timbre, rather than rhythm and pitch. Results supported this hypothesis. Qualitative analysis of participant descriptions showed that for sound-based music, the majority of perceived phrases were associated with intensity or timbral change. For the note-based piano piece, rhythm was the main theme associated with perceived musical phrasing. We modeled the occurrence in time of perceived musical phrases with recurrent event 'hazard' analyses using time-series data representing acoustic predictors associated with intensity, spectral flatness, and rhythmic density. Acoustic intensity and timbre (represented here by spectral flatness) were strong predictors of perceived musical phrasing in sound-based music, and rhythm was only predictive for the piano piece. 
A further analysis including five additional spectral measures linked to timbre strengthened the models. Overall, results show that even when little of the pitch and rhythm information important for phrasing in note-based music is available, phrasing is still perceived, primarily in response to changes of intensity and timbre. Implications for electroacoustic music composition and music recommender systems are discussed.
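Spectral flatness, the timbre predictor used in the hazard models above, is conventionally defined as the ratio of the geometric to the arithmetic mean of the power spectrum (near 0 for tonal sounds, near 1 for noise-like sounds). A minimal sketch, assuming NumPy and synthetic signals rather than the study's stimuli:

```python
import numpy as np

def spectral_flatness(signal):
    """Geometric mean / arithmetic mean of the power spectrum.

    Values near 0 indicate a tonal (peaky) spectrum; values near 1
    indicate a flat, noise-like spectrum.
    """
    power = np.abs(np.fft.rfft(signal)) ** 2
    power = power[power > 0]                 # drop zero bins so log is defined
    geometric = np.exp(np.mean(np.log(power)))
    return geometric / np.mean(power)

t = np.arange(2048)
tone = np.sin(2 * np.pi * 64 * t / 2048)                    # pure tone: tonal
noise = np.random.default_rng(0).standard_normal(2048)      # white noise: flat
# spectral_flatness(tone) is close to 0; spectral_flatness(noise) is much higher
```

Production audio-analysis toolkits (e.g., librosa) compute a windowed, frame-by-frame version of this measure rather than a single value over the whole signal.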


Subject(s)
Acoustics , Music , Pitch Perception/physiology , Adolescent , Adult , Female , Humans , Male
15.
Comput Cogn Sci ; 1(1): 7, 2015.
Article in English | MEDLINE | ID: mdl-27980889

ABSTRACT

BACKGROUND: Virtual humans have become part of our everyday life (movies, internet, and computer games). Even though they are becoming more and more realistic, their speech capabilities are, most of the time, limited and not coherent and/or not synchronous with the corresponding acoustic signal. METHODS: We describe a method to convert a virtual human avatar (animated through key frames and interpolation) into a more naturalistic talking head. In fact, speech articulation cannot be accurately replicated using interpolation between key frames and talking heads with good speech capabilities are derived from real speech production data. Motion capture data are commonly used to provide accurate facial motion for visible speech articulators (jaw and lips) synchronous with acoustics. To access tongue trajectories (partially occluded speech articulator), electromagnetic articulography (EMA) is often used. We recorded a large database of phonetically-balanced English sentences with synchronous EMA, motion capture data, and acoustics. An articulatory model was computed on this database to recover missing data and to provide 'normalized' animation (i.e., articulatory) parameters. In addition, semi-automatic segmentation was performed on the acoustic stream. A dictionary of multimodal Australian English diphones was created. It is composed of the variation of the articulatory parameters between all the successive stable allophones. RESULTS: The avatar's facial key frames were converted into articulatory parameters steering its speech articulators (jaw, lips and tongue). The speech production database was used to drive the Embodied Conversational Agent (ECA) and to enhance its speech capabilities. A Text-To-Auditory Visual Speech synthesizer was created based on the MaryTTS software and on the diphone dictionary derived from the speech production database. 
CONCLUSIONS: We describe a method to transform an ECA with generic tongue model and animation by key frames into a talking head that displays naturalistic tongue, jaw and lip motions. Thanks to a multimodal speech production database, a Text-To-Auditory Visual Speech synthesizer drives the ECA's facial movements enhancing its speech capabilities.

16.
Acta Psychol (Amst) ; 149: 117-28, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24809252

ABSTRACT

The aim of this work was to investigate perceived loudness change in response to melodies that increase (up-ramp) or decrease (down-ramp) in acoustic intensity, and the interaction with other musical factors such as melodic contour, tempo, and tonality (tonal/atonal). A within-subjects design manipulated direction of linear intensity change (up-ramp, down-ramp), melodic contour (ascending, descending), tempo, and tonality, using single ramp trials and paired ramp trials, where single up-ramps and down-ramps were assembled to create continuous up-ramp/down-ramp or down-ramp/up-ramp pairs. Twenty-nine (Exp 1) and thirty-six (Exp 2) participants rated loudness continuously in response to trials with monophonic 13-note piano melodies lasting either 6.4s or 12s. Linear correlation coefficients >.89 between loudness and time show that time-series loudness responses to dynamic up-ramp and down-ramp melodies are essentially linear across all melodies. Therefore, 'indirect' loudness change derived from the difference in loudness at the beginning and end points of the continuous response was calculated. Down-ramps were perceived to change significantly more in loudness than up-ramps in both tonalities and at a relatively slow tempo. Loudness change was also greater for down-ramps presented with a congruent descending melodic contour, relative to an incongruent pairing (down-ramp and ascending melodic contour). No differential effect of intensity ramp/melodic contour congruency was observed for up-ramps. In paired ramp trials assessing the possible impact of ramp context, loudness change in response to up-ramps was significantly greater when preceded by down-ramps, than when not preceded by another ramp. Ramp context did not affect down-ramp perception. The contribution to the fields of music perception and psychoacoustics are discussed in the context of real-time perception of music, principles of music composition, and performance of musical dynamics.
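The 'indirect' loudness-change measure and the linearity check described above are both simple computations on the continuous rating series. A minimal sketch with NumPy, using a fabricated linear ramp rather than the study's response data (function names are illustrative, not from the paper):

```python
import numpy as np

def indirect_loudness_change(ratings):
    # 'Indirect' change: final minus initial continuous loudness rating.
    return ratings[-1] - ratings[0]

def response_linearity(ratings):
    # Pearson correlation between the rating series and time;
    # values near 1 indicate an essentially linear response.
    t = np.arange(len(ratings))
    return np.corrcoef(t, ratings)[0, 1]

# Fabricated continuous response to an up-ramp melody (for illustration only):
up_ramp = np.linspace(30.0, 70.0, 100)
# indirect_loudness_change(up_ramp) → 40.0, and linearity is ~1,
# mirroring the >.89 time-loudness correlations reported above.
```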


Subject(s)
Auditory Perception/physiology , Music , Acoustic Stimulation , Adolescent , Adult , Female , Humans , Loudness Perception/physiology , Male , Psychoacoustics , Time Perception , Young Adult
17.
Perception ; 41(5): 594-605, 2012.
Article in English | MEDLINE | ID: mdl-23025162

ABSTRACT

Overestimation of loudness change typically occurs in response to up-ramp auditory stimuli (increasing intensity) relative to down-ramps (decreasing intensity) matched on frequency, duration, and end-level. In the experiment reported, forward masking is used to investigate a sensory component of up-ramp overestimation: persistence of excitation after stimulus presentation. White-noise and synthetic vowel 3.6 s up-ramp and down-ramp maskers were presented over two regions of intensity change (40-60 dB SPL, 60-80 dB SPL). Three participants detected 10 ms 1.5 kHz pure tone signals presented at masker-offset to signal-offset delays of 10, 20, 30, 50, 90, 170 ms. Masking magnitude was significantly greater in response to up-ramps compared with down-ramps for masker-signal delays up to and including 50 ms. When controlling for an end-level recency bias (40-60 dB SPL up-ramp vs 80-60 dB SPL down-ramp), the difference in masking magnitude between up-ramps and down-ramps was not significant at each masker-signal delay. Greater sensory persistence in response to up-ramps is argued to have minimal effect on perceptual overestimation of loudness change when response biases are controlled. An explanation based on sensory adaptation is discussed.


Subject(s)
Loudness Perception , Perceptual Masking , Sound Spectrography , Acoustic Stimulation , Adult , Attention , Discrimination, Psychological , Female , Humans , Male , Pitch Perception , Psychoacoustics , Speech Perception
18.
Q J Exp Psychol (Hove) ; 65(10): 2054-72, 2012.
Article in English | MEDLINE | ID: mdl-22650967

ABSTRACT

In two experiments, we examined the effect of intensity and intensity change on judgements of pitch differences or interval size. In Experiment 1, 39 musically untrained participants rated the size of the interval spanned by two pitches within individual gliding tones. Tones were presented at high intensity, low intensity, looming intensity (up-ramp), and fading intensity (down-ramp) and glided between two pitches spanning either 6 or 7 semitones (a tritone or a perfect fifth interval). The pitch shift occurred in either ascending or descending directions. Experiment 2 repeated the conditions of Experiment 1 but the shifts in pitch and intensity occurred across two discrete tones (i.e., a melodic interval). Results indicated that participants were sensitive to the differences in interval size presented: Ratings were significantly higher when two pitches differed by 7 semitones than when they differed by 6 semitones. However, ratings were also dependent on whether the interval was high or low in intensity, whether it increased or decreased in intensity across the two pitches, and whether the interval was ascending or descending in pitch. Such influences illustrate that the perception of pitch relations does not always adhere to a logarithmic function as implied by their musical labels, but that identical intervals are perceived as substantially different in size depending on other attributes of the sound source.


Subject(s)
Judgment/physiology , Pitch Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Analysis of Variance , Female , Humans , Male , Music , Professional Competence , Psychoacoustics , Reaction Time , Time Factors , Young Adult
19.
J Exp Psychol Hum Percept Perform ; 36(6): 1631-44, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20822303

ABSTRACT

Three experiments investigate psychological, methodological, and domain-specific characteristics of loudness change in response to sounds that continuously increase in intensity (up-ramps), relative to sounds that decrease (down-ramps). Timbre (vowel, violin), layer (monotone, chord), and duration (1.8 s, 3.6 s) were manipulated in Experiment 1. Participants judged global loudness change between pairs of spectrally identical up-ramps and down-ramps. It was hypothesized that loudness change is overestimated in up-ramps, relative to down-ramps, using simple speech and musical stimuli. The hypothesis was supported and the proportion of up-ramp overestimation increased with stimulus duration. Experiment 2 investigated recency and a bias for end-levels by presenting paired dynamic stimuli with equivalent end-levels and steady-state controls. Experiment 3 used single stimulus presentations, removing artifacts associated with paired stimuli. Perceptual overestimation of loudness change is influenced by (1) intensity region of the dynamic stimulus; (2) differences in stimulus end-level; (3) order in which paired items are presented; and (4) duration of each item. When methodological artifacts are controlled, overestimation of loudness change in response to up-ramps remains. The relative influence of cognitive and sensory mechanisms is discussed.


Subject(s)
Judgment , Loudness Perception , Music , Sound Spectrography , Speech Acoustics , Acoustic Stimulation/methods , Adolescent , Female , Humans , Illusions , Male , Young Adult
20.
Perception ; 39(5): 695-704, 2010.
Article in English | MEDLINE | ID: mdl-20677706

ABSTRACT

A "perceptual bias for rising intensity" (Neuhoff 1998, Nature 395 123-124) is not dependent on the continuous change of a dynamic, looming sound source. Thirty participants were presented with pairs of 500 ms steady-state sounds corresponding to onset and offset levels of previously used dynamic increasing- and decreasing-intensity stimuli. Independent variables, intensity-change direction (increasing, decreasing), intensity region (high: 70-90 dB SPL, low: 50-70 dB SPL), interstimulus interval (ISI) (0 s, 1.8 s, 3.6 s), and timbre (vowel, violin) were manipulated as a fully within-subjects design. The dependent variable was perceived loudness change between each stimulus item in a pair. It was hypothesised that (i) noncontinuous increases of intensity are overestimated in loudness change, relative to decreases, in both low-intensity and high-intensity regions; and (ii) perceptual overestimation does not occur when end-levels are balanced. The hypotheses were partially supported. At the high-intensity region, increasing stimuli were perceived to change more in loudness than decreasing-intensity stimuli. At the low-intensity region and under balanced end-level conditions, decreasing-intensity stimuli were perceived to change more in loudness than increasing-intensity stimuli. A significant direction × region interaction varied as a function of ISI. Methodological, sensory, and cognitive explanations for overestimation in certain circumstances are discussed.
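Each intensity region above spans a 20 dB change. By the common psychoacoustic rule of thumb that every 10 dB increase roughly doubles perceived loudness (from Stevens' power law with exponent ~0.3; an illustrative assumption here, not a claim made by this paper), a 20 dB shift corresponds to about a fourfold loudness change:

```python
def loudness_ratio(delta_db):
    """Approximate perceived loudness ratio for a level change in dB,
    using the 10-dB-per-doubling rule of thumb (an illustrative
    approximation, valid mainly for moderate levels around 1 kHz)."""
    return 2 ** (delta_db / 10)

print(loudness_ratio(20))  # → 4.0 (the 50-70 or 70-90 dB SPL regions)
print(loudness_ratio(10))  # → 2.0
```

Under this approximation the physical loudness change is identical for the increasing and decreasing pairs, so any rating asymmetry between them reflects perceptual bias rather than stimulus magnitude.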


Subject(s)
Acoustic Stimulation/methods , Loudness Perception/physiology , Adolescent , Adult , Female , Humans , Male , Motion Perception/physiology , Time Factors , Young Adult