1.
Atten Percept Psychophys ; 85(8): 2673-2699, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37817052

ABSTRACT

Prior investigations of simple rhythms in familiar time signatures have shown the importance of several mechanisms; notably, those related to metricization and grouping. But there has been limited study of complex rhythms, including those in unfamiliar time signatures, such as those found outside mainstream Western music. Here, we investigate how the structures of 91 rhythms with nonisochronous onsets (mostly complex, several in unfamiliar time signatures) influence the accuracy, velocity, and timing of taps made by participants attempting to synchronize with these onsets. The onsets were piano-tone cues sounded at a well-formed subset of isochronous cymbal pulses, the latter occurring every 234 ms. We modelled tapping at both the rhythm level and the pulse level; the latter provides insight into how rhythmic structure makes some cues easier to tap and why incorrect (uncued) taps may occur. In our models, we use a wide variety of quantifications of rhythmic features, several of which are novel and many of which are indicative of underlying mechanisms, strategies, or heuristics. The results show that, for these tricky rhythms, taps are disrupted by unfamiliar period lengths and are guided by crude encodings of each rhythm: the density of rhythmic cues, their circular mean and variance, and recognizing common small patterns and the approximate positions of groups of cues. These lossy encodings are often counterproductive for discriminating between cued and uncued pulses and are quite different to mechanisms, such as metricization and emphasizing group boundaries, thought to guide tapping behaviours in learned and familiar rhythms.
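As an illustration of the "crude encodings" described above, here is a minimal sketch (not the authors' code; the onset representation is an assumption) of how a rhythm's circular mean and variance can be computed from its cue positions within a cycle of isochronous pulses:

```r
# Circular mean and variance of cued pulse positions within a rhythmic cycle.
# Onsets are given as pulse indices in a cycle of n_pulses isochronous pulses.
circular_stats <- function(onsets, n_pulses) {
  theta <- 2 * pi * onsets / n_pulses   # map pulse positions to angles
  z <- mean(exp(1i * theta))            # resultant vector of the onset angles
  list(
    mean_position = (Arg(z) %% (2 * pi)) * n_pulses / (2 * pi),
    variance      = 1 - Mod(z)          # 0 = tightly clustered, 1 = dispersed
  )
}

# Example: a 16-pulse cycle with cues on pulses 0, 3, 6, 10, and 12
circular_stats(c(0, 3, 6, 10, 12), 16)
```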


Subject(s)
Music , Time Perception , Humans , Auditory Perception , Learning , Cues , Periodicity
2.
PLoS One ; 18(9): e0291642, 2023.
Article in English | MEDLINE | ID: mdl-37729156

ABSTRACT

We provide evidence that the roughness of chords (a psychoacoustic property resulting from unresolved frequency components) is associated with perceived musical stability (operationalized as finishedness) in participants with differing levels and types of exposure to Western or Western-like music. Three groups of participants were tested in a remote cloud forest region of Papua New Guinea (PNG), and two groups in Sydney, Australia (musicians and non-musicians). Unlike prominent prior studies of consonance/dissonance across cultures, we framed the concept of consonance as stability rather than as pleasantness. We find a negative relationship between roughness and musical stability in every group, including the PNG community with minimal experience of musical harmony. The effect of roughness is stronger for the Sydney participants, particularly musicians. We find an effect of harmonicity, a psychoacoustic property resulting from chords having a spectral structure resembling a single pitched tone (such as that produced by human vowel sounds), only in the Sydney musician group, which indicates that this feature's effect is mediated via a culture-dependent mechanism. In sum, these results underline the importance of both universal and cultural mechanisms in music cognition, and they have powerful implications for understanding the origin of pitch structures in Western tonal music, as well as for new musical forms that align with humans' perceptual and cognitive biases. They also highlight the importance of how consonance/dissonance is operationalized and explained to participants, particularly those with minimal prior exposure to musical harmony.


Subject(s)
Drama , Music , Humans , Australia , Cognition
3.
PLoS One ; 17(6): e0269597, 2022.
Article in English | MEDLINE | ID: mdl-35767551

ABSTRACT

Music is a vital part of most cultures and has a strong impact on emotions [1-5]. In Western cultures, emotive valence is strongly influenced by major and minor melodies and harmony (chords and their progressions) [6-13]. Yet, how pitch and harmony affect our emotions, and to what extent these effects are culturally mediated or universal, is hotly debated [2, 5, 14-20]. Here, we report an experiment conducted in a remote cloud forest region of Papua New Guinea, across several communities with similar traditional music but differing levels of exposure to Western-influenced tonal music. One hundred and seventy participants were presented with pairs of major and minor cadences (chord progressions) and melodies, and chose which made them happier. The experiment was repeated by 60 non-musicians and 19 musicians in Sydney, Australia. Bayesian analyses show that, for cadences, there is strong evidence that greater happiness was reported for major than minor in every community except one: the community with minimal exposure to Western-like music. For melodies, there is strong evidence that greater happiness was reported for those with higher mean pitch (major melodies) than for those with lower mean pitch (minor melodies) in only one of the three PNG communities and in both Sydney groups. The results show that the emotive valence of major and minor is strongly associated with exposure to Western-influenced music and culture, although we cannot exclude the possibility of universality.


Subject(s)
Music , Acoustic Stimulation/methods , Auditory Perception/physiology , Australia , Bayes Theorem , Emotions , Humans , Music/psychology , Papua New Guinea
4.
PLoS One ; 14(6): e0218570, 2019.
Article in English | MEDLINE | ID: mdl-31226170

ABSTRACT

This study investigates the role of extrinsic and intrinsic predictors in the perception of affect in mostly unfamiliar musical chords from the Bohlen-Pierce microtonal tuning system. Extrinsic predictors are derived, in part, from long-term statistical regularities in music; for example, the prevalence of a chord in a corpus of music that is relevant to a participant. Conversely, intrinsic predictors make no use of long-term statistical regularities in music; for example, psychoacoustic features inherent in the music, such as roughness. Two types of affect were measured for each chord: pleasantness/unpleasantness and happiness/sadness. We modelled the data with a number of novel and well-established intrinsic predictors, namely roughness, harmonicity, spectral entropy, and average pitch height, and a single extrinsic predictor, 12-TET Dissimilarity, which was estimated by the chord's smallest distance to any 12-tone equal-tempered chord. Musical sophistication was modelled as a potential moderator of the above predictors. Two experiments were conducted, each using a slightly different tuning of the Bohlen-Pierce system: a just-intonation version and an equal-tempered version. It was found that, across both tunings and both affective responses, all the tested intrinsic features and 12-TET Dissimilarity have consistent influences in the expected directions. These results contrast with much current music perception research, which tends to assume the dominance of extrinsic over intrinsic predictors. This study highlights the importance of both intrinsic characteristics of the acoustic signal itself and extrinsic factors, such as 12-TET Dissimilarity, in the perception of affect in music.
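One plausible reading of the 12-TET Dissimilarity predictor, sketched below as a hypothetical implementation (the authors' exact definition, e.g. how the reference pitch is aligned, may differ): if any combination of 12-TET pitches counts as a chord, the smallest distance to a 12-TET chord is the summed deviation, in cents, of each chord tone from the nearest multiple of 100 cents:

```r
# Hypothetical sketch of 12-TET Dissimilarity: summed per-note deviation
# (in cents, relative to a fixed reference) from the nearest 12-TET pitch.
tet12_dissimilarity <- function(cents) {
  deviation <- abs(cents - 100 * round(cents / 100))
  sum(deviation)
}

# A Bohlen-Pierce-style chord: steps 0, 6, and 10 of the 13-step tritave,
# where one step is 1200 * log2(3) / 13 cents (about 146.3 cents)
bp_step <- 1200 * log2(3) / 13
tet12_dissimilarity(bp_step * c(0, 6, 10))
```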


Subject(s)
Affect/physiology , Auditory Perception/physiology , Music , Acoustic Stimulation/psychology , Adolescent , Adult , Emotions , Evoked Potentials, Auditory/physiology , Female , Happiness , Humans , Male , Music/psychology , Pitch Perception , Psychoacoustics , Random Allocation , Young Adult
5.
PLoS One ; 14(5): e0216088, 2019.
Article in English | MEDLINE | ID: mdl-31059519

ABSTRACT

1/f fluctuations have been described in numerous physical and biological processes. This noise structure describes an inverse relationship between the intensity and frequency of events in a time series (reflected, for example, in power spectra), and is believed to indicate long-range dependence, whereby events at one time point influence events many observations later. 1/f has been identified in rhythmic behaviors, such as music, and is typically attributed to long-range correlations. However, short-range dependence in musical performance is a well-established finding, and past research has suggested that 1/f can arise from multiple continuing short-range processes. We tested this possibility using simulations and time-series modeling, complemented by traditional analyses using power spectra and detrended fluctuation analysis (the approaches most often adopted in recent work). Our results show that 1/f-type fluctuations in musical contexts may be explained by short-range models involving multiple time lags, and that the temporal ranges in which rhythmic hierarchies are expressed are apt to create these fluctuations through such short-range autocorrelations. We also analyzed gait, heartbeat, and resting-state EEG data, demonstrating the coexistence of multiple short-range processes and 1/f fluctuation in a variety of phenomena. This suggests that 1/f fluctuation might not indicate long-range correlations, and points to its likely origins in musical rhythm and related structures.
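The paper's central point can be illustrated with a small simulation (an assumed illustration, not the authors' code): summing a few AR(1) processes with different short-range time constants yields an approximately 1/f-shaped power spectrum over a wide band, even though no component has long-range dependence:

```r
# Superpose three short-range AR(1) processes with different time constants;
# the log-log spectral slope of the sum approaches -1 (1/f-like scaling).
set.seed(1)
n    <- 2^14
phis <- c(0.5, 0.9, 0.99)
x    <- rowSums(sapply(phis, function(p)
          arima.sim(list(ar = p), n) * sqrt(1 - p^2)))  # unit-variance components

sp  <- spectrum(x, plot = FALSE)
fit <- lm(log(sp$spec) ~ log(sp$freq))  # slope near -1 suggests 1/f-type scaling
coef(fit)[2]
```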


Subject(s)
Alpha Rhythm , Music , Noise , Beta Rhythm , Electroencephalography , Gait , Heart Rate , Humans
6.
Q J Exp Psychol (Hove) ; 71(6): 1367-1381, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29808767

ABSTRACT

In a continuous recognition paradigm, most stimuli elicit superior recognition performance when the item to be recognized is the most recent stimulus (a recency-in-memory effect). Furthermore, increasing the number of intervening items cumulatively disrupts memory in most domains. Memory for melodies composed in familiar tuning systems also shows superior recognition for the most recent melody, but no disruptive effects from the number of intervening melodies. A possible explanation has been offered in a novel regenerative multiple representations (RMR) conjecture. The RMR assumes that prior knowledge informs perception and that perception influences memory representations. It postulates that melodies are perceived, and thus represented, simultaneously as integrated entities and as their components (such as pitches, pitch intervals, short phrases, and rhythm). Multiple representations of the melody components and of the melody as a whole can restore one another, thus providing resilience against disruptive effects from intervening items. The conjecture predicts that melodies in an unfamiliar tuning system are not perceived as integrated melodies and should (a) disrupt recency-in-memory advantages and (b) facilitate disruptive effects from the number of intervening items. We test these two predictions in three experiments. Experiments 1 and 2 show that no recency-in-memory effects emerge for melodies in an unfamiliar tuning system. In Experiment 3, disruptive effects occurred as the number of intervening items and the unfamiliarity of the stimuli increased. Overall, the results are consistent with the predictions of the RMR conjecture. Further investigation of the conjecture's predictions may lead to greater understanding of the fundamental relationships between memory, perception, and behavior.


Subject(s)
Auditory Perception/physiology , Memory/physiology , Music , Recognition, Psychology/physiology , Acoustic Stimulation , Adolescent , Adult , Female , Humans , Male , Psychoacoustics , Young Adult
7.
Q J Exp Psychol (Hove) ; 71(5): 1150-1171, 2018 May.
Article in English | MEDLINE | ID: mdl-28403694

ABSTRACT

In many memory domains, a decrease in recognition performance between the first and second presentation of an object is observed as the number of intervening items increases. However, this effect is not universal. Within the auditory domain, this form of interference has been demonstrated in word and single-note recognition, but has yet to be substantiated using relatively complex musical material such as a melody. Indeed, it is becoming clear that music shows intriguing properties when it comes to memory. This study investigated how the number of intervening items influences memory for melodies. In Experiments 1, 2 and 3, one melody was presented per trial in a continuous recognition paradigm. After each melody, participants indicated whether they had heard the melody in the experiment before by responding "old" or "new." In Experiment 4, participants rated perceived familiarity for every melody without being told that melodies reoccur. In four experiments using two corpora of music, two different memory tasks, transposed and untransposed melodies, and up to 195 intervening melodies, no sign of a disruptive effect from the number of intervening melodies beyond the first was observed. We propose a new "regenerative multiple representations" conjecture to explain why intervening items increase interference in recognition memory for most domains but not music. This conjecture makes several testable predictions and has the potential to strengthen our understanding of domain specificity in human memory, while moving one step closer to explaining the "paradox" that is memory for melody.


Subject(s)
Memory/physiology , Music , Pitch Perception/physiology , Recognition, Psychology/physiology , Acoustic Stimulation , Adolescent , Adult , Awareness/physiology , Female , Humans , Male , Young Adult
8.
PLoS One ; 11(12): e0167643, 2016.
Article in English | MEDLINE | ID: mdl-27997625

ABSTRACT

Phrasing facilitates the organization of auditory information and is central to speech and music. Not surprisingly, aspects of changing intensity, rhythm, and pitch are key determinants of musical phrases and their boundaries in instrumental note-based music. Different kinds of speech (such as tone- vs. stress-languages) share these features in different proportions and form an instructive comparison. However, little is known about whether or how musical phrasing is perceived in sound-based music, where the basic musical units from which a piece is created are commonly non-instrumental continuous sounds rather than instrumental discontinuous notes. This issue forms the target of the present paper. Twenty participants (17 untrained in music) were presented with six stimuli derived from sound-based music, note-based music, and environmental sound. Their task was to indicate each occurrence of a perceived phrase and to qualitatively describe key characteristics of the stimulus associated with each phrase response. It was hypothesized that sound-based music does elicit phrase perception, and that this is primarily associated with temporal changes in intensity and timbre, rather than rhythm and pitch. Results supported this hypothesis. Qualitative analysis of participant descriptions showed that, for sound-based music, the majority of perceived phrases were associated with intensity or timbral change. For the note-based piano piece, rhythm was the main theme associated with perceived musical phrasing. We modeled the occurrence in time of perceived musical phrases with recurrent-event 'hazard' analyses using time-series data representing acoustic predictors associated with intensity, spectral flatness, and rhythmic density. Acoustic intensity and timbre (represented here by spectral flatness) were strong predictors of perceived musical phrasing in sound-based music, and rhythm was only predictive for the piano piece. A further analysis including five additional spectral measures linked to timbre strengthened the models. Overall, results show that even when little of the pitch and rhythm information important for phrasing in note-based music is available, phrasing is still perceived, primarily in response to changes of intensity and timbre. Implications for electroacoustic music composition and music recommender systems are discussed.
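A sketch of what such a recurrent-event hazard analysis could look like, assuming an Andersen-Gill counting-process formulation with simulated data; all variable names are hypothetical, and the study's actual predictors and data structure differ:

```r
# Recurrent-event ('hazard') model of phrase responses: each row is a
# one-second listening interval per participant, with time-varying acoustic
# predictors and phrase = 1 where a phrase response occurred.
library(survival)

set.seed(4)
d <- expand.grid(id = 1:20, k = 1:100)   # 20 listeners x 100 intervals
d$t_start   <- d$k - 1
d$t_stop    <- d$k
d$intensity <- rnorm(nrow(d))            # hypothetical predictor values
d$flatness  <- rnorm(nrow(d))
d$phrase    <- rbinom(nrow(d), 1, plogis(-3 + 0.9 * d$intensity))

fit <- coxph(Surv(t_start, t_stop, phrase) ~ intensity + flatness + cluster(id),
             data = d)
summary(fit)  # intensity should emerge as the strong predictor
```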


Subject(s)
Acoustics , Music , Pitch Perception/physiology , Adolescent , Adult , Female , Humans , Male
9.
Psychomusicology ; 26(1): 35-42, 2016 Mar.
Article in English | MEDLINE | ID: mdl-27182100

ABSTRACT

Research has established that there is a cognitive link between perception and production of the same movement. However, there has been relatively little research into the relevance of this for non-expert perceivers, such as music listeners who do not play instruments themselves. In two experiments, we tested whether participants can quickly learn new associations between sounds and observed movement without performing those movements themselves. We measured motor evoked potentials (MEPs) in the first dorsal interosseous muscle of participants' right hands while test tones were heard and single transcranial magnetic stimulation (TMS) pulses were used to trigger motor activity. In Experiment 1, participants in a 'human' condition (n=4) learnt to associate the test tone with finger movement of the experimenter, while participants in a 'computer' condition (n=4) learnt that the test tone was triggered by a computer. Participants in the human condition showed a larger increase in MEPs than those in the computer condition. In Experiment 2, sounds and movement were paired without participants repeatedly observing the movement, and we found no difference between the human (n=4) and computer (n=4) conditions. These results suggest that observers can quickly learn to associate sound with movement, so it should not be necessary to have played an instrument to experience some motor resonance when hearing that instrument.

10.
J Exp Psychol Hum Percept Perform ; 42(4): 594-609, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26594881

ABSTRACT

This research explored the relations between the predictability of musical structure, expressive timing in performance, and listeners' perceived musical tension. Studies analyzing the influence of expressive timing on listeners' affective responses have been constrained by the fact that, in most pieces, the notated durations limit performers' interpretive freedom. To circumvent this issue, we focused on the unmeasured prelude, a semi-improvisatory genre without notated durations. In Experiment 1, 12 professional harpsichordists recorded an unmeasured prelude on a harpsichord equipped with a MIDI console. Melodic expectation was assessed using a probabilistic model (IDyOM [Information Dynamics of Music]) whose expectations have been previously shown to match closely those of human listeners. Performance timing information was extracted from the MIDI data using a score-performance matching algorithm. Time-series analyses showed that, in a piece with unspecified note durations, the predictability of melodic structure measurably influenced tempo fluctuations in performance. In Experiment 2, another 10 harpsichordists, 20 nonharpsichordist musicians, and 20 nonmusicians listened to the recordings from Experiment 1 and rated the perceived tension continuously. Granger causality analyses were conducted to investigate predictive relations among melodic expectation, expressive timing, and perceived tension. Although melodic expectation, as modeled by IDyOM, modestly predicted perceived tension for all participant groups, neither of its components, information content or entropy, was Granger causal. In contrast, expressive timing was a strong predictor and was Granger causal. However, because melodic expectation was also predictive of expressive timing, our results outline a complete chain of influence from predictability of melodic structure via expressive performance timing to perceived musical tension.
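For readers unfamiliar with the technique, here is a minimal Granger-causality check on simulated series (hypothetical variables; the study used continuous tension ratings, expressive timing, and IDyOM outputs):

```r
# Does 'timing' help predict 'tension' beyond tension's own past?
library(lmtest)

set.seed(2)
timing  <- as.numeric(arima.sim(list(ar = 0.6), 500))
tension <- 0.4 * c(0, head(timing, -1)) +   # tension depends on lagged timing
           as.numeric(arima.sim(list(ar = 0.5), 500))

grangertest(tension ~ timing, order = 2)  # H0: timing does not Granger-cause tension
```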


Subject(s)
Auditory Perception , Music/psychology , Time Perception , Adult , Female , Humans , Male , Middle Aged , Young Adult
11.
Behav Res Methods ; 48(2): 783-802, 2016 06.
Article in English | MEDLINE | ID: mdl-26100765

ABSTRACT

Many articles on perception, performance, psychophysiology, and neuroscience seek to relate pairs of time series through assessments of their cross-correlations. Most such series are individually autocorrelated: they do not comprise independent values. Given this situation, an unfounded reliance is often placed on cross-correlation as an indicator of relationships (e.g., referent vs. response, leading vs. following). Such cross-correlations can indicate spurious relationships because of autocorrelation. Given these dangers, we simulate how and why such spurious conclusions can arise, and provide an approach to resolving them. We show that when multiple pairs of series are aggregated in several different ways for a cross-correlation analysis, problems remain. Finally, even a genuine cross-correlation function does not answer key motivating questions, such as whether there are likely causal relationships between the series. Thus, we illustrate how to obtain a transfer function describing such relationships, informed by any genuine cross-correlations. We illustrate the confounds and the meaningful transfer functions with two concrete examples, one each in perception and performance, together with key elements of the R software code needed. The approach involves autocorrelation functions, the establishment of stationarity, prewhitening, the determination of cross-correlation functions, the assessment of Granger causality, and autoregressive model development. Autocorrelation also limits the interpretability of other measures of possible relationships between pairs of time series, such as mutual information. We emphasize that further complexity may be required as the appropriate analysis is pursued fully, and that causal intervention experiments will likely also be needed.
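A compact sketch of the prewhitening workflow the abstract outlines, assuming the common approach of filtering both series with an AR model fitted to one of them (an illustration, not the article's published code):

```r
# Two independent but autocorrelated series: raw ccf() can show sizeable
# spurious cross-correlations; prewhitening removes them.
set.seed(3)
x <- arima.sim(list(ar = 0.8), 300)
y <- arima.sim(list(ar = 0.8), 300)   # independent of x by construction

ccf(x, y, plot = FALSE)               # may show spurious peaks at various lags

fit <- ar(x)                                                # AR model for x
px  <- na.omit(stats::filter(x, c(1, -fit$ar), sides = 1))  # prewhitened x
py  <- na.omit(stats::filter(y, c(1, -fit$ar), sides = 1))  # same filter on y
ccf(px, py, plot = FALSE)             # now near zero, as it should be
```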


Subject(s)
Movement/physiology , Neuropsychological Tests/statistics & numerical data , Neurosciences/methods , Perception/physiology , Psychomotor Performance/physiology , Data Interpretation, Statistical , Humans , Models, Statistical , Neurosciences/statistics & numerical data , Software
12.
PLoS One ; 10(8): e0135082, 2015.
Article in English | MEDLINE | ID: mdl-26285010

ABSTRACT

For listeners familiar with Western twelve-tone equal-tempered (12-TET) music, a novel microtonal tuning system is expected to present additional processing challenges. We aimed to determine whether this was the case, focusing on the extent to which our perceptions can be considered bottom-up (psychoacoustic and primarily perceptual) and top-down (dependent on familiarity and cognitive processing). We elicited both overt response ratings and covert event-related potentials (ERPs), so as to compare subjective impressions of sounds with the neurophysiological processing of the acoustic signal. We hypothesised that microtonal intervals are perceived differently from 12-TET intervals, and that the responses of musicians (n = 10) and non-musicians (n = 10) are distinct. Two-note chords were presented comprising 12-TET intervals (consonant and dissonant) or microtonal (quarter-tone) intervals, and ERPs, subjective roughness ratings, and liking ratings were recorded successively. Musical experience mediated the perception of differences between dissonant and microtonal intervals, with non-musicians giving similar ratings for each, and musicians preferring dissonant intervals over the less commonly used microtonal intervals and rating the former as less rough. ERP response amplitude was greater for consonant intervals than for other intervals. Musical experience interacted with interval type, suggesting that musical expertise facilitates the sensory and perceptual discrimination of microtonal intervals from 12-TET intervals and increases the ability to categorize such intervals. Non-musicians appear to have perceived microtonal intervals as instances of neighbouring 12-TET intervals.


Subject(s)
Auditory Perception/physiology , Behavior/physiology , Electroencephalography , Music/psychology , Surveys and Questionnaires , Acoustic Stimulation , Adult , Evoked Potentials , Female , Humans , Male , Psychoacoustics , Recognition, Psychology
13.
J Parkinsons Dis ; 5(1): 105-16, 2015.
Article in English | MEDLINE | ID: mdl-25468233

ABSTRACT

BACKGROUND: Unsteady gait and falls are major problems for people with Parkinson's disease (PD). Symmetric auditory cues at altered cadences have been used to improve walking speed or step length. However, few people are exactly symmetric in terms of morphology or movement patterns, and the effects of symmetric cueing on gait steadiness are inconclusive. OBJECTIVES: To investigate whether matching auditory cue a/symmetry to an individual's intrinsic symmetry or asymmetry affects gait steadiness, gait symmetry, and comfort with cues in people with PD, healthy age-matched controls (HAM), and young adults. METHODS: Thirty participants (10 with PD; 11 HAM, 66 years; 9 young, 30 years) completed five baseline walks (no cues) and twenty-five cued walks at habitual cadence but different a/symmetries. Outcomes included gait steadiness (step time variability and smoothness by harmonic ratios), walking speed, symmetry, comfort, and cue lag times. RESULTS: Without cues, PD participants had slower and less steady gait than HAM or young participants. Gait symmetry was distinct from gait steadiness and was unaffected by cue symmetry or a diagnosis of PD, but was associated with aging. All participants maintained their preferred gait symmetry and lag times independent of cue symmetry. When cues were matched to the individual's habitual gait symmetry and cadence, gait steadiness improved in the PD group, deteriorated in the HAM controls, and was unchanged in the young group. Gait outcomes worsened for the two PD participants who reported discomfort during cued walking and had high New Freezing of Gait scores. CONCLUSIONS: It cannot be assumed that all individuals benefit equally from auditory cues. Symmetry-matched auditory cues compensated for unsteady gait in most people with PD, but interfered with gait steadiness in older people without basal ganglia deficits.


Subject(s)
Acoustic Stimulation/methods , Cues , Gait Disorders, Neurologic/etiology , Gait Disorders, Neurologic/therapy , Parkinson Disease/complications , Acceleration , Adult , Age Factors , Aged , Case-Control Studies , Female , Humans , Male , Middle Aged , Statistics, Nonparametric , Treatment Outcome , Walking , Young Adult
14.
Acta Psychol (Amst) ; 149: 117-28, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24809252

ABSTRACT

The aim of this work was to investigate perceived loudness change in response to melodies that increase (up-ramp) or decrease (down-ramp) in acoustic intensity, and its interaction with other musical factors such as melodic contour, tempo, and tonality (tonal/atonal). A within-subjects design manipulated direction of linear intensity change (up-ramp, down-ramp), melodic contour (ascending, descending), tempo, and tonality, using single-ramp trials and paired-ramp trials, where single up-ramps and down-ramps were assembled to create continuous up-ramp/down-ramp or down-ramp/up-ramp pairs. Twenty-nine (Exp 1) and thirty-six (Exp 2) participants rated loudness continuously in response to trials with monophonic 13-note piano melodies lasting either 6.4 s or 12 s. Linear correlation coefficients >.89 between loudness and time show that time-series loudness responses to dynamic up-ramp and down-ramp melodies are essentially linear across all melodies. Therefore, 'indirect' loudness change, derived from the difference in loudness at the beginning and end points of the continuous response, was calculated. Down-ramps were perceived to change significantly more in loudness than up-ramps in both tonalities and at a relatively slow tempo. Loudness change was also greater for down-ramps presented with a congruent descending melodic contour, relative to an incongruent pairing (down-ramp and ascending melodic contour). No differential effect of intensity ramp/melodic contour congruency was observed for up-ramps. In paired-ramp trials assessing the possible impact of ramp context, loudness change in response to up-ramps was significantly greater when preceded by down-ramps than when not preceded by another ramp. Ramp context did not affect down-ramp perception. The contributions to the fields of music perception and psychoacoustics are discussed in the context of real-time perception of music, principles of music composition, and performance of musical dynamics.


Subject(s)
Auditory Perception/physiology , Music , Acoustic Stimulation , Adolescent , Adult , Female , Humans , Loudness Perception/physiology , Male , Psychoacoustics , Time Perception , Young Adult
15.
Cogn Process ; 15(4): 491-501, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24805849

ABSTRACT

Previous studies have demonstrated that synchronising movements with other people can influence affiliative behaviour towards them. While research has focused on synchronisation with visually observed movement, synchronisation with a partner who is heard may have similar effects. We replicate findings showing that synchronisation can influence ratings of likeability of a partner, but demonstrate that this is possible with virtual interaction, involving a video of a partner. Participants performed instructed synchrony in time to sounds instead of the observable actions of another person. Results show significantly higher ratings of likeability of a partner after moving at the same time as sounds attributed to that partner, compared with moving in between sounds. Objectively quantified synchrony also correlated with ratings of likeability. Belief that sounds were made by another person was manipulated in Experiment 2, and results demonstrate that when sounds are attributed to a computer, ratings of likeability are not affected by moving in or out of time. These findings demonstrate that interaction with sound can be experienced as social interaction in the absence of genuine interpersonal contact, which may help explain why people enjoy engaging with recorded music.


Subject(s)
Attention/physiology , Interpersonal Relations , Movement/physiology , Perception , Psychomotor Performance/physiology , Sound , Acoustic Stimulation , Adolescent , Adult , Female , Fixation, Ocular , Humans , Male , Statistics, Nonparametric , Students , Universities , User-Computer Interface , Young Adult
16.
Psychol Res ; 78(5): 721-35, 2014 Sep.
Article in English | MEDLINE | ID: mdl-23975118

ABSTRACT

There is a large body of evidence on the ways that people tap in time with sounds and on the error correction they perform to do so. However, off-beat tapping is less well investigated than on-beat tapping. The current study examines people's capacity to error-correct when performing off-beat synchronisation with a set of sounds, using stimulus sequences with underlying isochrony and systematic deviations from that isochrony that increase or decrease in magnitude. Participants were instructed to 'tap between the tones' but 'try to maintain regularity'. While analysis using typical methods suggested a form of error correction was occurring, a series of more complex analyses demonstrated that participants' performance during each trial can be classified according to one of four strategies: maintaining a regular pulse, error correction, phase resetting, and negative error correction. While maintaining a regular pulse was the preferred strategy in conditions with increasingly isochronous stimuli, the majority of trials are best explained by other strategies, suggesting that participants were generally influenced by variability in the stimuli.


Subject(s)
Auditory Perception/physiology , Psychomotor Performance/physiology , Time Perception/physiology , Adult , Female , Humans , Male , Young Adult
17.
Sci Rep ; 3: 2690, 2013.
Article in English | MEDLINE | ID: mdl-24045614

ABSTRACT

As we experience a temporal flux of events, our expectations of future events change. Such expectations seem to be central to our perception of affect in music, but we have little understanding of how expectations change as recent information is integrated. When music establishes a pitch centre (tonality), we rapidly learn to anticipate its continuation. What happens when anticipations are challenged by new events? Here we show that providing a melodic challenge to an established tonality leads to progressive changes in the impact of the features of the stimulus on listeners' expectations. The results demonstrate that retrospective analysis of recent events can establish new patterns of expectation that converge towards probabilistic interpretations of the temporal stream. These studies point to wider applications of understanding the impact of information flow on future prediction and its behavioural utility.


Subject(s)
Cognition , Music/psychology , Acoustic Stimulation , Adolescent , Adult , Female , Humans , Male , Mental Processes , Pitch Discrimination , Reaction Time , Young Adult
18.
PLoS One ; 8(2): e56052, 2013.
Article in English | MEDLINE | ID: mdl-23460791

ABSTRACT

Most perceived parameters of sound (e.g. pitch, duration, timbre) can also be imagined in the absence of sound. These parameters are imagined more veridically by expert musicians than non-experts. Evidence for whether loudness is imagined, however, is conflicting. In music, the question of whether loudness is imagined is particularly relevant due to its role as a principal parameter of performance expression. This study addressed the hypothesis that the veridicality of imagined loudness improves with increasing musical expertise. Experts, novices and non-musicians imagined short passages of well-known classical music under two counterbalanced conditions: 1) while adjusting a slider to indicate imagined loudness of the music and 2) while tapping out the rhythm to indicate imagined timing. Subtests assessed music listening abilities and working memory span to determine whether these factors, also hypothesised to improve with increasing musical expertise, could account for imagery task performance. Similarity between each participant's imagined and listening loudness profiles and reference recording intensity profiles was assessed using time series analysis and dynamic time warping. The results suggest a widespread ability to imagine the loudness of familiar music. The veridicality of imagined loudness tended to be greatest for the expert musicians, supporting the predicted relationship between musical expertise and musical imagery ability.
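A bare-bones dynamic time warping distance of the kind used to compare imagined-loudness profiles with reference intensity profiles that may drift in tempo (illustrative only; the study's implementation and preprocessing are more involved):

```r
# Classic O(n*m) dynamic time warping with absolute-difference local cost.
dtw_distance <- function(a, b) {
  n <- length(a); m <- length(b)
  D <- matrix(Inf, n + 1, m + 1)
  D[1, 1] <- 0
  for (i in 1:n) for (j in 1:m) {
    cost <- abs(a[i] - b[j])
    D[i + 1, j + 1] <- cost + min(D[i, j + 1],  # insertion
                                  D[i + 1, j],  # deletion
                                  D[i, j])      # match
  }
  D[n + 1, m + 1]
}

# The same loudness contour at two tempi still yields a small DTW distance
dtw_distance(sin(seq(0, pi, length.out = 50)),
             sin(seq(0, pi, length.out = 80)))
```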


Subject(s)
Imagination , Music , Noise , Acoustic Stimulation , Adult , Humans , Imagery, Psychotherapy , Task Performance and Analysis
19.
Exp Psychol ; 60(1): 53-63, 2013.
Article in English | MEDLINE | ID: mdl-22935329

ABSTRACT

Synchronization has recently received attention as a form of interpersonal interaction that may affect the affiliative relationships of those engaged in it. While there is evidence to suggest that synchronized movements lead to increased affiliative behavior (Hove & Risen, 2009; Valdesolo & DeSteno, 2011; Wiltermuth & Heath, 2009), the influence of other interpersonal cues has yet to be fully controlled. The current study controls for these features by using computer algorithms to replace human partners. By removing genuine interpersonal interaction, it also tests whether sounds alone can influence affiliative relationships, when it appears that another human agent has triggered those sounds. Results suggest that subjective experience of synchrony had a positive effect on a measure of trust, but task success was a similarly good predictor. An objective measure of synchrony was only related to trust in conditions where participants were instructed to move at the same time as stimuli.


Subject(s)
Attention/physiology , Cues , Motion Perception/physiology , User-Computer Interface , Adolescent , Adult , Female , Humans , Interpersonal Relations , Male , Middle Aged , Young Adult