Results 1 - 10 of 10
1.
Front Neurosci ; 16: 862663, 2022.
Article in English | MEDLINE | ID: mdl-35600615

ABSTRACT

Software is intangible and invisible, yet pervasive in the devices, activities, and services that accompany our everyday life. Citizens therefore hardly realize its complexity, power, and impact on many aspects of their daily lives. In this study, we report on an experiment that aims at letting citizens make sense of software presence and activity through sound, focusing on the invisible complexity of the processes involved in the shutdown of a personal computer. We used sonification to map information embedded in software events into the sound domain. The software events involved in a shutdown have names related to the physical world and its actions: write events (information is saved to digital memory), kill events (running processes are terminated), and exit events (running programs are exited). The research presented in this article has a double character: it is an artistic realization that develops specific aesthetic choices, and it also has a pedagogical purpose, informing the casual listener about the complexity of software behavior. Two different sound design strategies were applied: the first is influenced by the sonic characteristics of the Glitch music scene, which makes deliberate use of glitch-based sound materials, distortions, aliasing, quantization noise, and all the "failures" of digital technologies; the second is based on sound samples of a subcontrabass Paetzold recorder, an unusual acoustic instrument whose unique sound has been investigated in the contemporary art music scene. Analysis of quantitative ratings and qualitative comments from 37 participants revealed that both sound design strategies succeeded in communicating the nature of the computer processes. Participants also generally appreciated the aesthetics of the peculiar sound models used in this study.
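
As a rough sketch of the kind of event-to-sound mapping described above (the paper does not include its implementation; the log format, frequencies, and envelope choices below are our own illustrative assumptions), write, kill, and exit events can each be rendered as a distinct sonic gesture and mixed at their log times:

```python
# A minimal sketch of parameter mapping for shutdown-event sonification.
# The event log format and all mapping constants are illustrative
# assumptions, not the authors' actual design.
import numpy as np

SR = 44100  # sample rate in Hz

# Hypothetical shutdown log: (time in seconds, event type)
events = [(0.0, "write"), (0.4, "write"), (0.9, "kill"), (1.5, "exit")]

def render(event):
    t = np.linspace(0, 0.2, int(SR * 0.2), endpoint=False)
    if event == "write":          # soft sine blip
        return 0.3 * np.sin(2 * np.pi * 660 * t) * np.exp(-20 * t)
    if event == "kill":           # harsh noise burst, glitch-like
        return 0.5 * np.random.uniform(-1, 1, t.size) * np.exp(-10 * t)
    return 0.4 * np.sin(2 * np.pi * 110 * t) * (1 - t / 0.2)  # exit: low fade

# Mix all events into one output buffer at their log times.
out = np.zeros(int(SR * 2.0))
for when, kind in events:
    g = render(kind)
    i = int(when * SR)
    out[i:i + g.size] += g
```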

2.
Front Neurosci ; 16: 832265, 2022.
Article in English | MEDLINE | ID: mdl-35360157

ABSTRACT

In this article, we present our work on the sonification of notated complex spectral structures. It is part of a larger research project on the design of a new notation system for representing sound-based musical structures. Complex spectral structures are notated with special symbols in the score, which can be digitally rendered so that the user can hear key aspects of what has been notated. Hearing the notated data is significantly different from reading the same data, and reveals the complexity hidden in its simplified notation. The digitally played score is not the music itself, but it can provide essential information about the music that can only be obtained in sounding form. The playback must be designed so that the user can make relevant sonic readings of the sonified data. The sound notation system used here is an adaptation of Thoresen and Hedman's spectromorphological analysis notation. Symbols originally developed by Lasse Thoresen from Pierre Schaeffer's typo-morphology have been adapted in this system to display measurable spectral features of timbral structure for the composition and transcription of sound-based musical structures. Spectrum category symbols are placed over a spectral grand staff that combines pitch and frequency indications for the combined display of pitch-based and spectrum-based musical material. Spectral features of a musical structure, such as spectral width and density, are represented as graphical symbols and rendered sonically. In perceptual experiments, we verified that users can identify spectral notation parameters from their sonification. This confirms the main principle of sonification: that data relations in one domain, in our case the notated representation of spectral features, are transformed into perceived relations in the audio domain, and back.
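
A minimal sketch of how a notated spectral feature such as width or density might be rendered sonically, assuming a simple additive-synthesis mapping that is not the system's actual renderer:

```python
# A sketch of rendering a notated "spectral width" symbol as sound: a band
# of partials spread around a center frequency. The mapping is an
# illustrative assumption.
import numpy as np

SR = 44100

def render_spectrum(center_hz, width_hz, density, dur=1.0):
    """Sum `density` partials spread evenly across `width_hz` around `center_hz`."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    freqs = np.linspace(center_hz - width_hz / 2, center_hz + width_hz / 2, density)
    out = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return out / density  # normalize amplitude

narrow = render_spectrum(440, 20, 3)    # almost pitch-like
wide   = render_spectrum(440, 400, 24)  # dense, noise-like band
```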

3.
Front Neurosci ; 10: 521, 2016.
Article in English | MEDLINE | ID: mdl-27891074

ABSTRACT

In this paper, we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with different movement characteristics than a model characterized by abrupt variations in amplitude, and that these associations would be reflected in spontaneous movement. Three studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data, and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1), we used a system consisting of 17 IR cameras tracking passive reflective markers. The horizontal-plane head positions of 3-4 children were simultaneously tracked and sonified, producing 3-4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed the children's spontaneous movement in terms of energy, smoothness, and directness indices. Despite large inter-participant variability and group-specific effects caused by interaction among the children engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g., expressivity and fluidity). The results also imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g., energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that these results support the existence of a cross-modal mapping of body motion qualities between movement and sound: motion qualities can be conveyed through sound, depicted in drawings, and translated back from such visualizations to audio. The work underlines the potential of interactive sonification for communicating high-level features of human movement data.
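
The three descriptors can be illustrated with common kinematic definitions (energy from speed, smoothness from inverse mean jerk, directness as displacement over traveled path), though the exact formulas used in the study may differ:

```python
# A sketch of the three movement descriptors under common definitions;
# the study's exact formulas are not given in the abstract.
import numpy as np

def movement_indices(pos, fps=100.0):
    """pos: (N, 2) array of horizontal head positions in meters."""
    dt = 1.0 / fps
    vel = np.gradient(pos, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)

    speed = np.linalg.norm(vel, axis=1)
    energy = np.mean(speed**2)                       # kinetic-energy proxy
    smoothness = 1.0 / (np.mean(np.linalg.norm(jerk, axis=1)) + 1e-9)
    path = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))
    directness = np.linalg.norm(pos[-1] - pos[0]) / (path + 1e-9)
    return energy, smoothness, directness
```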

4.
J Acoust Soc Am ; 136(5): 2839-50, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25373983

ABSTRACT

Both the timbre and dynamics of isolated piano tones are commonly held to be determined exclusively by the speed with which the hammer hits the strings. This physical view has been challenged by pianists, who emphasize the importance of the way the keyboard is touched. This article presents empirical evidence from two perception experiments showing that touch-dependent sound components make tones with identical hammer velocities, but produced with different forms of touch, clearly distinguishable. The first experiment focused on finger-key sounds: musicians could identify pressed and struck touches. When the finger-key sounds were removed from the recordings, the effect vanished, suggesting that these sounds were the primary identification cue. The second experiment examined the sounds that occur when the key reaches the key bottom. Key-bottom impact was identified from key motion measured by a computer-controlled piano. Musicians were able to discriminate piano tones that contain a key-bottom sound from those that do not, although this effect might be attributable to sounds associated with the mechanical components of the piano action. In addition to the demonstrated acoustical effects of different forms of touch, the visual and tactile modalities may play important roles during piano performance, influencing the production and perception of musical expression on the piano.
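
A minimal sketch of how key-bottom contact might be located in a key-displacement signal; the detection rule and tolerance are illustrative assumptions, not the measurement procedure used in the study:

```python
# Locate key-bottom contact as the first sample where the key is within a
# tolerance of full depression. Rule and tolerance are assumptions.
import numpy as np

def key_bottom_index(displacement, tol=0.02):
    """displacement: key travel normalized to [0, 1], 1 = fully depressed."""
    hits = np.flatnonzero(displacement >= 1.0 - tol)
    return int(hits[0]) if hits.size else None
```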


Subjects
Auditory Perception; Discrimination, Psychological/physiology; Music/psychology; Touch; Accelerometry; Adult; Cues; Equipment Design; Equipment and Supplies; Female; Fingers/physiology; Humans; Male; Middle Aged; Sound; Sound Spectrography; Stress, Mechanical; Young Adult
5.
PLoS One ; 9(12): e115587, 2014.
Article in English | MEDLINE | ID: mdl-25551392

ABSTRACT

Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest that both the expression and recognition of emotion in music may, at least in part, rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between the musical expression of emotions and the expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed strong similarities: features related to sound intensity, tempo, and tempo regularity were used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of the behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize the expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions.
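
A sketch of how tempo and tempo regularity could be computed from footstep onset times; the exact feature definitions used in the study are not published in the abstract, so these are assumptions:

```python
# Tempo and tempo-regularity features from footstep onsets; the feature
# definitions here are our assumptions.
import numpy as np

def tempo_features(onsets):
    """onsets: footstep onset times in seconds, in increasing order."""
    ioi = np.diff(onsets)               # inter-onset intervals
    tempo_bpm = 60.0 / np.mean(ioi)     # mean step rate as a tempo
    regularity = 1.0 - np.std(ioi) / np.mean(ioi)  # 1 = perfectly regular
    return tempo_bpm, regularity
```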


Subjects
Auditory Perception; Emotions; Motor Activity; Music; Walking/psychology; Adult; Female; Humans; Male; Middle Aged
6.
PLoS One ; 8(12): e82491, 2013.
Article in English | MEDLINE | ID: mdl-24358192

ABSTRACT

The field of sonification has progressed greatly over the past twenty years and now constitutes an established area of research. This article aims at exploiting and organizing the knowledge accumulated in previous experimental studies to build a foundation for future sonification work. A systematic review of these studies may reveal trends in sonification design and thereby support the development of design guidelines. To this end, we reviewed and analyzed 179 scientific publications related to the sonification of physical quantities. Using a bottom-up approach, we compiled a list of conceptual dimensions belonging to both the physical and the auditory domain. The mappings used in the reviewed works were identified, forming a database of 495 entries. Frequency of use was analyzed across these conceptual dimensions as well as across higher-level categories. The results confirm two hypotheses formulated in a preliminary study: pitch is by far the most used auditory dimension in sonification applications, and spatial auditory dimensions are almost exclusively used to sonify kinematic quantities. To detect successful as well as unsuccessful sonification strategies, we considered the assessments of mapping efficiency conducted in the reviewed works. The results show that a proper evaluation of sonification mappings is performed in only a small proportion of publications. Additional aspects of the publication database were investigated: the historical distribution of sonification works is presented, projects are classified according to their primary function, and the sonic material used in the auditory displays is discussed. Finally, a mapping-based approach for characterizing sonification is proposed.
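
The frequency-of-use analysis can be pictured as simple counting over the mapping database; the record layout below is a guess at how such a database might be stored:

```python
# Counting mapping frequencies over a database of (physical, auditory)
# dimension pairs; the record layout is an illustrative assumption.
from collections import Counter

mappings = [
    ("velocity", "pitch"),
    ("position", "azimuth"),
    ("temperature", "pitch"),
    # ... remaining entries of the 495-entry database
]

auditory_use = Counter(aud for _, aud in mappings)  # e.g. pitch dominates
pairs = Counter(mappings)                           # most common mappings
print(auditory_use.most_common(3))
print(pairs.most_common(3))
```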


Subjects
Acoustic Stimulation; Databases, Factual; Research Design; Sound
7.
Front Psychol ; 4: 487, 2013.
Article in English | MEDLINE | ID: mdl-23908642

ABSTRACT

The aim of this study is to systematically manipulate musical cues to determine which aspects of music contribute to emotional expression, whether these cues operate in an additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues, and their ranked importance was established by multiple regression. The most important cue was mode, followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result was that most cue levels contributed to the emotions in a linear fashion, explaining 77-89% of the variance in ratings; quadratic encoding of the cues yielded minor but significant increases in explained variance (0-8%). Finally, interactions between the cues were virtually non-existent, suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin and Lindström, 2010).
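
The linear-versus-quadratic comparison amounts to fitting regressions with and without squared cue terms and comparing explained variance. A sketch under assumed data shapes and cue coding (synthetic data, not the study's ratings):

```python
# Compare linear vs. quadratic cue encodings by explained variance (R^2).
# The data here are synthetic; shapes and coding are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 6)).astype(float)  # 6 cues, 3 levels each
y = X @ np.array([1.5, 1.0, 0.8, 0.5, 0.3, 0.2]) + rng.normal(0, 1, 200)

r2_linear = LinearRegression().fit(X, y).score(X, y)
X_quad = np.hstack([X, X**2])                        # add quadratic terms
r2_quad = LinearRegression().fit(X_quad, y).score(X_quad, y)
print(f"linear R2={r2_linear:.2f}, gain from quadratic={r2_quad - r2_linear:.2f}")
```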

8.
Cortex ; 47(9): 1068-81, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21696717

ABSTRACT

Many studies on the synthesis of emotional expression in music performance have examined the effect of individual performance variables on perceived emotional quality through systematic variation of those variables. However, most studies have used a small, predetermined number of levels for each variable, and the selection of these levels has often been arbitrary. The main aim of this work is to improve on existing methodologies by taking a synthesis approach. In a production experiment, 20 performers were asked to manipulate the values of 7 musical variables simultaneously (tempo, sound level, articulation, phrasing, register, timbre, and attack speed) to communicate 5 different emotional expressions (neutral, happy, scary, peaceful, sad) for each of 4 scores. The scores were compositions communicating four different emotions (happiness, sadness, fear, calmness). Emotional expressions and scores were presented in combination and in random order to each performer, for a total of 5 × 4 = 20 stimuli. The experiment allowed a systematic investigation of the interaction between the emotion of each score and the emotions the performers intended to express. A two-way repeated-measures analysis of variance (ANOVA), with emotion and score as factors, was conducted on the participants' values separately for each of the seven musical variables. There are two main results. The first is that the musical variables were manipulated in the same directions as reported in previous research on emotionally expressive music performance. The second is the identification, for each of the five emotions, of the mean values and ranges of five musical variables: tempo, sound level, articulation, register, and instrument. These values proved to be independent of the particular score and its emotion. The results presented in this study therefore allow for the design and control of emotionally expressive computerized musical stimuli that are more ecologically valid than stimuli without performance variations.
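
A sketch of the two-way repeated-measures ANOVA on one of the musical variables (tempo), with emotion and score as within-subject factors; the data frame layout and column names are illustrative assumptions:

```python
# Two-way repeated-measures ANOVA on a single musical variable, using
# synthetic data; column names and effect sizes are assumptions.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = [
    {"performer": p, "emotion": e, "score": s,
     "tempo": 100 + 20 * (e == "happy") - 25 * (e == "sad") + rng.normal(0, 5)}
    for p in range(20)
    for e in ["neutral", "happy", "scary", "peaceful", "sad"]
    for s in ["s1", "s2", "s3", "s4"]
]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="tempo", subject="performer",
              within=["emotion", "score"]).fit()
print(res)
```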


Subjects
Auditory Perception; Emotions; Music/psychology; Acoustic Stimulation; Adult; Female; Humans; Male; Middle Aged
9.
J Acoust Soc Am ; 118(2): 1154-65, 2005 Aug.
Article in English | MEDLINE | ID: mdl-16158669

ABSTRACT

This study investigated the temporal behavior of grand piano actions from different manufacturers under different touch conditions and dynamic levels. An experimental setup consisting of accelerometers and a calibrated microphone was used to capture key and hammer movements as well as the sound signal. Five selected keys were played by pianists with two types of touch ("pressed" versus "struck") over the entire dynamic range. Discrete measurements were extracted from the accelerometer data for each of the more than 2300 recorded tones (e.g., finger-key, hammer-string, and key-bottom contact times, and maximum hammer velocity). Travel times of the hammer (from finger-key contact to hammer-string contact) as a function of maximum hammer velocity varied clearly between the two types of touch, but only slightly between pianos. A travel-time approximation used in earlier work [Goebl, W. (2001). J. Acoust. Soc. Am. 110, 563-572], derived from a computer-controlled piano, was verified. Consistent temporal behavior across types of touch and low compression in the parts of the action (reflected in key-bottom contact times) were hypothesized to be indicators of instrument quality.
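
The travel-time approximation itself is not reproduced in the abstract; as one plausible illustration, a decreasing power law t = a * v**b can be fitted to (velocity, travel time) pairs. The functional form and the numbers below are our assumptions, not the published model:

```python
# Fit t = a * v**b to measured (velocity, travel time) pairs by
# linearizing in log space. Data and model form are illustrative.
import numpy as np

v = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])    # max hammer velocity, m/s
t = np.array([140., 75., 40., 28., 22., 18.])   # travel time, ms

# log t = log a + b log v, solved by least squares.
b, log_a = np.polyfit(np.log(v), np.log(t), 1)
a = np.exp(log_a)
print(f"t ~ {a:.1f} * v**{b:.2f} ms")
```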


Subjects
Fingers/physiology; Music; Touch/physiology; Equipment Design; Humans; Regression Analysis; Sound Spectrography; Time Factors
10.
J Acoust Soc Am ; 114(4 Pt 1): 2273-83, 2003 Oct.
Article in English | MEDLINE | ID: mdl-14587624

ABSTRACT

The recording and reproducing capabilities of a Yamaha Disklavier grand piano and a Bösendorfer SE290 computer-controlled grand piano were tested, with the goal of examining their reliability for performance research. An experimental setup consisting of accelerometers and a calibrated microphone was used to capture key and hammer movements as well as the acoustic signal. Five selected keys were played by pianists with two types of touch ("staccato" and "legato"). Timing and dynamic differences between the original performance, the corresponding MIDI file recorded by the computer-controlled piano, and its reproduction were analyzed. The two devices performed quite differently with respect to timing and dynamic accuracy. The Disklavier's onset capturing was slightly more precise (+/- 10 ms) than its reproduction (-20 to +30 ms); the Bösendorfer performed better overall, but its timing accuracy was slightly less precise for recording (-10 to +3 ms) than for reproduction (+/- 2 ms). Both devices exhibited a systematic (linear) recording error over time. In the dynamic dimension, the Bösendorfer showed higher consistency over the whole dynamic range, while the Disklavier performed well only in a wide middle range. Neither device was able to capture or reproduce different types of touch.
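
The timing-accuracy analysis can be sketched as comparing onset times measured by the sensors against onsets in the recorded MIDI file; the numbers below are made up for illustration, and perfect note alignment is assumed:

```python
# Onset timing error between sensor-measured and MIDI-recorded onsets;
# data are fabricated for illustration, notes assumed perfectly aligned.
import numpy as np

measured = np.array([0.000, 0.512, 1.008, 1.497, 2.003])  # s, from sensors
midi     = np.array([0.004, 0.520, 1.001, 1.510, 1.998])  # s, from MIDI file

err_ms = (midi - measured) * 1000.0
print(f"mean error {err_ms.mean():+.1f} ms, range "
      f"{err_ms.min():+.1f} to {err_ms.max():+.1f} ms")

# A systematic linear error over time shows up as a nonzero slope here.
drift, offset = np.polyfit(measured, err_ms, 1)
print(f"drift {drift:.2f} ms per second of performance")
```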


Subjects
Microcomputers; Music; Sound Spectrography; Equipment Design; Humans; Reproducibility of Results