Results 1 - 2 of 2
1.
Salud ment; 36(6): 449-457, Nov.-Dec. 2013. ilus, tab
Article in Spanish | LILACS-Express | LILACS | ID: lil-703510

ABSTRACT

[Translated from Spanish] The purpose of this study was to record differences during the listening of two different types of music in patients with Major Depressive Disorder (MDD), compared with healthy subjects, using functional magnetic resonance imaging (fMRI). Brain activity under musical stimuli has been investigated extensively in healthy subjects, but studies of music processing in states of mental pathology, particularly in MDD, are scarce. Studies in this interdisciplinary area provide a new research perspective for exploring the neurobiological substrates of MDD. Twenty male subjects participated: 10 patients with MDD (34 ± 7 years) and 10 control subjects (33 ± 7 years). The patients were selected in the pre-consultation service of the Instituto Nacional de Psiquiatría Ramón de la Fuente Muñiz (INPRFM) in Mexico City, and the control subjects among workers of the Institute itself who responded to the invitation. To confirm the diagnosis, all participants completed the Hamilton anxiety and depression scales, the Beck anxiety and depression inventories, and the SCL-90-R; patients were additionally given the Mini-Mental State Examination. For the fMRI, a 3-Tesla Philips Achieva scanner at the INPRFM was used; the analysis was done with SPM2 using the MRIcro system. The experimental stimuli were one musical work by J. S. Bach validated as calm and another by J. Prodromidès validated as disturbing. The results show differences both between the groups of subjects and between the types of music: in all cases the parahippocampal area, the tail of the caudate nucleus, and the auditory temporal cortex were activated. We conclude that the neurobiological processing of music is affected by MDD. The clinical and cognitive implications of these findings are discussed.


The purpose of this study was to assess differences in brain activity when patients with major depressive disorder (MDD) listen to two different types of music, with healthy subjects as controls, using functional magnetic resonance imaging (fMRI). Brain activity during musical stimulation has been investigated extensively in healthy subjects, but there are few neurobiological studies of music in mental illness, particularly in MDD. Studies in this area provide a new interdisciplinary research perspective for exploring the neurobiological substrates of MDD. This study involved 20 male subjects: 10 patients (34 ± 7 years) and 10 control subjects (33 ± 7 years). The MDD patients were selected in the pre-consultation service of the National Institute of Psychiatry Ramón de la Fuente Muñiz (INPRFM) of Mexico City, and control subjects were selected among workers of the Institute who responded to the invitation. All participants completed the Hamilton scales for anxiety and depression, the Beck inventories for depression and anxiety, and the SCL-90-R; the Mini-Mental State Examination was also administered to patients for diagnostic purposes. The fMRI was obtained with a 3-Tesla Philips Achieva scanner at the INPRFM; the analysis was done with SPM2 using the MRIcro system. The experimental stimuli were two pieces of music: one by J. S. Bach validated as calm and another by J. Prodromidès validated as disturbing. Results show differences both between the groups of subjects and between the types of music. In all cases, the parahippocampal area, the tail of the caudate nucleus, and the auditory temporal cortex were activated. The neurobiological processing of music is affected by MDD. We discuss the clinical and cognitive implications of these findings.

2.
Salud ment ; 32(1): 21-34, Jan.-Feb. 2009. ilus
Article in Spanish | LILACS-Express | LILACS | ID: lil-632686

ABSTRACT

Even though music is usually considered a source of intense, diverse, and specific affective states, at present there is no standardized scientific procedure that reliably reveals the emotional processes and events evoked by music. Progress in understanding musical emotion depends crucially on the development of reasonably secure methods to record and analyze such a peculiar and universally sought affective process. In 1936 Kate Hevner published a pioneering study in which she used a list of 66 adjectives, commonly used to categorize musical compositions, arranged in a circle of eight groups of similar emotions. Volunteers selected the terms that seemed appropriate to categorize their emotional experience while they listened to masterpieces by Debussy, Mendelssohn, Paganini, Tchaikovsky, and Wagner. The results were presented as histograms showing a distinct profile for each piece. Subsequent studies have advanced the methods and techniques for assessing the emotions produced by music, but many difficulties remain unresolved concerning the criteria for choosing the musical pieces, the emotion terms, the design of the experiment, the proper controls, and the relevant statistical tools for analyzing the results. The present study was undertaken to test and advance an experimental technique designed to evaluate and study the human emotions evoked by music. Specifically, the study tests whether different musical excerpts evoke significant agreement in the selection of previously organized emotion terms within a relatively homogeneous population of human subjects.
Since music constitutes a form of acoustic language that has been selected and developed through millennia of human cultural evolution for the expression and communication of emotional states, it was assumed that there would be significant agreement in the attribution of emotion terms to musical segments among human evaluators belonging to a relatively homogeneous population. The attribution system made it possible both to obtain objective responses derived from introspection and to analyze the data by means of appropriate statistical processing of data obtained from groups of subjects exposed to carefully selected musical stimuli. The volunteers were 108 college-level students of both sexes, with a mean age of 22 years, from schools and universities in central Mexico. The audition and attribution sessions lasted 90 min and were conducted in a specially adapted classroom at each institution. Four criteria were established for the selection of the musical excerpts: instrumental music, homogeneous melody and musical theme, clear and distinct affective tone, and samples from different cultures. The ten selected pieces were: 1. Mozart's piano concerto no. 17, K. 453, third movement; 2. a sonification of the magnetic spectrum of an aurora borealis, a natural event; 3. Mussorgsky's Gnome, from Pictures at an Exhibition, orchestrated by Ravel; 4. Andean folk music; 5. Tchaikovsky's Fifth Symphony, second movement; 6. "Through the Never", heavy metal music by Metallica; 7. Japanese Usagi folk music played with koto and shakuhachi; 8. Mahler's Fifth Symphony, second movement; 9. Taqsim Sigah, Arab folk music played with kamandja; and 10. Bach's three-part invention for piano, BWV 797. The selected fragments and their replicas were divided into two to five musically homogeneous segments (mean segment duration: 24 seconds) and were played in a different order on each occasion. The segments were played twice during the test.
During the first audition, the complete piece was played so that the subjects could become familiar with the composition and freely express their reaction in writing. During the second audition, the same piece was played in the separate selected segments, and the volunteers were asked to choose, from an accompanying chart, the emotion-referring terms that most accurately identified their music-evoked feelings. The chart was obtained and arranged from an original list of 328 Spanish words denoting particular emotions. The terms had previously been arranged in 28 sets of semantically related terms located on 14 bipolar axes of opposing affective polarity in a circumplex model of the affective system. The recorded attributions from all subjects were captured and transformed into ranks. The non-parametric Friedman two-way analysis of variance by ranks for k related samples was selected for the statistical analysis of agreement. All data were gathered into the 28 categories or sets of emotion obtained in the previous taxonomy of emotion terms, and the difference among the musical segments was tested. The difference was significant for 24 of the 28 emotional categories at α = 0.05 with 33 degrees of freedom (Fr ≥ 43.88). To establish in which segments the main significant differences lay, the extension of the Friedman test for comparison of groups with a control was applied. After applying the appropriate formula, a critical value of the difference, |R1 − Ru| ≥ 18.59, was established. In this way it was possible to plot the significance level of all 28 emotion categories for each musical segment and thereby obtain the emotion profile of each selected music fragment. The differences among the musical pieces were established in terms of the significant response of individual emotions, of groups of emotions, and of the global profile of the response. In all the pieces used, one or more terms showed significance.
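The agreement analysis described above — subjects' emotion-term attributions transformed into ranks and compared across musical segments with the Friedman test — can be sketched with standard tools. The sketch below uses invented data (the counts of subjects and segments follow the abstract; the scores themselves are simulated, not the study's data):

```python
# Sketch of a Friedman rank analysis of agreement, as described in the
# abstract: each subject rates one emotion category across all musical
# segments, and the test asks whether segments differ in their ranks.
# All data here are simulated for illustration only.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n_subjects, n_segments = 108, 34  # 34 segments -> 33 degrees of freedom

# Invented attribution scores: rows are subjects, columns are segments
# (higher = stronger attribution of the emotion category to the segment).
scores = rng.integers(0, 5, size=(n_subjects, n_segments)).astype(float)
scores[:, 0] += 3.0  # make one segment stand out so the test detects it

# friedmanchisquare takes one 1-D sample per "treatment" (segment here),
# each of length n_subjects; it ranks within subjects internally.
stat, p = friedmanchisquare(*scores.T)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3g}")
```

With a genuinely outstanding segment, the statistic far exceeds the chi-square critical value for 33 degrees of freedom, mirroring the abstract's significance criterion (Fr ≥ 43.88 at α = 0.05).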
Sometimes as many as seven terms appeared predominant (Mahler, Mozart); in contrast, other segments produced only one or two responses (aurora borealis, Arab music). In most musical segments there were also null responses, implying agreement not only about the emotions that were present but also about those that did not occur. Concerning the global response, several profiles were recognizable among the different pieces. The histogram is slanted to the left when positive and vigorous emotions are reported (Tchaikovsky, Bach). A predominance of emotions in the center-right sector corresponds to negative and quiet emotions (Arab music), or in the fourth sector to negative and agitated emotions (Mahler). Sometimes a <> shaped profile was obtained when vigorous emotions predominated (Mahler, Metallica), and a bell-shaped response when calm emotions, both pleasant and unpleasant, were reported (Japanese music). There is also music that globally stimulates one of the four quadrants defined in the affective circle, such as pleasant (Mozart), unpleasant (Mussorgsky), exciting (Metallica), or relaxing emotions (Japanese music). The only segment that produced responses scattered across all four sectors of emotion was the aurora borealis. Very similar profiles were obtained with very different pieces, such as the nearly identical responses to Mozart and Andean music; the individual emotion terms must be analyzed to distinguish them. Several common characteristics can be detected in these two pieces, such as a fast allegro tempo, binary rhythm, counterpoint figures, and ascending melody, well-known features in music composition. In contrast, other segments evoked unpleasant responses (Mussorgsky), in which fear, tension, doubt, or pain was reported. The listener probably assigns a high value to a piece that, in the context of a controlled artistic experience, evokes emotions he or she normally avoids.


[Translated from Spanish] Although music is usually considered a source of varied, distinct, and intense affective states, no scientific technique currently exists that reveals with sufficient experimental fidelity the emotional processes and states it evokes. Progress in understanding musical emotion therefore depends critically on the development of reasonably secure methods to record and analyze this peculiar affective process. The present study was carried out to develop and test a technique designed for the study of the human emotions evoked by music. In particular, it examines whether different musical pieces evoke significant agreement in the selection of previously systematized emotion terms within a comparable population of human subjects. Since music constitutes a type of acoustic language selected evolutionarily and culturally for the communication of emotional states, significant agreement can be assumed among evaluators from a homogeneous population in the attribution of emotion terms to carefully selected musical segments. The chosen attribution system made it possible to obtain objective responses, derived from introspection, to musical segments and to analyze the data through appropriate statistical processing of the level of agreement among observers. The volunteers were 108 students of both sexes, with a mean age of 22 years, from four higher-education schools in the Mexican states of Querétaro and Guanajuato. The sessions lasted 90 minutes each and were held in an adapted classroom at each institution. In the audition and attribution sessions, 10 musical works were played: five from the classical repertoire, four from local and foreign popular repertoires, and the sonification of the magnetic spectrum of an aurora borealis, a natural phenomenon.
The selected fragments, divided into two to five segments averaging 24 s in duration, were played in different orders and in two consecutive steps. First the complete section was played so that the listener could freely express his or her reactions in writing. Then the segments of the same work were played, and the listener chose, from an accompanying compendium, the emotion terms that best identified his or her affective response to each of them. The response form presented an arrangement in which the subject selected and attributed the emotion terms evoked by the different musical segments while listening to them. For this task, subjects were provided with a circular scheme of emotion terms, attached to the form, derived from a list of 328 Spanish words denoting particular emotions. The terms were finally grouped into 28 sets and 14 axes of opposing affective polarity in a circumplex model of the affective system, with a total of 168 terms. For the statistical analysis, the attributions of all participating subjects were transformed into ranks, and the non-parametric Friedman two-way analysis of variance by ranks for k related samples was applied as an analysis of agreement among observers. The data were grouped into the 28 emotion categories mentioned, and the differences among the musical segments were tested. These were significant in 24 of the 28 emotion categories at α = 0.05, with 33 degrees of freedom (Fr ≥ 43.88). To establish in which segments the differences lay, the extension of the Friedman test for comparisons of groups with a control was applied, yielding a critical value of the differences, |R1 − Ru| ≥ 18.59, with which it was possible to plot the significance level of all the emotion categories for each musical segment.
In this way, the specific emotional profile of each music fragment was obtained for the analyzed population. The results show that in every musical segment one or more emotion categories predominate significantly, and that these differ for most of the segments. If, as is plausible to suppose, the emotion terms chosen by the participating subjects actually corresponded to particular emotional states, then most of the musical segments chosen as stimuli appear to generate a similar and relatively specific emotional response among listeners, as a function of the characteristics of their composition. This technique may be useful for generating and analyzing specific emotional states in controlled experimental situations of musical listening.
