1.
J Patient Saf ; 18(5): e823-e866, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35195113

ABSTRACT

OBJECTIVE: Electronic health records (EHRs) and big data tools offer the opportunity for surveillance of adverse events (patient harm associated with medical care). We used International Classification of Diseases, Ninth Revision, codes in electronic records to identify known, and potentially novel, adverse reactions to blood transfusion. METHODS: We used 49,331 adult admissions involving critical care at a major teaching hospital, 2001-2012, in the Medical Information Mart for Intensive Care III EHR database. We formed a transfused (T) group of 21,443 admissions (transfusion defined as packed red blood cells, platelets, or plasma) versus 25,468 comparison (C) admissions. International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes were compared for T versus C, described, and tested with statistical tools. RESULTS: Transfusion adverse events (TAEs) such as transfusion-associated circulatory overload (TACO; 12 T cases; rate ratio [RR], 15.61; 95% confidence interval [CI], 2.49-98) were found. There were also potential TAEs similar to TAEs, such as fluid overload disorder (361 T admissions; RR, 2.24; 95% CI, 1.88-2.65), which resembles TACO. Some diagnoses could have been sequelae of TAEs, including nontraumatic compartment syndrome of abdomen (52 T cases; RR, 6.76; 95% CI, 3.40-14.9), possibly a consequence of TACO. CONCLUSIONS: Surveillance for diagnosis codes that could be TAE sequelae or unrecognized TAEs might be a useful supplement to existing medical product adverse event programs.
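The rate ratios and log-scale confidence intervals reported above can be sketched as follows. The comparison-group case counts are not given in the abstract, so the counts below are hypothetical and the output does not reproduce the study's figures; this only illustrates the standard large-sample calculation.

```python
import math

def rate_ratio_ci(cases_t, n_t, cases_c, n_c, z=1.96):
    """Rate ratio of a T group vs. a C group, with a 95% CI
    computed on the log scale (large-sample approximation)."""
    rr = (cases_t / n_t) / (cases_c / n_c)
    se = math.sqrt(1 / cases_t - 1 / n_t + 1 / cases_c - 1 / n_c)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts (NOT the study's data): 12 cases among
# 21,443 T admissions vs. 1 case among 25,468 C admissions.
rr, lo, hi = rate_ratio_ci(12, 21443, 1, 25468)
```

A CI whose lower bound exceeds 1 is what flags a diagnosis code as occurring disproportionately in the transfused group.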


Subject(s)
Electronic Health Records , Transfusion Reaction , Adult , Blood Transfusion , Humans , Risk Factors , Transfusion Reaction/epidemiology
2.
JMIRx Med ; 2(3): e27017, 2021 Aug 11.
Article in English | MEDLINE | ID: mdl-37725533

ABSTRACT

BACKGROUND: Big data tools provide opportunities to monitor adverse events (AEs; patient harm associated with medical care) in the unstructured text of electronic health care records (EHRs). Writers may explicitly state an apparent association between treatment and adverse outcome ("attributed") or simply state the treatment and the outcome without linking them ("unattributed"). Many methods for finding AEs in text rely on predefining possible AEs and searching for prespecified words and phrases, or on manual labeling (standardization) by investigators. We developed a method to identify possible AEs, even if unknown or unattributed, without any prespecification or standardization of notes. Our method was inspired by the word-frequency analyses used to test the authorship of disputed works attributed to William Shakespeare. We chose two use cases, "transfusion" and "time-based." Transfusion was chosen because new transfusion AE types were becoming recognized during the study data period; we therefore anticipated an opportunity to find unattributed potential AEs (PAEs) in the notes. With the time-based case, we wanted to simulate near real-time surveillance, choosing time periods in the hope of detecting PAEs due to the contaminated heparin in circulation from mid-2007 to mid-2008, a contamination announced in early 2008. We hypothesized that contaminated heparin may have been widespread enough to manifest in EHRs through symptoms related to heparin AEs, independent of clinicians' documentation of attributed AEs. OBJECTIVE: We aimed to develop a new method to identify attributed and unattributed PAEs using the unstructured text of EHRs. METHODS: We used EHRs for adult critical care admissions at a major teaching hospital (2001-2012). For each use case, we formed a group of interest and a comparison group. We concatenated the text notes for each admission into one document sorted by date, and deleted replicate sentences and lists. We identified statistically significant words in the group of interest versus the comparison group, filtered the group-of-interest documents to those words, and applied topic modeling to the filtered documents to produce topics. For each topic, the three documents with the maximum topic scores were manually reviewed to identify PAEs. RESULTS: Topics centered on medical conditions that were unique to, or more common in, the group of interest, including PAEs. In each use case, most PAEs were unattributed in the notes. Among the transfusion PAEs was unattributed evidence of transfusion-associated circulatory overload and transfusion-related acute lung injury. Some of the PAEs from mid-2007 to mid-2008 were increases in unattributed events consistent with AEs related to heparin contamination. CONCLUSIONS: The Shakespeare method could be a useful supplement to AE reporting and to surveillance of structured EHR data. Future improvements should include automation of the manual review process.
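The first step of the method, finding words significantly over-represented in the group of interest, can be sketched with a two-proportion z-test on word counts. This is a toy illustration on invented notes, not the authors' code; the subsequent topic-modeling step is omitted, and the z threshold and example vocabulary are assumptions.

```python
from collections import Counter
import math

def distinctive_words(group_docs, comparison_docs, z_threshold=2.0):
    """Flag words significantly more frequent in the group of interest
    than in the comparison group (two-proportion z-test, toy version)."""
    g = Counter(w for d in group_docs for w in d.lower().split())
    c = Counter(w for d in comparison_docs for w in d.lower().split())
    n_g, n_c = sum(g.values()), sum(c.values())
    flagged = []
    for word, k_g in g.items():
        k_c = c.get(word, 0)
        p = (k_g + k_c) / (n_g + n_c)               # pooled proportion
        se = math.sqrt(p * (1 - p) * (1 / n_g + 1 / n_c))
        if se == 0:
            continue
        z = (k_g / n_g - k_c / n_c) / se
        if z > z_threshold:
            flagged.append(word)
    return flagged

# Invented example notes: a symptom word concentrated in the group of interest.
group = ["fluid overload dyspnea"] * 20
comparison = ["routine care note"] * 20
flagged = distinctive_words(group, comparison)
```

Documents would then be filtered to the flagged vocabulary before topic modeling.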

3.
Neuroimage ; 209: 116496, 2020 04 01.
Article in English | MEDLINE | ID: mdl-31899286

ABSTRACT

Improvisation is sometimes described as instant composition and offers a glimpse into real-time musical creativity. Over the last decade, researchers have built up our understanding of the core neural activity patterns associated with musical improvisation by investigating cohorts of professional musicians. However, because creative behavior calls on the unique individuality of an artist, averaging data across musicians may dilute important aspects of the creative process. By performing case study investigations of world-class artists, we may gain insight into their unique creative abilities and achieve a deeper understanding of the biological basis of musical creativity. In this experiment, functional magnetic resonance imaging and functional connectivity were used to study the neural correlates of improvisation in the famed Classical music performer and improviser Gabriela Montero (GM). GM completed two control tasks of varying musical complexity: for the Scale condition she repeatedly played a chromatic scale, and for the Memory condition she performed a given composition from memory. For the experimental improvisation condition, she performed improvisations. Thus, we were able to compare the neural activity underlying a generative musical task such as improvisation with that of 'rote' musical tasks of playing pre-learned and pre-memorized music. In GM, improvisation was largely associated with activation of auditory, frontal/cognitive, motor, parietal, occipital, and limbic areas, suggesting that improvisation is a multimodal activity for her. Functional connectivity analysis suggests that the visual network, default mode network, and subcortical networks are involved in improvisation as well. While these findings should not be generalized to other samples or populations, the results shed light on the brain activity that underlies GM's unique ability to perform Classical-style musical improvisations.


Subject(s)
Cerebral Cortex/physiology , Connectome , Creativity , Limbic System/physiology , Music , Nerve Net/physiology , Psychomotor Performance/physiology , Cerebral Cortex/diagnostic imaging , Female , Humans , Limbic System/diagnostic imaging , Magnetic Resonance Imaging , Middle Aged , Nerve Net/diagnostic imaging
4.
Cochlear Implants Int ; 16 Suppl 3: S114-20, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26561882

ABSTRACT

OBJECTIVES: The purpose of this study was to investigate the extent to which cochlear implant (CI) users rely on tempo and mode in perception of musical emotion when compared with normal hearing (NH) individuals. METHODS: A test battery of novel four-bar melodies was created and adapted to four permutations with alterations of tonality (major vs. minor) and tempo (presto vs. largo), resulting in non-ambiguous (major key/fast tempo and minor key/slow tempo) and ambiguous (major key/slow tempo, and minor key/fast tempo) musical stimuli. Both CI and NH participants listened to each clip and provided emotional ratings on a Likert scale of +5 (happy) to -5 (sad). RESULTS: A three-way ANOVA demonstrated an overall effect for tempo in both groups, and an overall effect for mode in the NH group. The CI group rated stimuli of the same tempo similarly, regardless of changes in mode, whereas the NH group did not. A subgroup analysis indicated the same effects in both musician and non-musician CI users and NH listeners. DISCUSSION: The results suggest that the CI group relied more heavily on tempo than mode in making musical emotion decisions. The subgroup analysis further suggests that level of musical training did not significantly impact this finding. CONCLUSION: CI users weigh temporal cues more heavily than pitch cues in inferring musical emotion. These findings highlight the significant disadvantage of CI users in comparison with NH listeners for music perception, particularly during recognition of musical emotion, a critically important feature of music.


Subject(s)
Cochlear Implants , Cues , Deafness/psychology , Emotions , Music/psychology , Pitch Perception , Adult , Aged , Cochlear Implantation , Deafness/surgery , Female , Humans , Male , Middle Aged
5.
J Acoust Soc Am ; 136(4): EL256-62, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25324107

ABSTRACT

1/f serial correlations and statistical self-similarity (fractal structure) have been measured in various dimensions of musical compositions. Musical performances also display 1/f properties in expressive tempo fluctuations, and listeners predict tempo changes when synchronizing. Here the authors show that 1/f structure is sufficient for listeners to predict the onset times of upcoming musical events. These results reveal what information listeners use to anticipate events in complex, non-isochronous acoustic rhythms, and they call for new models of temporal synchronization. This finding could improve therapies for Parkinson's disease and related disorders and inform a deeper understanding of how endogenous neural rhythms anticipate events in complex, temporally structured communication signals.


Subject(s)
Anticipation, Psychological , Auditory Perception , Cues , Music , Periodicity , Acoustic Stimulation , Female , Fractals , Humans , Male , Pitch Perception , Sound Spectrography , Time Factors
6.
Front Psychol ; 5: 970, 2014.
Article in English | MEDLINE | ID: mdl-25232347

ABSTRACT

Fractal structure is a ubiquitous property found in nature and biology, and has been observed in processes at different levels of organization, including rhythmic behavior and musical structure. A temporal process is characterized as fractal when serial long-term correlations and statistical self-similarity (scaling) are present. Previous studies of sensorimotor synchronization using isochronous (non-fractal) stimuli show that participants' errors exhibit persistent structure (positive long-term correlations), while their inter-tap intervals (ITIs) exhibit anti-persistent structure (negative long-term correlations). Auditory-motor synchronization has not been investigated with anti-persistent stimuli. In the current study, we systematically investigated whether the fractal structure of auditory rhythms was reflected in the responses of participants who were asked to coordinate their taps with each event. We asked musicians and non-musicians to tap with 12 different rhythms that ranged from anti-persistent to persistent. The scaling exponents of the ITIs were strongly correlated with the scaling exponents of the stimuli, showing that the long-term structure of the participants' taps scaled with the long-term structure of the stimuli. Surprisingly, the performance of the musicians was not significantly better than that of the non-musicians. Our results imply that humans are able to readily adapt (rather than simply react) to the overall statistical structure of temporally fluctuating stimuli, regardless of musical skill.
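The central result above is a correlation between the scaling exponents of the stimuli and of the inter-tap intervals. The sketch below shows that comparison with a plain Pearson correlation; the exponent values are invented for illustration and are not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scaling exponents: one per rhythm stimulus, paired with
# the mean exponent of participants' inter-tap intervals for that rhythm.
stimulus_h = [0.20, 0.30, 0.35, 0.40, 0.50, 0.55, 0.60, 0.70, 0.75, 0.80, 0.90, 1.00]
iti_h      = [0.25, 0.33, 0.40, 0.38, 0.52, 0.60, 0.58, 0.72, 0.70, 0.85, 0.88, 0.95]
r = pearson_r(stimulus_h, iti_h)
```

A strong positive r across the 12 rhythms is the pattern the abstract describes as taps "scaling with" the stimuli.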

7.
PLoS One ; 9(8): e105144, 2014.
Article in English | MEDLINE | ID: mdl-25144200

ABSTRACT

One of the primary functions of music is to convey emotion, yet how music accomplishes this task remains unclear. For example, simple correlations between mode (major vs. minor) and emotion (happy vs. sad) do not adequately explain the enormous range, subtlety, or complexity of musically induced emotions. In this study, we examined the structural features of unconstrained musical improvisations generated by jazz pianists in response to emotional cues. We hypothesized that musicians would not use any universal rules to convey emotions, but would instead combine heterogeneous musical elements to depict positive and negative emotions. Our findings demonstrate a lack of simple correspondence between emotions and the musical features of spontaneous improvisation. While improvisations in response to positive emotional cues were more likely to be in major keys and to have faster tempos, faster key press velocities, and more staccato notes than negative improvisations, there was a wide distribution for each emotion, with components that directly violated these primary associations. The finding that musicians often combine disparate features to convey emotion during improvisation suggests that structural diversity may be an essential feature of music's ability to express a wide range of emotion.


Subject(s)
Emotions/physiology , Music , Adolescent , Adult , Female , Humans , Male , Middle Aged , Young Adult
8.
PLoS One ; 9(2): e88665, 2014.
Article in English | MEDLINE | ID: mdl-24586366

ABSTRACT

Interactive generative musical performance provides a suitable model for communication because, like natural linguistic discourse, it involves an exchange of ideas that is unpredictable, collaborative, and emergent. Here we show that interactive improvisation between two musicians is characterized by activation of perisylvian language areas linked to processing of syntactic elements in music, including inferior frontal gyrus and posterior superior temporal gyrus, and deactivation of angular gyrus and supramarginal gyrus, brain structures directly implicated in semantic processing of language. These findings support the hypothesis that musical discourse engages language areas of the brain specialized for processing of syntax but in a manner that is not contingent upon semantic processing. Therefore, we argue that neural regions for syntactic processing are not domain-specific for language but instead may be domain-general for communication.


Subject(s)
Brain Mapping , Magnetic Resonance Imaging , Music , Adult , Humans , Language , Male
9.
Music Percept ; 26(5): 401-413, 2009 Jun.
Article in English | MEDLINE | ID: mdl-25190901

ABSTRACT

We investigated people's ability to adapt to the fluctuating tempi of music performance. In Experiment 1, four pieces from different musical styles were chosen, and performances were recorded from a skilled pianist who was instructed to play with natural expression. Spectral and rescaled range analyses of the interbeat interval time series revealed long-range (1/f-type) serial correlations and fractal scaling in each piece. Stimuli for Experiment 2 included two of the performances from Experiment 1, with mechanical versions serving as controls. Participants tapped the beat at the ¼- and ⅛-note metrical levels, successfully adapting to large tempo fluctuations in both performances. Participants predicted the structured tempo fluctuations, with superior performance at the ¼-note level. Thus, listeners may exploit long-range correlations and fractal scaling to predict tempo changes in music.
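The rescaled range (R/S) analysis named above can be sketched as follows. This is a generic textbook implementation run on synthetic data, not the authors' code or their interbeat-interval series; window sizes and sequence length are arbitrary choices.

```python
import math
import random

def hurst_rs(series, min_n=8):
    """Estimate the Hurst exponent H by rescaled-range (R/S) analysis.
    H near 0.5 indicates uncorrelated noise; H > 0.5 indicates
    persistent, 1/f-type long-range correlation."""
    N = len(series)
    log_n, log_rs = [], []
    n = min_n
    while n <= N // 2:
        ratios = []
        for start in range(0, N - n + 1, n):
            window = series[start:start + n]
            mean = sum(window) / n
            dev = [x - mean for x in window]
            cum, c = [], 0.0
            for d in dev:                 # cumulative deviation profile
                c += d
                cum.append(c)
            r = max(cum) - min(cum)       # range of the profile
            s = math.sqrt(sum(d * d for d in dev) / n)  # window std dev
            if s > 0:
                ratios.append(r / s)
        if ratios:
            log_n.append(math.log(n))
            log_rs.append(math.log(sum(ratios) / len(ratios)))
        n *= 2
    # slope of log(R/S) against log(n) is the H estimate
    mx = sum(log_n) / len(log_n)
    my = sum(log_rs) / len(log_rs)
    num = sum((x - mx) * (y - my) for x, y in zip(log_n, log_rs))
    den = sum((x - mx) ** 2 for x in log_n)
    return num / den

random.seed(1)
white = [random.gauss(0, 1) for _ in range(1024)]   # uncorrelated
walk, c = [], 0.0
for w in white:                                      # strongly persistent
    c += w
    walk.append(c)
h_white, h_walk = hurst_rs(white), hurst_rs(walk)
```

Applied to interbeat intervals, an H estimate well above 0.5 is the signature of the long-range structure the study reports.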
