Results 1 - 10 of 10
1.
Perfusion ; 2676591241258054, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38832503

ABSTRACT

INTRODUCTION: The trial hypothesized that minimally invasive extra-corporeal circulation (MiECC) reduces the risk of serious adverse events (SAEs) after cardiac surgery operations requiring extra-corporeal circulation without circulatory arrest. METHODS: This is a multicentre, international randomized controlled trial across fourteen cardiac surgery centres including patients aged ≥18 and <85 years undergoing elective or urgent isolated coronary artery bypass grafting (CABG), isolated aortic valve replacement (AVR) surgery, or CABG + AVR surgery. Participants were randomized to MiECC or conventional extra-corporeal circulation (CECC), stratified by centre and operation. The primary outcome was a composite of 12 post-operative SAEs up to 30 days after surgery, the risk of which MiECC was hypothesized to reduce. Secondary outcomes comprised: other SAEs; all-cause mortality; transfusion of blood products; time to discharge from intensive care and hospital; health-related quality-of-life. Analyses were performed on a modified intention-to-treat basis. RESULTS: The trial terminated early due to the COVID-19 pandemic; 1071 participants (896 isolated CABG, 97 isolated AVR, 69 CABG + AVR) with median age 66 years and median EuroSCORE II 1.24 were randomized (535 to MiECC, 536 to CECC). Twenty-six participants withdrew after randomization, 22 before and four after intervention. Fifty of 517 (9.7%) participants randomized to MiECC and 69 of 522 (13.2%) randomized to CECC experienced the primary outcome (risk ratio = 0.732, 95% confidence interval (95% CI) = 0.556 to 0.962, p = 0.025). The risk of any SAE not contributing to the primary outcome was similarly reduced (risk ratio = 0.791, 95% CI 0.530 to 1.179, p = 0.250). CONCLUSIONS: MiECC reduces the relative risk of primary outcome events by about 25%. The risk of other SAEs was similarly reduced. Because the trial terminated early without achieving the target sample size, these potential benefits of MiECC are uncertain.
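The reported risk ratio can be checked directly from the event counts; a minimal arithmetic sketch (this reproduces only the point estimate — the trial's 95% CI comes from its own analysis model, which is not reproduced here):

```python
# Unadjusted risk ratio for the primary outcome, from the reported counts
miecc_events, miecc_n = 50, 517   # MiECC arm: events / participants analyzed
cecc_events, cecc_n = 69, 522     # CECC arm: events / participants analyzed

risk_miecc = miecc_events / miecc_n   # 0.0967 -> the reported 9.7%
risk_cecc = cecc_events / cecc_n      # 0.1322 -> the reported 13.2%
risk_ratio = risk_miecc / risk_cecc

print(f"RR = {risk_ratio:.3f}")       # RR = 0.732
```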

2.
Stud Conserv ; 69(1): 1-16, 2024.
Article in English | MEDLINE | ID: mdl-38384673

ABSTRACT

This contribution presents the results of a technical investigation on the pigments of William Burges' Great Bookcase (1859-62), preserved at the Ashmolean Museum. It is the first thorough material investigation of a remarkable piece of Gothic Revival painted furniture, notably an artwork by Burges, whose work has so far received little attention from a technical point of view. This study was developed during the COVID-19 pandemic, which significantly affected the planned research activities since the investigation relied extensively on collaborations with institutions within and beyond the University of Oxford. The disruption caused by the lockdown and other restrictions went far beyond any prediction and led us to redefine the project's outcome and methodology 'on the fly' while maintaining its overall vision. However, thanks to the timeliness of a substantial research grant received from the Capability for Collections Fund (CapCo, Arts and Humanities Research Council), we could ultimately turn this research into a unique opportunity to test the potential of recently acquired instruments, namely the Opus Apollo infrared camera and the Bruker CRONO XRF mapping spectrometer. Therefore, besides reporting on the findings, this contribution outlines the strategy adopted and assesses the new equipment's capability for the non-invasive analysis of complex polychromies.

3.
Clin Linguist Phon ; 35(2): 172-184, 2021 02 01.
Article in English | MEDLINE | ID: mdl-32520595

ABSTRACT

Autism spectrum disorder (ASD) is characterized by deficits in social communication, and even children with ASD with preserved language are often perceived as socially awkward. We ask whether linguistic patterns are associated with social perceptions of speakers. Twenty-one adolescents with ASD participated in conversations with an adult; each conversation was then rated for the social dimensions of likability, outgoingness, social skilfulness, responsiveness, and fluency. Conversations were analysed for responses to questions, pauses, and acoustic variables. Wide intonation ranges and more pauses within children's own conversational turn were predictors of more positive social ratings, while failure to respond to one's conversational partner, faster syllable rate, and smaller quantity of speech were negative predictors of social perceptions.


Subject(s)
Autism Spectrum Disorder , Adolescent , Adult , Child , Communication , Humans , Judgment , Language , Speech
4.
PLoS One ; 15(1): e0225695, 2020.
Article in English | MEDLINE | ID: mdl-31940347

ABSTRACT

Individuals with serious mental illness experience changes in their clinical states over time that are difficult to assess and that result in increased disease burden and care utilization. It is not known if features derived from speech can serve as a transdiagnostic marker of these clinical states. This study evaluates the feasibility of collecting speech samples from people with serious mental illness and explores the potential utility for tracking changes in clinical state over time. Patients (n = 47) with diagnoses of bipolar disorder, major depressive disorder, schizophrenia, or schizoaffective disorder were recruited from a community-based mental health clinic. Patients used an interactive voice response system for at least 4 months to provide speech samples. Clinic providers (n = 13) reviewed responses and provided global assessment ratings. We computed features of speech and used machine learning to create models of outcome measures trained using either population data or an individual's own data over time. The system was feasible to use, recording 1101 phone calls and 117 hours of speech. Most (92%) of the patients agreed that it was easy to use. The individually-trained models demonstrated the highest correlation with provider ratings (rho = 0.78, p<0.001). Population-level models demonstrated statistically significant correlations with provider global assessment ratings (rho = 0.44, p<0.001), future provider ratings (rho = 0.33, p<0.05), the BASIS-24 summary score, depression subscore, and self-harm subscore (rho = 0.25, 0.25, and 0.28, respectively; p<0.05), and the SF-12 mental health subscore (rho = 0.25, p<0.05), but not with other BASIS-24 or SF-12 subscores. This study brings together longitudinal collection of objective behavioral markers with a transdiagnostic, personalized approach for tracking mental health clinical state in a community-based clinical setting.
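The rho values reported above are Spearman rank correlations. A minimal pure-Python sketch of the statistic (tie handling omitted; the data below are hypothetical, not the study's):

```python
def ranks(values):
    """1-based ranks of values (assumes no ties)."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for position, index in enumerate(order, start=1):
        r[index] = position
    return r

def spearman_rho(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical model scores vs. provider ratings for five sessions
model_scores = [1.2, 2.5, 0.7, 3.1, 2.0]
provider_ratings = [2, 3, 1, 5, 4]
print(spearman_rho(model_scores, provider_ratings))  # 0.9
```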


Subject(s)
Computational Biology/methods , Mental Disorders/epidemiology , Speech , Female , Humans , Male , Middle Aged , Pilot Projects , Residence Characteristics , Support Vector Machine
5.
J Child Psychol Psychiatry ; 57(8): 927-37, 2016 08.
Article in English | MEDLINE | ID: mdl-27090613

ABSTRACT

BACKGROUND: Machine learning (ML) provides novel opportunities for human behavior research and clinical translation, yet its application can have noted pitfalls (Bone et al., 2015). In this work, we fastidiously utilize ML to derive autism spectrum disorder (ASD) instrument algorithms in an attempt to improve upon widely used ASD screening and diagnostic tools. METHODS: The data consisted of Autism Diagnostic Interview-Revised (ADI-R) and Social Responsiveness Scale (SRS) scores for 1,264 verbal individuals with ASD and 462 verbal individuals with non-ASD developmental or psychiatric disorders, split at age 10. Algorithms were created via a robust ML classifier, support vector machine, while targeting best-estimate clinical diagnosis of ASD versus non-ASD. Parameter settings were tuned in multiple levels of cross-validation. RESULTS: The created algorithms were more effective (higher performing) than the current algorithms, were tunable (sensitivity and specificity can be differentially weighted), and were more efficient (achieving near-peak performance with five or fewer codes). Results from ML-based fusion of ADI-R and SRS are reported. We present a screener algorithm for below (above) age 10 that reached 89.2% (86.7%) sensitivity and 59.0% (53.4%) specificity with only five behavioral codes. CONCLUSIONS: ML is useful for creating robust, customizable instrument algorithms. In a unique dataset composed of controls with other difficulties, our findings highlight the limitations of current caregiver-report instruments and indicate possible avenues for improving ASD screening and diagnostic tools.
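The sensitivity/specificity operating points above follow directly from a screener's confusion counts. A minimal sketch (the counts below are hypothetical, chosen only to land on the reported under-10 operating point):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for an ASD-vs-non-ASD screener
sens, spec = sensitivity_specificity(tp=223, fn=27, tn=59, fp=41)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # 89.2%, 59.0%
```

Weighting sensitivity against specificity, as the tunable algorithms above allow, amounts to moving the classifier's decision threshold and recomputing these two ratios.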


Subject(s)
Algorithms , Autism Spectrum Disorder/diagnosis , Psychiatric Status Rating Scales , Support Vector Machine , Adolescent , Adult , Child , Child, Preschool , Female , Humans , Male , Middle Aged , Young Adult
6.
Comput Speech Lang ; 37: 47-66, 2016 May.
Article in English | MEDLINE | ID: mdl-28713198

ABSTRACT

Child engagement is defined as the interaction of a child with his/her environment in a contextually appropriate manner. Engagement behavior in children is linked to socio-emotional and cognitive state assessment, with enhanced engagement identified with improved skills. A vast majority of studies, however, rely solely, and often implicitly, on subjective perceptual measures of engagement. Access to automatic quantification could assist researchers/clinicians in objectively interpreting engagement with respect to a target behavior or condition, and furthermore inform mechanisms for improving engagement in various settings. In this paper, we present an engagement prediction system based exclusively on vocal cues observed during structured interaction between a child and a psychologist involving several tasks. Specifically, we derive prosodic cues that capture engagement levels across the various tasks. Our experiments suggest that a child's engagement is reflected not only in the child's vocalizations, but also in the speech of the interacting psychologist. Moreover, we show that prosodic cues are informative of the engagement phenomena not only as characterized over the entire task (i.e., global cues), but also in short-term patterns (i.e., local cues). We perform a classification experiment assigning the engagement of a child into three discrete levels, achieving an unweighted average recall of 55.8% (chance is 33.3%). While the systems using global cues and local cues are each statistically significant in predicting engagement, we obtain the best results after fusing these two components. We perform further analysis of the cues at local and global levels to gain insights linking specific prosodic patterns to the engagement phenomenon. We observe that while the performance of our model varies with task setting and interacting psychologist, there exist universal prosodic patterns reflective of engagement.
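Unweighted average recall (UAR), the metric quoted above, is the mean of per-class recalls, which is why chance level for three classes is 33.3% regardless of class imbalance. A small sketch with made-up labels:

```python
def unweighted_average_recall(y_true, y_pred):
    """Mean of per-class recalls; chance level is 1/num_classes."""
    recalls = []
    for c in sorted(set(y_true)):
        indices = [i for i, t in enumerate(y_true) if t == c]
        hits = sum(1 for i in indices if y_pred[i] == c)
        recalls.append(hits / len(indices))
    return sum(recalls) / len(recalls)

# Made-up engagement labels: 0 = low, 1 = mid, 2 = high
y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 2, 0, 2]
uar = unweighted_average_recall(y_true, y_pred)
print(f"UAR = {uar:.1%}")  # UAR = 63.9%
```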

7.
J Autism Dev Disord ; 45(5): 1121-36, 2015 May.
Article in English | MEDLINE | ID: mdl-25294649

ABSTRACT

Machine learning has immense potential to enhance diagnostic and intervention research in the behavioral sciences, and may be especially useful in investigations involving the highly prevalent and heterogeneous syndrome of autism spectrum disorder. However, use of machine learning in the absence of clinical domain expertise can be tenuous and lead to misinformed conclusions. To illustrate this concern, the current paper critically evaluates and attempts to reproduce results from two studies (Wall et al. in Transl Psychiatry 2(4):e100, 2012a; PloS One 7(8), 2012b) that claim to drastically reduce time to diagnose autism using machine learning. Our failure to generate comparable findings to those reported by Wall and colleagues using larger and more balanced data underscores several conceptual and methodological problems associated with these studies. We conclude with proposed best-practices when using machine learning in autism research, and highlight some especially promising areas for collaborative work at the intersection of computational and behavioral science.


Subject(s)
Artificial Intelligence , Autistic Disorder/diagnosis , Diagnosis, Computer-Assisted , Child , Humans
8.
J Speech Lang Hear Res ; 57(4): 1162-77, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24686340

ABSTRACT

PURPOSE: The purpose of this study was to examine relationships between prosodic speech cues and autism spectrum disorder (ASD) severity, hypothesizing a mutually interactive relationship between the speech characteristics of the psychologist and the child. The authors objectively quantified acoustic-prosodic cues of the psychologist and of the child with ASD during spontaneous interaction, establishing a methodology for future large-sample analysis. METHOD: Speech acoustic-prosodic features were semiautomatically derived from segments of semistructured interviews (Autism Diagnostic Observation Schedule, ADOS; Lord, Rutter, DiLavore, & Risi, 1999; Lord et al., 2012) with 28 children who had previously been diagnosed with ASD. Prosody was quantified in terms of intonation, volume, rate, and voice quality. Research hypotheses were tested via correlation as well as hierarchical and predictive regression between ADOS severity and prosodic cues. RESULTS: Automatically extracted speech features demonstrated prosodic characteristics of dyadic interactions. As rated ASD severity increased, both the psychologist and the child demonstrated effects for turn-end pitch slope, and both spoke with atypical voice quality. The psychologist's acoustic cues predicted the child's symptom severity better than did the child's acoustic cues. CONCLUSION: The psychologist, acting as evaluator and interlocutor, was shown to adjust his or her behavior in predictable ways based on the child's social-communicative impairments. The results support future study of speech prosody of both interaction partners during spontaneous conversation, while using automatic computational methods that allow for scalable analysis on much larger corpora.
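One of the cues analyzed above, turn-end pitch slope, reduces to a least-squares fit over the final pitch samples of a conversational turn. A minimal sketch on synthetic f0 values (the paper's actual feature extraction is more involved):

```python
def least_squares_slope(times, values):
    """Ordinary least-squares slope of values regressed on times."""
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    numerator = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
    denominator = sum((t - mean_t) ** 2 for t in times)
    return numerator / denominator

# Synthetic f0 samples (Hz) over the last 0.4 s of a turn, every 0.1 s
t = [0.0, 0.1, 0.2, 0.3, 0.4]
f0 = [210.0, 205.0, 198.0, 190.0, 181.0]
slope = least_squares_slope(t, f0)
print(f"{slope:.1f} Hz/s")  # -73.0 Hz/s: a falling turn-end pitch
```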


Subject(s)
Autism Spectrum Disorder/diagnosis , Communication , Physician-Patient Relations , Psychology , Speech Disorders/psychology , Acoustic Stimulation/methods , Acoustic Stimulation/psychology , Adolescent , Autism Spectrum Disorder/psychology , Child , Child, Preschool , Cues , Female , Humans , Male , Severity of Illness Index , Speech , Speech Disorders/etiology , Voice Quality
9.
IEEE Trans Affect Comput ; 5(2): 201-213, 2014.
Article in English | MEDLINE | ID: mdl-25705327

ABSTRACT

Studies in classifying affect from vocal cues have produced exceptional within-corpus results, especially for arousal (activation or stress); yet cross-corpora affect recognition has only recently garnered attention. An essential requirement of many behavioral studies is affect scoring that generalizes across different social contexts and data conditions. We present a robust, unsupervised (rule-based) method for providing a scale-continuous, bounded arousal rating operating on the vocal signal. The method incorporates just three knowledge-inspired features chosen based on empirical and theoretical evidence. It constructs a speaker's baseline model for each feature separately, and then computes single-feature arousal scores. Lastly, it advantageously fuses the single-feature arousal scores into a final rating without knowledge of the true affect. The baseline data is preferably labeled as neutral, but some initial evidence is provided to suggest that no labeled data is required in certain cases. The proposed method is compared to a state-of-the-art supervised technique which employs a high-dimensional feature set. The proposed framework achieves highly competitive performance with additional benefits. The measure is interpretable, scale-continuous as opposed to discrete, and can operate without any affective labeling. An accompanying MATLAB tool is made available with the paper.
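The pipeline described above (per-feature speaker baseline → single-feature arousal scores → fusion) can be caricatured in a few lines. Everything below — the two example features, the z-score baseline model, the clipping bound, and mean fusion — is a hypothetical stand-in, not the paper's actual knowledge-inspired features or fusion rule:

```python
import statistics

def arousal_scores(values, baseline, clip=3.0):
    """Score a feature against a speaker's neutral baseline:
    z-normalize, clip to [-clip, clip], rescale into [0, 1]."""
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    z = [(v - mu) / sd for v in values]
    return [(max(-clip, min(clip, zi)) + clip) / (2 * clip) for zi in z]

def fuse(per_feature_scores):
    """Naive fusion: average the single-feature scores per utterance."""
    return [sum(s) / len(s) for s in zip(*per_feature_scores)]

# Hypothetical features: median f0 (Hz) and intensity (dB) per utterance
baseline_f0 = [110.0, 112.0, 108.0, 111.0, 109.0]
baseline_db = [60.0, 61.0, 59.0, 60.5, 59.5]
test_f0 = [118.0, 109.0]   # two utterances to be rated
test_db = [64.0, 60.0]

fused = fuse([arousal_scores(test_f0, baseline_f0),
              arousal_scores(test_db, baseline_db)])
print(fused)  # bounded arousal ratings in [0, 1]; first utterance higher
```

The bounded output is what makes the rating scale-continuous yet comparable across speakers, since each speaker is scored against their own baseline.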

10.
Comput Speech Lang ; 28(2)2014 Mar 01.
Article in English | MEDLINE | ID: mdl-24376305

ABSTRACT

Segmental and suprasegmental speech signal modulations offer information about paralinguistic content such as affect, age and gender, pathology, and speaker state. Speaker state encompasses medium-term, temporary physiological phenomena influenced by internal or external biochemical actions (e.g., sleepiness, alcohol intoxication). Perceptual and computational research indicates that detecting speaker state from speech is a challenging task. In this paper, we present a system constructed with multiple representations of prosodic and spectral features that provided the best result at the Intoxication Subchallenge of Interspeech 2011 on the Alcohol Language Corpus. We discuss the details of each classifier and show that fusion improves performance. We additionally address the question of how best to construct a speaker state detection system in terms of robust and practical marginalization of associated variability such as through modeling speakers, utterance type, gender, and utterance length. As is the case in human perception, speaker normalization provides significant improvements to our system. We show that a held-out set of baseline (sober) data can be used to achieve gains comparable to those of other speaker normalization techniques. Our fused frame-level statistic-functional systems, fused GMM systems, and final combined system achieve unweighted average recalls (UARs) of 69.7%, 65.1%, and 68.8%, respectively, on the test set. Results more consistent with the development set occur with matched-prompt training, where the UARs are 70.4%, 66.2%, and 71.4%, respectively. The combined system improves over the Challenge baseline by 5.5% absolute (8.4% relative), also improving upon our previous best result.
