1.
Mol Genet Metab ; 142(1): 108350, 2024 May.
Article in English | MEDLINE | ID: mdl-38458123

ABSTRACT

Major clinical events (MCEs) related to long-chain fatty acid oxidation disorders (LC-FAOD) in triheptanoin clinical trials include inpatient or emergency room (ER) visits for three major clinical manifestations: rhabdomyolysis, hypoglycemia, and cardiomyopathy. However, outcomes data outside of LC-FAOD clinical trials are limited. The non-interventional cohort LC-FAOD Odyssey study examines data derived from US medical records and patient-reported outcomes to quantify LC-FAOD burden according to management strategy, including MCE frequency and healthcare resource utilization (HRU). Thirty-four patients were analyzed; 21 had received triheptanoin and 29 had received medium-chain triglycerides (MCT), with some patients having received both. MCEs occurred in 36% of patients while receiving triheptanoin versus 54% while on MCT. Total mean annualized MCE rates on triheptanoin and MCT were 0.1 and 0.7, respectively. Annualized disease-related inpatient and ER events were lower on triheptanoin (0.2 and 0.3, respectively) than on MCT (1.2 and 1.0, respectively). Patients on triheptanoin were managed more in the outpatient setting (8.9 annualized outpatient visits) than those on MCT (7.9). Overall, patients with LC-FAOD in the Odyssey program experienced fewer MCEs and less inpatient and ER HRU during triheptanoin-treated periods than during MCT-treated periods. The MCE rate was lower after initiation of triheptanoin, consistent with clinical trial results.


Subject(s)
Fatty Acids , Lipid Metabolism, Inborn Errors , Triglycerides , Humans , Male , Female , United States , Lipid Metabolism, Inborn Errors/genetics , Lipid Metabolism, Inborn Errors/drug therapy , Fatty Acids/metabolism , Adolescent , Oxidation-Reduction , Child , Adult , Child, Preschool , Rhabdomyolysis/genetics , Rhabdomyolysis/drug therapy , Hypoglycemia , Cardiomyopathies/drug therapy , Cardiomyopathies/genetics , Infant , Young Adult , Health Resources , Middle Aged
2.
Phonetica ; 77(1): 1-28, 2020.
Article in English | MEDLINE | ID: mdl-30836370

ABSTRACT

BACKGROUND/AIMS: Adult learners often struggle to produce novel phonemes in a second language and lack clear articulatory targets. This study investigates the combined efficacy of perceptual and articulatory training, the latter involving explicit instruction about tongue position and laryngeal control, for the production of non-native phonemes. METHODS: Native English speakers were trained on a series of Hindi coronal stop consonants, with production assessed before, during, and after training sessions, on the basis of acoustic cues to place of articulation and voicing. RESULTS: Improvement in production was most apparent during articulatory training, when cues to target articulation were available to learners. Some improvements were maintained after training was concluded. CONCLUSION: Articulatory training can contribute useful cues to pronunciation for early learners. Improvement in acquisition of targets varies in stability across learners and targets.


Subject(s)
Language , Learning , Phonetics , Speech , Adult , Cues , Female , Humans , India , Male , Speech Acoustics , Speech Perception
5.
J Exp Psychol Learn Mem Cogn ; 45(6): 1107-1141, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30024252

ABSTRACT

Interactive models of language production predict that it should be possible to observe long-distance interactions; effects that arise at one level of processing influence multiple subsequent stages of representation and processing. We examine the hypothesis that disruptions arising at non-form-based levels of planning (specifically, lexical selection) should modulate articulatory processing. A novel automatic phonetic analysis method was used to examine productions in a paradigm yielding both general disruptions to formulation processes and, more specifically, overt errors during lexical selection. This analysis method allowed us to examine articulatory disruptions at multiple levels of analysis, from whole words to individual segments. Baseline performance by young adults was contrasted with young speakers' performance under time pressure (which previous work has argued increases interaction between planning and articulation) and performance by older adults (who may have difficulties inhibiting nontarget representations, leading to heightened interactive effects). The results revealed the presence of interactive effects. Our new analysis techniques revealed these effects were strongest in initial portions of responses, suggesting that speech is initiated as soon as the first segment has been planned. Interactive effects did not increase under response pressure, suggesting interaction between planning and articulation is relatively fixed. Unexpectedly, lexical selection disruptions appeared to yield some degree of facilitation in articulatory processing (possibly reflecting semantic facilitation of target retrieval), and older adults showed weaker, not stronger, interactive effects (possibly reflecting weakened connections between lexical and form-level representations). (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Phonetics , Psycholinguistics , Speech , Adolescent , Aged , Aging/psychology , Association , Female , Humans , Inhibition, Psychological , Male , Middle Aged , Neural Networks, Computer , Pattern Recognition, Visual , Reading , Young Adult
6.
Article in English | MEDLINE | ID: mdl-29033692

ABSTRACT

We describe and analyze a simple and effective algorithm for sequence segmentation applied to speech processing tasks. We propose a neural architecture composed of two modules trained jointly: a recurrent neural network (RNN) module and a structured prediction model. The RNN outputs serve as feature functions for the structured model. The overall model is trained with a structured loss function that can be tailored to the given segmentation task. We demonstrate the effectiveness of our method by applying it to two simple tasks commonly used in phonetic studies: word segmentation and voice onset time segmentation. Results suggest the proposed model is superior to previous methods, obtaining state-of-the-art results on the tested datasets.
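
As a rough illustration of the joint RNN/structured-prediction idea summarized above, the sketch below (assuming a PyTorch-style setup, a bidirectional LSTM, and a simple two-boundary onset/offset search; none of these specifics are taken from the paper) uses per-frame RNN outputs to score candidate boundaries and trains them with a structured hinge loss augmented by a task-specific segmentation loss.

```python
# Illustrative sketch, not the authors' code: RNN outputs as feature functions
# for a structured predictor over segment boundaries, trained with a hinge loss.
import torch
import torch.nn as nn

class RNNSegmenter(nn.Module):
    def __init__(self, n_feats=40, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_feats, hidden, batch_first=True, bidirectional=True)
        self.scorer = nn.Linear(2 * hidden, 1)  # per-frame boundary score

    def frame_scores(self, frames):                    # frames: (1, T, n_feats)
        h, _ = self.rnn(frames)
        return self.scorer(h).squeeze(-1).squeeze(0)   # (T,)

    def segment_score(self, scores, onset, offset):
        # Score of a hypothesized (onset, offset) segmentation: sum of boundary scores.
        return scores[onset] + scores[offset]

def structured_hinge(model, frames, gold, task_loss):
    """Max-margin objective: best loss-augmented segmentation vs. the gold one."""
    scores = model.frame_scores(frames)
    T = scores.shape[0]
    best_val = None
    for on in range(T - 1):                 # exhaustive search is fine for short inputs
        for off in range(on + 1, T):
            val = model.segment_score(scores, on, off) + task_loss((on, off), gold)
            if best_val is None or val > best_val:
                best_val = val
    gold_score = model.segment_score(scores, *gold)
    return torch.clamp(best_val - gold_score, min=0.0)

# Toy usage: 50 frames of 40-dim features, gold boundaries at frames 12 and 30.
if __name__ == "__main__":
    torch.manual_seed(0)
    model = RNNSegmenter()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    frames, gold = torch.randn(1, 50, 40), (12, 30)
    loss_fn = lambda pred, g: float(abs(pred[0] - g[0]) + abs(pred[1] - g[1]))
    opt.zero_grad()
    loss = structured_hinge(model, frames, gold, loss_fn)
    loss.backward()
    opt.step()
    print(float(loss))
```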

7.
PLoS One ; 11(7): e0158725, 2016.
Article in English | MEDLINE | ID: mdl-27434643

ABSTRACT

The Sapir-Whorf hypothesis holds that our thoughts are shaped by our native language, and that speakers of different languages therefore think differently. This hypothesis is controversial in part because it appears to deny the possibility of a universal groundwork for human cognition, and in part because some findings taken to support it have not reliably replicated. We argue that considering this hypothesis through the lens of probabilistic inference has the potential to resolve both issues, at least with respect to certain prominent findings in the domain of color cognition. We explore a probabilistic model that is grounded in a presumed universal perceptual color space and in language-specific categories over that space. The model predicts that categories will most clearly affect color memory when perceptual information is uncertain. In line with earlier studies, we show that this model accounts for language-consistent biases in color reconstruction from memory in English speakers, modulated by uncertainty. We also show, to our knowledge for the first time, that such a model accounts for influential existing data on cross-language differences in color discrimination from memory, both within and across categories. We suggest that these ideas may help to clarify the debate over the Sapir-Whorf hypothesis.
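
To make the category-adjustment idea concrete, here is a minimal sketch, assuming a one-dimensional hue axis, Gaussian memory noise, and a Gaussian category prior (all simplifications not drawn from the paper): reconstruction is a precision-weighted combination of the noisy trace and the category mean, so the bias toward the category grows as perceptual uncertainty grows.

```python
# Minimal sketch of a category-adjustment reconstruction, not the authors' implementation.
import numpy as np

def reconstruct(true_hue, category_mean, category_sd, memory_sd, rng):
    """Posterior-mean reconstruction of a hue from a noisy memory trace."""
    trace = rng.normal(true_hue, memory_sd)                # noisy perceptual memory
    w = category_sd**2 / (category_sd**2 + memory_sd**2)   # weight on the trace
    return w * trace + (1 - w) * category_mean             # precision-weighted combination

rng = np.random.default_rng(0)
true_hue, cat_mean, cat_sd = 0.30, 0.50, 0.10              # hypothetical category parameters
for memory_sd in (0.02, 0.05, 0.15):                       # low -> high perceptual uncertainty
    recs = [reconstruct(true_hue, cat_mean, cat_sd, memory_sd, rng) for _ in range(5000)]
    print(f"memory_sd={memory_sd:.2f}  mean reconstruction={np.mean(recs):.3f}  "
          f"bias toward category mean={np.mean(recs) - true_hue:+.3f}")
```

Run as written, the printed bias increases with memory noise, mirroring the prediction that categories most clearly affect color memory when perceptual information is uncertain.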


Subject(s)
Cognition/physiology , Color Perception/physiology , Models, Statistical , Thinking/physiology , Adolescent , Color , Female , Humans , Language , Male , Uncertainty , Young Adult
8.
J Acoust Soc Am ; 140(6): 4517, 2016 Dec.
Article in English | MEDLINE | ID: mdl-28040034

ABSTRACT

A key barrier to making phonetic studies scalable and replicable is the need to rely on subjective, manual annotation. To help meet this challenge, a machine learning algorithm was developed for automatic measurement of a widely used phonetic measure: vowel duration. Manually annotated data were used to train a model that takes as input an arbitrary-length segment of the acoustic signal containing a single vowel preceded and followed by consonants, and outputs the duration of the vowel. The model is based on the structured prediction framework. The input signal and a hypothesized pair of vowel onset and offset times are mapped to an abstract vector space by a set of acoustic feature functions. The learning algorithm is trained in this space to minimize the difference in expectations between predicted and manually measured vowel durations. The trained model can then automatically estimate vowel durations without phonetic or orthographic transcription. Results comparing the model to three sets of manually annotated data suggest it outperformed the current gold standard for duration measurement, a hidden Markov model-based forced aligner (which requires orthographic or phonetic transcription as input).
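
The following sketch is only a loose illustration, not the published system; it shows the general shape of such a structured predictor under assumed feature functions and a perceptron-style update: acoustic feature functions score hypothesized (onset, offset) spans, prediction is an argmax over spans, and weights are nudged so predicted durations move toward the manually measured ones.

```python
# Illustrative sketch of a linear structured predictor for vowel spans over per-frame energies.
import numpy as np

def phi(frames, onset, offset):
    """Assumed feature functions for a hypothesized vowel span."""
    inside = frames[onset:offset]
    outside = np.concatenate([frames[:onset], frames[offset:]])
    return np.array([
        inside.mean(),                                   # vowels tend to be high energy
        outside.mean() if outside.size else 0.0,
        frames[onset] - frames[max(onset - 1, 0)],       # energy rise at onset
        frames[min(offset, len(frames) - 1)] - frames[offset - 1],  # fall at offset
    ])

def predict(frames, w, min_len=3):
    spans = [(on, off) for on in range(len(frames) - min_len)
             for off in range(on + min_len, len(frames))]
    return max(spans, key=lambda s: w @ phi(frames, *s))

def train(data, epochs=10, lr=0.1):
    w = np.zeros(4)
    for _ in range(epochs):
        for frames, gold in data:
            pred = predict(frames, w)
            if (pred[1] - pred[0]) != (gold[1] - gold[0]):       # duration error
                w += lr * (phi(frames, *gold) - phi(frames, *pred))  # move toward gold span
    return w

# Toy usage: synthetic energy contours with a high-energy "vowel" in the middle.
rng = np.random.default_rng(1)
def toy(onset, offset, T=40):
    x = rng.normal(0.1, 0.02, T)
    x[onset:offset] += 0.8
    return x, (onset, offset)

data = [toy(10, 25), toy(8, 20), toy(15, 32)]
w = train(data)
print(predict(data[0][0], w), "gold:", data[0][1])
```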


Subject(s)
Phonetics , Acoustics , Algorithms , Machine Learning , Speech Acoustics , Speech Perception
9.
Brain Lang ; 147: 66-75, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26072003

ABSTRACT

Neural representations of words are thought to have a complex spatio-temporal cortical basis. It has been suggested that spoken word recognition is not a process of feed-forward computations from phonetic to lexical forms, but rather involves the online integration of bottom-up input with stored lexical knowledge. Using direct neural recordings from the temporal lobe, we examined cortical responses to words and pseudowords. We found that neural populations were not only sensitive to lexical status (real vs. pseudo), but also to cohort size (number of words matching the phonetic input at each time point) and cohort frequency (lexical frequency of those words). These lexical variables modulated neural activity from the posterior to anterior temporal lobe, and also dynamically as the stimuli unfolded on a millisecond time scale. Our findings indicate that word recognition is not purely modular, but relies on rapid and online integration of multiple sources of lexical knowledge.
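
For readers unfamiliar with the cohort variables referenced here, the toy sketch below (the lexicon, pronunciations, and frequencies are invented for illustration) computes them directly: at each time point the cohort is the set of words consistent with the phonetic input so far, cohort size is its count, and cohort frequency is the summed lexical frequency of its members.

```python
# Toy computation of cohort size and cohort frequency as a phoneme string unfolds.
TOY_LEXICON = {            # word -> (phoneme sequence, frequency per million), invented values
    "cat":     (("k", "ae", "t"), 21.0),
    "captain": (("k", "ae", "p", "t", "ih", "n"), 12.0),
    "cap":     (("k", "ae", "p"), 9.0),
    "dog":     (("d", "ao", "g"), 25.0),
}

def cohort_trajectory(phones):
    """For each prefix of the input, return (cohort size, cohort frequency)."""
    out = []
    for t in range(1, len(phones) + 1):
        prefix = tuple(phones[:t])
        cohort = [freq for seq, freq in TOY_LEXICON.values() if seq[:t] == prefix]
        out.append((len(cohort), sum(cohort)))
    return out

# As /k ae p/ unfolds, the cohort narrows from {cat, captain, cap} to {captain, cap}.
print(cohort_trajectory(["k", "ae", "p"]))
```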


Subject(s)
Phonetics , Speech Perception/physiology , Speech , Temporal Lobe/physiology , Humans , Temporal Lobe/cytology , Time Factors