Results 1 - 4 of 4
1.
IEEE J Biomed Health Inform ; 26(7): 2864-2875, 2022 07.
Article in English | MEDLINE | ID: mdl-35201992

ABSTRACT

OBJECTIVE: While non-invasive, cuffless blood pressure (BP) measurement has demonstrated relevancy in controlled environments, ambulatory measurement is important for hypertension diagnosis and control. We present both in-lab and ambulatory BP estimation results from a diverse cohort of participants. METHODS: Participants (N=1125, aged 21-85, 49.2% female, multiple hypertensive categories) had BP measured in-lab over a 24-hour period with a subset also receiving ambulatory measurements. Radial tonometry, photoplethysmography (PPG), electrocardiography (ECG), and accelerometry signals were collected simultaneously with auscultatory or oscillometric references for systolic (SBP) and diastolic blood pressure (DBP). Predictive models to estimate BP using a variety of sensor-based feature groups were evaluated against challenging baselines. RESULTS: Despite limited availability, tonometry-derived features showed superior performance compared to other feature groups and baselines, yielding prediction errors of 0.32 ±9.8 mmHg SBP and 0.54 ±7.7 mmHg DBP in-lab, and 0.86 ±8.7 mmHg SBP and 0.75 ±5.9 mmHg DBP for 24-hour averages. SBP error standard deviation (SD) was reduced in normotensive (in-lab: 8.1 mmHg, 24-hr: 7.2 mmHg) and younger (in-lab: 7.8 mmHg, 24-hr: 6.7 mmHg) subpopulations. SBP SD was further reduced 15-20% when constrained to the calibration posture alone. CONCLUSION: Performance for normotensive and younger participants was superior to the general population across all feature groups. Reference type, posture relative to calibration, and controlled vs. ambulatory setting all impacted BP errors. SIGNIFICANCE: Results highlight the need for demographically diverse populations and challenging evaluation settings for BP estimation studies. We present the first public dataset of ambulatory tonometry and cuffless BP over a 24-hour period to aid in future cardiovascular research.
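The "mean ± SD" error figures reported above (e.g., 0.32 ±9.8 mmHg SBP) are the mean and sample standard deviation of the signed prediction error across paired readings. A minimal sketch of that metric follows; the function name and the example arrays are illustrative, not the study's data or code.

```python
# Minimal sketch of the mean ± SD prediction-error metric.
# Arrays below are made-up illustrations, not the study's data.

def bp_error_stats(predicted, reference):
    """Return (mean error, sample SD of error) for paired BP readings in mmHg."""
    errors = [p - r for p, r in zip(predicted, reference)]
    n = len(errors)
    mean_err = sum(errors) / n
    # Sample variance (n - 1 denominator), as is conventional for error SD
    variance = sum((e - mean_err) ** 2 for e in errors) / (n - 1)
    return mean_err, variance ** 0.5

# Example: four hypothetical SBP predictions vs. cuff references (mmHg)
pred = [122.0, 135.5, 118.0, 141.0]
ref = [120.0, 138.0, 119.5, 140.0]
me, sd = bp_error_stats(pred, ref)
print(f"SBP error: {me:+.2f} \u00b1 {sd:.2f} mmHg")  # -> SBP error: -0.25 ± 2.10 mmHg
```

A near-zero mean error with a large SD, as in the results above, indicates the model is unbiased on average but individual readings can still deviate substantially, which is why the subgroup SD reductions are reported separately.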


Subject(s)
Hypertension , Wearable Electronic Devices , Blood Pressure/physiology , Blood Pressure Determination/methods , Electrocardiography , Female , Humans , Male , Manometry , Photoplethysmography/methods
2.
PLoS One ; 10(10): e0140850, 2015.
Article in English | MEDLINE | ID: mdl-26488893

ABSTRACT

We investigated interactions between morphological complexity and grammaticality on electrophysiological markers of grammatical processing during reading. Our goal was to determine whether morphological complexity and stimulus grammaticality have independent or additive effects on the P600 event-related potential component. Participants read sentences that were either well-formed or grammatically ill-formed, in which the critical word was either morphologically simple or complex. Results revealed no effects of complexity for well-formed stimuli, but the P600 amplitude was significantly larger for morphologically complex ungrammatical stimuli than for morphologically simple ungrammatical stimuli. These findings suggest that some previous work may have inadequately characterized factors related to reanalysis during morphosyntactic processing. Our results show that morphological complexity by itself does not elicit P600 effects. However, in ungrammatical circumstances, overt morphology provides a more robust and reliable cue to morphosyntactic relationships than null affixation.


Subject(s)
Brain Waves/physiology , Evoked Potentials/physiology , Language , Reading , Adolescent , Adult , Brain Mapping , Comprehension/physiology , Electroencephalography , Female , Humans , Male , Students , Young Adult
3.
Nebr Symp Motiv ; 59: 91-116, 2012.
Article in English | MEDLINE | ID: mdl-23437631

ABSTRACT

It has long been known that the control of attention in visual search depends both on voluntary, top-down deployment according to context-specific goals, and on involuntary, stimulus-driven capture based on the physical conspicuity of perceptual objects. Recent evidence suggests that pairing target stimuli with reward can modulate the voluntary deployment of attention, but there is little evidence that reward modulates the involuntary deployment of attention to task-irrelevant distractors. We report several experiments that investigate the role of reward learning in attentional control. Each experiment involved a training phase and a test phase. In the training phase, different colors were associated with different amounts of monetary reward. In the test phase, color was not task-relevant and participants searched for a shape singleton; in most experiments no reward was delivered in the test phase. We first show that attentional capture by physically salient distractors is magnified by a previous association with reward. In subsequent experiments we demonstrate that physically inconspicuous stimuli previously associated with reward capture attention persistently during extinction, even several days after training. Furthermore, vulnerability to attentional capture by high-value stimuli is negatively correlated across individuals with working memory capacity and positively correlated with trait impulsivity. An analysis of intertrial effects reveals that value-driven attentional capture is spatially specific. Finally, when reward is delivered at test contingent on the task-relevant shape feature, recent reward history modulates value-driven attentional capture by the irrelevant color feature. The influence of learned value on attention may provide a useful model of clinical syndromes characterized by similar failures of cognitive control, including addiction, attention-deficit/hyperactivity disorder, and obesity.


Subject(s)
Attention , Reward , Visual Perception , Volition , Humans , Memory, Short-Term , Reaction Time
4.
Proc Natl Acad Sci U S A ; 108(43): 17621-5, 2011 Oct 25.
Article in English | MEDLINE | ID: mdl-22006295

ABSTRACT

Automated scene interpretation has benefited from advances in machine learning, and restricted tasks, such as face detection, have been solved with sufficient accuracy for restricted settings. However, the performance of machines in providing rich semantic descriptions of natural scenes from digital images remains highly limited and hugely inferior to that of humans. Here we quantify this "semantic gap" in a particular setting: We compare the efficiency of human and machine learning in assigning an image to one of two categories determined by the spatial arrangement of constituent parts. The images are not real, but the category-defining rules reflect the compositional structure of real images and the type of "reasoning" that appears to be necessary for semantic parsing. Experiments demonstrate that human subjects grasp the separating principles from a handful of examples, whereas the error rates of computer programs fluctuate wildly and remain far behind those of humans even after exposure to thousands of examples. These observations lend support to current trends in computer vision such as integrating machine learning with parts-based modeling.


Subject(s)
Algorithms , Artificial Intelligence , Pattern Recognition, Automated/methods , Pattern Recognition, Visual/physiology , Problem Solving , Humans