Results 1 - 9 of 9
1.
Prehosp Emerg Care; 20(5): 667-71, 2016.
Article in English | MEDLINE | ID: mdl-26986814

ABSTRACT

OBJECTIVE: Adequate visualization of the glottic opening is a key factor in successful endotracheal intubation (ETI); however, few objective tools exist to help guide providers' ETI attempts toward the glottic opening in real time. Machine learning/artificial intelligence has helped to automate the detection of other visual structures, but its utility for ETI is unknown. We sought to test the accuracy of various computer algorithms in identifying the glottic opening, creating a tool that could aid successful intubation. METHODS: We collected a convenience sample of providers who each performed ETI 10 times on a mannequin using a video laryngoscope (C-MAC, Karl Storz Corp, Tuttlingen, Germany). We recorded each attempt and reviewed one-second time intervals for the presence or absence of the glottic opening. Four different machine learning/artificial intelligence algorithms analyzed each attempt and time point: k-nearest neighbor (KNN), support vector machine (SVM), decision trees, and neural networks (NN). We used half of the videos to train the algorithms and the second half to test the accuracy, sensitivity, and specificity of each algorithm. RESULTS: We enrolled seven providers: three Emergency Medicine attendings and four paramedic students. From the 70 total recorded laryngoscopic video attempts, we created 2,465 time intervals. The algorithms had the following sensitivity and specificity for detecting the glottic opening: KNN (70%, 90%), SVM (70%, 90%), decision trees (68%, 80%), and NN (72%, 78%). CONCLUSIONS: Initial computer algorithms using artificial intelligence are able to identify the glottic opening with over 80% accuracy. With further refinement, video laryngoscopy has the potential to provide real-time, directional feedback to the provider to help guide successful ETI.
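The half-and-half train/test evaluation described above can be sketched as follows. This is a hypothetical setup: the abstract does not say how each one-second interval was encoded as features, so random vectors (with an invented feature dimension and class offset) stand in for per-interval image features.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# 2,465 one-second intervals, each a feature vector with a binary
# label: 1 = glottic opening visible, 0 = not visible.
n, d = 2465, 32
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
X[y == 1] += 0.5          # give the two classes some separation

# Train on the first half of the intervals, test on the second half.
mid = n // 2
X_tr, y_tr, X_te, y_te = X[:mid], y[:mid], X[mid:], y[mid:]

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    tp = np.sum((pred == 1) & (y_te == 1))
    tn = np.sum((pred == 0) & (y_te == 0))
    sens = tp / np.sum(y_te == 1)
    spec = tn / np.sum(y_te == 0)
    print(f"{name}: sensitivity={sens:.2f} specificity={spec:.2f}")
```

In practice the interval features would come from the laryngoscope video frames rather than a random generator; the sensitivity/specificity bookkeeping is the same either way.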


Subject(s)
Artificial Intelligence , Intubation, Intratracheal/methods , Laryngoscopy/methods , Adult , Algorithms , Cross-Sectional Studies , Emergency Medical Services , Emergency Medicine , Glottis , Humans , Laryngoscopes , Manikins , Video Recording , Young Adult
2.
Autism; 19(2): 248-51, 2015 Feb.
Article in English | MEDLINE | ID: mdl-24345879

ABSTRACT

The anthropomorphic bias describes the finding that the perceived naturalness of a biological motion decreases as the human-likeness of a computer-animated agent increases. To investigate the anthropomorphic bias in autistic children, human or cartoon characters were presented with biological and artificial motions side by side on a touchscreen. Children were required to touch one character, which would then grow while the other disappeared, implicitly rewarding their choice. Only typically developing controls showed the expected preference for biological motion when it was rendered with human, but not cartoon, characters. Although children with autism performed the task and thereby reported a preference, they showed neither a normal nor a reversed anthropomorphic bias, suggesting that they are not sensitive to the congruence of form and motion information when observing computer-animated agents' actions.


Subject(s)
Autistic Disorder/psychology , Cartoons as Topic/psychology , Child Development , Motion Perception , Motion Pictures , Social Perception , Analysis of Variance , Anthropometry , Child , Child, Preschool , Computer Simulation , Female , Humans , Infant , Male , Photic Stimulation/methods
4.
J Autism Dev Disord; 44(10): 2475-85, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24859047

ABSTRACT

Few direct comparisons have been made between the responsiveness of children with autism to computer-generated or animated characters and their responsiveness to humans. Twelve 4- to 8-year-old children with autism interacted with a human therapist; a human-controlled, interactive avatar in a theme park; a human actor speaking like the avatar; and cartoon characters who sought social responses. We found superior gestural and verbal responses to the therapist; intermediate response levels to the avatar and the actor; and poorest responses to the cartoon characters, although attention was equivalent across conditions. These results suggest that even avatars that provide live, responsive interactions are not superior to human therapists in eliciting verbal and non-verbal communication from children with autism in this age range.


Subject(s)
Autistic Disorder/psychology , Cartoons as Topic/psychology , Nonverbal Communication/psychology , Attention , Child , Child, Preschool , Female , Humans , Male , Professional-Patient Relations , Verbal Behavior
5.
IEEE Trans Pattern Anal Mach Intell; 35(3): 582-96, 2013 Mar.
Article in English | MEDLINE | ID: mdl-22732658

ABSTRACT

Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential number of possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on honey bee dance data. The HACA code is available online.
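The kernel k-means step that HACA builds on can be illustrated with a minimal sketch over a precomputed kernel matrix. Note the substitutions: an RBF kernel between fixed-length toy "segments" stands in for the paper's generalized dynamic time alignment kernel, and a simple deterministic seeding replaces the paper's coordinate-descent optimization.

```python
import numpy as np

def kernel_kmeans(K, k, iters=50):
    """Cluster items given only their kernel (Gram) matrix K."""
    n = K.shape[0]
    # Deterministic seeding for the sketch: evenly spaced anchor items.
    anchors = np.linspace(0, n - 1, k).astype(int)
    d0 = np.diag(K)[:, None] - 2 * K[:, anchors] + np.diag(K)[anchors]
    labels = d0.argmin(axis=1)
    for _ in range(iters):
        D = np.empty((n, k))
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                D[:, c] = np.inf
                continue
            # Squared feature-space distance to the cluster mean,
            # expanded via the kernel trick.
            D[:, c] = (np.diag(K)
                       - 2.0 * K[:, idx].mean(axis=1)
                       + K[np.ix_(idx, idx)].mean())
        new = D.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# Two well-separated groups of toy fixed-length "segments".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 5)),
               rng.normal(3.0, 0.3, (20, 5))])
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)                      # RBF kernel stands in for DTAK
labels = kernel_kmeans(K, k=2)
```

Because everything is expressed through K, swapping the RBF kernel for an alignment kernel between variable-length segments changes only how K is computed, not the clustering loop.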


Subject(s)
Algorithms , Cluster Analysis , Image Processing, Computer-Assisted/methods , Locomotion/physiology , Animals , Bees , Behavior, Animal/physiology , Computer Simulation , Humans , Spatio-Temporal Analysis , Video Recording
6.
Article in English | MEDLINE | ID: mdl-23366363

ABSTRACT

Knowing how well an activity is performed is important for home rehabilitation. We would like to not only know if a motion is being performed correctly, but also in what way the motion is incorrect so that we may provide feedback to the user. This paper describes methods for assessing human motion quality using body-worn tri-axial accelerometers and gyroscopes. We use multi-label classifiers to detect subtle errors in exercise performances of eight individuals with knee osteoarthritis, a degenerative disease of the cartilage. We present results obtained using various machine learning methods with decision tree base classifiers. The classifier can detect classes in multi-label data with 75% sensitivity, 90% specificity and 80% accuracy. The methods presented here form the basis for an at-home rehabilitation device that will recognize errors in patient exercise performance, provide appropriate feedback on the performance, and motivate the patient to continue the prescribed regimen.
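A minimal version of this multi-label setup can be sketched with scikit-learn, whose decision trees accept a binary indicator matrix directly. The synthetic windows, the four error classes, and the threshold rule generating them are all invented for illustration; the paper's actual features and error taxonomy are not listed here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n, d, n_errors = 400, 12, 4          # windows x features x error classes
X = rng.normal(size=(n, d))
# Each error class is triggered by one feature exceeding a threshold,
# so a single exercise performance can exhibit several errors at once.
Y = (X[:, :n_errors] > 0.5).astype(int)

# Fit on 300 windows, evaluate on the held-out 100.
clf = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X[:300], Y[:300])
pred = clf.predict(X[300:])          # shape (100, n_errors)
acc = (pred == Y[300:]).mean()       # per-label accuracy
```

The multi-label framing matters here: a patient can bend the knee too far and rush the repetition in the same window, so each error class needs its own yes/no output rather than one mutually exclusive label.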


Subject(s)
Actigraphy/methods , Algorithms , Artificial Intelligence , Diagnosis, Computer-Assisted/methods , Movement , Osteoarthritis, Knee/physiopathology , Task Performance and Analysis , Humans , Reproducibility of Results , Sensitivity and Specificity
7.
Neuroimage; 54(2): 1634-42, 2011 Jan 15.
Article in English | MEDLINE | ID: mdl-20832476

ABSTRACT

Because we are a cooperative species, understanding the goals and intentions of others is critical for human survival. In this fMRI study, participants viewed reaching behaviors in which one of four animated characters moved a hand towards one of two objects and either (a) picked up the object, (b) missed the object, or (c) changed its path halfway to lift the other object. The characters included a human, a humanoid robot, stacked boxes with an arm, and a mechanical claw. The first three moved in an identical, human-like biological pattern. Right posterior superior temporal sulcus (pSTS) activity increased when the human or humanoid robot shifted goals or missed the target relative to obtaining the original goal. This suggests that the pSTS was engaged differentially for figures that appeared more human-like, rather than for all human-like motion. Medial frontal areas that are part of a protagonist-monitoring network with the right pSTS (e.g., Mason and Just, 2006) were most engaged for the human character, followed by the robot character. The current data suggest that goal-directed action and intention understanding both rely on this network, which is used similarly for the two processes. Moreover, the network is modulated by character identity rather than only by the presence of biological motion. We discuss the implications for behavioral theories of goal-directed action and intention understanding.


Subject(s)
Brain Mapping , Comprehension/physiology , Goals , Intention , Temporal Lobe/physiology , Adolescent , Adult , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Motion Perception/physiology , Social Perception , Young Adult
8.
Article in English | MEDLINE | ID: mdl-21096970

ABSTRACT

In this paper, we describe methods for assessment of exercise quality using body-worn tri-axial accelerometers. We assess exercise quality by building a classifier that labels incorrect exercises. The incorrect performances are divided into a number of classes of errors as defined by a physical therapist. We focus on exercises commonly prescribed for knee osteoarthritis: standing hamstring curl, reverse hip abduction, and lying straight leg raise. The methods presented here will form the basis for an at-home rehabilitation device that will recognize errors in patient exercise performance, provide appropriate feedback on the performance, and motivate the patient to continue the prescribed regimen.
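As a rough illustration of the preprocessing such a device typically needs (an assumed pipeline, not one the abstract specifies), the tri-axial accelerometer stream can be sliced into overlapping windows, each summarized with simple per-axis statistics before classification:

```python
import numpy as np

def window_features(acc, win=128, hop=64):
    """acc: (n_samples, 3) tri-axial signal -> (n_windows, 12) features."""
    feats = []
    for start in range(0, acc.shape[0] - win + 1, hop):
        w = acc[start:start + win]
        feats.append(np.concatenate([
            w.mean(axis=0),           # gravity/orientation component
            w.std(axis=0),            # movement intensity
            w.min(axis=0),
            w.max(axis=0),
        ]))
    return np.asarray(feats)

# Stand-in signal: 1,024 samples of three-axis noise.
sig = np.random.default_rng(0).normal(size=(1024, 3))
F = window_features(sig)
```

Each row of F would then be scored by the error classifier; the window and hop sizes here are arbitrary and would be tuned to the sampling rate and exercise tempo.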


Subject(s)
Activities of Daily Living/classification , Algorithms , Exercise Therapy/methods , Movement/physiology , Osteoarthritis, Knee/rehabilitation , Acceleration , Exercise Therapy/instrumentation , Female , Fiducial Markers , Humans , Male , Reproducibility of Results
9.
Neural Netw; 21(4): 621-7, 2008 May.
Article in English | MEDLINE | ID: mdl-18555957

ABSTRACT

This paper describes mechanisms used by humans to stand on moving platforms, such as a bus or ship, and to combine body orientation and motion information from multiple sensors including vision, vestibular, and proprioception. A simple mechanism, sensory re-weighting, has been proposed to explain how human subjects learn to reduce the effects of inconsistent sensors on balance. Our goal is to replicate this robust balance behavior in bipedal robots. We review results exploring sensory re-weighting in humans and describe implementations of sensory re-weighting in simulation and on a robot.
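One toy way to picture sensory re-weighting (an illustrative mechanism, not the authors' controller): fuse three noisy tilt estimates and decay the weight of any sensor that disagrees with the others, here using the median reading as a robust consensus. The noise levels, re-weighting rate, and the visual disturbance at t = 100 are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
true_tilt = 0.0
w = np.ones(3) / 3.0        # weights: vision, vestibular, proprioception
alpha = 0.1                 # re-weighting rate
for t in range(200):
    # After t = 100 the visual scene "moves", making vision inconsistent.
    vision = true_tilt + rng.normal(0, 0.05) + (2.0 if t > 100 else 0.0)
    vestibular = true_tilt + rng.normal(0, 0.05)
    proprio = true_tilt + rng.normal(0, 0.05)
    s = np.array([vision, vestibular, proprio])
    est = w @ s                      # fused tilt estimate used for balance
    consensus = np.median(s)         # robust reference for re-weighting
    err = (s - consensus) ** 2       # each sensor's disagreement
    w = (1 - alpha) * w + alpha / (1e-2 + err)
    w /= w.sum()                     # keep weights a convex combination
```

After the disturbance, the vision weight collapses and the fused estimate tracks the two consistent sensors, which is the qualitative behavior sensory re-weighting is meant to capture.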


Subject(s)
Adaptation, Physiological/physiology , Leg/physiology , Postural Balance/physiology , Psychomotor Performance/physiology , Robotics/methods , Sensation/physiology , Artificial Intelligence , Humans , Leg/innervation , Muscle, Skeletal/innervation , Muscle, Skeletal/physiology , Neural Networks, Computer , Orientation/physiology , Proprioception/physiology , Robotics/trends , Space Perception/physiology , Vestibule, Labyrinth/physiology , Visual Perception/physiology