Results 1 - 6 of 6
1.
Front Artif Intell ; 5: 867834, 2022.
Article in English | MEDLINE | ID: mdl-35450156
2.
Sensors (Basel) ; 21(9)2021 May 02.
Article in English | MEDLINE | ID: mdl-34063180

ABSTRACT

Collaboration is an important 21st-century skill. Analytics of co-located (face-to-face) collaboration (CC) gained momentum with the advent of sensor technology. Most of this work has used the audio modality to detect the quality of CC, which can be inferred from simple indicators such as total speaking time or from complex indicators such as synchrony in the rise and fall of the average pitch. Most past studies focused on how group members talk (i.e., spectral and temporal features of audio such as pitch) rather than on what they talk about, even though the "what" of a conversation is more overt than the "how". The few studies that examined what group members talk about were lab-based and presented a representative overview of specific words as topic clusters, instead of analysing the richness of the conversational content by understanding the linkage between these words. To overcome this, this technical paper takes a first step, based on field trials, towards prototyping a tool for automatic collaboration analytics. We designed a technical setup to collect, process, and visualize audio data automatically. The data were collected while university staff with pre-assigned roles played a board game designed to create awareness of the connection between learning analytics and learning design. We not only performed a word-level analysis of the conversations but also analysed their richness by interactively visualizing the strength of the linkage between words and phrases. In this visualization, a network graph shows the turn-taking exchanges between the different roles alongside the word-level and phrase-level analysis. We also used centrality measures to examine the network graph further, based on how much hold individual words have over the network and how influential certain words are. Finally, we found that this approach has limitations concerning the automation of speaker diarization (i.e., who spoke when) and of text-data pre-processing. We therefore conclude that, although the technical setup was only partially automated, it is a way forward for understanding the richness of conversations between different roles and a significant step towards automatic collaboration analytics.
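
The word networks and centrality measures described above lend themselves to standard graph tooling. The following is a minimal sketch of how such a network could be built and analysed with the networkx library; the paper's own implementation is not published here, and the example words and co-occurrence counts are hypothetical.

```python
# Minimal sketch of a conversational word network with centrality
# measures, using networkx. The word pairs and counts are made up.
import networkx as nx

# Hypothetical word pairs with co-occurrence counts used as edge weights.
cooccurrences = [
    ("learning", "design", 12),
    ("learning", "analytics", 9),
    ("analytics", "data", 7),
    ("design", "activity", 4),
    ("data", "privacy", 3),
]

G = nx.Graph()
for w1, w2, count in cooccurrences:
    G.add_edge(w1, w2, weight=count)

# Degree centrality: how much "hold" a word has over the network,
# i.e., its share of direct connections.
degree = nx.degree_centrality(G)

# Betweenness centrality: how "influential" a word is as a bridge
# between otherwise separate parts of the conversation.
betweenness = nx.betweenness_centrality(G)

for word in sorted(G.nodes, key=degree.get, reverse=True):
    print(f"{word:10s} degree={degree[word]:.2f} "
          f"betweenness={betweenness[word]:.2f}")
```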


Subject(s)
Speech Perception , Speech , Automation , Communication , Humans , Learning
3.
Sensors (Basel) ; 19(14)2019 Jul 13.
Article in English | MEDLINE | ID: mdl-31337029

ABSTRACT

This study investigated to what extent multimodal data can be used to detect mistakes during Cardiopulmonary Resuscitation (CPR) training. We complemented the Laerdal QCPR ResusciAnne manikin with the Multimodal Tutor for CPR, a multi-sensor system consisting of a Microsoft Kinect for tracking body position and a Myo armband for collecting electromyogram information. We collected multimodal data from 11 medical students, each performing two sessions of two-minute chest compressions (CCs). We gathered 5254 CCs in total, all labelled according to five performance indicators corresponding to common CPR training mistakes. Three of the five indicators (CC rate, CC depth, and CC release) were assessed automatically by the ResusciAnne manikin; the remaining two, related to arm and body position, were annotated manually by the research team. We trained five neural networks, one for classifying each of the five indicators. The results of the experiment show that multimodal data can provide accurate mistake detection compared to the ResusciAnne manikin baseline. We also show that the Multimodal Tutor for CPR can detect additional CPR training mistakes, such as those concerning the use of the arms and of body weight, which so far have been identified only by human instructors. Finally, to inform user feedback in future implementations of the Multimodal Tutor for CPR, we administered a questionnaire collecting feedback on aspects of CPR training.
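
As a rough illustration of the per-indicator classification scheme (one network per performance indicator), the sketch below trains five small classifiers with scikit-learn. The synthetic data, feature layout, and network size are assumptions for illustration, not the study's actual architecture or dataset.

```python
# Sketch of training one small neural network per CPR performance
# indicator. All data here is synthetic; only the indicator names and
# the count of 5254 compressions come from the abstract.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical feature vectors: one row per chest compression, combining
# Kinect body-position features and Myo electromyogram features.
n_compressions, n_features = 5254, 20
X = rng.normal(size=(n_compressions, n_features))

indicators = ["cc_rate", "cc_depth", "cc_release",
              "arm_position", "body_weight"]
# Hypothetical binary labels (mistake vs. correct) for each indicator.
y = {name: rng.integers(0, 2, size=n_compressions) for name in indicators}

# Train one classifier per indicator, as in the study.
for name in indicators:
    X_train, X_test, y_train, y_test = train_test_split(
        X, y[name], test_size=0.2, random_state=0
    )
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=0)
    clf.fit(X_train, y_train)
    print(f"{name}: test accuracy {clf.score(X_test, y_test):.2f}")
```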


Subject(s)
Cardiopulmonary Resuscitation/education , Computer-Assisted Instruction/methods , Neural Networks, Computer , Body Weight , Cardiopulmonary Resuscitation/methods , Computer-Assisted Instruction/instrumentation , Data Curation , Databases, Factual , Education, Medical/methods , Equipment Design , Humans , Information Storage and Retrieval , Manikins , Posture , Surveys and Questionnaires , Thorax
4.
Sensors (Basel) ; 19(14)2019 Jul 23.
Article in English | MEDLINE | ID: mdl-31340605

ABSTRACT

Sensors can monitor physical attributes and record multimodal data in order to provide feedback. The calligraphy trainer application exploits these affordances in the context of handwriting learning: it records an expert's handwriting performance to compute an expert model, which it then uses to provide guidance and feedback to learners. However, since handwriting learning is a tedious task, new learners can be overwhelmed by the feedback. This paper presents a pilot study conducted with the calligraphy trainer to evaluate the mental effort induced by the various types of feedback the application provides. Ten participants, five in the control group and five in the treatment group, all Ph.D. students in the technology-enhanced learning domain, took part in the study. The participants used the application to learn three characters from the Devanagari script. The results show higher mental effort in the treatment group when all types of feedback were provided simultaneously; the mental effort for individual feedback types was similar to that of the control group. In conclusion, the feedback provided by the calligraphy trainer does not impose high mental effort, and the design considerations behind the calligraphy trainer can therefore be insightful for designers of multimodal feedback.
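
The abstract does not state how a learner's performance is compared against the expert model. One common technique for comparing pen trajectories of unequal length is dynamic time warping (DTW); the sketch below illustrates that assumption with made-up stroke samples and is not the calligraphy trainer's actual method.

```python
# Minimal NumPy sketch of comparing a learner's pen trajectory with an
# expert model via dynamic time warping (an assumed technique; the
# stroke coordinates below are made up).
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two trajectories of shape (n, 2) and (m, 2)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # point-to-point distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

# Hypothetical expert and learner traces for one character.
expert = np.array([[0.0, 0.0], [0.5, 1.0], [1.0, 0.5], [1.5, 1.5]])
learner = np.array([[0.0, 0.1], [0.4, 0.9], [1.1, 0.6],
                    [1.4, 1.4], [1.6, 1.6]])

score = dtw_distance(expert, learner)
print(f"DTW distance from expert model: {score:.3f}")  # lower = closer
```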


Subject(s)
Handwriting , Learning , Adult , Brain/physiology , Electromyography , Female , Humans , Male
5.
Sensors (Basel) ; 15(2): 4097-133, 2015 Feb 11.
Article in English | MEDLINE | ID: mdl-25679313

ABSTRACT

In recent years, sensor components have been extending classical computer-based support systems in a variety of application domains (sports, health, etc.). In this article we review the use of sensors in the application domain of learning. To that end, we analyzed 82 sensor-based prototypes, exploring the learning support they provide. To study this learning support, we classified the prototypes according to Bloom's taxonomy of learning domains and explored how they can be used to assist in the implementation of formative assessment, paying special attention to their use as feedback tools. The analysis identifies the current research foci and gaps in the development of sensor-based learning support systems and concludes with a research agenda based on the findings.


Subject(s)
Problem-Based Learning , Remote Sensing Technology , Humans
6.
J Med Internet Res ; 16(3): e89, 2014 Mar 19.
Article in English | MEDLINE | ID: mdl-24647361

ABSTRACT

BACKGROUND: No systematic evaluation of smartphone/mobile apps for resuscitation training and real incident support is available to date. To provide medical, usability, and additional quality criteria for the development of apps, we conducted a mixed-methods sequential evaluation combining the perspectives of medical experts and end-users. OBJECTIVE: The study aims to assess the quality of current mobile apps for cardiopulmonary resuscitation (CPR) training and real incident support from both the expert and the end-user perspective. METHODS: Two independent medical experts evaluated the medical content of CPR apps from the Google Play store and the Apple App store. The evaluation was based on predefined minimum medical content requirements according to current Basic Life Support (BLS) guidelines. In a second phase, non-medical end-users tested the usability and appeal of the apps that had met at least the minimum requirements. Usability was assessed with the System Usability Scale (SUS); appeal was measured with the self-developed ReactionDeck toolkit. RESULTS: Out of 61 apps, 46 were included in the expert evaluation, yielding a consolidated list of 13 apps for the subsequent layperson evaluation. Interrater reliability among the experts was substantial (kappa=.61). Layperson end-users (n=14) had high interrater reliability (intraclass correlation 1 [ICC1]=.83, P<.001, 95% CI 0.75-0.882; ICC2=.79, P<.001, 95% CI 0.695-0.869). Their evaluation resulted in a list of 5 recommendable apps. CONCLUSIONS: Although several apps for resuscitation training and real incident support are available, very few are designed according to current BLS guidelines and offer an acceptable level of usability and hedonic quality for laypersons. The results of this study are intended to optimize the development of CPR mobile apps. The app ranking supports the informed selection of mobile apps for training situations and CPR campaigns, as well as for real incident support.
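
The System Usability Scale used in this study has a standard scoring rule: odd-numbered items contribute (rating - 1), even-numbered items contribute (5 - rating), and the sum is scaled by 2.5 to yield a 0-100 score. A minimal sketch with hypothetical ratings follows; it is not the authors' code.

```python
# Standard SUS scoring rule; the example ratings are hypothetical.
def sus_score(responses: list[int]) -> float:
    """Score one completed SUS questionnaire.

    `responses` holds the ten item ratings in order, each on a 1-5 scale.
    Odd-numbered items are positively worded and contribute (rating - 1);
    even-numbered items are negatively worded and contribute (5 - rating).
    The sum is scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten ratings on a 1-5 scale")
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # even 0-based index = odd item
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical ratings from one layperson testing one app.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
```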


Subject(s)
Cardiopulmonary Resuscitation/education , Cell Phone , Mobile Applications , Humans , Observer Variation