1.
Brain Sci ; 14(1)2024 Jan 07.
Article in English | MEDLINE | ID: mdl-38248275

ABSTRACT

Distributed collaboration, widely adopted during the COVID-19 pandemic, has become a trend that continues into the post-pandemic era. This study investigated collective performance within two collaborative environments (co-located and distributed settings) by assessing inter-brain synchrony (IBS) patterns among design collaborators using functional near-infrared spectroscopy (fNIRS). The preliminary study was conducted with three dyads who possessed 2-3 years of professional product design experience. Each dyad completed two designated design tasks in distinct settings. In the distributed condition, participants interacted through video conferencing, in which they could communicate by verbalization and by sketching on a shared digital whiteboard. To prevent different sketching tools from influencing design outputs, we employed digital sketching in both environments. The interactions between collaborators were classified into three behaviors: verbal only, sketch only, and mixed communication (verbal and sketch). The results revealed a higher level of IBS when mixed communication took place in the distributed condition than in the co-located condition. By contrast, the occurrence of IBS increased when participants used sketching alone as the interaction approach within the co-located setting. A mixed communication method combining verbalization and sketching might lead to more coordinated cognitive processes during physical isolation. Design collaborators tend to adjust their interaction behaviors to adapt to different design environments, strengthen the exchange of ideas, and build design consensus. Overall, the present paper discusses the performance of virtual collaborative design from a neurocognitive perspective, contributing valuable insights for future intervention designs that promote effective virtual teamwork.
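In fNIRS hyperscanning studies, inter-brain synchrony is typically quantified with wavelet transform coherence between the two collaborators' hemodynamic signals; the abstract does not give the exact computation. As a simplified, illustrative stand-in (not the paper's method), a windowed correlation between two channels can be sketched in pure Python; all names and window sizes below are assumptions:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def windowed_synchrony(sig_a, sig_b, window, step):
    """Mean absolute correlation over sliding windows: a crude
    stand-in for coherence-based IBS measures used with fNIRS."""
    scores = []
    for start in range(0, len(sig_a) - window + 1, step):
        scores.append(abs(pearson(sig_a[start:start + window],
                                  sig_b[start:start + window])))
    return sum(scores) / len(scores)
```

Two perfectly synchronized (or anti-phase) signals score 1.0 under this measure; real IBS analyses additionally control for frequency band and chance-level coupling.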

2.
Univers Access Inf Soc ; 22(2): 609-633, 2023.
Article in English | MEDLINE | ID: mdl-34803565

ABSTRACT

Purpose: The development of assistive technologies that support people in social interactions has attracted increased attention in HCI. This paper presents a systematic review of studies of Socially Assistive Systems targeted at older adults and people with disabilities. The purpose is threefold: (1) characterizing related assistive systems with a special focus on system design, primarily the HCI technologies used and the user-involvement approach taken; (2) examining their ways of system evaluation; (3) reflecting on insights for future design research. Methods: A systematic literature search was conducted using the keywords "social interactions" and "assistive technologies" within the following databases: Scopus, Web of Science, ACM, Science Direct, PubMed, and IEEE Xplore. Results: Sixty-five papers met the inclusion criteria and were further analyzed. Our results showed that 11 types of HCI technologies supported social interactions for target users. The most common was cognitive and meaning-understanding technologies, often applied with wearable devices to compensate for users' sensory loss. 33.85% of studies involved end-users and stakeholders in the design phase. Four types of evaluation methods were identified; the majority of studies adopted laboratory experiments to measure user-system interaction and system validation. Proxy users were used in system evaluation, especially in initial experiments. 42.46% of evaluations were conducted in field settings, primarily the participants' own homes and institutions. Conclusion: We contribute an overview of Socially Assistive Systems that support social interactions for older adults and people with disabilities, and illustrate emerging technologies and research opportunities for future work.

3.
Front Psychol ; 12: 645545, 2021.
Article in English | MEDLINE | ID: mdl-34349695

ABSTRACT

As robots become more ubiquitous, they will increasingly need to behave as our team partners and smoothly adapt to the (adaptive) behaviors of human teammates to establish successful patterns of collaboration over time. Many adaptations present themselves through subtle and unconscious interactions, which are difficult to observe. Our research aims to bring about awareness of co-adaptation that enables team learning. This paper presents an experimental paradigm that uses a physical human-robot collaborative task environment to explore emergent human-robot co-adaptations and derive the interaction patterns (i.e., the targeted awareness of co-adaptation). The paradigm provides a tangible human-robot interface (i.e., a leash) that facilitates the expression of unconscious adaptations, such as "leading" (e.g., pulling the leash) and "following" (e.g., letting go of the leash) in a search-and-navigation task. The task was executed by 18 participants, after which we systematically annotated videos of their behavior. We discovered that their interactions could be described by four types of adaptive interactions: stable situations, sudden adaptations, gradual adaptations, and active negotiations. From these types of interactions we created a language of interaction patterns that can be used to describe tacit co-adaptation in human-robot collaborative contexts. This language can be used to enable communication between collaborating humans and robots in future studies, letting them share what they have learned and supporting them in becoming aware of their implicit adaptations.

4.
Sensors (Basel) ; 22(1)2021 Dec 23.
Article in English | MEDLINE | ID: mdl-35009623

ABSTRACT

Social interactions significantly impact the quality of life of people with special needs (e.g., older adults with dementia and children with autism), who may suffer loneliness and social isolation more often than people without disabilities. There is a growing demand for technologies that satisfy the social needs of such user groups. However, evaluating these systems can be challenging due to the extra difficulty of gathering data from people with special needs (e.g., communication barriers involving older adults with dementia and children with autism). Thus, in this systematic review, we focus on data gathering methods for evaluating socially assistive systems (SASs). Six academic databases (Scopus, Web of Science, ACM, Science Direct, PubMed, and IEEE Xplore) were searched, covering articles published from January 2000 to July 2021. A total of 65 articles met the inclusion criteria for this systematic review. The results showed that existing SASs most often targeted people with visual impairments, older adults, and children with autism. For instance, a common type of SAS aimed to help blind people perceive social signals (e.g., facial expressions). SASs were most commonly assessed with interviews, questionnaires, and observation data. Around half of the interview studies involved only target users, while the other half also included secondary users or stakeholders. Questionnaires were mostly used with older adults and people with visual impairments to measure their social interaction, emotional state, and system usability. The great majority of observational studies were carried out with users in special age groups, especially older adults and children with autism. We thereby contribute an overview of how different data gathering methods were used with various target users of SASs, and extract relevant insights to inform future development and research.


Subject(s)
Disabled Persons , Quality of Life , Aged , Child , Emotions , Humans , Loneliness , Social Isolation
5.
Front Psychol ; 9: 690, 2018.
Article in English | MEDLINE | ID: mdl-29881360

ABSTRACT

Engagement in activities is of crucial importance for people with dementia. State-of-the-art assessment techniques rely exclusively on behavior observation to measure engagement in dementia. These techniques are either too general to grasp how engagement is naturally expressed through behavior or too complex to be traced back to an overall engagement state. We carried out a longitudinal study to develop a coding system of engagement-related behavior that could tackle these issues and to create an evidence-based model of engagement with which to interpret such a coding system. Fourteen elderly people with mild to moderate dementia took part in the study. They were involved in two activities: a game-based cognitive stimulation and a robot-based free play. The coding system was developed with a mixed approach: ethographic and Laban-inspired. First, we developed two ethograms to describe the behavior of participants in the two activities in detail. Then, we used Laban Movement Analysis (LMA) to identify a common structure to the behaviors in the two ethograms and unify them in a unique coding system. The inter-rater reliability (IRR) of the coding system proved to be excellent for cognitive games (kappa = 0.78) and very good for robot play (kappa = 0.74). From the scoring of the videos, we developed an evidence-based model of engagement. This was based on the most frequent patterns of body part organization (i.e., the way body parts are connected in movement) observed during activities. Each pattern was given a meaning in terms of engagement by reference to the literature. The model was tested using structural equation modeling (SEM). It achieved an excellent goodness of fit, and all the hypothesized relations between variables were significant. We called the coding system the Ethographic and Laban-Inspired Coding System of Engagement (ELICSE) and the model the Evidence-based Model of Engagement-related Behavior (EMODEB).
To the best of our knowledge, the ELICSE and the EMODEB constitute the first formalization of engagement-related behavior for dementia that describes how behavior unfolds over time and what it means in terms of engagement.
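The inter-rater reliability figures above (kappa = 0.78 and 0.74) presumably refer to Cohen's kappa for two raters, which corrects raw agreement for agreement expected by chance. A minimal sketch of that computation in pure Python (the rater labels below are illustrative, not from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical codes.

    Compares observed agreement with the agreement expected
    if both raters labeled independently at their own base rates.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

Values around 0.74-0.78, as reported above, are conventionally read as substantial-to-excellent agreement.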

6.
Am J Alzheimers Dis Other Demen ; 33(2): 112-121, 2018 03.
Article in English | MEDLINE | ID: mdl-29148293

ABSTRACT

Engagement in activities is crucial to improve quality of life in dementia. Yet, its measurement relies exclusively on behavior observation, and the influence that behavioral and psychological symptoms of dementia (BPSD) have on it is overlooked. This study investigated whether quantity of movement, gauged with a wrist-worn accelerometer, could be a sound measure of engagement and whether apathy and depression negatively affected engagement. Fourteen participants with dementia took part in six sessions of activities: three of cognitive games (e.g., jigsaw puzzles) and three of robot play (Pleo). Results highlighted significant correlations between quantity of movement and observational scales of engagement, and a strong negative influence of apathy and depression on engagement. Overall, these findings suggest that quantity of movement could be used as an ancillary measure of engagement, and underline the need to profile people with dementia according to their concurrent BPSD to better understand their engagement in activities.
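The abstract reports correlations between accelerometer-derived quantity of movement and observational engagement scales but does not name the statistic. Since observational scales are ordinal, a rank correlation such as Spearman's coefficient is a plausible choice; the sketch below is an assumption for illustration, not the paper's analysis:

```python
def _ranks(values):
    """Average (tie-aware) 1-based ranks of a sequence."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Feeding per-session movement counts and engagement scores into `spearman` yields a value in [-1, 1]; a significantly positive value would support movement as an ancillary engagement measure.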


Subject(s)
Dementia , Games, Recreational , Motivation , Movement/physiology , Accelerometry/methods , Aged, 80 and over , Apathy , Behavioral Symptoms/psychology , Dementia/psychology , Dementia/therapy , Depression , Female , Humans , Male , Psychiatric Status Rating Scales
7.
Int J Soc Robot ; 10(3): 343-355, 2018.
Article in English | MEDLINE | ID: mdl-30996753

ABSTRACT

In this paper, we report an experimental study designed to examine how participants perceive and interpret social hints from gaze exhibited by either a robot or a human tutor when carrying out a matching task. The underlying notion is that knowing where an agent is looking provides cues that can direct attention to an object of interest during the activity. In this regard, we asked human participants to play a card matching game in the presence of either a human or a robotic tutor under two conditions. In one condition, the tutor gave hints to help the participant find the matching cards by gazing toward the correct match; in the other, the tutor only looked at the participants and did not give them any help. Performance was measured by the time and the number of tries taken to complete the game. Results show that gaze hints (the helping tutor) made the matching task significantly easier (fewer tries) with the robot tutor. Furthermore, we found that the robot's gaze hints were recognized significantly more often than the human tutor's gaze hints, and consequently, the participants performed significantly better with the robot tutor. The reported study provides new findings on the use of non-verbal gaze hints in human-robot interaction and lays out new design implications, especially for robot-based educative interventions.

8.
IEEE Int Conf Rehabil Robot ; 2017: 1112-1117, 2017 07.
Article in English | MEDLINE | ID: mdl-28813970

ABSTRACT

In this paper, we present a novel tool to measure engagement in people with dementia playing board games and interacting with a social robot, Pleo. We carried out two studies to reach a comprehensive inventory of behaviours accounting for engagement in dementia. The first is an exploratory study aimed at modelling engagement in cognitive board games. The second is a longitudinal study investigating how people with dementia express engagement in cognitive games and in interactions with social robots. We adopted a technique from ethology for describing behaviour: the ethogram. The ethogram is founded on low-level behaviours and allows hierarchical structuring. Herein, we present preliminary results consisting of the description of two ethograms and of their structuring obtained through thematic analysis. These results show that an underlying structure of engagement exists across activities, and that different activities trigger different behavioural displays of engagement that adhere to such a structure.


Subject(s)
Dementia/therapy , Interpersonal Relations , Quality of Life , Robotics/instrumentation , Aged , Aged, 80 and over , Female , Games, Recreational , Humans , Longitudinal Studies , Male , Research Design
9.
Front Comput Neurosci ; 10: 63, 2016.
Article in English | MEDLINE | ID: mdl-27458366

ABSTRACT

Estimation of emotions is an essential aspect of developing intelligent systems intended for crowded environments. However, emotion estimation in crowds remains a challenging problem due to the complexity with which human emotions are manifested and the difficulty for a system to perceive them in such conditions. This paper proposes a hierarchical Bayesian model that learns, in an unsupervised manner, the behavior of individuals and of the crowd as a single entity, and explores the relation between behavior and emotions to infer emotional states. Information about the motion patterns of individuals is described using a self-organizing map, and a hierarchical Bayesian network builds probabilistic models to identify behaviors and infer the emotional state of individuals and the crowd. The model is trained and tested using data produced from simulated scenarios that resemble real-life environments. The conducted experiments tested the ability of our method to learn, detect, and associate behaviors with emotional states, yielding accuracy levels of 74% for individuals and 81% for the crowd, similar in performance to existing methods for pedestrian behavior detection but with novel concepts regarding the analysis of crowds.
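The full pipeline above (self-organizing map plus hierarchical Bayesian network) is too large to sketch here, but its core inference step, mapping an observed behavior to a posterior over emotional states via Bayes' rule, can be illustrated. All labels and probabilities below are invented for illustration and are not taken from the paper:

```python
def posterior(priors, likelihoods, observed_behavior):
    """P(emotion | behavior) ∝ P(behavior | emotion) * P(emotion)."""
    unnorm = {e: priors[e] * likelihoods[e][observed_behavior]
              for e in priors}
    z = sum(unnorm.values())  # normalizing constant
    return {e: p / z for e, p in unnorm.items()}

# Hypothetical crowd-level priors and behavior likelihoods.
priors = {"calm": 0.7, "panic": 0.3}
likelihoods = {
    "calm":  {"walking": 0.8, "running": 0.2},
    "panic": {"walking": 0.1, "running": 0.9},
}

beliefs = posterior(priors, likelihoods, "running")
```

Under these made-up numbers, observing "running" shifts belief toward "panic" despite the prior favoring "calm"; the paper's hierarchical model applies the same principle at both the individual and crowd levels.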

10.
PLoS One ; 10(4): e0124519, 2015.
Article in English | MEDLINE | ID: mdl-25875608

ABSTRACT

Unconscious mental processes have recently started gaining attention in a number of scientific disciplines. One of the theoretical frameworks for describing unconscious processes was introduced by Jung as part of his model of the psyche. This framework uses the concept of archetypes, which represent prototypical experiences associated with objects, people, and situations. Although the validity of the Jungian model remains an open question, the framework is convenient from a practical point of view, and archetypes have found numerous applications in psychology and marketing. Therefore, observing both conscious and unconscious traces related to archetypal experiences seems an interesting research endeavor. In a study with 36 subjects, we examined the effects of experiencing conglomerations of unconscious emotions associated with various archetypes on the participants' introspective reports and patterns of physiological activation. Our hypothesis was that physiological data may predict archetypes more precisely than introspective reports due to the implicit nature of archetypal experiences. Introspective reports were collected using the Self-Assessment Manikin (SAM) technique. Physiological measures included cardiovascular, electrodermal, and respiratory responses and the skin temperature of the subjects. The subjects were stimulated to feel four archetypal experiences and four explicit emotions by means of film clips. The data related to the explicit emotions served as a reference in the analysis of archetypal experiences. Our findings indicated that while prediction models trained on the collected physiological data could recognize the archetypal experiences with an accuracy of 55 percent, similar models built on the SAM data achieved a performance of only 33 percent. Statistical tests confirmed that physiological observations are better suited than introspective reports for observing implicit psychological constructs such as archetypes.


Subject(s)
Consciousness , Mental Processes/physiology , Psychoanalysis/methods , Psychoanalytic Interpretation , Unconscious, Psychology , Attention/physiology , Emotions/physiology , Humans
11.
JMIR Mhealth Uhealth ; 1(2): e14, 2013 Jul 15.
Article in English | MEDLINE | ID: mdl-25098265

ABSTRACT

BACKGROUND: Freezing of gait (FoG) is one of the most disturbing and least understood symptoms in Parkinson disease (PD). Although the majority of existing assistive systems assume accurate detection of FoG episodes, the detection itself is still an open problem. A specificity of FoG is its dependency on the patient's context, such as the current location or activity; knowing the patient's context might therefore improve FoG detection. One of the main technical challenges to be solved before contextual information can be used for FoG detection is accurate estimation of the patient's position and orientation relative to key elements of his or her indoor environment. OBJECTIVE: The objectives of this paper are to (1) present the concept of a monitoring system, based on wearable and ambient sensors, designed to detect FoG using the spatial context of the user, (2) establish a set of requirements for the application of position and orientation tracking in FoG detection, (3) evaluate the accuracy of position estimation for the tracking system, and (4) evaluate two different methods for human orientation estimation. METHODS: We developed a prototype system to localize humans and track their orientation, as an important prerequisite for a context-based FoG monitoring system. To set up the system for experiments with real PD patients, the accuracy of the position and orientation tracking was assessed under laboratory conditions in 12 participants. To collect the data, the participants were asked to wear a smartphone around the waist, with and without a known orientation, while walking over a predefined path in a marked area captured by two Kinect cameras with non-overlapping fields of view. RESULTS: We used the root mean square error (RMSE) as the main performance measure. The vision-based position tracking algorithm achieved an RMSE of 0.16 m in position estimation for upright standing people. The experimental results for the proposed human orientation estimation methods demonstrated adaptivity and robustness to changes in the smartphone attachment position when the fusion of vision and inertial information was used. CONCLUSIONS: The system achieves satisfactory accuracy in indoor position tracking for use in FoG detection with spatial context. The combination of inertial and vision information has the potential for correct estimation of the patient's heading even when the inertial wearable sensor is placed in an a priori unknown position.
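The RMSE = 0.16 m figure reported above summarizes position error over all tracked samples. A minimal sketch of how such a metric is computed for 2-D positions (the variable names and sample coordinates are illustrative, not from the paper):

```python
import math

def rmse(estimated, ground_truth):
    """Root mean square Euclidean error between estimated and
    true 2-D positions, given as (x, y) pairs in meters."""
    squared_errors = [(ex - gx) ** 2 + (ey - gy) ** 2
                      for (ex, ey), (gx, gy) in zip(estimated, ground_truth)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))
```

A constant 0.16 m offset between the estimated and true trajectories yields exactly RMSE = 0.16 m, matching the scale of accuracy the paper reports for upright walking.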

12.
Stud Health Technol Inform ; 177: 126-31, 2012.
Article in English | MEDLINE | ID: mdl-22942043

ABSTRACT

This work proposes a concept for indoor ambulatory monitoring of Parkinson's disease patients. In the proposed concept, a wearable inertial sensor is kept as the main monitoring device throughout the day and is expanded with an ambient sensor system in the specific living areas with a high estimated probability of occurrence of freezing-of-gait episodes. The ambient sensor system supports decisions of the wearable sensor system by providing relevant spatial context information about the user, obtained through precise localization.


Subject(s)
Acceleration , Actigraphy/instrumentation , Algorithms , Diagnosis, Computer-Assisted/instrumentation , Diagnosis, Computer-Assisted/methods , Monitoring, Ambulatory/instrumentation , Parkinson Disease/diagnosis , Equipment Design , Equipment Failure Analysis , Humans