Results 1 - 5 of 5
1.
JMIR Serious Games ; 12: e50315, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38598265

ABSTRACT

BACKGROUND: Few gamified cognitive tasks are subjected to rigorous examination of psychometric properties, despite their use in experimental and clinical settings. Even small manipulations to cognitive tasks require extensive research to understand their effects. OBJECTIVE: This study aims to investigate how game elements can affect the reliability of scores on a Stroop task. We specifically investigated performance consistency within and across sessions. METHODS: We created 2 versions of the Stroop task, with and without game elements, and then tested each task with participants at 2 time points. The gamified task used points and feedback as game elements. In this paper, we report on the reliability of the gamified Stroop task in terms of internal consistency and test-retest reliability, compared with the control task. We used a permutation approach to evaluate internal consistency. For test-retest reliability, we calculated the Pearson correlation and intraclass correlation coefficients between each time point. We also descriptively compared the reliability of scores on a trial-by-trial basis, considering the different trial types. RESULTS: At the first time point, the Stroop effect was reduced in the game condition, indicating better performance. Participants in the game condition had faster reaction times (P=.005) and lower error rates (P=.04) than those in the basic task condition. Furthermore, the game condition yielded higher internal consistency at both time points for both reaction times and error rates, indicating a more consistent response pattern. For reaction times in the basic task condition, r_Spearman-Brown=0.78 (95% CI 0.64-0.89) at time 1 and r_Spearman-Brown=0.64 (95% CI 0.40-0.81) at time 2; in the game condition, r_Spearman-Brown=0.83 (95% CI 0.71-0.91) at time 1 and r_Spearman-Brown=0.76 (95% CI 0.60-0.88) at time 2. For error rates in the basic task condition, r_Spearman-Brown=0.76 (95% CI 0.62-0.87) at time 1 and r_Spearman-Brown=0.74 (95% CI 0.58-0.86) at time 2; in the game condition, r_Spearman-Brown=0.76 (95% CI 0.62-0.87) at time 1 and r_Spearman-Brown=0.74 (95% CI 0.58-0.86) at time 2. Test-retest reliability analysis revealed a distinctive performance pattern depending on the trial type, which may reflect motivational differences between task versions. In short, especially in the incongruent trials, where cognitive conflict occurs, performance consistency in the game condition peaks after 100 trials, whereas consistency in the basic version drops after 50 trials and only catches up to the game version after 250 trials. CONCLUSIONS: Even subtle gamification can affect task performance, and not only as a direct performance difference between conditions: people playing the game reach peak performance sooner, and their performance is more consistent within and across sessions. We advocate for a closer examination of the impact of game elements on performance.
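
Note: The abstract names its internal-consistency ingredients (random split-half estimates, the Spearman-Brown correction, a permutation approach) without giving code. A minimal Python sketch of that standard procedure, assuming a participants-by-trials matrix of scores; the data layout and all numbers are illustrative, not taken from the study:

import numpy as np

def split_half_reliability(trials, n_perm=5000, seed=0):
    # trials: (n_participants, n_trials) array of per-trial scores,
    # e.g. reaction times on correct trials.
    rng = np.random.default_rng(seed)
    n_trials = trials.shape[1]
    estimates = np.empty(n_perm)
    for i in range(n_perm):
        order = rng.permutation(n_trials)              # random split of trials
        half_a = trials[:, order[:n_trials // 2]].mean(axis=1)
        half_b = trials[:, order[n_trials // 2:]].mean(axis=1)
        r = np.corrcoef(half_a, half_b)[0, 1]          # split-half correlation
        estimates[i] = 2 * r / (1 + r)                 # Spearman-Brown correction
    return estimates.mean(), np.percentile(estimates, [2.5, 97.5])

# Synthetic example: 40 participants x 120 trials.
rng = np.random.default_rng(1)
true_speed = rng.normal(600, 80, size=(40, 1))         # stable person-level RT
trials = true_speed + rng.normal(0, 150, size=(40, 120))
r_sb, ci = split_half_reliability(trials)
print(f"r_Spearman-Brown = {r_sb:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")

Averaging the corrected coefficient over many random splits avoids the arbitrariness of a single odd-even split and yields the percentile confidence interval reported alongside each estimate.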

2.
JMIR Serious Games ; 9(3): e26449, 2021 Aug 09.
Article in English | MEDLINE | ID: mdl-34383674

ABSTRACT

BACKGROUND: Serious games are now widely used in many contexts, including psychological research and clinical use. One area of growing interest is cognitive assessment, which seeks to measure cognitive functions such as memory, attention, and perception. Measuring these functions at both the population and individual levels can inform research and indicate health issues. Attention is an important function to assess, as an accurate measure of attention can help diagnose many common disorders, such as attention-deficit/hyperactivity disorder and dementia. However, using games to assess attention poses unique problems, as games inherently manipulate attention through elements such as sound effects, graphics, and rewards, and research on adding game elements to assessments (ie, gamification) has shown mixed results. The process for developing cognitive tasks is robust, with high psychometric standards that must be met before these tasks are used for assessment. Although games offer more diverse approaches to assessment, there is no standard for how they should be developed or evaluated. OBJECTIVE: To better understand the field and provide guidance to interdisciplinary researchers, we aim to answer the question: how are digital games used for the cognitive assessment of attention made and measured? METHODS: We searched several databases for papers that described a digital game used to assess attention that could be deployed remotely without specialized hardware. We used Rayyan, a systematic review screening tool, to screen the records before conducting a systematic review. RESULTS: The initial database search returned 49,365 papers. Our screening process resulted in a total of 74 papers that used a digital game to measure cognitive functions related to attention. Across the studies in our review, we found three approaches to making assessment games: gamifying cognitive tasks, creating custom games based on theories of cognition, and exploring the potential assessment properties of commercial games. With regard to measuring the assessment properties of these games (eg, how accurately they assess attention), we found three approaches: comparison to a traditional cognitive task, comparison to a clinical diagnosis, and comparison to knowledge of cognition; however, most studies in our review did not evaluate the game's properties as a game (eg, whether participants enjoyed playing it). CONCLUSIONS: Our review provides an overview of how games used for the assessment of attention are developed and evaluated. We further identified three barriers to advancing the field: reliance on assumptions, lack of evaluation, and lack of integration and standardization. We then recommend best practices to address these barriers. Our review can act as a resource to help guide the field toward the more standardized approaches and rigorous evaluation required for the widespread adoption of assessment games.

3.
Front Psychol ; 12: 767507, 2021.
Article in English | MEDLINE | ID: mdl-34975656

ABSTRACT

We describe the design and evaluation of a subclinical digital assessment tool that integrates digital biomarkers of depression. Based on three standard cognitive tasks (D2 Test of Attention, Delayed Matching to Sample Task, Spatial Working Memory Task) on which people with depression are known to perform differently from a control group, we iteratively designed a digital assessment tool that could be deployed outside of laboratory contexts, in uncontrolled home environments, on computer systems with widely varying characteristics (e.g., display resolution and input devices). We conducted two online studies in which participants used the assessment tool in their own homes and completed subjective questionnaires, including the Patient Health Questionnaire (PHQ-9), a standard self-report tool for assessing depression in clinical contexts. In the first study (n = 269), we demonstrate that each task can be used in isolation to significantly predict PHQ-9 scores. In the second study (n = 90), we replicate these results and further demonstrate that, when used in combination, behavioral metrics from the three tasks significantly predicted PHQ-9 scores, even when taking into account demographic factors known to influence depression, such as age and gender. A multiple regression model explained 34.4% of the variance in PHQ-9 scores, with behavioral metrics from each task providing unique and significant contributions to the prediction.
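
Note: The combined model described above is an ordinary multiple regression of PHQ-9 totals on the three task metrics plus demographic covariates. A sketch of such a model in Python with statsmodels follows; the column names and synthetic data are assumptions for illustration, not the study's actual variables:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data mirroring the second study's sample size (n = 90).
rng = np.random.default_rng(0)
n = 90
df = pd.DataFrame({
    "d2_score": rng.normal(size=n),        # D2 Test of Attention metric
    "dmts_score": rng.normal(size=n),      # Delayed Matching to Sample metric
    "swm_score": rng.normal(size=n),       # Spatial Working Memory metric
    "age": rng.integers(18, 70, size=n),
    "gender": rng.choice(["female", "male"], size=n),
})
df["phq9"] = (
    5 - 1.5 * df["d2_score"] + 1.2 * df["dmts_score"]
    + 1.0 * df["swm_score"] + 0.05 * df["age"]
    + rng.normal(0, 3, size=n)
).clip(0, 27)                              # PHQ-9 totals range from 0 to 27

# Demographic covariates enter alongside the three behavioral metrics,
# so each task's contribution is estimated over and above age and gender.
model = smf.ols("phq9 ~ d2_score + dmts_score + swm_score + age + C(gender)",
                data=df).fit()
print(model.rsquared)   # analogous to the 34.4% of variance explained
print(model.summary())  # per-predictor coefficients and p-values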

4.
Appl Ergon ; 81: 102872, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31422273

ABSTRACT

The purpose of this study was to evaluate the safety and efficiency of a specific ambulance while providers delivered basic and advanced life support. Forty-eight Emergency Medical Service (EMS) teams were observed delivering care to a simulated patient during an anaphylaxis scenario in a moving ambulance that contained a complete complement of medical supplies and equipment. A detailed coding system was developed and applied to the audio and video behavioural data, and patterns of interaction among EMS personnel, the patient, the equipment, and the ambulance interior during the patient simulation scenario were analyzed. The results revealed a number of issues with the patient compartment, including potentially unsafe seated and standing positions, hazardous barriers to movement around the patient, difficulties accessing equipment and supplies, and inadequate work surfaces and waste disposal. Design recommendations are made to improve provider and patient comfort, efficiency, and safety.


Subject(s)
Ambulances , Delivery of Health Care/standards , Efficiency, Organizational , Emergency Medical Services/standards , Workflow , Adult , Environment Design , Female , Humans , Male , Patient Simulation , Quality of Health Care , Safety
5.
Hum Factors ; 60(1): 101-133, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29351023

ABSTRACT

OBJECTIVE: An up-to-date meta-analysis of experimental research on talking and driving is needed to provide a comprehensive, empirical, and credible basis for policy, legislation, countermeasures, and future research. BACKGROUND: The effects of cell, mobile, and smart phone use on driving safety continue to be a contentious societal issue. METHOD: All available studies that measured the effects of cell phone use on driving were identified through a variety of search methods and databases. A total of 93 studies containing 106 experiments met the inclusion criteria. Coded independent variables included conversation target (handheld phone, hands-free phone, and passenger), setting (laboratory, simulation, or on road), and conversation type (natural, cognitive task, and dialing). Coded dependent variables included reaction time, stimulus detection, lane positioning, speed, headway, eye movements, and collisions. RESULTS: The overall sample had 4,382 participants, with driver ages ranging from 14 to 84 years (M = 25.5, SD = 5.2). Conversation on a handheld or hands-free phone resulted in performance costs, relative to baseline driving, for reaction time, stimulus detection, and collisions. Passenger conversation had a similar pattern of effect sizes. Dialing while driving had large performance costs for many variables. CONCLUSION: This meta-analysis found that cell phone and passenger conversation produced moderate performance costs. Drivers compensated only minimally while conversing on a cell phone by increasing headway or reducing speed. A number of additional meta-analytic questions are discussed. APPLICATION: The results can be used to guide legislation, policy, countermeasures, and future research.
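
Note: The abstract does not state which pooling model the meta-analysis used. A common choice for combining per-experiment effect sizes while allowing between-study heterogeneity is a DerSimonian-Laird random-effects model; a minimal Python sketch on illustrative inputs (the effect sizes below are made up, not the meta-analysis' data):

import numpy as np

def dersimonian_laird(effects, variances):
    # effects: per-experiment standardized effect sizes (e.g., Hedges' g)
    # variances: their sampling variances
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                               # fixed-effect weights
    mean_fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - mean_fixed) ** 2)       # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-study variance
    w_re = 1.0 / (variances + tau2)                   # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# Hypothetical effect sizes for one outcome (e.g., reaction-time cost).
g = [0.42, 0.31, 0.55, 0.25, 0.48]
v = [0.02, 0.03, 0.01, 0.04, 0.02]
pooled, ci, tau2 = dersimonian_laird(g, v)
print(f"pooled g = {pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, tau^2 = {tau2:.3f}")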


Subject(s)
Accidents, Traffic , Automobile Driving , Cell Phone , Interpersonal Relations , Psychomotor Performance , Humans