Results 1 - 7 of 7
1.
J Safety Res; 85: 391-397, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37330888

ABSTRACT

INTRODUCTION: Amongst pool lifeguards, the capacity to identify drowning swimmers quickly and accurately depends on the interpretation of critical cues. However, assessing the capacity for cue utilization amongst lifeguards is, at present, costly, time-consuming, and largely subjective. The aim of this study was to test the relationship between cue utilization and the detection of drowning swimmers in a series of virtual public swimming pool scenarios.

METHOD: Eighty-seven participants with or without lifeguarding experience engaged in three virtual scenarios, two of which were target scenarios in which drowning events occurred within a 13-minute or 23-minute period of watch. Cue utilization was assessed using the pool lifeguarding edition of the EXPERTise 2.0 software, following which 23 participants were classified with higher cue utilization and the remaining participants were classified with lower cue utilization.

RESULTS: Participants with higher cue utilization were more likely to have acquired experience as a lifeguard, were more likely to detect the drowning swimmer within a three-minute period, and, in the case of the 13-minute scenario, recorded a greater dwell time on the drowning victim prior to the drowning event.

CONCLUSION: The results suggest that cue utilization is associated with drowning detection performance in a simulated environment and could be employed as a basis for future assessments of lifeguard performance.

PRACTICAL IMPLICATIONS: Measures of cue utilization are associated with the timely detection of drowning victims in virtual pool lifeguarding scenarios. Employers and trainers of lifeguards can potentially augment existing assessment programs to quickly and cost-effectively identify the capabilities of lifeguards. This is especially useful for new lifeguards, or where pool lifeguarding is a seasonal activity that might be associated with skill decay.
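
As a rough illustration of the measures described above (not drawn from the study itself), the sketch below shows how dwell time on an area of interest around the drowning victim, and detection within a three-minute window, might be computed from timestamped gaze and response data. The data format, variable names, and values are all assumptions.

    # Illustrative sketch (not the study's code): dwell time on an area of
    # interest (AOI) prior to drowning onset, and detection within a
    # three-minute window. The sample format is assumed.

    def dwell_time_before_onset(gaze_samples, aoi, onset_s):
        """Sum the time spent inside the AOI before the drowning onset.

        gaze_samples: list of (timestamp_s, x, y) tuples, in time order
        aoi: (x_min, y_min, x_max, y_max) bounding box around the victim
        onset_s: time in seconds at which the drowning event begins
        """
        x_min, y_min, x_max, y_max = aoi
        dwell = 0.0
        for (t0, x, y), (t1, _, _) in zip(gaze_samples, gaze_samples[1:]):
            if t0 < onset_s and x_min <= x <= x_max and y_min <= y <= y_max:
                dwell += min(t1, onset_s) - t0
        return dwell

    def detected_within_window(response_s, onset_s, window_s=180.0):
        """True if a response occurred within window_s seconds of onset."""
        return response_s is not None and 0.0 <= response_s - onset_s <= window_s

    # Example with invented values
    samples = [(0.0, 100, 200), (0.5, 105, 198), (1.0, 400, 300)]
    print(dwell_time_before_onset(samples, aoi=(90, 180, 120, 220), onset_s=60.0))
    print(detected_within_window(response_s=150.0, onset_s=60.0))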


Subject(s)
Drowning, Swimming Pools, Humans, Cues, Records
2.
Appl Ergon; 108: 103954, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36566527

ABSTRACT

BACKGROUND: Ensuring that pool lifeguards develop the skills necessary to detect drowning victims is challenging, given that these situations are relatively rare, unpredictable, and difficult to simulate accurately and safely. Virtual reality potentially provides a safe and ecologically valid approach to training, since it offers a near-to-real visual experience together with the opportunity to practice task-related skills and receive feedback. As a prelude to the development of a training intervention, the aim of this research was to establish the construct validity of virtual reality drowning detection tasks.

METHOD: Using a repeated measures design, a total of 38 qualified lifeguards and 33 non-lifeguards completed 13-min and 23-min simulated drowning detection tasks that were intended to reflect different levels of sustained attention. During the simulated tasks, participants were asked to monitor a virtual pool and identify any drowning targets, with accuracy, response latency, and dwell time recorded.

RESULTS: During the simulated scenarios, pool lifeguards detected drowning targets more frequently than non-lifeguards and spent less time fixating on the drowning target prior to the drowning onset. No significant differences between lifeguards and non-lifeguards were evident in response latency, nor in first fixations on the drowning target.

CONCLUSION: The results provide support for the construct validity of virtual reality lifeguarding scenarios, thereby providing the basis for their development and introduction as a potential training approach for developing and maintaining performance in lifeguarding and drowning detection.

APPLICATION: This research provides support for the construct validity of virtual reality simulations as a potential training tool, enabling improvements in the fidelity of training solutions to improve pool lifeguard competency in drowning detection.
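
As a loose illustration only (with invented counts and values, not the study's data or its reported analyses), group differences in detection frequency and in pre-onset dwell time between lifeguards and non-lifeguards could be examined along the following lines.

    # Illustrative sketch with invented data: comparing detection frequency
    # and pre-onset dwell time between lifeguards and non-lifeguards.
    import numpy as np
    from scipy import stats

    # rows: lifeguards, non-lifeguards; columns: detected, missed (invented counts)
    detection_table = np.array([[30, 8],
                                [18, 15]])
    chi2, p, dof, expected = stats.chi2_contingency(detection_table)
    print(f"Detection frequency: chi2={chi2:.2f}, p={p:.3f}")

    # Dwell time (s) on the target prior to onset, per group (invented values)
    lifeguard_dwell = [4.1, 3.8, 5.0, 2.9, 4.4]
    non_lifeguard_dwell = [6.2, 5.9, 7.1, 6.5, 5.8]
    u, p_u = stats.mannwhitneyu(lifeguard_dwell, non_lifeguard_dwell)
    print(f"Dwell time: U={u:.1f}, p={p_u:.3f}")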


Subject(s)
Drowning, Humans, Drowning/diagnosis, Drowning/prevention & control, Attention, Reaction Time
3.
Appl Ergon; 106: 103887, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36037654

ABSTRACT

This study was designed to examine the roles of cue utilization, phishing features and time pressure in the detection of phishing emails. During two experiments, participants completed an email sorting task containing both phishing and genuine emails. Participants were allocated to either a high or low time pressure condition. Performance was assessed via detection sensitivity and response bias. Participants were classified with either higher or lower cue utilization and completed a measure of phishing knowledge. When participants were blind to the nature of the study (N = 191), those with higher cue utilization were better able to discriminate phishing from genuine emails; however, they also recorded a stronger bias towards classifying emails as phishing, compared to participants with lower cue utilization. When notified of phishing base rates prior to the email sorting task (N = 191), participants with higher cue utilization were better able to discriminate phishing from genuine emails without recording an increase in the rate of false alarms, compared to participants with lower cue utilization. Sensitivity increased with a reduction in time pressure, while response bias was influenced by the number of phishing-related features in each email. The outcomes support the proposition that cue-based processing of critical features is associated with an increase in the capacity of individuals to discriminate phishing from genuine emails, above and beyond phishing-related knowledge. From an applied perspective, these outcomes suggest that cue-based training may be beneficial for improving the detection of phishing emails.
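
The abstract reports performance in terms of detection sensitivity and response bias. As a rough sketch (not the paper's code, with invented counts), these standard signal detection measures, d' and criterion c, might be computed as follows.

    # Illustrative sketch with invented counts: detection sensitivity (d') and
    # response bias (criterion c) from hits and false alarms in an email
    # sorting task.
    from scipy.stats import norm

    def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
        """Signal detection measures, with a simple correction so that
        hit and false-alarm rates never equal exactly 0 or 1."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
        d_prime = z_hit - z_fa
        criterion = -0.5 * (z_hit + z_fa)
        return d_prime, criterion

    # Example: 40 phishing and 40 genuine emails, invented outcomes
    d, c = dprime_and_criterion(hits=32, misses=8, false_alarms=10, correct_rejections=30)
    print(f"d' = {d:.2f}, c = {c:.2f}")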


Subject(s)
Computer Security, Electronic Mail, Humans, Cues
4.
Front Big Data; 3: 546860, 2020.
Article in English | MEDLINE | ID: mdl-33693413

ABSTRACT

Phishing emails represent a major threat to online information security. While the prevailing research is focused on users' susceptibility, few studies have considered the decision-making strategies that account for skilled detection. One relevant facet of decision-making is cue utilization, where users retrieve feature-event associations stored in long-term memory. High degrees of cue utilization help reduce the demands placed on working memory (i.e., cognitive load), and invariably improve decision performance (i.e., the information-reduction hypothesis in expert performance). The current study explored the effect of cue utilization and cognitive load when detecting phishing emails. A total of 50 undergraduate students completed: (1) a rail control task; (2) a phishing detection task; and (3) a survey of the cues used in detection. A cue utilization assessment battery (EXPERTise 2.0) then classified participants with either higher or lower cue utilization. As expected, higher cue utilization was associated with a greater likelihood of detecting phishing emails. However, variation in cognitive load had no effect on phishing detection, nor was there an interaction between cue utilization and cognitive load. Further, the findings revealed no significant difference in the types of cues used across cue utilization groups or performance levels. These findings have implications for our understanding of cognitive mechanisms that underpin the detection of phishing emails and the role of factors beyond the information-reduction hypothesis.
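
As one way such a two-factor design might be analysed (not necessarily the analysis the authors used, and with all data simulated), detection accuracy could be modelled against cue utilization group, cognitive load condition, and their interaction.

    # Illustrative sketch with simulated data: a two-way ANOVA testing the
    # effects of cue utilization group and cognitive load, and their
    # interaction, on phishing detection accuracy.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    rows = []
    for cue in ["higher", "lower"]:
        for load in ["high", "low"]:
            for _ in range(12):  # 12 simulated participants per cell
                base = 0.80 if cue == "higher" else 0.65
                rows.append({"cue_group": cue, "load": load,
                             "accuracy": base + rng.normal(0, 0.05)})
    df = pd.DataFrame(rows)

    model = smf.ols("accuracy ~ C(cue_group) * C(load)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))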

5.
Med Educ; 53(2): 175-183, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30474247

ABSTRACT

CONTEXT: Repetition of a cognitive ability test is known to increase scores, but almost no research has examined whether similar improvement occurs with repetition of interviews. Retest effects can change the rank order of candidates and reduce a test's criterion validity. Because interviews are widely used to select medical students and postgraduate trainees, and because applicants apply to multiple programmes and often reapply if unsuccessful, the potential for retest effects needs to be understood.

OBJECTIVES: This study was designed to identify whether retest improvements occur when candidates undertake multiple interviews and, if so, whether the effect is attributable to general interview experience or to specific experience, and whether repeat testing affects criterion validity.

METHODS: We compared the interview scores of applicants who were interviewed for one or more of three independent undergraduate medical programmes in two consecutive years with those of applicants who were interviewed in both years for the same programme. Correlations between initial and repeat interview scores and a written test of social understanding were compared.

RESULTS: General experience (being interviewed by multiple programmes) did not produce improvement in subsequent interview performance. There was no evidence of a method effect (having prior experience of the multiple mini-interview process). Specific experience (being interviewed by the same programme across two years) resulted in a significant improvement in scores for which regression to the mean did not fully account. Criterion validity did not appear to be affected.

CONCLUSIONS: Unsuccessful candidates for medical school who reapply and are re-interviewed on a subsequent occasion at the same institution are likely to increase their scores. The results of this study suggest that the increase is probably not attributable to improved ability.
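
As an illustration of the kind of check mentioned in the results (not the study's analysis, and with all numbers invented), an observed repeat-interview gain could be compared against the gain predicted by regression to the mean alone, given an assumed test-retest correlation and cohort mean.

    # Illustrative sketch with invented numbers: how much of a repeat-interview
    # gain is left after accounting for regression to the mean.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    year1 = rng.normal(60, 10, 80)        # initial scores of re-applicants (invented)
    year2 = year1 + rng.normal(3, 6, 80)  # repeat scores with an assumed practice gain

    r = 0.6                                # assumed test-retest correlation
    cohort_mean = 65.0                     # assumed mean of the full applicant cohort
    predicted_year2 = cohort_mean + r * (year1 - cohort_mean)  # regression-to-the-mean prediction

    t, p = stats.ttest_rel(year2, predicted_year2)
    print(f"Mean observed gain: {np.mean(year2 - year1):.2f}")
    print(f"Gain beyond regression to the mean: {np.mean(year2 - predicted_year2):.2f} (t={t:.2f}, p={p:.3f})")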


Subject(s)
Education, Medical/methods, Interviews as Topic, School Admission Criteria, Schools, Medical/organization & administration, Female, Humans, Male, Students, Medical/psychology, Training Support
6.
Med Educ; 52(4): 438-446, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29349791

ABSTRACT

CONTEXT: Evidence of predictive validity is essential for making robust selection decisions in high-stakes contexts such as medical student selection. Currently available evidence is limited to the prediction of academic performance at single points in time, with little understanding of the factors that might undermine the predictive validity of tests of academic and non-academic qualities considered important for success. This study addressed these issues by predicting students' changing performance across a medical degree and assessing whether factors outside an institution's control (such as the uptake of commercial coaching) affect validity.

METHODS: Three cohorts of students (n = 301) enrolled in an undergraduate medical degree from 2007 to 2013 were used to identify trajectories of student academic performance using growth mixture modelling. Multinomial logistic regression assessed whether past academic performance, a test of cognitive ability and a multiple mini-interview could predict a student's likely trajectory, and whether this predictive validity differed for those who undertook commercial coaching compared with those who did not.

RESULTS: Among the medical students who successfully graduated (n = 268), four unique trajectories of academic performance were identified. In three trajectories, performance changed at the time when learning became more self-directed and focused on clinical specialties. Scores on all selection tests, with the exception of a test of abstract reasoning, significantly affected the odds of following a trajectory that was consistently below average. However, the selection tests could not distinguish those whose performance improved across time from those whose performance declined after an average start. Commercial coaching increased the odds of being among the below-average performers, but did not alter the predictive validity of the selection tests.

CONCLUSION: Identifying distinct groups of students has important implications for selection, but also for educating medical students. Commercial coaching may result in selecting students who are less suited to coping with the rigours of medical studies.
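
The methods pair growth mixture modelling with multinomial logistic regression. The sketch below is only a rough, simulated stand-in: it substitutes an ordinary Gaussian mixture over per-student grade trajectories for growth mixture modelling proper, then predicts the resulting trajectory class from selection scores. All data and variable names are invented.

    # Illustrative stand-in, not the paper's analysis: Gaussian mixture over
    # simulated grade trajectories, then multinomial logistic regression
    # predicting trajectory class from selection scores.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n_students, n_years = 268, 5
    trajectories = rng.normal(70, 8, size=(n_students, n_years))  # grades per year (invented)
    selection = rng.normal(0, 1, size=(n_students, 3))            # prior grades, cognitive test, MMI (invented)

    # Cluster the trajectories into four groups (the paper reported four trajectories)
    gmm = GaussianMixture(n_components=4, random_state=0).fit(trajectories)
    trajectory_class = gmm.predict(trajectories)

    # Predict trajectory membership from selection scores (multinomial for a
    # multi-class target with the default lbfgs solver)
    clf = LogisticRegression(max_iter=1000).fit(selection, trajectory_class)
    print(clf.score(selection, trajectory_class))  # in-sample classification accuracy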


Subject(s)
Academic Success, College Admission Test, School Admission Criteria/trends, Students, Medical/statistics & numerical data, Education, Medical, Undergraduate, Female, Humans, Male, Young Adult
7.
Ann Behav Med; 48(2): 205-214, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24500081

ABSTRACT

BACKGROUND: Increasing life expectancies, burgeoning healthcare costs and an emphasis on the management of multiple health-risk behaviours point to a need to delineate health lifestyles in older adults.

PURPOSE: The aims of this study were to delineate the health lifestyles of a cohort of older adults and to examine the association of these lifestyles with biological and psychological states and socio-economic indices.

METHODS: Cluster analysis was applied to self-reported data from the 45 and Up cohort study (N = 96,276) of Australians aged over 45 years, regarding exercise, smoking, alcohol consumption, diet and cancer screening behaviours.

RESULTS: Six lifestyle clusters emerged, delineated by smoking, screening and physical activity levels. Individuals within health-risk dominant clusters were more likely to be male, living alone, low-income earners, living in a deprived neighbourhood, psychologically distressed and experiencing low quality of life.

CONCLUSIONS: Health lifestyle cluster membership can be used to identify older adults at greatest risk of physical and psychological health morbidity.
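
As a simplified, simulated illustration of the clustering approach described above (not the 45 and Up Study data, and not necessarily the algorithm the authors used), standardised health-behaviour variables could be clustered along the following lines; six clusters are requested only because six were reported in the abstract.

    # Illustrative sketch with simulated data: k-means clustering of
    # standardised health-behaviour variables.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    # Columns: weekly exercise sessions, smoker (0/1), alcohol days/week,
    # daily fruit/veg serves, up-to-date cancer screening (0/1) -- all invented
    behaviours = np.column_stack([
        rng.poisson(3, 1000),
        rng.integers(0, 2, 1000),
        rng.poisson(2, 1000),
        rng.poisson(4, 1000),
        rng.integers(0, 2, 1000),
    ])

    X = StandardScaler().fit_transform(behaviours)
    kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)  # six clusters, as in the abstract
    print(np.bincount(kmeans.labels_))  # cluster sizes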


Subject(s)
Health Behavior, Adaptation, Psychological, Aged, Alcohol Drinking/epidemiology, Australia/epidemiology, Cluster Analysis, Diet/statistics & numerical data, Early Detection of Cancer/statistics & numerical data, Exercise, Female, Health Status, Humans, Life Style, Male, Middle Aged, Psychology, Quality of Life/psychology, Smoking/epidemiology, Socioeconomic Factors, Stress, Psychological/epidemiology