Results 1 - 3 of 3
1.
Appl Ergon ; 42(1): 138-45, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20630495

ABSTRACT

INTRODUCTION: Subjective workload measures are usually administered in a visual-manual format, either electronically or by paper and pencil. However, vocal responses to spoken queries may sometimes be preferable, for example when experimental manipulations require continuous manual responding or when participants have certain sensory/motor impairments. In the present study, we evaluated the acceptability of the hands-free administration of two subjective workload questionnaires - the NASA Task Load Index (NASA-TLX) and the Multiple Resources Questionnaire (MRQ) - in a surgical training environment where manual responding is often constrained.

METHOD: Sixty-four undergraduates performed fifteen 90-s trials of laparoscopic training tasks (five replications of 3 tasks - cannulation, ring transfer, and rope manipulation). Half of the participants provided workload ratings using a traditional paper-and-pencil version of the NASA-TLX and MRQ; the remainder used a vocal (hands-free) version of the questionnaires. A follow-up experiment extended the evaluation of the hands-free version to actual medical students in a Minimally Invasive Surgery (MIS) training facility.

RESULTS: The NASA-TLX was scored in 2 ways - (1) the traditional procedure using participant-specific weights to combine its 6 subscales, and (2) a simplified procedure - the NASA Raw Task Load Index (NASA-RTLX) - using the unweighted mean of the subscale scores. Comparison of the scores obtained from the hands-free and written administration conditions yielded coefficients of equivalence of r=0.85 (NASA-TLX) and r=0.81 (NASA-RTLX). Equivalence estimates for the individual subscales ranged from r=0.78 ("mental demand") to r=0.31 ("effort"). Both administration formats and scoring methods were equally sensitive to task and repetition effects. For the MRQ, the coefficient of equivalence for the hands-free and written versions was r=0.96 when tested on undergraduates. However, the sensitivity of the hands-free MRQ to task demands (partial η²=0.138) was substantially less than that for the written version (partial η²=0.252). This potential shortcoming of the hands-free MRQ did not seem to generalize to medical students, who showed robust task effects when using the hands-free MRQ (partial η²=0.396). A detailed analysis of the MRQ subscales also revealed differences that may be attributable to a "spillover" effect in which participants' judgments about the demands of completing the questionnaires contaminated their judgments about the primary surgical training tasks.

CONCLUSION: Vocal versions of the NASA-TLX are acceptable alternatives to standard written formats when researchers wish to obtain global workload estimates. However, care should be used when interpreting the individual subscales if the object is to make comparisons between studies or conditions that use different administration modalities. For the MRQ, the vocal version was less sensitive to experimental manipulations than its written counterpart; however, when medical students rather than undergraduates used the vocal version, the instrument's sensitivity increased well beyond that obtained with any other combination of administration modality and instrument in this study. Thus, the vocal version of the MRQ may be an acceptable workload assessment technique for selected populations, and it may even be a suitable substitute for the NASA-TLX.
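The two NASA-TLX scoring procedures contrasted in the abstract can be sketched as follows. This is a minimal illustration, not the study's analysis code: the ratings and weights below are hypothetical, and it assumes the standard 0-100 subscale scale with tally weights derived from the 15 pairwise subscale comparisons (so the weights sum to 15).

```python
# Sketch of the two NASA-TLX scoring procedures: the traditional
# weighted score and the simplified "raw" (NASA-RTLX) unweighted mean.

SUBSCALES = ["mental demand", "physical demand", "temporal demand",
             "performance", "effort", "frustration"]

def tlx_weighted(ratings, weights):
    """Traditional NASA-TLX: combine the 6 subscale ratings using
    participant-specific weights from the 15 pairwise comparisons."""
    assert sum(weights.values()) == 15, "pairwise tallies must sum to 15"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

def tlx_raw(ratings):
    """NASA-RTLX: unweighted mean of the 6 subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

# Hypothetical data for one participant (illustrative values only):
ratings = {"mental demand": 70, "physical demand": 40, "temporal demand": 55,
           "performance": 30, "effort": 65, "frustration": 45}
weights = {"mental demand": 5, "physical demand": 1, "temporal demand": 3,
           "performance": 2, "effort": 3, "frustration": 1}

print(tlx_weighted(ratings, weights))  # → 57.0
print(tlx_raw(ratings))                # → 50.833...
```

Because the RTLX drops the pairwise-weighting step, it is faster to administer; the coefficients of equivalence reported above (r=0.85 vs. r=0.81) suggest the two scores behave similarly at the global level.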


Subject(s)
General Surgery/education , Task Performance and Analysis , Workload/psychology , Adolescent , Adult , Ergonomics , Female , Humans , Kentucky , Laparoscopy/education , Laparoscopy/standards , Male , United States , Young Adult
2.
Hum Factors ; 48(3): 422-33, 2006.
Article in English | MEDLINE | ID: mdl-17063959

ABSTRACT

OBJECTIVE: To determine whose naive judgments of consumer product usability are more accurate: those of younger or older adults. Accuracy is here defined as judgments compatible with results from performance-based usability tests.

BACKGROUND: Older adults may be better able to predict usability problems than younger adults, making them particularly good participants in studies contributing to the user-centered design of products. This advantage, if present, may stem from older adults' motivation for more usable products or from their experience adapting their own environments to meet their changing physical, cognitive, and sensory needs.

METHOD: Sixty older participants (ages 65-75 years) and 60 younger ones (ages 18-22 years) evaluated illustrations of consumer products on specific criteria (e.g., readability, learnability, or error rates). They either rated a single design for each product or ranked six alternative designs. They also explained their choices, indicated which features were most critical for usability, and selected usability-enhancing modifications.

RESULTS: Although there was no reliable age difference in the amount of usability information provided in the open-ended explanations, older adults were more accurate at ranking alternative designs, selecting the most usability-critical features, and selecting usability-enhancing modifications (all ps < .05).

CONCLUSION: The usability judgments of older adults are more accurate than those of younger adults when these judgments are solicited in a fixed-alternative, but not open-ended, format.

APPLICATION: Because older adults are more discerning about potential product usability problems, they may be particularly valuable as research participants in early-stage design research (prior to the availability of working prototypes).


Subject(s)
Commerce , Equipment Design , Judgment , Adolescent , Adult , Age Factors , Aged , Choice Behavior , Female , Humans , Kentucky , Male
3.
Surg Innov ; 12(1): 80-90, 2005 Mar.
Article in English | MEDLINE | ID: mdl-15846451

ABSTRACT

Although the use of performance efficiency measures (speed, movement economy, errors) and ergonomic assessments is relatively well established, the evaluation of cognitive outcomes is rare. This report makes the case for assessment strategies that include mental workload measures as a way to improve training scenarios and training/operating environments. These mental workload measures can be crucially important in determining the difference between well-intentioned but subtly distracting technologies and true breakthroughs that will enhance performance and reduce stress.


Subject(s)
Cognition/physiology , Laparoscopy/psychology , Workload/psychology , Clinical Competence , Humans , Task Performance and Analysis , User-Computer Interface