Results 1 - 17 of 17
1.
Stud Health Technol Inform ; 310: 299-303, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269813

ABSTRACT

Clinical simulation is a useful method for evaluating AI-enabled clinical decision support (CDS). Simulation studies permit patient- and risk-free evaluation and far greater experimental control than is possible with clinical studies. The effects of CDS-assisted and unassisted patient scenarios on meaningful downstream decisions and actions within the information value chain can be evaluated as outcome measures. This paper discusses the use of clinical simulation in CDS evaluation and presents a case study to demonstrate the feasibility of its application.


Subject(s)
Artificial Intelligence , Humans , Computer Simulation
2.
Stud Health Technol Inform ; 310: 604-608, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269880

ABSTRACT

With the growing use of machine learning (ML)-enabled medical devices by clinicians and consumers, safety events involving these systems are emerging. Current analysis of safety events relies heavily on retrospective review by experts, which is time consuming and not cost effective. This study develops automated text classifiers and evaluates their potential to identify rare ML safety events from the US FDA's MAUDE. Four stratified classifiers were evaluated using a real-world data distribution with different feature sets: report text; text and device brand name; text and generic device type; and all information combined. We found that stratified classifiers using the generic type of device were the most effective technique when tested on both stratified (F1-score=85%) and external datasets (precision=100%). All true positives on the external dataset were consistently identified by the three stratified classifiers, indicating that the ensemble results from them can be used directly to monitor ML events reported to MAUDE.
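
A minimal sketch of what a stratified text-classification setup like the one summarised above could look like: one classifier is trained per generic device type using a TF-IDF and logistic regression pipeline. It assumes scikit-learn and a hypothetical pandas DataFrame of incident reports with columns report_text, generic_type and is_ml_event; it is an illustration under those assumptions, not the authors' implementation.

    # Sketch: one text classifier per generic device type (stratum).
    # Assumes a DataFrame `reports` with hypothetical columns
    # 'report_text', 'generic_type' and 'is_ml_event' (0/1 label).
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    def train_stratified_classifiers(reports: pd.DataFrame) -> dict:
        """Train and evaluate one classifier per device-type stratum."""
        models = {}
        for device_type, stratum in reports.groupby("generic_type"):
            if stratum["is_ml_event"].nunique() < 2:
                continue  # a stratum needs both classes to train a classifier
            X_train, X_test, y_train, y_test = train_test_split(
                stratum["report_text"], stratum["is_ml_event"],
                test_size=0.2, stratify=stratum["is_ml_event"], random_state=0)
            model = make_pipeline(
                TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                LogisticRegression(max_iter=1000, class_weight="balanced"))
            model.fit(X_train, y_train)
            f1 = f1_score(y_test, model.predict(X_test))
            print(f"{device_type}: F1={f1:.2f}")
            models[device_type] = model
        return models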


Subject(s)
Drugs, Generic , Machine Learning
3.
Stud Health Technol Inform ; 310: 279-283, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269809

ABSTRACT

Real-world performance of machine learning (ML) models is crucial for safely and effectively embedding them into clinical decision support (CDS) systems. We examined evidence about the performance of contemporary ML-based CDS in clinical settings. A systematic search of four bibliographic databases identified 32 studies over a 5-year period. The CDS task, ML type, ML method and real-world performance were extracted and analysed. Most ML-based CDS supported image recognition and interpretation (n=12; 38%) and risk assessment (n=9; 28%). The majority used supervised learning (n=28; 88%) to train random forests (n=7; 22%) and convolutional neural networks (n=7; 22%). Only 12 studies reported real-world performance, using heterogeneous metrics, and performance degraded in clinical settings compared to model validation. The reporting of model performance is fundamental to ensuring safe and effective use of ML-based CDS in clinical settings. There remain opportunities to improve reporting.


Subject(s)
Decision Support Systems, Clinical , Machine Learning , Databases, Bibliographic , Neural Networks, Computer
4.
Yearb Med Inform ; 32(1): 115-126, 2023 Aug.
Article in English | MEDLINE | ID: mdl-38147855

ABSTRACT

AIMS AND OBJECTIVES: To examine the nature and use of automation in contemporary clinical information systems by reviewing studies reporting the implementation and evaluation of artificial intelligence (AI) technologies in healthcare settings. METHOD: PubMed/MEDLINE, Web of Science, EMBASE, the tables of contents of major informatics journals, and the bibliographies of articles were searched for studies reporting evaluation of AI in clinical settings from January 2021 to December 2022. We documented the clinical application areas and tasks supported, and the level of system autonomy. Reported effects on user experience, decision-making, care delivery and outcomes were summarised. RESULTS: AI technologies are being applied in a wide variety of clinical areas. Most contemporary systems utilise deep learning, use routinely collected data, support diagnosis and triage, are assistive (requiring users to confirm or approve AI-provided information or decisions), and are used by doctors in acute care settings in high-income nations. AI systems are integrated and used within existing clinical information systems including electronic medical records. There is limited support for One Health goals. Evaluation is largely based on quantitative methods measuring effects on decision-making. CONCLUSION: AI systems are being implemented and evaluated in many clinical areas. There remain many opportunities to understand patterns of routine use and evaluate effects on decision-making, care delivery and patient outcomes using mixed methods. Support for One Health, including integrating data about environmental factors and social determinants, needs further exploration.


Subject(s)
Artificial Intelligence , Delivery of Health Care , Humans , Surveys and Questionnaires , Automation , Information Systems
5.
Intern Med J ; 53(9): 1533-1539, 2023 09.
Article in English | MEDLINE | ID: mdl-37683094

ABSTRACT

The question of whether the time has come to hang up the stethoscope is bound up in the promises of artificial intelligence (AI), promises that have so far proven difficult to deliver, perhaps because of the mismatch between the technical capability of AI and its use in real-world clinical settings. This perspective argues that it is time to move away from discussing the generalised promise of disembodied AI and focus on specifics. We need to focus on how the computational method underlying AI, i.e. machine learning (ML), is embedded into tools, how those tools contribute to clinical tasks and decisions and to what extent they can be relied on. Accordingly, we pose four questions that must be asked to make the discussion real and to understand how ML tools contribute to health care: (1) What does the ML algorithm do? (2) How is output of the ML algorithm used in clinical tools? (3) What does the ML tool contribute to clinical tasks or decisions? (4) Can clinicians act or rely on the ML tool? Two exemplar ML tools are examined to show how these questions can be used to better understand the role of ML in supporting clinical tasks and decisions. Ultimately, ML is just a fancy method of automation. We show that it is useful in automating specific and narrowly defined clinical tasks but likely incapable of automating the full gamut of decisions and tasks performed by clinicians.


Subject(s)
Medicine , Stethoscopes , Humans , Artificial Intelligence , Algorithms
6.
J Am Med Inform Assoc ; 30(12): 2050-2063, 2023 11 17.
Article in English | MEDLINE | ID: mdl-37647865

ABSTRACT

OBJECTIVE: This study aims to summarize the research literature evaluating machine learning (ML)-based clinical decision support (CDS) systems in healthcare settings. MATERIALS AND METHODS: We conducted a review in accordance with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews). Four databases (PubMed, Medline, Embase, and Scopus) were searched for studies published from January 2016 to April 2021 evaluating the use of ML-based CDS in clinical settings. We extracted the study design, care setting, clinical task, CDS task, and ML method. The level of CDS autonomy was examined using a previously published 3-level classification based on the division of clinical tasks between the clinician and CDS; effects on decision-making, care delivery, and patient outcomes were summarized. RESULTS: Thirty-two studies evaluating the use of ML-based CDS in clinical settings were identified. All were undertaken in developed countries and largely in secondary and tertiary care settings. The most common clinical tasks supported by ML-based CDS were image recognition and interpretation (n = 12) and risk assessment (n = 9). The majority of studies examined assistive CDS (n = 23), which required clinicians to confirm or approve CDS recommendations, such as for risk assessment in sepsis and for interpreting cancerous lesions in colonoscopy. Effects on decision-making, care delivery, and patient outcomes were mixed. CONCLUSION: ML-based CDS are being evaluated in many clinical areas. There remain many opportunities to apply and evaluate effects of ML-based CDS on decision-making, care delivery, and patient outcomes, particularly in resource-constrained settings.


Subject(s)
Decision Support Systems, Clinical , Neoplasms , Sepsis , Humans , Delivery of Health Care , Health Facilities
7.
J Am Med Inform Assoc ; 30(7): 1227-1236, 2023 06 20.
Article in English | MEDLINE | ID: mdl-37071804

ABSTRACT

OBJECTIVE: To examine the real-world safety problems involving machine learning (ML)-enabled medical devices. MATERIALS AND METHODS: We analyzed 266 safety events involving approved ML medical devices reported to the US FDA's MAUDE program between 2015 and October 2021. Events were reviewed against an existing framework for safety problems with Health IT to identify whether a reported problem was due to the ML device (device problem) or its use, and key contributors to the problem. Consequences of events were also classified. RESULTS: Events described hazards with potential to harm (66%), actual harm (16%), consequences for healthcare delivery (9%), near misses that would have led to harm if not for intervention (4%), no harm or consequences (3%), and complaints (2%). While most events involved device problems (93%), use problems (7%) were 4 times more likely to harm (relative risk 4.2; 95% CI 2.5-7). Problems with data input to ML devices were the top contributor to events (82%). DISCUSSION: Much of what is known about ML safety comes from case studies and the theoretical limitations of ML. We contribute a systematic analysis of ML safety problems captured as part of the FDA's routine post-market surveillance. Most problems involved devices and concerned the acquisition of data for processing by algorithms. However, problems with the use of devices were more likely to harm. CONCLUSIONS: Safety problems with ML devices involve more than algorithms, highlighting the need for a whole-of-system approach to safe implementation with a special focus on how users interact with devices.
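
The relative risk and 95% confidence interval reported above are standard 2x2-table quantities. The sketch below shows the calculation in Python using the log-RR normal approximation; the event counts are invented for illustration and are not the study's data.

    # Relative risk of harm (use problems vs device problems) with a 95% CI
    # from the log-RR normal approximation. Counts are hypothetical.
    import math

    def relative_risk(a, n1, b, n2, z=1.96):
        """a harms out of n1 exposed events; b harms out of n2 reference events."""
        rr = (a / n1) / (b / n2)
        se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
        lo = math.exp(math.log(rr) - z * se_log_rr)
        hi = math.exp(math.log(rr) + z * se_log_rr)
        return rr, lo, hi

    # Hypothetical: harm in 10 of 18 use-problem events vs 32 of 248 device-problem events.
    rr, lo, hi = relative_risk(10, 18, 32, 248)
    print(f"RR={rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")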


Subject(s)
Algorithms , Device Approval , United States , Delivery of Health Care , United States Food and Drug Administration
8.
BMJ Health Care Inform ; 29(1)2022 May.
Article in English | MEDLINE | ID: mdl-35618316

ABSTRACT

OBJECTIVE: To explore emergency department (ED) and urgent care (UC) clinicians' perceptions of digital access to patients' past medical history (PMH). METHODS: An online survey compared anticipated and actual value of access to digital PMH. UTAUT2 (Unified Theory of Acceptance and Use of Technology 2) was used to assess technology acceptance. Quantitative data were analysed using Mann-Whitney U tests and qualitative data were analysed using a general inductive approach. RESULTS: 33 responses were received. 94% (16/17) of respondents with PMH access said they valued their PMH system and all respondents with no digital PMH access (100%; 16/16) said they believed access would be valuable. Both groups indicated a high level of technology acceptance across all UTAUT2 dimensions. Free-text responses suggested improvements such as increasing the number of patient records available, standardising the presentation of information, improving system reliability, expanding access to information and validating content with authoritative/trusted sources. DISCUSSION: Non-PMH respondents' expectations were closely matched with the benefits obtained by PMH respondents. High levels of technology acceptance indicated a strong willingness to adopt. Clinicians appeared clear about the improvements they would like for PMH content and access. Policy implications include the need to focus on higher levels of patient participation, to increase the breadth and depth of information, and to put in place processes that ensure patient record curation and stewardship. CONCLUSION: There appears to be strong clinician support for digital access to PMH in ED and UC; however, current systems appear to have many shortcomings.
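
The group comparison described above (respondents with vs without digital PMH access, compared on UTAUT2 acceptance ratings with Mann-Whitney U tests) follows a standard non-parametric pattern. A minimal sketch with invented Likert-style scores, assuming SciPy; it does not use the study's data.

    # Mann-Whitney U test comparing one UTAUT2 dimension (e.g. performance
    # expectancy) between respondents with and without digital PMH access.
    from scipy.stats import mannwhitneyu

    with_access    = [7, 6, 7, 5, 6, 7, 6, 7, 5, 6, 7, 6, 7, 6, 5, 7, 6]  # invented, n=17
    without_access = [6, 7, 5, 6, 7, 6, 6, 5, 7, 6, 6, 7, 5, 6, 6, 7]     # invented, n=16

    u_stat, p_value = mannwhitneyu(with_access, without_access, alternative="two-sided")
    print(f"U={u_stat:.1f}, p={p_value:.3f}")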


Subject(s)
Ambulatory Care , Emergency Service, Hospital , Humans , Reproducibility of Results , Surveys and Questionnaires
9.
Stud Health Technol Inform ; 284: 269-274, 2021 Dec 15.
Article in English | MEDLINE | ID: mdl-34920524

ABSTRACT

The use of computerized decision support systems (DSS) in nursing practice is increasing. However, research about who uses DSS, where they are implemented, and how they are linked with nursing standards is limited. This paper presents evidence on users and settings of DSS implementation, along with specific nursing standards of practice that are facilitated by such DSS. We searched six bibliographic databases using relevant terms and identified 28 studies, each evaluating a unique DSS. Of these, 24 were used by registered nurses and 19 were implemented in short-term care units. Most of the DSS were found to facilitate the nursing standards of assessment and intervention; however, outcome identification and evaluation were the standards least often supported. These findings not only highlight gaps in current systems but also offer opportunities for further research and development in this area.

10.
J Am Med Inform Assoc ; 28(11): 2502-2513, 2021 10 12.
Article in English | MEDLINE | ID: mdl-34498063

ABSTRACT

OBJECTIVE: The study sought to summarize the research literature on nursing decision support systems (DSSs), understand which steps of the nursing care process (NCP) are supported by DSSs, and analyze the effects of automated information processing on decision making, care delivery, and patient outcomes. MATERIALS AND METHODS: We conducted a systematic review in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement. PubMed, CINAHL, Cochrane, Embase, Scopus, and Web of Science were searched from January 2014 to April 2020 for studies focusing on DSSs used exclusively by nurses and their effects. Information about the stages of automation (information acquisition, information analysis, decision and action selection, and action implementation), NCP, and effects was assessed. RESULTS: Of 1019 articles retrieved, 28 met the inclusion criteria, each studying a unique DSS. Most DSSs were concerned with two NCP steps: assessment (82%) and intervention (86%). In terms of automation, all included DSSs automated information analysis and decision selection. Five DSSs automated information acquisition and only one automated action implementation. Effects on decision making, care delivery, and patient outcomes were mixed. DSSs improved compliance with recommendations and reduced decision time, but impacts were not always sustainable. There were improvements in evidence-based practice, but impact on patient outcomes was mixed. CONCLUSIONS: Current nursing DSSs do not adequately support the NCP and have limited automation. There remain many opportunities to enhance automation, especially at the stage of information acquisition. Further research is needed to understand how automation within the NCP can improve nurses' decision making, care delivery, and patient outcomes.


Subject(s)
Delivery of Health Care , Nursing Process , Automation , Decision Making , Humans
11.
BMJ Health Care Inform ; 28(1)2021 Apr.
Article in English | MEDLINE | ID: mdl-33853863

ABSTRACT

OBJECTIVE: To examine how and to what extent medical devices using machine learning (ML) support clinician decision making. METHODS: We searched for medical devices that were (1) approved by the US Food and Drug Administration (FDA) up to February 2020; (2) intended for use by clinicians; (3) used in clinical tasks or decisions; and (4) used ML. Descriptive information about the clinical task, device task, device input and output, and ML method was extracted. The stage of human information processing automated by ML-based devices and level of autonomy were assessed. RESULTS: Of 137 candidates, 59 FDA approvals for 49 unique devices were included. Most approvals (n=51) were since 2018. Devices commonly assisted with diagnostic (n=35) and triage (n=10) tasks. Twenty-three devices were assistive, providing decision support but leaving clinicians to make important decisions including diagnosis. Twelve automated the provision of information (autonomous information), such as quantification of heart ejection fraction, while 14 automatically provided task decisions like triaging the reading of scans according to suspected findings of stroke (autonomous decisions). Stages of human information processing most automated by devices were information analysis (n=14), providing information as an input into clinician decision making, and decision selection (n=29), where devices provide a decision. CONCLUSION: Leveraging the benefits of ML algorithms to support clinicians while mitigating risks requires a solid relationship between clinicians and ML-based devices. Such relationships must be carefully designed, considering how algorithms are embedded in devices, the tasks supported, the information provided and clinicians' interactions with them.


Subject(s)
Decision Making, Computer-Assisted , Machine Learning , Equipment and Supplies/standards , Humans , United States , United States Food and Drug Administration
12.
BMJ Health Care Inform ; 27(3)2020 Aug.
Article in English | MEDLINE | ID: mdl-32830108

ABSTRACT

OBJECTIVE: To measure lookup rates of externally held primary care records accessed in emergency care and identify patient characteristics, conditions and potential consequences associated with access. MEASURES: Rates of primary care record access and re-presentation to the emergency department (ED) within 30 days and hospital admission. DESIGN: A retrospective observational study of 77 181 ED presentations over 4 years and 9 months, analysing 8184 index presentations in which patients' primary care records were accessed from the ED. Data were compared with 17 449 randomly selected index control presentations. Analysis included propensity score matching for age and triage categories. RESULTS: 6.3% of overall ED presentations triggered a lookup (rising to 8.3% in year 5); 83.1% of patients were looked up only once and 16.9% were looked up on multiple occasions. Lookup patients were on average 25 years older (z=-9.180, p<0.001, r=0.43). Patients with more urgent triage classifications had their records accessed more frequently (z=-36.47, p<0.001, r=0.23). Record access was associated with a significant but negligible increase in hospital admission (χ2 (1, n=13 120)=98.385, p<0.001, phi=0.087) and readmission within 30 days (χ2 (1, n=13 120)=86.288, p<0.001, phi=0.081). DISCUSSION: Emergency care clinicians access primary care records more frequently for older patients or those in higher triage categories. Increased levels of inpatient admission and re-presentation within 30 days are likely linked to age and triage categories. CONCLUSION: Further studies should focus on the impact of record access on clinical and process outcomes and on which record elements have the most utility in shaping clinical decisions.
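
The lookup-versus-admission association reported above (a chi-square test with phi as the effect size) is a 2x2 contingency analysis. A minimal sketch, assuming SciPy and a hypothetical split of admissions between the lookup and control groups; the cell counts are not the study's data.

    # Chi-square test of independence between record lookup and hospital
    # admission, with the phi coefficient as effect size. Counts are hypothetical.
    import numpy as np
    from scipy.stats import chi2_contingency

    #                  admitted  not admitted
    table = np.array([[3100,     5084],     # primary care record looked up
                      [5200,    12249]])    # record not looked up (controls)

    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    phi = np.sqrt(chi2 / table.sum())        # effect size for a 2x2 table
    print(f"chi2({dof})={chi2:.1f}, p={p:.3g}, phi={phi:.3f}")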


Subject(s)
Acute Disease , Electronic Health Records/statistics & numerical data , Emergency Medical Services , Primary Health Care , Adult , Aged , Delivery of Health Care , Emergency Service, Hospital , Hospitalization , Humans , Middle Aged , Patient Admission/statistics & numerical data , Propensity Score , Retrospective Studies , Severity of Illness Index , Triage
13.
J Med Internet Res ; 22(6): e14827, 2020 06 01.
Article in English | MEDLINE | ID: mdl-32442129

ABSTRACT

BACKGROUND: Recent advances in natural language processing and artificial intelligence have led to widespread adoption of speech recognition technologies. In consumer health applications, speech recognition is usually applied to support interactions with conversational agents for data collection, decision support, and patient monitoring. However, little is known about the use of speech recognition in consumer health applications and few studies have evaluated the efficacy of conversational agents in the hands of consumers. In other consumer-facing tools, cognitive load has been observed to be an important factor affecting the use of speech recognition technologies in tasks involving problem solving and recall. Users find it more difficult to think and speak at the same time than when typing, pointing, and clicking. However, the effects of speech recognition on cognitive load when performing health tasks have not yet been explored. OBJECTIVE: The aim of this study was to evaluate the use of speech recognition for documentation in consumer digital health tasks involving problem solving and recall. METHODS: Fifty university staff and students were recruited to undertake four documentation tasks with a simulated conversational agent in a computer laboratory. The tasks varied in complexity (simple vs complex), determined by the amount of problem solving and recall required, and in input modality (speech recognition vs keyboard and mouse). Cognitive load, task completion time, error rate, and usability were measured. RESULTS: Compared to using a keyboard and mouse, speech recognition significantly increased the cognitive load for complex tasks (Z=-4.08, P<.001) and simple tasks (Z=-2.24, P=.03). Complex tasks took significantly longer to complete (Z=-2.52, P=.01) and speech recognition was found to be overall less usable than a keyboard and mouse (Z=-3.30, P=.001). However, there was no effect on errors. CONCLUSIONS: Use of a keyboard and mouse was preferable to speech recognition for complex tasks involving problem solving and recall. Further studies using a broader variety of consumer digital health tasks of varying complexity are needed to investigate the contexts in which use of speech recognition is most appropriate. The effects of cognitive load on task performance, and their significance, also need to be investigated.
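
The within-subject comparisons above (the same participants completing tasks with speech versus keyboard and mouse, reported as Z statistics) are consistent with a paired non-parametric test. A minimal sketch using a Wilcoxon signed-rank test on invented cognitive load ratings, assuming SciPy; both the test choice and the data are assumptions, not taken from the study.

    # Paired comparison of cognitive load ratings for 50 participants completing a
    # complex task with speech input vs keyboard and mouse. Ratings are invented.
    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(0)
    keyboard = rng.integers(30, 60, size=50)            # invented load scores (0-100 scale)
    speech   = keyboard + rng.integers(5, 20, size=50)  # systematically higher with speech

    stat, p = wilcoxon(speech, keyboard, alternative="two-sided")
    print(f"W={stat:.1f}, p={p:.4g}")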


Subject(s)
Consumer Health Informatics/methods , Laboratories/standards , Problem Solving/physiology , Speech Recognition Software/standards , Adolescent , Adult , Female , Humans , Male , Middle Aged , Young Adult
14.
Appl Clin Inform ; 10(1): 66-76, 2019 01.
Article in English | MEDLINE | ID: mdl-30699458

ABSTRACT

OBJECTIVE: Clinicians using clinical decision support (CDS) to prescribe medications have an obligation to ensure that prescriptions are safe. One option is to verify the safety of prescriptions if there is uncertainty, for example, by using drug references. Supervisory control experiments in aviation and process control have associated errors with reduced verification arising from overreliance on decision support. However, it is unknown whether this relationship extends to clinical decision-making. Therefore, we examine whether there is a relationship between verification behaviors and prescribing errors, with and without CDS medication alerts, and whether task complexity mediates this. METHODS: A total of 120 students in the final 2 years of a medical degree prescribed medicines for patient scenarios using a simulated electronic prescribing system. CDS (correct, incorrect, and no CDS) and task complexity (low and high) were varied. Outcomes were omission errors (missed prescribing errors) and commission errors (accepted false-positive alerts). Verification measures were access of drug references and view time as a percentage of task time. RESULTS: Failure to access references for medicines with prescribing errors increased omission errors with no CDS (high-complexity: χ2(1) = 12.716; p < 0.001) and incorrect CDS (Fisher's exact; low-complexity: p = 0.002; high-complexity: p = 0.001). Failure to access references for false-positive alerts increased commission errors (low-complexity: χ2(1) = 16.673, p < 0.001; high-complexity: χ2(1) = 18.690, p < 0.001). Fewer participants accessed relevant references with incorrect CDS compared with no CDS (McNemar; low-complexity: p < 0.001; high-complexity: p < 0.001). Lower view time percentages increased omission errors (F(3, 361.914) = 4.498; p = 0.035) and commission errors (F(1, 346.223) = 2.712; p = 0.045). View time percentages were lower in CDS-assisted conditions compared with unassisted conditions (F(2, 335.743) = 10.443; p < 0.001). DISCUSSION: The presence of CDS reduced verification of prescription safety. When CDS was incorrect, reduced verification was associated with increased prescribing errors. CONCLUSION: CDS can be incorrect, and verification provides one mechanism to detect errors. System designers need to facilitate verification without increasing workload or eliminating the benefits of correct CDS.
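
The within-participant comparison of reference access with and without CDS reported above uses McNemar's test, which operates on the discordant pairs of a paired 2x2 table. A minimal sketch, assuming statsmodels is available; the cell counts are hypothetical and only illustrate the shape of the analysis.

    # McNemar test: did the same participant access a relevant drug reference
    # under the no-CDS condition vs the incorrect-CDS condition?
    # Cell counts are hypothetical (120 participants in total).
    from statsmodels.stats.contingency_tables import mcnemar

    #                incorrect CDS: accessed | not accessed
    table = [[30, 45],   # no CDS: accessed
             [ 5, 40]]   # no CDS: not accessed

    result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs (45 vs 5)
    print(f"statistic={result.statistic}, p={result.pvalue:.4g}")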


Subject(s)
Medical Order Entry Systems , Medication Errors/statistics & numerical data , Confidence Intervals , Decision Support Systems, Clinical , Electronic Prescribing/statistics & numerical data , Humans , Medication Errors/prevention & control , Reproducibility of Results , Students, Medical/statistics & numerical data
15.
Hum Factors ; 60(7): 1008-1021, 2018 11.
Article in English | MEDLINE | ID: mdl-29939764

ABSTRACT

OBJECTIVE: Determine the relationship between cognitive load (CL) and automation bias (AB). BACKGROUND: Clinical decision support (CDS) for electronic prescribing can improve safety but introduces the risk of AB, where reliance on CDS replaces vigilance in information seeking and processing. We hypothesized high CL generated by high task complexity would increase AB errors. METHOD: One hundred twenty medical students prescribed medicines for clinical scenarios using a simulated e-prescribing system in a randomized controlled experiment. Quality of CDS (correct, incorrect, and no CDS) and task complexity (low and high) were varied. CL, omission errors (failure to detect prescribing errors), and commission errors (acceptance of false positive alerts) were measured. RESULTS: Increasing complexity from low to high significantly increased CL, F(1, 118) = 71.6, p < .001. CDS reduced CL in high-complexity conditions compared to no CDS, F(2, 117) = 4.72, p = .015. Participants who made omission errors in incorrect and no CDS conditions exhibited lower CL compared to those who did not, F(1, 636.49) = 3.79, p = .023. CONCLUSION: Results challenge the notion that AB is triggered by increasing task complexity and associated increases in CL. Omission errors were associated with lower CL, suggesting errors may stem from an insufficient allocation of cognitive resources. APPLICATION: This is the first research to examine the relationship between CL and AB. Findings suggest designers and users of CDS systems need to be aware of the risks of AB. Interventions that increase user vigilance and engagement may be beneficial and deserve further investigation.


Subject(s)
Decision Support Systems, Clinical , Electronic Prescribing , Executive Function/physiology , Man-Machine Systems , Memory, Short-Term/physiology , Task Performance and Analysis , Adult , Female , Humans , Male , Young Adult
16.
BMC Med Inform Decis Mak ; 17(1): 28, 2017 03 16.
Article in English | MEDLINE | ID: mdl-28302112

ABSTRACT

BACKGROUND: Clinical decision support (CDS) in e-prescribing can improve safety by alerting users to potential errors, but introduces new sources of risk. Automation bias (AB) occurs when users over-rely on CDS, reducing vigilance in information seeking and processing. Evidence of AB has been found in other clinical tasks, but has not yet been tested with e-prescribing. This study tests for the presence of AB in e-prescribing and the impact of task complexity and interruptions on AB. METHODS: One hundred and twenty students in the final two years of a medical degree prescribed medicines for nine clinical scenarios using a simulated e-prescribing system. Quality of CDS (correct, incorrect and no CDS) and task complexity (low, low + interruption and high) were varied between conditions. Omission errors (failure to detect prescribing errors) and commission errors (acceptance of false positive alerts) were measured. RESULTS: Compared to scenarios with no CDS, correct CDS reduced omission errors by 38.3% (p < .0001, n = 120), 46.6% (p < .0001, n = 70), and 39.2% (p < .0001, n = 120) for low, low + interruption and high complexity scenarios, respectively. Incorrect CDS increased omission errors by 33.3% (p < .0001, n = 120), 24.5% (p < .009, n = 82), and 26.7% (p < .0001, n = 120). Participants made commission errors at rates of 65.8% (p < .0001, n = 120), 53.5% (p < .0001, n = 82), and 51.7% (p < .0001, n = 120), respectively. Task complexity and interruptions had no impact on AB. CONCLUSIONS: This study found evidence of AB in e-prescribing, in the form of both omission and commission errors. Verification of CDS alerts is key to avoiding AB errors. However, interventions focused on this have had limited success to date. Clinicians should remain vigilant to the risks of CDS failures and verify CDS advice.


Subject(s)
Automation/standards , Decision Support Systems, Clinical/standards , Electronic Prescribing/standards , Medication Errors/prevention & control , Students, Medical , Humans
17.
J Am Med Inform Assoc ; 24(2): 423-431, 2017 Mar 01.
Article in English | MEDLINE | ID: mdl-27516495

ABSTRACT

INTRODUCTION: While potentially reducing decision errors, decision support systems can introduce new types of errors. Automation bias (AB) happens when users become overreliant on decision support, which reduces vigilance in information seeking and processing. Most research originates from the human factors literature, where the prevailing view is that AB occurs only in multitasking environments. OBJECTIVES: This review seeks to compare the human factors and health care literature, focusing on the apparent association of AB with multitasking and task complexity. DATA SOURCES: EMBASE, Medline, Compendex, Inspec, IEEE Xplore, Scopus, Web of Science, PsycINFO, and Business Source Premier from 1983 to 2015. STUDY SELECTION: Evaluation studies where task execution was assisted by automation and resulted in errors were included. Participants needed to be able to verify automation correctness and perform the task manually. METHODS: Tasks were identified and grouped. Task and automation type and presence of multitasking were noted. Each task was rated for its verification complexity. RESULTS: Of 890 papers identified, 40 met the inclusion criteria; 6 were in health care. Contrary to the prevailing human factors view, AB was found in single tasks, typically involving diagnosis rather than monitoring, and with high verification complexity. LIMITATIONS: The literature is fragmented, with large discrepancies in how AB is reported. Few studies reported the statistical significance of AB compared to a control condition. CONCLUSION: AB appears to be associated with the degree of cognitive load experienced in decision tasks, and appears not to be uniquely associated with multitasking. Strategies to minimize AB might focus on cognitive load reduction.


Subject(s)
Attitude to Computers , Decision Support Systems, Clinical , Automation , Bias , Humans