ABSTRACT
OBJECTIVE: A computer-based system to apply trauma resuscitation protocols to patients with penetrating thoracoabdominal trauma was previously validated for 97 consecutive patients at a Level 1 trauma center by a panel of trauma attendings and further refined by a panel of national trauma experts. The purpose of this article is to describe how this system is now used to objectively critique the actual care given to those patients for process errors in reasoning, independent of outcome. METHODS: A chronological narrative of the care of each patient was presented to the computer program. The actual care was compared with the validated computer protocols at each decision point, and differences were classified by a predetermined scoring system from 0 to 100, based on the potential impact on outcome, as critical, noncritical, or no errors of commission, omission, or procedure selection. RESULTS: Errors in reasoning occurred in 100% of the 97 cases studied, averaging 11.9 per case. Errors of omission were more prevalent than errors of commission (2.4 errors/case vs 1.2) and were of greater severity (19.4/error vs 5.1). The largest number of errors involved the failure to record, and perhaps observe, bedside information relevant to the reasoning process, an average of 7.4 missing items per patient. Only 2 of the 10 adverse outcomes were judged to be potentially related to errors of reasoning. CONCLUSIONS: Process errors in reasoning were ubiquitous, occurring in every case, although they were infrequently judged to be potentially related to an adverse outcome. Errors of omission were assessed to be more severe. The most common error was failure to consider, or document, available relevant information in the selection of appropriate care.
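The error-scoring scheme described in the methods can be sketched as follows. This is an illustrative model only: the critical/noncritical cut-off (here 50) and the example severity values are assumptions for demonstration, not the study's actual parameters.

```python
# Hypothetical sketch of the 0-100 error-scoring scheme from the abstract.
# The CRITICAL_THRESHOLD value is an assumption; the study's real cut-off
# between critical and noncritical errors is not stated.

from dataclasses import dataclass

ERROR_TYPES = ("commission", "omission", "procedure_selection")
CRITICAL_THRESHOLD = 50  # assumed cut-off on the 0-100 impact scale


@dataclass
class ProcessError:
    error_type: str  # one of ERROR_TYPES
    severity: int    # 0-100, potential impact on outcome

    def classification(self) -> str:
        """Classify this error as critical, noncritical, or no error."""
        if self.severity == 0:
            return "no error"
        return "critical" if self.severity >= CRITICAL_THRESHOLD else "noncritical"


def mean_severity(errors, error_type):
    """Average severity for one error type across a case's recorded errors."""
    scores = [e.severity for e in errors if e.error_type == error_type]
    return sum(scores) / len(scores) if scores else 0.0


# Illustrative case: omission errors scored as more severe than commission
# errors, mirroring the pattern reported in the abstract (numbers invented).
case_errors = [
    ProcessError("omission", 60),
    ProcessError("omission", 20),
    ProcessError("commission", 5),
]
```

A per-case report would then aggregate these classifications across all decision points, as the study did when computing the averages of 11.9 errors per case and the per-type severity figures.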
Subject(s)
Abdominal Injuries/diagnosis , Cardiopulmonary Resuscitation/methods , Diagnosis, Computer-Assisted/statistics & numerical data , Medical Errors/statistics & numerical data , Thoracic Injuries/diagnosis , Trauma Centers/standards , Wounds, Penetrating/diagnosis , Abdominal Injuries/therapy , Cardiopulmonary Resuscitation/adverse effects , Diagnosis, Computer-Assisted/adverse effects , Diagnosis, Computer-Assisted/methods , Female , Hospitals, University , Humans , Incidence , Injury Severity Score , Male , Philadelphia , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity , Statistics as Topic , Thoracic Injuries/therapy , Trauma Centers/statistics & numerical data , Wounds, Penetrating/therapy
ABSTRACT
The TraumAID system has been designed to provide on-line decision support throughout the initial definitive management of injured patients. Here we describe its retrospective evaluation and the use we subsequently made of judges' comments on the validation data to evaluate TraumaTIQ, a new critiquing interface for TraumAID, investigating whether, with timely recording of information, a system could produce commentary in line with that of human experts. Our results show that (1) comparable commentary can be produced, and (2) validation studies, which take great time and effort to conduct, can produce useful data beyond their original design goals.