1.
Sci Rep ; 14(1): 12697, 2024 06 03.
Article in English | MEDLINE | ID: mdl-38830890

ABSTRACT

Melanoma, the deadliest form of skin cancer, has seen a steady increase in incidence rates worldwide, posing a significant challenge to dermatologists. Early detection is crucial for improving patient survival rates. However, performing total body screening (TBS), i.e., identifying suspicious lesions or ugly ducklings (UDs) by visual inspection, can be challenging and often requires sound expertise in pigmented lesions. To assist users of varying expertise levels, an artificial intelligence (AI) decision support tool was developed. Our solution identifies and characterizes UDs from real-world wide-field patient images. It employs a state-of-the-art object detection algorithm to locate and isolate all skin lesions present in a patient's total body images. These lesions are then sorted based on their level of suspiciousness using a self-supervised AI approach, tailored to the specific context of the patient under examination. A clinical validation study was conducted to evaluate the tool's performance. The results demonstrated an average sensitivity of 95% for the top-10 AI-identified UDs on skin lesions selected by the majority of experts in pigmented skin lesions. The study also found that the tool increased dermatologists' confidence when formulating a diagnosis, and the average majority agreement with the top-10 AI-identified UDs reached 100% when assisted by our tool. With the development of this AI-based decision support tool, we aim to address the shortage of specialists, enable faster consultation times for patients, and demonstrate the impact and usability of AI-assisted screening. Future developments will include expanding the dataset to include histologically confirmed melanoma and validating the tool for additional body regions.
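As a purely illustrative, hedged sketch of the ranking idea described above (an "ugly duckling" treated as an outlier among a patient's own lesions; the function name, feature source and scoring rule are assumptions, not the published method):

import numpy as np

def rank_ugly_ducklings(embeddings, top_k=10):
    """Rank a patient's lesions by how atypical they are relative to that patient's own lesions.

    embeddings: (n_lesions, dim) array of per-lesion feature vectors, e.g. from a
    self-supervised encoder. Returns indices of the top_k most atypical lesions first.
    """
    centroid = embeddings.mean(axis=0)                      # the patient's "typical" lesion
    scores = np.linalg.norm(embeddings - centroid, axis=1)  # distance = atypicality score
    return np.argsort(scores)[::-1][:top_k]

# Toy example: random features for 40 detected lesions of one patient
rng = np.random.default_rng(0)
print(rank_ugly_ducklings(rng.normal(size=(40, 128))))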


Subject(s)
Early Detection of Cancer , Melanoma , Skin Neoplasms , Supervised Machine Learning , Humans , Skin Neoplasms/diagnosis , Melanoma/diagnosis , Early Detection of Cancer/methods , Artificial Intelligence , Algorithms , Male , Female , Skin/pathology
2.
J Vasc Access ; : 11297298241258628, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38856000

ABSTRACT

BACKGROUND: There is limited knowledge about gaze patterns of intensive care unit (ICU) trainee doctors during the insertion of a central venous catheter (CVC). The primary objective of this study was to examine visual patterns exhibited by ICU trainee doctors during CVC insertion. Additionally, the study investigated whether differences in gaze patterns could be identified between more and less experienced trainee doctors. METHODS: In a real-life, prospective observational study conducted at the interdisciplinary ICU at the University Hospital Zurich, Switzerland, ICU trainee doctors underwent eye-tracking during CVC insertion in a real ICU patient. Using mixed-effects model analyses, the primary outcomes were dwell time, first fixation duration, revisits, fixation count, and average fixation time on different areas of interest (AOI). Secondary outcomes were the above eye-tracking outcome measures stratified according to the experience level of participants. RESULTS: Eighteen participants were included, of whom 10 were inexperienced and eight more experienced. Dwell time was highest for the CVC preparation table (p = 0.02), the jugular vein on the ultrasound image (p < 0.001) and the cervical puncture location (p < 0.001). Concerning experience, dwell time and revisits on the jugular vein on the ultrasound image (p = 0.02 and p = 0.04, respectively) and the cervical puncture location (p = 0.004 and p = 0.01, respectively) were decreased in more experienced ICU trainees. CONCLUSIONS: Various AOIs have distinct significance for ICU trainee doctors during CVC insertion. Experienced participants exhibited different gaze behavior, requiring less attention for preparation and handling tasks, emphasizing the importance of hand-eye coordination.
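Several records in this list analyse such AOI metrics with linear mixed-effects models. As a hedged illustration only (not the authors' analysis code; the data, column names and model formula are invented), a per-participant random-intercept model of dwell time could be fit along these lines with statsmodels:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x AOI (all values invented)
rng = np.random.default_rng(1)
aois = ["ultrasound_image", "puncture_site", "prep_table"]
rows = []
for pid in range(18):
    experienced = int(pid < 8)  # 8 experienced, 10 inexperienced, mirroring the study design
    for aoi in aois:
        dwell = rng.gamma(2.0, 10.0) - (4.0 * experienced if aoi != "prep_table" else 0.0)
        rows.append({"participant": pid, "aoi": aoi,
                     "experienced": experienced, "dwell_time": max(dwell, 0.0)})
df = pd.DataFrame(rows)

# Random intercept per participant; fixed effects for AOI, experience and their interaction
model = smf.mixedlm("dwell_time ~ aoi * experienced", df, groups=df["participant"])
print(model.fit().summary())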

3.
Front Psychol ; 14: 1169940, 2023.
Article in English | MEDLINE | ID: mdl-37325757

ABSTRACT

Teamwork is critical for safe patient care. Healthcare teams typically train teamwork in simulated clinical situations, which require the ability to measure teamwork via behavior observation. However, the required observations are prone to human biases and impose significant cognitive load even on trained instructors. In this observational study, we explored how eye tracking and pose estimation, two minimally invasive video-based technologies, may measure teamwork during simulation-based teamwork training in healthcare. Mobile eye tracking, measuring where participants look, and multi-person pose estimation, measuring 3D body and joint positions, were used to record 64 third-year medical students who completed a simulated handover case in teams of four. On the one hand, we processed the recorded data into an eye contact metric, based on eye tracking and relevant for situational awareness and communication patterns. On the other hand, we computed a distance-to-patient metric, based on multi-person pose estimation and relevant for team positioning and coordination. After data recording, we successfully processed the raw videos into these specific teamwork metrics. The average eye contact time was 6.46 s [min 0 s - max 28.01 s], while the average distance to the patient was 1.01 m [min 0.32 m - max 1.6 m]. Both metrics varied significantly between teams and between the simulated roles of participants (p < 0.001). Using these objective, continuous, and reliable metrics, we created visualizations illustrating the teams' interactions. Future research is necessary to generalize our findings and to examine how they may complement existing methods, support instructors, and contribute to the quality of teamwork training in healthcare.
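As a sketch of how such a pose-based metric might be computed (illustrative only; the study's own pipeline is not shown here), the distance-to-patient value can be taken as the mean frame-wise 3D distance between a reference keypoint of a team member and of the patient:

import numpy as np

def distance_to_patient(member_xyz, patient_xyz):
    """Mean 3D distance (metres) between a team member and the patient over all frames.

    member_xyz, patient_xyz: (n_frames, 3) arrays of one reference keypoint per frame
    (e.g. the pelvis), in a common metric coordinate system.
    """
    return float(np.linalg.norm(member_xyz - patient_xyz, axis=1).mean())

# Toy example: a member standing roughly 1 m away for 3 frames
member = np.array([[1.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.9, 0.1, 0.0]])
patient = np.zeros((3, 3))
print(round(distance_to_patient(member, patient), 2))  # ~1.0 m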

4.
Adv Simul (Lond) ; 8(1): 12, 2023 Apr 16.
Article in English | MEDLINE | ID: mdl-37061746

ABSTRACT

BACKGROUND: Cardiopulmonary resuscitation (CPR) training improves CPR skills while heavily relying on feedback. The quality of feedback can vary between experts, indicating a need for data-driven feedback to support experts. The goal of this study was to investigate pose estimation, a motion detection technology, to assess individual and team CPR quality with the arm angle and chest-to-chest distance metrics. METHODS: After mandatory basic life support training, 91 healthcare providers performed a simulated CPR scenario in teams. Their behaviour was simultaneously rated based on pose estimation and by experts. We assessed whether the arm was straight at the elbow, by calculating the mean arm angle, and how close the team members were to each other during chest compressions, by calculating the chest-to-chest distance. Both pose estimation metrics were compared with the expert ratings. RESULTS: The data-driven and expert-based ratings for the arm angle differed by 77.3%, and based on pose estimation, 13.2% of participants kept the arm straight. The chest-to-chest distance ratings by expert and by pose estimation differed by 20.7%, and based on pose estimation, 63.2% of participants were closer than 1 m to the team member performing compressions. CONCLUSIONS: Pose estimation-based metrics assessed learners' arm angles in more detail and their chest-to-chest distance comparably to expert ratings. Pose estimation metrics can provide educators with additional objective detail and allow them to focus on other aspects of the simulated CPR training, increasing the training's success and the participants' CPR quality. TRIAL REGISTRATION: Not applicable.
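For illustration only (not the study's implementation), the arm-angle metric can be computed from the shoulder, elbow and wrist keypoints returned by a pose estimator; an angle close to 180 degrees indicates a straight arm:

import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Angle (degrees) at the elbow between the shoulder-elbow and wrist-elbow vectors.

    Each argument is a 3D keypoint from a pose estimator; 180 deg means a fully straight arm.
    """
    u = shoulder - elbow
    v = wrist - elbow
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Nearly straight arm along the vertical axis
print(round(elbow_angle(np.array([0.0, 0.0, 0.6]),
                        np.array([0.0, 0.0, 0.3]),
                        np.array([0.0, 0.02, 0.0])), 1))  # ~176 deg (nearly straight)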

5.
Int J Comput Assist Radiol Surg ; 18(8): 1363-1371, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36808552

ABSTRACT

PURPOSE: Previous work has demonstrated the high accuracy of augmented reality (AR) head-mounted displays for pedicle screw placement in spinal fusion surgery. An important question that remains unanswered is how pedicle screw trajectories should be visualized in AR to best assist the surgeon. METHODOLOGY: We compared five AR visualizations displaying the drill trajectory via Microsoft HoloLens 2 with different configurations of abstraction level (abstract or anatomical), position (overlay or small offset), and dimensionality (2D or 3D) against standard navigation on an external screen. We tested these visualizations in a study with 4 expert surgeons and 10 novices (residents in orthopedic surgery) on lumbar spine models covered by Plasticine. We assessed trajectory deviations ([Formula: see text]) from the preoperative plan, dwell times (%) on areas of interest, and the user experience. RESULTS: Two AR visualizations resulted in significantly lower trajectory deviations (mixed-effects ANOVA, p<0.0001 and p<0.05) compared to standard navigation, whereas no significant differences were found between participant groups. The best ratings for ease of use and cognitive load were obtained with an abstract visualization displayed peripherally around the entry point and with a 3D anatomical visualization displayed with some offset. For visualizations displayed with some offset, participants spent on average only 20% of their time examining the entry point area. CONCLUSION: Our results show that real-time feedback provided by navigation can level task performance between experts and novices, and that the design of a visualization has a significant impact on task performance, visual attention, and user experience. Both abstract and anatomical visualizations can be suitable for navigation when not directly occluding the execution area. Our results shed light on how AR visualizations guide visual attention and the benefits of anchoring information in the peripheral field around the entry point.
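The trajectory deviation formula itself is elided in this record ("[Formula: see text]"); one common definition, shown here purely as a hypothetical sketch rather than the authors' metric, is the angle between the planned and executed drill directions:

import numpy as np

def trajectory_deviation_deg(planned, executed):
    """Angular deviation (degrees) between a planned and an executed drill trajectory,
    each given as a 3D direction vector."""
    cos_a = np.dot(planned, executed) / (np.linalg.norm(planned) * np.linalg.norm(executed))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

print(round(trajectory_deviation_deg(np.array([0.0, 0.0, 1.0]),
                                     np.array([0.05, 0.0, 1.0])), 2))  # ~2.86 deg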


Subject(s)
Augmented Reality , Pedicle Screws , Spinal Fusion , Surgery, Computer-Assisted , Humans , Surgery, Computer-Assisted/methods , Lumbar Vertebrae/surgery , Spinal Fusion/methods
6.
BMJ Qual Saf ; 32(1): 26-33, 2023 01.
Article in English | MEDLINE | ID: mdl-35260415

ABSTRACT

BACKGROUND: Patients in intensive care units are prone to the occurrence of medication errors. Look-alike, sound-alike drugs with similar drug names can lead to medication errors and therefore endanger patient safety. Capitalisation of distinct text parts in drug names might facilitate differentiation of medication labels. The aim of this study was to test whether the use of such 'tall man' lettering (TML) reduces the error rate and to examine effects on the visual attention of critical care nurses while identifying syringe labels. METHODS: This was a prospective, randomised in situ simulation conducted at the University Hospital Zurich, Zurich, Switzerland. Under observation by eye tracking, 30 nurses were given 10 successive tasks involving the presentation of a drug name and its selection from a dedicated set of 10 labelled syringes that included look-alike and sound-alike drug names, half of which had TML-coded labels. Error rate as well as dwell time, fixation count, fixation duration and revisits were analysed using a linear mixed-effects model analysis to compare TML-coded with non-TML-coded labels. RESULTS: TML coding of syringe labels led to a significant decrease in the error rate (from 5.3% (8 of 150 in non-TML-coded sets) to 0.7% (1 of 150 in TML-coded sets), p<0.05). Eye tracking further showed that TML affects visual attention, resulting in longer dwell time (p<0.01), more and longer fixations (p<0.05 and p<0.01, respectively) on the drug name as well as more frequent revisits (p<0.01) compared with non-TML-coded labels. Detailed analysis revealed that these effects were stronger for labels using TML in the mid-to-end position of the drug name. CONCLUSIONS: TML in drug names changes visual attention while identifying syringe labels and supports critical care nurses in preventing medication errors.


Subject(s)
Medication Errors , Syringes , Male , Humans , Prospective Studies , Medication Errors/prevention & control , Patient Safety , Drug Labeling/methods , Critical Care
7.
Behav Res Methods ; 54(1): 493-507, 2022 02.
Article in English | MEDLINE | ID: mdl-34258709

ABSTRACT

Eye tracking (ET) technology is increasingly utilized to quantify visual behavior in the study of the development of domain-specific expertise. However, the identification and measurement of distinct gaze patterns using traditional ET metrics have been challenging, and the insights gained have proven inconclusive about the nature of expert gaze behavior. In this article, we introduce an algorithmic approach for the extraction of object-related gaze sequences and determine task-related expertise by investigating the development of gaze sequence patterns during a multi-trial study of a simplified airplane assembly task. We demonstrate the algorithm in a study where novice (n = 28) and expert (n = 2) eye movements were recorded in successive trials (n = 8), allowing us to verify whether similar patterns develop with increasing expertise. In the proposed approach, AOI sequences were transformed into a string representation and processed using the k-mer method, well known from the field of computational biology. Our results for expertise development suggest that basic tendencies are visible in traditional ET metrics, such as the fixation duration, but are much more evident for k-mers of k > 2. With increased on-task experience, the appearance of expert k-mer patterns in novice gaze sequences was shown to increase significantly (p < 0.001). The results illustrate that the multi-trial k-mer approach is suitable for revealing specific cognitive processes and can quantify learning progress using gaze patterns that include both spatial and temporal information, which could provide a valuable tool for novice training and expert assessment.
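As a minimal sketch of the k-mer step (assuming each fixation's AOI has already been mapped to a single character; not the authors' code):

from collections import Counter

def kmer_counts(aoi_sequence, k=3):
    """Count all overlapping k-mers (subsequences of length k) in an AOI gaze sequence.

    aoi_sequence: one character per fixation, e.g. 'A' = fuselage, 'B' = wing, ...
    """
    return Counter(aoi_sequence[i:i + k] for i in range(len(aoi_sequence) - k + 1))

# Gaze sequence of an (illustrative) assembly trial
seq = "AABABCBCABAB"
print(kmer_counts(seq, k=3).most_common(3))  # [('ABA', 2), ('BAB', 2), ('AAB', 1)]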


Subject(s)
Eye Movements , Learning , Humans
8.
Front Med (Lausanne) ; 8: 681321, 2021.
Article in English | MEDLINE | ID: mdl-34568356

ABSTRACT

Introduction: Closed-loop ventilation modes are increasingly being used in intensive care units to ensure more automaticity. Little is known about the visual behavior of health professionals using these ventilation modes. The aim of this study was to analyze gaze patterns of intensive care nurses while ventilating a patient in the closed-loop mode with Intellivent adaptive support ventilation® (I-ASV) and to compare inexperienced with experienced nurses. Materials and Methods: Intensive care nurses underwent eye-tracking during daily care of a patient ventilated in the closed-loop ventilation mode. Five specific areas of interest were predefined (ventilator settings, ventilation curves, numeric values, oxygenation Intellivent, ventilation Intellivent). The main independent variable and primary outcome was dwell time. Secondary outcomes were revisits, average fixation time, first fixation and fixation count on areas of interest in a targeted tracking-time of 60 min. Gaze patterns were compared between I-ASV inexperienced (n = 12) and experienced (n = 16) nurses. Results: In total, 28 participants were included. Overall, dwell time was longer for ventilator settings and numeric values compared to the other areas of interest. Similar results could be obtained for the secondary outcomes. Visual fixation of oxygenation Intellivent and ventilation Intellivent was low. However, dwell time, average fixation time and first fixation on oxygenation Intellivent were longer in experienced compared to inexperienced intensive care nurses. Discussion: Gaze patterns of intensive care nurses were mainly focused on numeric values and settings. Areas of interest related to traditional mechanical ventilation retain high significance for intensive care nurses, despite use of closed-loop mode. More visual attention to oxygenation Intellivent and ventilation Intellivent in experienced nurses implies more routine and familiarity with closed-loop modes in this group. The findings imply the need for constant training and education with new tools in critical care, especially for inexperienced professionals.

9.
J Eye Mov Res ; 14(1)2021 May 19.
Article in English | MEDLINE | ID: mdl-34122747

ABSTRACT

Eye tracking (ET) has been shown to reveal the wearer's cognitive processes using the measurement of the central point of foveal vision. However, traditional ET evaluation methods have not been able to take into account the wearers' use of the peripheral field of vision. We propose an algorithmic enhancement to a state-of-the-art ET analysis method, the Object-Gaze Distance (OGD), which additionally allows the quantification of near-peripheral gaze behavior in complex real-world environments. The algorithm uses machine learning for area of interest (AOI) detection and computes the minimal 2D Euclidean pixel distance to the gaze point, creating a continuous gaze-based time series. Based on an evaluation of two AOIs in a real surgical procedure, the results show that a considerable increase in interpretable fixation data was achieved when incorporating the near-peripheral field of vision, from 23.8% to 78.3% for the AOI screw and from 4.5% to 67.2% for the AOI screwdriver. Additionally, the evaluation of a multi-OGD time series representation has shown the potential to reveal novel gaze patterns, which may provide a more accurate depiction of human gaze behavior in multi-object environments.
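A hedged sketch of the distance computation described above (the binary AOI masks are assumed to come from the machine-learning detector; this is not the published OGD code):

import numpy as np

def object_gaze_distance(mask, gaze_xy):
    """Minimal 2D Euclidean pixel distance from a gaze point to a binary AOI mask.

    mask: (H, W) boolean array from an object detector/segmenter; gaze_xy: (x, y) in pixels.
    Returns 0.0 if the gaze point falls on the object.
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return float("inf")  # object not visible in this frame
    gx, gy = gaze_xy
    return float(np.min(np.hypot(xs - gx, ys - gy)))

# Toy frame: a 10x10 object patch, gaze 5 px to the right of its edge
mask = np.zeros((100, 100), dtype=bool)
mask[40:50, 40:50] = True
print(object_gaze_distance(mask, (54.0, 45.0)))  # 5.0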

10.
Int J Comput Assist Radiol Surg ; 16(7): 1171-1180, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34023976

ABSTRACT

PURPOSE: Effective training of extracorporeal membrane oxygenation (ECMO) cannulation is key to fighting the persistently high mortality rate of ECMO interventions. Though augmented reality (AR) is a promising technology for improving information display, only a small percentage of AR projects have addressed training procedures. The present study investigates the potential benefits of AR-based, contextual instructions for ECMO cannulation training as compared to instructions used during conventional training at a university hospital. METHODOLOGY: An AR step-by-step guide was developed for the Microsoft HoloLens 2 that combines text, images, and videos from the conventional training program with simple 3D models. A study was conducted with 21 medical students performing two surgical procedures on a simulator. Participants were divided into two groups, with one group using the conventional instructions for the first procedure and AR instructions for the second, and the other group using the instructions in reverse order. Training times, a detailed error protocol, and a standardized user experience questionnaire (UEQ) were evaluated. RESULTS: AR-based execution was associated with slightly higher training times and with significantly fewer errors for the more complex second procedure ([Formula: see text], Mann-Whitney U). These differences were most pronounced for knowledge-related errors, resulting in a 66% reduction in the number of errors. AR instructions also led to significantly better ratings on 5 out of the 6 scales used in the UEQ, pointing to higher perceived clarity of information, information acquisition speed, and stimulation. CONCLUSION: The results extend previous research on AR instructions to ECMO cannulation training, indicating its high potential to improve training outcomes as a result of better information acquisition by participants during task execution. Future work should investigate how better performance in a single training session relates to better performance in the long run.


Subject(s)
Augmented Reality , Clinical Competence , Computer-Assisted Instruction/methods , Education, Medical/methods , Extracorporeal Membrane Oxygenation/education , Catheterization , Extracorporeal Membrane Oxygenation/methods , Humans , Students, Medical
11.
J Med Syst ; 45(5): 55, 2021 Mar 25.
Article in English | MEDLINE | ID: mdl-33768346

ABSTRACT

The handling of left ventricular assist devices (LVADs) can be challenging for patients and requires appropriate training. The devices' usability impacts patients' safety and quality of life. In this study, eye tracking-supported human factors testing was performed to reveal problems during use and to test the training's effectiveness. In total, 32 HeartWare HVAD patients (including 6 pre-VAD patients) and 3 technical experts as a control group performed a battery change (BC) and a controller change (CC), as an everyday and an emergency scenario, on a training device. By tracking the patients' gaze point, task duration and pump-off time were evaluated. Patients with LVAD support ≥1 year showed significantly shorter BC task durations than patients with LVAD support <1 year (p = 0.008). In contrast, their CC task durations (p = 0.002) and pump-off times (median = 12.35 s) were higher than those of patients with LVAD support <1 year (median = 5.3 s; p = 0.001). The shorter BC task duration for patients with LVAD support ≥1 year indicates that, with time, patients establish routines and gain confidence in using their device. The opposite effect was found for CC task duration and pump-off times. This implies the need for intermittent re-training of less frequent tasks to increase patients' safety.


Subject(s)
Heart Failure , Heart-Assist Devices , Eye-Tracking Technology , Humans , Quality of Life , Retrospective Studies , Time Factors
12.
J Clin Monit Comput ; 35(6): 1511-1518, 2021 12.
Article in English | MEDLINE | ID: mdl-33296061

ABSTRACT

Patient safety is a priority in healthcare, yet it is unclear how sources of errors should best be analyzed. Eye tracking is a tool used to monitor gaze patterns in medicine. The aim of this study was to analyze the distribution of visual attention among critical care nurses performing non-simulated, routine patient care on invasively ventilated patients in an ICU. ICU nurses were tracked bedside in daily practice. Eight specific areas of interest were pre-defined (respirator, drug preparation, medication, patient data management system, patient, monitor, communication and equipment/perfusors). The main independent variable and primary outcome was dwell time; secondary outcomes were hit ratio, revisits, fixation count and average fixation time on areas of interest over a targeted tracking time of 60 min. Twenty-eight ICU nurses were analyzed, and the average tracking time was 65.5 min. Dwell time was significantly higher for the respirator (12.7% of total dwell time), patient data management system (23.7% of total dwell time) and patient (33.4% of total dwell time) compared to the other areas of interest. A similar distribution was observed for fixation count (respirator 13.3%, patient data management system 25.8% and patient 31.3%). Average fixation time and revisits of the respirator were markedly elevated. Apart from the respirator, average fixation time was highest for the patient data management system, communication and equipment/perfusors. Eye tracking is helpful for analyzing the distribution of visual attention of critical care nurses. It demonstrates that the respirator, the patient data management system and the patient form cornerstones in the treatment of critically ill patients. This offers insights into complex work patterns in critical care and the possibility of improving workflows, avoiding human error and maximizing patient safety.
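The dwell-time shares reported in this and several neighbouring records amount to summed viewing time per AOI as a percentage of the total. A simplified sketch of that computation (treating dwell time as summed fixation time, with invented labels and durations):

from collections import defaultdict

def dwell_time_shares(fixations):
    """Dwell time per AOI as a percentage of total fixation time.

    fixations: iterable of (aoi_label, duration_seconds) pairs, e.g. from a mapped
    eye-tracking export.
    """
    totals = defaultdict(float)
    for aoi, dur in fixations:
        totals[aoi] += dur
    grand = sum(totals.values())
    return {aoi: 100.0 * t / grand for aoi, t in totals.items()}

fix = [("patient", 2.0), ("respirator", 1.0), ("patient", 1.5), ("monitor", 0.5)]
print(dwell_time_shares(fix))  # {'patient': 70.0, 'respirator': 20.0, 'monitor': 10.0}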


Subject(s)
Critical Care , Eye-Tracking Technology , Communication , Humans , Monitoring, Physiologic
13.
Clin Exp Rheumatol ; 38 Suppl 125(3): 137-139, 2020.
Article in English | MEDLINE | ID: mdl-32865166

ABSTRACT

OBJECTIVES: The assessment of digital ulcers (DUs) in systemic sclerosis (SSc) depends crucially on visual aspects. However, little is known about the differences in visual assessment of these wounds between experts and non-experts or medical lay persons (novices). To develop potential training recommendations for the visual assessment of digital ulcers in SSc, we analysed gaze pattern data during assessment of digital ulcers between assessors with different levels of expertise. METHODS: Gaze pattern data from 36 participants were collected with a mobile eye tracking device. Twenty pictures of digital ulcers of SSc patients were presented to each participant. The analysis comprised the scan path, the dwell times (for areas of interest), fixation counts and the entry time for each picture and subject. RESULTS: The most significant differences were found between novices and the medically educated groups. Dwell times in the wound area for novices differed statistically significantly from all medically educated groups (1.76-3.1 s less). Above all, novices had lower dwell times on the wound margin and wound surroundings and spent more time on other areas (up to 1.92 s longer). Correspondingly, they had fewer fixation points and longer overall scan paths in wound areas. Similar gaze pattern data were generated for the medically educated groups. CONCLUSIONS: For the first time, we were able to analyse the visual assessment of digital ulcers in SSc and to detect differences according to level of medical education and expertise. Adequate training on proper interpretation of changes in all wound areas is warranted to improve wound assessment in digital ulcers.


Subject(s)
Scleroderma, Systemic , Skin Ulcer , Fingers , Humans , Ulcer
14.
Minerva Anestesiol ; 86(11): 1180-1189, 2020 11.
Article in English | MEDLINE | ID: mdl-32643360

ABSTRACT

BACKGROUND: Patient safety is a top priority in healthcare. Little is known about the visual behavior of professionals during high-risk procedures. The aim of this study was to assess feasibility, usability and safety of eye-tracking to analyze gaze patterns during the extubation process in the intensive care unit. METHODS: Eye-tracking was used in this observational study to analyze the extubation process in 22 participants. Independent variables were average fixation time, dwell time, fixation count, hit ratio and revisit count for eighteen areas of interest. Primary outcome was dwell time for all areas of interest. Secondary outcomes were average fixation time, fixation count and revisits. In subgroup analyses, experienced and non-experienced physicians were compared. RESULTS: The most important area of interest was the patient, as analyzed by dwell time. Fixation of other areas of interest varied significantly among participants. Only 54% checked ventilator respiratory rate, despite declaring it as important in questionnaires. Other neglected areas of interest included tidal volume (59%), peak pressure (63.6%), CO2 (63.6%), temperature (18.2%), blood pressure (59%) and heart rate (68%). Experienced physicians gazed more frequently and longer at the patient while spending less time on monitor and ventilator parameters. CONCLUSIONS: Eye-tracking can demonstrate that there is a mismatch between physicians' subjective evaluations and corresponding objective real-life measurements. Structured and standardized extubation processes should be performed to improve patient safety. In the immediate postextubation phase, long dwell time on the patient shows that clinical observation remains the most important cornerstone beyond monitoring devices.


Subject(s)
Airway Extubation , Eye-Tracking Technology , Humans , Intensive Care Units , Monitoring, Physiologic , Pilot Projects
15.
JMIR Hum Factors ; 7(2): e15581, 2020 Jun 03.
Article in English | MEDLINE | ID: mdl-32490840

ABSTRACT

BACKGROUND: One approach to giving a wide range of people the opportunity to provide and support home care is to develop medical devices that are as user-friendly as possible. This allows nonexperts to use medical devices that would otherwise be too complicated for them. For a user-centric development of such medical devices, it is essential to understand which user interface design best supports patients, caregivers, and health care professionals. OBJECTIVE: Using the benefits of mobile eye tracking, this work aims to gain a deeper understanding of the challenges of user cognition. As a consequence, its goal is to identify the obstacles to the usability of the features of two different designs of a single medical device user interface. The medical device is a patient assistance device for home use in peritoneal dialysis therapy. METHODS: A total of 16 participants, with a subset of seniors (8/16, mean age 73.7 years) and young adults (8/16, mean age 25.0 years), were recruited and participated in this study. The handling cycle consisted of seven main tasks. Data analysis started with task effectiveness, in order to identify error-prone tasks. Subsequently, the in-depth gaze data analysis focused on these identified critical tasks. In order to understand the challenges of user cognition in critical tasks, gaze data were analyzed with respect to individual user interface features of the medical device system. It therefore focused on two gaze dimensions: dwell time and fixation duration. RESULTS: In total, 97% of the handling steps for design 1 and 96% for design 2 were performed correctly, with the main challenges being task 1 insert, task 2 connect, and task 6 disconnect for both designs. To represent these two dimensions simultaneously, the authors propose a new graphical representation. It distinguishes four different patterns to compare the eye movements associated with the two designs. The patterns identified for the critical tasks are consistent with the results of the task performance. CONCLUSIONS: This study showed that mobile eye tracking provides insights into information processing in intensive handling tasks related to individual user interface features. Evaluating each feature of the user interface promises an optimal design by combining the best-performing features. In this way, manufacturers are able to develop products that can be used by untrained people without prior knowledge. This would allow home care to be provided not only by highly qualified nurses and caregivers, but also by patients themselves, partners, children, or neighbors.

16.
Invest Radiol ; 55(7): 457-462, 2020 07.
Article in English | MEDLINE | ID: mdl-32149859

ABSTRACT

OBJECTIVES: Reducing avoidable radiation exposure during medical procedures is a top priority. The purpose of this study was to quantify, for the first time, the percentage of avoidable radiation during fluoroscopically guided cardiovascular interventions using eye tracking technologies. MATERIALS AND METHODS: Mobile eye tracking glasses were used to measure precisely when the operators looked at a fluoroscopy screen during the interventions. A novel machine learning algorithm and image processing techniques were used to automatically analyze the data and compute the percentage of avoidable radiation. Based on this percentage, the amount of potentially avoidable radiation dose was computed. RESULTS: This study included 30 cardiovascular interventions performed by 5 different operators. A significant percentage of the administered radiation (mean [SD], 43.5% [12.6%]) was avoidable (t(29) = 18.86, P < 0.00001); that is, the operators were not looking at the fluoroscopy screen while the x-ray was on. On average, this corresponded to avoidable amounts of air kerma (mean [SD], 229 [66] mGy) and dose area product (mean [SD], 32,781 [9420] mGy·cm²), or more than 11 minutes of avoidable x-ray usage, per procedure. CONCLUSIONS: A significant amount of the administered radiation during cardiovascular interventions is in fact avoidable.
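The underlying arithmetic is the fraction of beam-on time during which the operator's gaze was off the fluoroscopy screen, applied to the recorded dose. A hedged sketch with invented numbers (assuming dose scales roughly with beam-on time, which is a simplification, not the study's algorithm):

def avoidable_radiation(xray_on_s, gaze_on_screen_during_xray_s, dose_area_product):
    """Fraction of beam-on time without gaze on the fluoroscopy screen, and the
    corresponding avoidable share of the dose area product (same unit as the input)."""
    avoidable_fraction = 1.0 - gaze_on_screen_during_xray_s / xray_on_s
    return avoidable_fraction, avoidable_fraction * dose_area_product

# Illustrative numbers only: 25 min beam-on, 14 min of it with gaze on the screen
frac, avoidable_dap = avoidable_radiation(25 * 60, 14 * 60, 75_000.0)
print(f"{frac:.0%} avoidable, {avoidable_dap:.0f} of 75000 dose-area-product units")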


Subject(s)
Eye-Tracking Technology , Fluoroscopy , Radiation Exposure/prevention & control , Radiography, Interventional , Aged , Algorithms , Female , Humans , Machine Learning , Male , Occupational Exposure/prevention & control , Radiation Dosage
17.
J Med Syst ; 44(1): 12, 2019 Dec 06.
Article in English | MEDLINE | ID: mdl-31807889

ABSTRACT

The aim was to gain insights into the visual behaviour and the perceptual skills of operators during catheter-based cardiovascular interventions (CBCVIs). A total of 33 CBCVIs were performed at the University Hospital Zurich by five operators, two experts and three novices, while wearing eye tracking glasses. The visual attention distribution on three areas of interest (AOIs), the "Echo screen", the "Fluoro screen" and the "Patient", was analysed for the transseptal puncture procedure. Clear visual behaviour patterns were observable in all cases. There was a significant difference in the visual attention distribution of the experts compared to the novices. Experts spent 79% of their dwell time on the Echo screen and 17% on the Fluoro screen, whereas novices spent 52% on the Echo screen and 40% on the Fluoro screen. Additionally, results showed that experts focused their gaze on smaller areas than novices during critical interventional actions. Operators seem to exhibit identifiable visual behaviour patterns for CBCVIs. These identifiable patterns were significantly different between the expert and the novice operators. This indicates that the visual behaviour of operators could be employed to assist the transfer of experts' perceptual skills to novices and to develop tools for objective performance assessment.


Subject(s)
Cardiovascular Diseases/surgery , Catheterization , Clinical Competence , Eye Movements , Surgeons , Humans , Male , Switzerland
18.
Expert Opin Drug Deliv ; 16(2): 163-175, 2019 02.
Article in English | MEDLINE | ID: mdl-30577710

ABSTRACT

BACKGROUND: Increasing interest in digitally enhanced drug delivery tools urges both industry and academia to rethink current approaches to product usability testing. This article introduces mobile eye-tracking, generating detailed contextual data about user engagement with connected self-injection systems as a new methodological approach to formative usability assessment. METHODS: A longitudinal case study with a total of 35 injection-naïve participants was conducted. In three consecutive experiments, eye-tracking was applied to formative usability testing of a novel connected self-injection device. Three eye-tracking derived usability indicators were established to assess product effectiveness, efficiency, and ease of use. RESULTS: Analysis of the data revealed events of user hesitation, process interruption and unintended action, and these occurrences could either be completely eliminated or significantly reduced throughout the process (product effectiveness). At the same time, the overall use duration decreased from 86.1 to 58.7 sec (product efficiency). Analysis revealed that product modifications successfully guided user attention to those interface elements most relevant during each task, thereby improving product ease-of-use. CONCLUSIONS: The step-wise improvement in the usability indicators demonstrates that iteratively applying eye-tracking methods effectively supports the user-centered design of connected self-injection systems. The results highlight how eye-tracking can be employed to gain an in-depth understanding of patient engagement with novel healthcare technologies.


Subject(s)
Eye Movement Measurements , User-Computer Interface , Adult , Humans , Longitudinal Studies
19.
J Eye Mov Res ; 11(6)2018 Dec 10.
Article in English | MEDLINE | ID: mdl-33828716

ABSTRACT

For an in-depth, AOI-based analysis of mobile eye tracking data, a preceding gaze assignment step is inevitable. Current solutions such as manual gaze mapping or marker-based approaches are tedious and not suitable for applications manipulating tangible objects. This makes mobile eye tracking studies with several hours of recording difficult to analyse quantitatively. We introduce a new machine learning-based algorithm, the computational Gaze-Object Mapping (cGOM), which automatically maps gaze data onto respective AOIs. cGOM extends state-of-the-art object detection and segmentation by Mask R-CNN with a gaze mapping feature. The new algorithm's performance is validated against a manual fixation-by-fixation mapping, which is considered the ground truth, in terms of true positive rate (TPR), true negative rate (TNR) and efficiency. Using only 72 training images with 264 labelled object representations, cGOM is able to reach a TPR of approx. 80% and a TNR of 85% compared to the manual mapping. The break-even point is reached at 2 hours of eye tracking recording for the total procedure, or 1 hour when considering human working time only. Together with the real-time capability of the mapping process after training is complete, even hours of eye tracking recording can be evaluated efficiently. (Code and video examples have been made available at: https://gitlab.ethz.ch/pdz/cgom.git).
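The core mapping idea, stripped of the detection step, is to assign each fixation to the AOI whose segmentation mask contains the gaze point. A hedged sketch assuming masks are already available (e.g. from Mask R-CNN); this is not the published cGOM code from the repository above:

from typing import Optional
import numpy as np

def map_gaze_to_aoi(masks, gaze_xy) -> Optional[str]:
    """Assign a fixation to the AOI whose segmentation mask contains the gaze point.

    masks: dict mapping AOI label -> (H, W) boolean mask for the current video frame.
    gaze_xy: (x, y) gaze position in pixel coordinates.
    Returns the matching label, or None if the gaze hits no detected object.
    """
    x, y = gaze_xy
    for label, mask in masks.items():
        if 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1] and mask[y, x]:
            return label
    return None

frame_masks = {"screw": np.zeros((480, 640), dtype=bool)}
frame_masks["screw"][100:120, 300:330] = True
print(map_gaze_to_aoi(frame_masks, (310, 110)))  # 'screw'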
