Results 1 - 2 of 2
1.
Sci Rep. 2022 Mar 10;12(1):4200.
Article in English | MEDLINE | ID: mdl-35273296

ABSTRACT

Medical training simulators can provide a safe and controlled environment for medical students to practise their physical examination skills. An important source of information for physicians is the visual feedback of involuntary pain facial expressions in response to physical palpation of an affected area of a patient. However, most existing robotic medical training simulators that capture physical examination behaviours in real time cannot display facial expressions, and they offer only a limited range of patient identities in terms of ethnicity and gender. Together, these limitations restrict the utility of medical training simulators because they do not provide medical students with a representative sample of pain facial expressions and face identities, which could result in biased practices. Further, these limitations restrict the ability of such simulators to detect and correct early signs of bias in medical training. Here, for the first time, we present a robotic system that can simulate facial expressions of pain in response to palpations, displayed on a range of patient face identities. We take the unique approach of modelling dynamic pain facial expressions using a data-driven, perception-based psychophysical method combined with the visuo-haptic inputs of users performing palpations on a robotic medical simulator. Specifically, participants performed palpation actions on the abdomen phantom of a simulated patient, which triggered the real-time display of six pain-related facial Action Units (AUs) on a robotic face (MorphFace), each controlled by two pseudo-randomly generated transient parameters: rate of change and activation delay. Participants then rated the appropriateness of the facial expression displayed in response to their palpations on a 4-point scale from "strongly disagree" to "strongly agree". Each participant (n = 16: 4 Asian females, 4 Asian males, 4 White females and 4 White males) performed 200 palpation trials on 4 patient identities (Black female, Black male, White female and White male) simulated using MorphFace. Results showed that the facial expressions rated most appropriate by all participants combined a higher rate of change with a shorter delay from upper-face AUs (around the eyes) to those in the lower face (around the mouth). In contrast, the transient parameter values of the pain facial expressions rated most appropriate, the palpation forces, and the delays between palpation actions varied across participant-simulated patient pairs according to gender and ethnicity. These findings suggest that gender and ethnicity biases affect palpation strategies and the perception of pain facial expressions displayed on MorphFace. We anticipate that our approach will be used to generate physical examination models with diverse patient demographics, to reduce erroneous judgments by medical students, and to provide focused training to address these errors.
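The AU parameterisation described above lends itself to a compact illustration. The following is a minimal Python sketch of how six pain-related AUs might each be driven by a pseudo-randomly drawn rate of change and activation delay per palpation trial; the AU set, parameter ranges, ramp shape, and frame rate are all assumptions made for illustration, not the paper's actual generator or the MorphFace interface.

import numpy as np

# Pain-related AUs (assumed set, after Prkachin and Solomon's pain AUs;
# the paper's exact six AUs may differ).
AUS = ["AU4", "AU6", "AU7", "AU9", "AU10", "AU43"]

rng = np.random.default_rng(seed=1)

def sample_trial_parameters():
    """Draw a pseudo-random (rate, delay) pair for each AU for one trial.
    Ranges are illustrative assumptions."""
    return {au: {"rate": rng.uniform(0.5, 4.0),    # activation units per second
                 "delay": rng.uniform(0.0, 1.0)}   # seconds after palpation onset
            for au in AUS}

def au_activation(t, rate, delay):
    """Linear ramp from 0 to 1: starts `delay` seconds after the palpation
    event and rises with slope `rate`, clipped to [0, 1]."""
    return float(np.clip(rate * (t - delay), 0.0, 1.0))

# One simulated trial: stream AU activations at 30 Hz for 2 s
# (frame rate and duration are also assumptions).
params = sample_trial_parameters()
for t in np.arange(0.0, 2.0, 1.0 / 30.0):
    frame = {au: au_activation(t, p["rate"], p["delay"])
             for au, p in params.items()}
    # here `frame` would be sent to the robotic face renderer

A rating trial would then record the participant's 4-point appropriateness response alongside the sampled (rate, delay) pairs for later analysis.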


Subject(s)
Robotic Surgical Procedures, Robotics, Facial Expression, Female, Humans, Male, Pain, Palpation
2.
Int J Bioprint. 2021;7(4):417.
Article in English | MEDLINE | ID: mdl-34805596

ABSTRACT

Respiratory protective equipment (RPE) is traditionally designed through anthropometric sizing to enable mass production. However, this can lead to long-standing problems of low compliance, severe skin trauma, and higher fit-test failure rates among certain demographic groups, particularly females and non-white ethnic groups. Additive manufacturing could be a viable solution for producing custom-fitted RPE, but the manual design process is time-consuming, cost-prohibitive, and unscalable for mass customization. This paper proposes an automated design pipeline that generates computer-aided design models of custom-fit RPE from unprocessed three-dimensional (3D) facial scans. The pipeline successfully processed 197 of 205 facial scans in under 2 min per scan. The average and maximum geometric errors of the mask were 0.62 mm and 2.03 mm, respectively. No statistically significant differences in mask fit were found between male and female, Asian and White, White and other ethnicity, healthy-weight and overweight, overweight and obese, or middle-aged and senior groups.
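The reported average and maximum geometric errors suggest a surface-distance metric between the generated mask and the source facial scan. Below is a minimal Python sketch of one such metric, the nearest-neighbour distance between two point clouds; the function name, the use of SciPy's k-d tree, and the synthetic inputs are illustrative assumptions, not the paper's actual error definition.

import numpy as np
from scipy.spatial import cKDTree

def geometric_error(mask_pts, face_pts):
    """Mean and max nearest-neighbour distance (in the scans' units,
    e.g. mm) from each mask surface point to the reference facial scan.
    Illustrative metric only; the paper's exact error computation is
    not reproduced here."""
    distances, _ = cKDTree(face_pts).query(mask_pts)
    return distances.mean(), distances.max()

# Example with synthetic point clouds standing in for real scans.
face_pts = np.random.default_rng(0).uniform(0.0, 100.0, size=(5000, 3))
mask_pts = face_pts[:2000] + 0.5  # offset subset as a stand-in "mask"
avg_err, max_err = geometric_error(mask_pts, face_pts)
print(f"average error: {avg_err:.2f} mm, maximum error: {max_err:.2f} mm")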
