Results 1 - 2 of 2
1.
Mil Med ; 188(Suppl 6): 412-419, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37948233

ABSTRACT

INTRODUCTION: Remote military operations require rapid response times for effective relief and critical care. Yet the military theater operates under austere conditions: communication links are unreliable and subject to physical and virtual attacks and degradation at unpredictable times. Immediate medical care at these austere locations requires semi-autonomous teleoperated systems, which enable the completion of medical procedures even over interrupted networks while isolating medics from the dangers of the battlefield. However, to achieve autonomy for complex surgical and critical care procedures, robots require extensive programming or massive libraries of surgical skill demonstrations to learn effective policies using machine learning algorithms. Although such datasets are achievable for simple tasks, providing a large number of demonstrations for surgical maneuvers is not practical. This article presents a method for learning from demonstration that combines knowledge from demonstrations to eliminate reward shaping in reinforcement learning (RL). In addition to reducing the data required for training, the self-supervised nature of RL, in conjunction with expert knowledge-driven rewards, produces more generalizable policies tolerant of dynamic environment changes. A multimodal representation of interaction enables learning complex contact-rich surgical maneuvers. The effectiveness of the approach is demonstrated on the cricothyroidotomy task, a standard critical care procedure for opening the airway. We also provide a method for segmenting the teleoperator's demonstration into subtasks and classifying the subtasks using sequence modeling.

MATERIALS AND METHODS: A database of demonstrations for the cricothyroidotomy task was collected, comprising six fundamental maneuvers referred to as surgemes. The dataset was collected by teleoperating a collaborative robotic platform, SuperBaxter, with modified surgical grippers. Two learning models were then developed for processing the dataset: one for automatic segmentation of the task demonstrations into a sequence of surgemes and a second for classifying each segment into labeled surgemes. Finally, a multimodal off-policy RL method with rewards learned from demonstrations was developed to learn surgeme execution from these demonstrations.

RESULTS: The task segmentation model has an accuracy of 98.2%. The surgeme classification model using the proposed interaction features achieved a classification accuracy of 96.25% averaged across all surgemes, compared to 87.08% without these features and 85.4% using a support vector machine classifier. Finally, the robot execution achieved a task success rate of 93.5%, compared to baselines of behavioral cloning (78.3%) and a twin-delayed deep deterministic policy gradient with shaped rewards (82.6%).

CONCLUSIONS: The results indicate that the proposed interaction features improve the accuracy of segmentation and classification of surgical tasks. The proposed method for learning surgemes from demonstrations exceeds popular skill-learning methods. The effectiveness of the approach demonstrates the potential for future remote telemedicine on battlefields.
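The abstract's central idea of replacing hand-shaped rewards with rewards derived from demonstrations can be illustrated with a minimal sketch: score a state by its proximity to states seen in expert demonstrations, so the RL agent is rewarded for staying near demonstrated behavior. All names, the 2-D toy states, and the Gaussian-kernel form here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def demo_proximity_reward(state, demo_states, bandwidth=0.5):
    """Hypothetical demonstration-derived reward: closeness of `state`
    to the nearest demonstrated state under a Gaussian kernel."""
    dists = np.linalg.norm(demo_states - state, axis=1)
    return float(np.exp(-(dists.min() ** 2) / (2 * bandwidth ** 2)))

# Toy demonstration set (2-D states purely for illustration)
demos = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])

on_path = demo_proximity_reward(np.array([1.1, 0.9]), demos)
off_path = demo_proximity_reward(np.array([5.0, -3.0]), demos)
# States near the demonstrations score higher than states far from them
assert on_path > off_path
```

Such a reward requires no task-specific shaping: it is computed entirely from the demonstration data, which is the property the abstract credits for reducing programming effort.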


Subject(s)
Robotics; Surgery, Computer-Assisted; Humans; Robotics/methods; Algorithms; Surgery, Computer-Assisted/methods; Machine Learning
2.
Rob Auton Syst ; 147: 103919, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34703078

ABSTRACT

Coexisting with the current COVID-19 pandemic is a global reality that brings unique challenges impacting daily interactions, business, and facility maintenance. An accompanying monumental challenge is the continuous and effective disinfection of shared spaces such as office/school buildings, elevators, classrooms, and cafeterias. Although ultraviolet light and chemical sprays are routinely used for indoor disinfection, they irritate humans and hence can only be applied when a facility is unoccupied. Stationary air filtration systems, while irritation-free and commonly available, fail to protect all occupants because of limitations in air circulation and diffusion. Hence, we present a novel collaborative robot (cobot) disinfection system equipped with a Bernoulli Air Filtration Module, designed to minimize disturbance to the surrounding airflow while remaining maneuverable among occupants for maximum coverage. The influence of robotic air filtration on the dosage received by neighbors of a coughing source is analyzed with derivations from a Computational Fluid Dynamics (CFD) simulation. Based on this analysis, a novel occupant-centric online rerouting algorithm decides the path of the robot, ensuring air filtration that minimizes risk to occupants given their detected layout. The proposed system was tested on a 2 × 3 seating grid (empty seats allowed) in a classroom, with the worst-case dosage over all occupants as the metric. The system reduced the worst-case dosage by 26% compared to a stationary air filtration system with the same flow rate, and by 19% compared to a robotic air filtration system that traverses all the seats without occupant-centric path planning. These results validate the effectiveness of the proposed robotic air filtration system.
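The occupant-centric selection criterion described above, minimizing the worst-case dosage over all occupants, can be sketched as a simple minimax choice over candidate routes. This is not the authors' algorithm: the dosage matrix here is made-up, whereas in the paper per-route dosages derive from the CFD analysis; only the minimax selection step is illustrated.

```python
import numpy as np

def pick_route(dosage):
    """dosage[r][o] = predicted dosage at occupant o if route r is taken.
    Return the index of the route minimizing the worst-case dosage."""
    worst_case = dosage.max(axis=1)   # worst-off occupant for each route
    return int(worst_case.argmin())   # route with the smallest worst case

# 3 candidate routes x 4 occupants (hypothetical predicted dosages)
d = np.array([[3.0, 1.0, 2.5, 2.0],
              [1.5, 1.8, 1.6, 1.9],
              [0.5, 4.0, 0.7, 0.9]])

best = pick_route(d)  # route 1: its worst case (1.9) beats 3.0 and 4.0
```

A minimax objective matches the paper's chosen metric (worst-case dosage among all occupants) rather than, say, minimizing the average dosage, which could leave one occupant poorly protected.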
