1.
Surg Today; 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38607395

ABSTRACT

PURPOSE: We performed a conversation analysis of the speech of the surgical team during three-dimensional (3D)-printed liver model navigation for thrice or more repeated hepatectomy (TMRH).

METHODS: Seventeen patients underwent 3D-printed liver navigation surgery for TMRH. After the utterances recorded during surgery were transcribed, they were coded by utterer, utterance object, utterance content, sensor, and surgical process. We then analyzed the utterances and clarified the association between the surgical process and the conversation conducted while the 3D-printed liver was referenced intraoperatively.

RESULTS: In total, 130 conversations comprising 1648 segments were recorded. Utterance coding showed that operator/assistant, 3D-printed liver/real liver, fact check (F)/plan check (Pc), visual check/tactile check, and confirmation of the planned resection or preservation target (T)/confirmation of the planned or ongoing resection line (L) accounted for 791/857, 885/763, 1148/500, 1208/440, and 1304/344 segments, respectively. The proportions of utterances by assistants, of F, of F of T on the 3D-printed liver, of F of T on the real liver, and of Pc of L on the 3D-printed liver were significantly higher during non-expert surgeries than during expert surgeries. Confirming the surgical process with both the 3D-printed liver and the real liver and planning with the 3D-printed liver facilitated the safe implementation of TMRH, regardless of the surgeon's experience.

CONCLUSIONS: Using a unique conversation analysis, the present study provides the first evidence of the clinical value of a 3D-printed liver for the anatomical guidance of non-expert surgeons during TMRH.
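As a rough illustration of how coded transcript segments of this kind could be tallied along the coding dimensions named in the abstract, the minimal sketch below counts segments per category; the data layout and example labels are hypothetical and are not taken from the study.

```python
from collections import Counter

# Hypothetical coded transcript segments: each segment is tagged along the
# dimensions described in the abstract (utterer, object, content, sensor, process).
segments = [
    {"utterer": "operator", "object": "3D-printed liver", "content": "fact check",
     "sensor": "visual", "process": "confirmation of target (T)"},
    {"utterer": "assistant", "object": "real liver", "content": "plan check",
     "sensor": "tactile", "process": "confirmation of resection line (L)"},
    # ... one entry per coded segment ...
]

# Tally how many segments fall into each category along every coding dimension.
tallies = {
    dimension: Counter(segment[dimension] for segment in segments)
    for dimension in ("utterer", "object", "content", "sensor", "process")
}

for dimension, counts in tallies.items():
    print(dimension, dict(counts))
```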

2.
Front Psychol; 11: 2149, 2020.
Article in English | MEDLINE | ID: mdl-33123033

ABSTRACT

This paper presents a cognitive model that simulates the process of adapting to automation in a time-critical task. The paper uses a simple tracking task (representing vehicle operation) to reveal how reliance on automation changes as the success probabilities of the automatic and manual modes vary. The model was developed using a cognitive architecture, ACT-R (Adaptive Control of Thought-Rational). We also introduce two reinforcement-learning methods: the summation of rewards over time and a gating mechanism. The model performs the task through productions that manage perception and motor control. The utility values of these productions are updated based on rewards in every perception-action cycle. A run of this model reproduced the overall trends of the behavioral data, such as performance (tracking accuracy), the automation-use ratio, and the number of switches between the two modes, suggesting some validity of the assumptions made in our model. This work shows how combining different paradigms of cognitive modeling can lead to practical representations of, and solutions to, automation and trust in automation.
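As a rough illustration of the utility-updating mechanism described above, the sketch below simulates repeated choice between an automatic and a manual mode, updating each mode's utility from the reward received on every perception-action cycle with the standard ACT-R utility-learning rule U ← U + α(R − U). The success probabilities, learning rate, noise level, and reward values are hypothetical and are not taken from the paper; the noisy selection step only approximates ACT-R's noisy utility-based conflict resolution.

```python
import random

# Hypothetical parameters (not from the paper): learning rate, selection noise,
# and the success probability of each control mode.
ALPHA = 0.2                                   # utility learning rate
NOISE = 0.25                                  # spread of noise added at selection time
SUCCESS_P = {"auto": 0.8, "manual": 0.6}      # chance each mode succeeds on a cycle
REWARD = {"success": 1.0, "failure": -1.0}

utility = {"auto": 0.0, "manual": 0.0}
switches, auto_uses, previous = 0, 0, None

for cycle in range(1000):                     # perception-action cycles
    # Pick the mode with the highest noisy utility (ACT-R-style noisy selection).
    mode = max(utility, key=lambda m: utility[m] + random.gauss(0, NOISE))

    auto_uses += mode == "auto"
    switches += previous is not None and mode != previous
    previous = mode

    # Outcome of this cycle and the resulting reward.
    reward = REWARD["success"] if random.random() < SUCCESS_P[mode] else REWARD["failure"]

    # ACT-R utility learning: U <- U + alpha * (R - U).
    utility[mode] += ALPHA * (reward - utility[mode])

print(f"auto-use ratio: {auto_uses / 1000:.2f}, switches: {switches}")
```

Tracking the automation-use ratio and the number of mode switches in this way mirrors the behavioral measures the abstract says the model was compared against.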
