Results 1 - 20 of 46
1.
Article in English | MEDLINE | ID: mdl-38923550

ABSTRACT

BACKGROUND AND AIM: Hepatocellular carcinoma (HCC) diagnosis mainly relies on its pathognomonic radiological profile, obviating the need for biopsy. Efforts to incorporate artificial intelligence (AI) techniques in HCC aim to improve the performance of image recognition. Herein, we thoroughly analyze and evaluate proposed AI models in the field of HCC diagnosis. METHODS: A comprehensive review of the literature was performed utilizing the MEDLINE/PubMed and Web of Science databases, with a search end date of September 30, 2023. The MeSH terms "Artificial Intelligence," "Liver Cancer," "Hepatocellular Carcinoma," "Machine Learning," and "Deep Learning" were searched in the title and/or abstract. All references of the obtained articles were also evaluated for additional relevant information. RESULTS: Our search resulted in 183 studies meeting our inclusion criteria. Across all diagnostic modalities, the reported area under the curve (AUC) of most developed models surpassed 0.900. A B-mode ultrasound (US) model and a contrast-enhanced US model achieved AUCs of 0.947 and 0.957, respectively. Regarding the more challenging task of classifying hepatic malignant lesions, a 2021 deep learning model trained on CT scans achieved an AUC of 0.986. Finally, an MRI machine learning model developed in 2021 displayed an AUC of 0.975 when differentiating small HCCs from benign lesions, while another MRI-based model achieved HCC diagnosis with an AUC of 0.970. CONCLUSIONS: AI tools may lead to significant improvement in the diagnostic management of HCC. Many models performed better than or comparably to experienced radiologists while proving capable of elevating radiologists' accuracy, demonstrating promising results for AI implementation in HCC-related diagnostic tasks.

2.
Int J Med Robot ; 20(2): e2632, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38630888

ABSTRACT

BACKGROUND: Real-time prediction of the remaining surgery duration (RSD) is important for optimal scheduling of resources in the operating room. METHODS: We focus on the intraoperative prediction of RSD from laparoscopic video. An extensive evaluation of seven common deep learning models, a proposed model based on the Transformer architecture (TransLocal), and four baseline approaches is presented. The proposed pipeline includes a CNN-LSTM for feature extraction from salient regions within short video segments and a Transformer with local attention mechanisms. RESULTS: Using the Cholec80 dataset, TransLocal yielded the best performance (mean absolute error (MAE) = 7.1 min). For long and short surgeries, the MAE was 10.6 and 4.4 min, respectively. Thirty minutes before the end of surgery, the MAE was 6.2 min overall, and 7.2 and 5.5 min for long and short surgeries, respectively. CONCLUSIONS: The proposed technique achieves state-of-the-art results. In the future, we aim to incorporate intraoperative indicators and preoperative data.
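The local-attention idea described above can be illustrated with a minimal sketch: plain dot-product self-attention over per-clip feature vectors, masked so each time step attends only to a small temporal window. All names and sizes here are illustrative, not the paper's actual architecture.

```python
import numpy as np

def local_attention(x, window=3):
    """Single-head self-attention restricted to a local temporal window.

    x: (T, d) array of per-clip feature vectors. Each time step attends
    only to neighbours within `window` steps, mirroring the local-attention
    idea in the abstract (a sketch, not the published model).
    """
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)                      # (T, T) dot-product scores
    idx = np.arange(T)
    mask = np.abs(idx[:, None] - idx[None, :]) > window
    scores[mask] = -np.inf                             # block attention outside window
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                  # softmax over allowed positions
    return w @ x                                       # (T, d) context vectors

rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 8))                       # 10 clips, 8-dim features each
ctx = local_attention(feats, window=2)
print(ctx.shape)
```

Restricting the attention span keeps the cost linear in video length for a fixed window, which matters for long intraoperative recordings.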


Subjects
Laparoscopy , Humans , Operating Rooms , Electric Power Supplies
3.
J Pediatr Intensive Care ; 12(4): 264-270, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37970142

ABSTRACT

Patent ductus arteriosus (PDA) has been associated with increased morbidity and mortality in preterm infants. Surgical ligation (SL) is generally performed in symptomatic infants when medical management is contraindicated or has failed. We retrospectively reviewed our institution's experience with surgical management of PDA in extremely low birth weight (ELBW) infants without chest tube placement, assessing its efficiency and safety. We evaluated 17 consecutive ELBW infants undergoing SL for symptomatic PDA (January 2012-January 2018) with subsequent follow-up for 6 months postdischarge. Patients consisted of 9 (53%) females and 8 (47%) males. Mean gestational age (GA) at birth was 27.9 ± 2.1 weeks. Median values were 10 days for surgical age (SA) from birth to operation (interquartile range [IQR]: 8-12); 3.4 mm for PDA diameter (IQR: 3.2-3.5); 750 g for surgical weight (SW) (IQR: 680-850); and 22 days for days of mechanical ventilation (DMV) as estimated by the Kaplan-Meier curve (95% confidence interval: 14.2-29.8). We observed a statistically significant negative association between DMV and GA at birth (rho = -0.587, p = 0.017), SA (rho = -0.629, p = 0.009), and SW (rho = -0.737, p = 0.001). One patient experienced left laryngeal nerve palsy confirmed by laryngoscopy. Otherwise, there were no adverse events, including surgery-related mortality, recurrence of PDA, or need for chest tube placement, during follow-up. SL of PDA in ELBW infants without chest tube placement is both efficient and safe. Universal consensus recommendations for the management of PDA in ELBW neonates are needed. Further study is required regarding the use of the less invasive option of percutaneous PDA closure in ELBW infants.

4.
Bioengineering (Basel) ; 9(12)2022 Nov 29.
Article in English | MEDLINE | ID: mdl-36550943

ABSTRACT

In this study, we propose a deep learning framework and a self-supervision scheme for video-based surgical gesture recognition. The proposed framework is modular. First, a 3D convolutional network extracts feature vectors from video clips to encode spatial and short-term temporal features. Second, the feature vectors are fed into a transformer network to capture long-term temporal dependencies. Two main models are proposed, based on the backbone framework: C3DTrans (supervised) and SSC3DTrans (self-supervised). The dataset consisted of 80 videos from two basic laparoscopic tasks: peg transfer (PT) and knot tying (KT). To examine the potential of self-supervision, the models were trained on 60% and 100% of the annotated dataset. In addition, the best-performing model was evaluated on the JIGSAWS robotic surgery dataset. The best model (C3DTrans) achieved clip-level accuracies of 88.0% and 95.2%, and gesture-level accuracies of 97.5% and 97.9%, for PT and KT, respectively. SSC3DTrans performed similarly to C3DTrans when trained on 60% of the annotated dataset (about 84% and 93% clip-level accuracy for PT and KT, respectively). The performance of C3DTrans on JIGSAWS was close to 76% accuracy, similar to or higher than that of prior techniques based on a single video stream, no additional video training, and online processing.
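The abstract reports both clip-level and gesture-level accuracies. One plausible way to derive gesture-level predictions from clip-level ones is a majority vote over the clips belonging to each gesture; the sketch below illustrates that idea (the actual aggregation used in the paper is not specified here, and all data are illustrative).

```python
import numpy as np

def gesture_level_accuracy(clip_preds, clip_labels, gesture_ids):
    """Aggregate clip-level predictions into gesture-level predictions by
    majority vote, assuming one ground-truth label per gesture."""
    correct, total = 0, 0
    for g in np.unique(gesture_ids):
        sel = gesture_ids == g
        pred = np.bincount(clip_preds[sel]).argmax()   # majority vote over clips
        correct += int(pred == clip_labels[sel][0])    # one label per gesture
        total += 1
    return correct / total

# Illustrative clip predictions for three gestures (ids 0, 1, 2)
clip_preds  = np.array([0, 0, 1, 1, 1, 2, 2, 0, 2])
clip_labels = np.array([0, 0, 0, 0, 0, 2, 2, 2, 2])
gesture_ids = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])
acc = gesture_level_accuracy(clip_preds, clip_labels, gesture_ids)
print(acc)  # gesture 1 is misclassified, so 2 of 3 gestures are correct
```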

5.
Int J Med Robot ; 18(6): e2445, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35942601

ABSTRACT

BACKGROUND: We present an artificial intelligence framework for vascularity classification of the gallbladder (GB) wall from intraoperative images of laparoscopic cholecystectomy (LC). METHODS: A two-stage Multiple Instance Convolutional Neural Network is proposed. First, a convolutional autoencoder is trained to extract feature representations from 4585 patches of GB images. The second model includes a multi-instance encoder that fetches random patches from a GB region and outputs an equal number of embeddings, which feed a multi-input classification module employing pooling and self-attention mechanisms to perform the prediction. RESULTS: The evaluation was performed on 234 GB images of low and high vascularity from 68 LC videos. A thorough comparison with various state-of-the-art multi-instance and single-instance learning algorithms was performed for two experimental tasks: image- and video-level classification. The proposed framework showed the best performance, with an accuracy of 92.6%-93.2% and an F1 score of 93.5%-93.9%, close to the agreement of two expert evaluators (94%). CONCLUSIONS: The proposed technique provides a novel approach to classifying LC operations with respect to the vascular pattern of the GB wall.
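The multi-input classification module combines per-patch embeddings via pooling and attention. A common form of attention-based multiple-instance pooling, sketched below in plain NumPy, weights each patch embedding by a learned score before summing; V and w stand in for trained parameters, and all shapes are illustrative rather than the paper's.

```python
import numpy as np

def attention_mil_pool(embeddings, V, w):
    """Attention-based multiple-instance pooling: per-patch embeddings are
    combined into one bag embedding using attention weights
    a_k proportional to exp(w . tanh(V h_k))."""
    h = np.tanh(embeddings @ V.T)     # (K, L) hidden representation per patch
    a = np.exp(h @ w)                 # unnormalized attention scores
    a = a / a.sum()                   # weights over the K patches
    return a @ embeddings, a          # bag embedding (d,), weights (K,)

rng = np.random.default_rng(0)
patches = rng.normal(size=(12, 6))    # 12 patch embeddings of dimension 6
V = rng.normal(size=(4, 6))           # illustrative "trained" parameters
w = rng.normal(size=4)
bag, weights = attention_mil_pool(patches, V, w)
print(bag.shape)                      # one 6-dim bag embedding
```

The attention weights also make the prediction interpretable: high-weight patches indicate the GB regions that drove the vascularity decision.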


Subjects
Artificial Intelligence , Laparoscopy , Humans , Gallbladder , Neural Networks (Computer) , Algorithms
7.
Med Biol Eng Comput ; 59(1): 215-226, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33411267

ABSTRACT

Adenocarcinoma (AC) and squamous cell carcinoma (SCC) are frequently reported types of non-small cell lung cancer (NSCLC), responsible for a large fraction of cancer deaths worldwide. In this study, we aim to investigate the potential of NSCLC histology classification into AC and SCC by applying different feature extraction and classification techniques to pre-treatment CT images. The employed image dataset (102 patients) was taken from the publicly available Cancer Imaging Archive (TCIA) collection. We investigated four different families of techniques: (a) radiomics with two classifiers (kNN and SVM), (b) four state-of-the-art convolutional neural networks (CNNs) with transfer learning and fine-tuning (AlexNet, ResNet101, Inception-v3, and Inception-ResNet-v2), (c) a CNN combined with a long short-term memory (LSTM) network to fuse information about the spatial coherency of the tumor's CT slices, and (d) combinatorial models (LSTM + CNN + radiomics). In addition, the CT images were independently evaluated by two expert radiologists. Our results showed that the best CNN was Inception (accuracy = 0.67, AUC = 0.74). LSTM + Inception yielded superior performance to all other methods (accuracy = 0.74, AUC = 0.78). Moreover, LSTM + Inception outperformed the experts by 7-25% (p < 0.05). The proposed methodology does not require detailed segmentation of the tumor region, and it may be used in conjunction with radiological findings to improve clinical decision-making.
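The radiomics-with-kNN arm of the comparison can be sketched with scikit-learn. The features below are synthetic stand-ins for radiomic feature vectors (the real study extracted them from pre-treatment CT images), so the numbers are illustrative only.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for radiomic feature vectors of two histology classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 20)),   # "AC" patients
               rng.normal(0.8, 1.0, (52, 20))])  # "SCC" patients
y = np.array([0] * 50 + [1] * 52)

# Feature scaling matters for kNN, hence the pipeline.
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(auc.mean())
```

Cross-validated AUC, as used here, mirrors the evaluation metric quoted in the abstract.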


Subjects
Non-Small-Cell Lung Carcinoma , Deep Learning , Lung Neoplasms , Non-Small-Cell Lung Carcinoma/diagnostic imaging , Humans , Lung Neoplasms/diagnostic imaging , Neural Networks (Computer) , X-Ray Computed Tomography
8.
Int J Comput Assist Radiol Surg ; 16(1): 103-113, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33146850

ABSTRACT

PURPOSE: In this study, we propose a deep learning approach for the assessment of gallbladder (GB) wall vascularity from images of laparoscopic cholecystectomy (LC). Difficulty in the visualization of GB wall vessels may be the result of fatty infiltration or increased thickening of the GB wall, potentially as a result of cholecystitis or other diseases. METHODS: The dataset included 800 patches and 181 region outlines of the GB wall extracted from 53 operations of the Cholec80 video collection. The GB regions and patches were annotated by two expert surgeons using two labeling schemes: 3 classes (low, medium, and high vascularity) and 2 classes (low vs. high). Two convolutional neural network (CNN) architectures were investigated. Preprocessing (vessel enhancement) and post-processing (late fusion of CNN outputs) techniques were applied. RESULTS: The best model yielded accuracies of 94.48% and 83.77% for patch classification into 2 and 3 classes, respectively. For the GB wall regions, the best model yielded accuracies of 91.16% (2 classes) and 80.66% (3 classes). The inter-observer agreement was 91.71% (2 classes) and 78.45% (3 classes). Late fusion analysis allowed the computation of spatial probability maps, which provided a visual representation of the probability of each vascularity class across the GB wall region. CONCLUSIONS: This study is a first significant step toward assessing the vascularity of the GB wall from intraoperative images based on computer vision and deep learning techniques. The classification performance of the CNNs was comparable to the agreement of two expert surgeons. The approach may be used for various applications, such as the classification of LC operations and context-aware assistance in surgical education and practice.


Subjects
Laparoscopic Cholecystectomy/methods , Deep Learning , Gallbladder/blood supply , Gallbladder/surgery , Gallbladder/diagnostic imaging , Humans , Neural Networks (Computer)
9.
JSLS ; 24(4)2020.
Article in English | MEDLINE | ID: mdl-33144823

ABSTRACT

BACKGROUND AND OBJECTIVES: Current approaches in surgical skills assessment employ virtual reality simulators, motion sensors, and task-specific checklists. Although accurate, these methods may be complex in the interpretation of the generated measures of performance. The aim of this study is to propose an alternative methodology for skills assessment and classification, based on video annotation of laparoscopic tasks. METHODS: Two groups of 32 trainees (students and residents) performed two laparoscopic tasks: peg transfer (PT) and knot tying (KT). Each task was annotated via video analysis software based on a vocabulary of eight surgical gestures (surgemes) that denote the elementary gestures required to perform a task. The extracted metrics included the duration/count of each surgeme, penalty events, and counts of sequential surgemes (transitions). Our analysis focused on comparing and classifying the trainees' skill level using a nearest neighbor approach. The classification was assessed via accuracy, sensitivity, and specificity. RESULTS: For PT, almost all metrics showed a significant performance difference between the two groups (p < 0.001). Residents were able to complete the task with fewer, shorter surgemes and fewer penalty events. Moreover, residents performed significantly fewer transitions (p < 0.05). For KT, residents performed two surgemes in significantly shorter time (p < 0.05). The metrics derived from the video annotations were also able to recognize the trainees' skill level with 0.71-0.86 accuracy, 0.80-1.00 sensitivity, and 0.60-0.80 specificity. CONCLUSION: The proposed technique provides a tool for skills assessment and experience classification of surgical trainees, as well as an intuitive way of describing which surgemes are performed and how.
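The extracted metrics (surgeme counts, durations, and transition counts) are straightforward to compute from an annotated sequence. A minimal sketch, with an illustrative surgeme vocabulary rather than the study's actual one:

```python
from collections import Counter

def surgeme_metrics(sequence, durations):
    """Metrics from an annotated surgeme sequence: count and total duration
    of each surgeme, plus counts of consecutive-surgeme transitions."""
    counts = Counter(sequence)
    total_time = Counter()
    for s, d in zip(sequence, durations):
        total_time[s] += d
    # Each adjacent pair (s_i, s_{i+1}) is one transition.
    transitions = Counter(zip(sequence, sequence[1:]))
    return counts, total_time, transitions

seq = ["reach", "grasp", "transfer", "grasp", "release"]   # illustrative labels
dur = [1.2, 0.8, 2.5, 0.9, 0.6]                            # seconds per instance
counts, total_time, transitions = surgeme_metrics(seq, dur)
print(counts["grasp"], transitions[("grasp", "transfer")])
```

Feature vectors built from these counters could then be fed to a nearest neighbor classifier, as in the study.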


Subjects
Clinical Competence , Graduate Medical Education/methods , General Surgery/education , Laparoscopy/education , Video Recording , Adult , Female , Humans , Male , Task Performance and Analysis , Young Adult
10.
Gastroenterol Nurs ; 43(6): 411-421, 2020.
Article in English | MEDLINE | ID: mdl-33055543

ABSTRACT

Reports evaluating simulation-based sigmoidoscopy training among nurses are scarce. The aim of this prospective nonrandomized study was to assess the performance of nurses in simulated sigmoidoscopy training and the potential impact on their performance of endoscopy unit experience, general professional experience, and skills in manual activities requiring coordinated maneuvers. Forty-four subjects were included: 12 nurses with endoscopy unit experience (Group A), 14 nurses without such experience (Group B), and 18 senior nursing students (Group C). All received simulator training in sigmoidoscopy. Participants were evaluated with respect to predetermined validated metrics. Skills in manual activities requiring coordinated maneuvers were analyzed to draw possible correlations with performance. The total population required a median of 5 attempts to achieve all predetermined goals. Groups A and C outperformed Group B regarding the number of attempts needed to achieve the predetermined percentage of visualized mucosa (p = .017 and p = .027, respectively). Furthermore, Group A outperformed Group B regarding the predetermined duration of the procedure (p = .046). A tendency toward fewer attempts needed to achieve overall successful endoscopy was observed in both Groups A and C compared with Group B. A higher score in playing stringed instruments was associated with decreased total procedure time (rs = -.34, p = .03) and with a decreased number of total attempts for successful endoscopy (rs = -.31, p = .046). This study suggests that training nurses and nursing students in simulated sigmoidoscopy is feasible by means of a proper training program. Experience in an endoscopy unit and skills in manual activities have a positive impact on the training process.


Subjects
Nursing Education , Simulation Training , Clinical Competence , Computer Simulation , Humans , Prospective Studies , Sigmoidoscopy
11.
Int J Med Robot ; 16(2): e2058, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31713318

ABSTRACT

BACKGROUND: Various techniques have been proposed in the literature for phase and tool recognition from laparoscopic videos. In comparison, research on multilabel annotation of still frames is limited. METHODS: We describe a framework for multilabel annotation of images extracted from laparoscopic cholecystectomy (LC) videos based on multi-instance multi-label learning. The image is considered a bag of features extracted from local regions after coarse segmentation. A method based on variational Bayesian Gaussian mixture models (VBGMM) is proposed for bag representation. Three techniques based on different feature extraction and bag representation models were employed for comparison. RESULTS: Four anatomical structures (abdominal wall, gallbladder, fat, and liver bed) and a tool-like object (specimen bag) were annotated in 482 images. Our method achieved the best single-label accuracy, ranging from 0.69 (lowest) to 0.87 (highest). Moreover, its performance was >20% higher in terms of four multilabel classification error metrics (one-error, ranking loss, Hamming loss, and coverage). CONCLUSIONS: Our approach provides an accurate and efficient image representation for multilabel classification of still images captured in LC.
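The VBGMM bag representation relies on variational inference to prune unneeded mixture components automatically. scikit-learn's `BayesianGaussianMixture` shows the behavior on toy stand-in patch features; the feature dimensions, priors, and thresholds below are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Toy stand-in for local-region feature vectors from one image ("bag"):
# two well-separated clusters in a 5-D feature space.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 1.0, (40, 5)),
                   rng.normal(4.0, 1.0, (40, 5))])

# Variational Bayes prunes unneeded components, so only an upper bound on
# the number of components has to be supplied.
vbgmm = BayesianGaussianMixture(n_components=8,
                                weight_concentration_prior=0.01,
                                random_state=0).fit(feats)
active = (vbgmm.weights_ > 0.05).sum()    # components that survived pruning
print(active)
```

The surviving components' means, covariances, and weights can then serve as the bag representation fed to the multilabel classifier.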


Subjects
Cholecystectomy , Computer-Assisted Image Processing/methods , Laparoscopy , Automated Pattern Recognition , Algorithms , Artificial Intelligence , Bayes Theorem , Cluster Analysis , Humans , Computer-Assisted Image Interpretation/methods , Normal Distribution , Reproducibility of Results , Video Recording
12.
Comput Methods Programs Biomed ; 165: 13-23, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30337068

ABSTRACT

BACKGROUND AND OBJECTIVE: Laparoscopic surgery offers the potential for video recording of the operation, which is important for technique evaluation, cognitive training, patient briefing, and documentation. An effective way to represent video content is to extract a limited number of keyframes with semantic information. In this paper we present a novel method for keyframe extraction from individual shots of the operational video. METHODS: The laparoscopic video was first segmented into video shots using an objectness model, which was trained to capture significant changes in the endoscope field of view. Each frame of a shot was then decomposed into three saliency maps in order to model the preference of human vision for regions with higher differentiation with respect to color, motion, and texture. The accumulated responses from each map provided a 3D time series of saliency variation across the shot. The time series was modeled as a multivariate autoregressive process with hidden Markov states (HMMAR model). This approach allowed the temporal segmentation of the shot into a predefined number of states. A representative keyframe was extracted from each state based on the highest state-conditional probability of the corresponding saliency vector. RESULTS: Our method was tested on 168 video shots extracted from various laparoscopic cholecystectomy operations from the publicly available Cholec80 dataset. Four state-of-the-art methodologies were used for comparison. The evaluation was based on two assessment metrics: Color Consistency Score (CCS), which measures the color distance between the ground truth (GT) and the closest keyframe, and Temporal Consistency Score (TCS), which considers the temporal proximity between GT and extracted keyframes. About 81% of the extracted keyframes matched the color content of the GT keyframes, compared to 77% yielded by the second-best method. The TCS of the proposed and the second-best method was close to 1.9 and 1.4, respectively. CONCLUSIONS: Our results demonstrated that the proposed method yields superior performance in terms of content and temporal consistency to the ground truth. The extracted keyframes provided highly semantic information that may be used for various applications related to surgical video content representation, such as workflow analysis, video summarization, and retrieval.
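The final selection step, picking the frame with the highest state-conditional probability within each temporal state, can be sketched as follows. This is a simplification that assumes fixed Gaussian state densities rather than the full HMMAR model, and all data are synthetic.

```python
import numpy as np

def keyframes_from_states(saliency, states, means, covs):
    """One keyframe per temporal state: the frame whose 3-D saliency vector
    is most probable under that state's Gaussian density."""
    keyframes = {}
    for s in np.unique(states):
        idx = np.where(states == s)[0]
        diff = saliency[idx] - means[s]
        inv = np.linalg.inv(covs[s])
        # For a fixed covariance, smallest Mahalanobis distance means
        # highest Gaussian likelihood.
        d2 = np.einsum('ij,jk,ik->i', diff, inv, diff)
        keyframes[s] = idx[np.argmin(d2)]
    return keyframes

# Illustrative shot: 30 frames, two temporal states with known densities.
rng = np.random.default_rng(0)
states = np.repeat([0, 1], 15)
means = np.array([[0.0, 0.0, 0.0], [3.0, 3.0, 3.0]])
covs = np.array([np.eye(3), np.eye(3)])
saliency = means[states] + rng.normal(0.0, 0.5, size=(30, 3))
kf = keyframes_from_states(saliency, states, means, covs)
print(kf)
```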


Subjects
Computer-Assisted Image Interpretation/methods , Laparoscopy/methods , Video Recording/methods , Algorithms , Artificial Intelligence , Laparoscopic Cholecystectomy/methods , Laparoscopic Cholecystectomy/statistics & numerical data , Color , Factual Databases , Humans , Laparoscopy/statistics & numerical data , Markov Chains , Motion (Physics) , Automated Pattern Recognition/methods , Video Recording/statistics & numerical data
13.
Int J Med Robot ; 14(1)2018 Feb.
Article in English | MEDLINE | ID: mdl-28809094

ABSTRACT

BACKGROUND: Various sensors and methods are used for evaluating trainees' skills in laparoscopic procedures. These methods are usually task-specific and involve high costs or advanced setups. METHODS: In this paper, we propose a novel manoeuvre representation feature space (MRFS) constructed by tracking the vanishing points of the edges of the graspers in the video sequence frames acquired by the standard box trainer camera. This study aims to provide task-agnostic classification of trainees into experts and novices using a single MRFS over two basic laparoscopic tasks. RESULTS: The system achieves an average correct classification ratio (CCR) of 96% when no information on the performed task is available and >98% CCR when the task is known, outperforming a recently proposed video-based technique by >13%. CONCLUSIONS: Robust, extensible, and accurate task-agnostic classification between novices and experts is achieved by utilizing advanced computer vision techniques and features derived from the novel MRFS.


Subjects
Clinical Competence , Computer Simulation , Laparoscopy/methods , Robotic Surgical Procedures/education , Robotic Surgical Procedures/instrumentation , User-Computer Interface , Algorithms , Electronic Data Processing , Equipment Design , Humans , Motion (Physics) , Reproducibility of Results , Robotic Surgical Procedures/methods , Computer-Assisted Signal Processing , Task Performance and Analysis , Video Recording
14.
Surg Endosc ; 32(1): 87-95, 2018 01.
Article in English | MEDLINE | ID: mdl-28664435

ABSTRACT

BACKGROUND: Basic skills training in laparoscopic high-fidelity simulators (LHFS) improves laparoscopic skills. However, since LHFS are expensive, their availability is limited. The aim of this study was to assess whether automated video analysis of low-cost BlackBox laparoscopic training could provide an alternative to LHFS in basic skills training. METHODS: Medical students volunteered to participate during their surgical semester at the Karolinska University Hospital. After written informed consent, they performed two laparoscopic tasks (PEG-transfer and precision-cutting) on a BlackBox trainer. All tasks were videotaped and sent to MPLSC for automated video analysis, generating two parameters (Pl and Prtcl_tot) that assess the total motion activity. The students then carried out final tests on the MIST-VR simulator. This study was a European collaboration between two simulation centers, located in Sweden and Greece, within the framework of ACS-AEI. RESULTS: 31 students (19 females and 12 males), with a mean age of 26.2 ± 0.8 years, participated in the study. Two of the students completed only one of the three MIST-VR tasks and were therefore excluded. The three MIST-VR scores showed significant positive correlations with both the Pl variable in the automated video analysis of the PEG-transfer (RSquare 0.48, P < 0.0001; 0.34, P = 0.0009; 0.45, P < 0.0001, respectively) and the Prtcl_tot variable in the same exercise (RSquare 0.42, P = 0.0002; 0.29, P = 0.0024; 0.45, P < 0.0001). However, the correlations were shown exclusively in the group with less PC gaming experience as well as in the female group. CONCLUSIONS: Automated video analysis provides accurate results in line with those of the validated MIST-VR. We believe that more frequent use of automated video analysis could add value to cost-efficient laparoscopic BlackBox training. However, since there are gender-specific differences as well as differences related to PC gaming experience, these should be taken into account when interpreting the results of automated video analysis.


Subjects
Clinical Competence/statistics & numerical data , Computer Simulation/statistics & numerical data , Undergraduate Medical Education/methods , Laparoscopy/education , Video Recording/methods , Adult , Female , Humans , Male
15.
Surg Endosc ; 32(2): 553-568, 2018 02.
Article in English | MEDLINE | ID: mdl-29075965

ABSTRACT

BACKGROUND: In addition to its therapeutic benefits, minimally invasive surgery offers the potential for video recording of the operation. The videos may be archived and used later for purposes such as cognitive training, skills assessment, and workflow analysis. Methods from the broader field of video content analysis and representation are increasingly applied in the surgical domain. In this paper, we review recent developments and analyze future directions in the field of content-based video analysis of surgical operations. METHODS: The reviewed articles were obtained from PubMed and Google Scholar searches on combinations of the following keywords: 'surgery', 'video', 'phase', 'task', 'skills', 'event', 'shot', 'analysis', 'retrieval', 'detection', 'classification', and 'recognition'. The collected articles were categorized and reviewed based on the technical goal sought, the type of surgery performed, and the structure of the operation. RESULTS: A total of 81 articles were included. Publication activity is constantly increasing; more than 50% of these articles were published in the last 3 years. Significant research has been performed on video task detection and retrieval in eye surgery. In endoscopic surgery, the research activity is more diverse: gesture/task classification, skills assessment, tool type recognition, and shot/event detection and retrieval. Recent works employ deep neural networks for phase and tool recognition as well as shot detection. CONCLUSIONS: Content-based video analysis of surgical operations is a rapidly expanding field. Several future prospects for research exist, including, inter alia, shot boundary detection, keyframe extraction, video summarization, pattern discovery, and video annotation. The development of publicly available benchmark datasets to evaluate and compare task-specific algorithms is essential.


Subjects
Endoscopy/methods , Minimally Invasive Surgical Procedures/methods , Video Recording , Video-Assisted Surgery , Algorithms , Endoscopy/trends , Humans , Minimally Invasive Surgical Procedures/trends
16.
J Cardiovasc Thorac Res ; 9(2): 71-77, 2017.
Article in English | MEDLINE | ID: mdl-28740625

ABSTRACT

Introduction: The development of pulmonary insufficiency in patients with surgically corrected tetralogy of Fallot (TOF) may lead to severe right heart failure with serious consequences. We herein present our experience with pulmonary valve replacement (PVR) in these patients. Methods: From 2005 to 2013, 99 consecutive patients (71 males/28 females, mean age 38 ± 8 years) underwent PVR 7 to 40 (mean 29 ± 8) years after the initial correction. Seventy-nine of the symptomatic patients presented in NYHA class II, 14 in III, and 2 in IV. All underwent PVR with a stented bioprosthetic valve, employing a beating-heart technique with normothermic extracorporeal circulation support. Concomitant procedures included resection of aneurysmal outflow tract patches (n = 37), tricuspid valve annuloplasty (n = 36), augmentation of stenotic pulmonary arteries (n = 9), a maze procedure (n = 2), and pulmonary artery stenting (n = 4). Results: There were 2 perioperative deaths (2%). One patient developed sternal dehiscence requiring rewiring. Median ICU and hospital stays were 1 and 7 days, respectively. Postoperative echocardiography at 6 and 12 months showed excellent bioprosthetic valve performance, a significant decrease in the size of the right cardiac chambers, and a reduction of tricuspid regurgitation (TR) in the majority of the patients. At a mean follow-up of 3.6 ± 2 years, all surviving patients remained in excellent clinical condition. Conclusion: The probability of reoperation for pulmonary insufficiency in patients with surgically corrected TOF increases with time, and timely PVR is crucial for long-term survival by preventing the development of right heart failure. Current bioprosthetic valve technology in combination with the beating-heart technique provides excellent immediate and short-term results. Further follow-up is necessary to evaluate long-term outcomes.

17.
Surg Endosc ; 31(12): 5012-5023, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28466361

ABSTRACT

BACKGROUND: The majority of current surgical simulators employ specialized sensory equipment for instrument tracking. The Leap Motion controller is a new device able to track linear objects with sub-millimeter accuracy. The aim of this study was to investigate the potential of a virtual reality (VR) simulator for the assessment of basic laparoscopic skills, based on the low-cost Leap Motion controller. METHODS: A simple interface was constructed to simulate the insertion point of the instruments into the abdominal cavity. The controller provided information about the position and orientation of the instruments. Custom tools were constructed to simulate the laparoscopic setup. Three basic VR tasks were developed: camera navigation (CN), instrument navigation (IN), and bimanual operation (BO). The experiments were carried out in two simulation centers: MPLSC (Athens, Greece) and CRESENT (Riyadh, Kingdom of Saudi Arabia). Two groups of surgeons (28 experts and 21 novices) participated in the study by performing the VR tasks. Skills assessment metrics included time, pathlength, and two task-specific errors. The face validity of the training scenarios was also investigated via a questionnaire completed by the participants. RESULTS: Expert surgeons significantly outperformed novices in all assessment metrics for IN and BO (p < 0.05). For CN, a significant difference was found in one error metric (p < 0.05). The greatest difference between the performances of the two groups occurred for BO. Qualitative analysis of the instrument trajectories revealed that experts performed more delicate movements than novices. Subjects' ratings on the feedback questionnaire highlighted the training value of the system. CONCLUSIONS: This study provides evidence regarding the potential use of the Leap Motion controller for the assessment of basic laparoscopic skills. The proposed system allowed the evaluation of the dexterity of hand movements. Future work will involve comparison studies with validated simulators and the development of advanced training scenarios with the current Leap Motion controller.


Subjects
Clinical Competence/statistics & numerical data , Laparoscopy/education , Simulation Training/methods , Virtual Reality , Abdominal Cavity/surgery , Humans , Spatial Orientation , Reproducibility of Results , Surgeons , Surveys and Questionnaires , User-Computer Interface
18.
Int J Comput Assist Radiol Surg ; 11(11): 1937-1949, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27289240

ABSTRACT

PURPOSE: Over the last decade, the demand for content management of video recordings of surgical procedures has greatly increased. Although a few research methods have been published in this direction, the related literature is still in its infancy. In this paper, we address the problem of shot detection in endoscopic surgery videos, a fundamental step in content-based video analysis. METHODS: The video is first decomposed into short clips that are processed sequentially. After feature extraction, we employ spatiotemporal Gaussian mixture models (GMM) for each clip and apply a variational Bayesian (VB) algorithm to approximate the posterior distribution of the model parameters. The proper number of components is handled automatically by the VBGMM algorithm. The estimated components are matched along the video sequence via their Kullback-Leibler divergence. Shot borders are defined when component tracking fails, signifying a different visual appearance of the surgical scene. RESULTS: Experimental evaluation was performed on laparoscopic videos containing a variable number of shots. Performance was measured via precision, recall, coverage, and overflow metrics. The proposed method was compared with GMM and a shot detection method based on spatiotemporal motion differences (MotionDiff). The results demonstrate that VBGMM had higher performance than all other methods for most assessment metrics: precision and recall >80%, coverage 84%. Overflow for VBGMM was worse than for MotionDiff (37% vs. 27%). CONCLUSIONS: The proposed method generated promising results for shot border detection. Spatiotemporal modeling via VBGMMs provides a means to explore additional applications such as component tracking.
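Matching Gaussian components across consecutive clips via their Kullback-Leibler divergence uses the closed form for two multivariate Gaussians. A minimal implementation of that formula (the matching logic itself and any thresholds are beyond this sketch):

```python
import numpy as np

def gauss_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence KL(N0 || N1) between two multivariate
    Gaussians, the quantity used to match components across clips."""
    d = mu0.size
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

mu, cov = np.zeros(3), np.eye(3)
same = gauss_kl(mu, cov, mu, cov)          # identical components
moved = gauss_kl(mu, cov, mu + 1.0, cov)   # shifted mean -> positive KL
print(same, moved)
```

Note that KL divergence is asymmetric, so the choice of which clip's component plays the role of N0 matters when defining a matching score.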


Subjects
Cholecystectomy, Laparoscopic/methods, Endoscopy/methods, Algorithms, Bayes Theorem, Humans, Models, Theoretical, Normal Distribution, Video Recording/methods
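The VBGMM pipeline of entry 18 can be sketched as follows. This is an illustrative approximation, not the authors' implementation: scikit-learn's `BayesianGaussianMixture` stands in for the paper's VB algorithm, synthetic arrays replace real spatiotemporal video features, and the KL threshold for declaring a shot border is arbitrary.

```python
# Illustrative sketch: fit a variational Bayesian GMM per clip and match
# components across clips via the closed-form KL divergence between
# Gaussians. BayesianGaussianMixture with a Dirichlet-process prior prunes
# unneeded components, mirroring automatic model-order selection.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL( N(mu0, cov0) || N(mu1, cov1) )."""
    d = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def fit_clip(features, max_components=10):
    """Fit a VB-GMM to the feature vectors of one video clip."""
    return BayesianGaussianMixture(
        n_components=max_components, covariance_type="full",
        weight_concentration_prior_type="dirichlet_process",
        random_state=0).fit(features)

# Two synthetic "clips": a large KL divergence between their dominant
# components signals a change in visual appearance, i.e. a shot border.
rng = np.random.default_rng(0)
clip_a = rng.normal(0.0, 1.0, size=(500, 3))   # stand-in for real features
clip_b = rng.normal(5.0, 1.0, size=(500, 3))

gmm_a, gmm_b = fit_clip(clip_a), fit_clip(clip_b)
ia, ib = np.argmax(gmm_a.weights_), np.argmax(gmm_b.weights_)
kl = gaussian_kl(gmm_a.means_[ia], gmm_a.covariances_[ia],
                 gmm_b.means_[ib], gmm_b.covariances_[ib])
shot_border = kl > 10.0   # threshold is purely illustrative
```

In the paper, tracking proceeds clip to clip; here only a single pair is compared to show the matching criterion.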
19.
Brachytherapy ; 15(2): 252-62, 2016.
Article in English | MEDLINE | ID: mdl-26727331

ABSTRACT

PURPOSE: To develop a user-oriented procedure for testing treatment planning system (TPS) dosimetry in high-dose-rate brachytherapy, with particular focus on TPSs using model-based dose calculation algorithms (MBDCAs). METHODS AND MATERIALS: Identical plans were prepared for three computational models using two commercially available systems and the same (192)Ir source. Reference dose distributions were obtained for each plan using the MCNP v.6.1 Monte Carlo (MC) simulation code, with input files prepared via automatic parsing of plan information using a custom software tool. The same tool was used to compare the reference dose distributions with the corresponding MBDCA exports. RESULTS: The single-source test case yielded differences attributable to the MBDCA spatial discretization settings. These affect points at relatively increased distance from the source and are abated in test cases with multiple source dwells. Differences beyond MC Type A uncertainty were also observed very close to the source(s), close to the test geometry boundaries, and within heterogeneities. Both MBDCAs studied were found equivalent to MC within 5 cm of the target volume for a clinical breast brachytherapy test case. These findings agree with previous MBDCA benchmarking in the literature. CONCLUSIONS: The data and tools presented in this work, which are freely available online, can serve as a benchmark for advanced clinical users developing their own tests, a complete commissioning procedure for new adopters of currently available TPSs using MBDCAs, a quality assurance testing tool for future updates of already installed TPSs, or as an admission prerequisite in multicentric clinical trials.


Subjects
Algorithms, Brachytherapy/standards, Quality Assurance, Health Care/methods, Radiotherapy Planning, Computer-Assisted/methods, Radiotherapy Planning, Computer-Assisted/standards, Breast, Computer Simulation, Female, Humans, Monte Carlo Method, Radiometry, Radiotherapy Dosage, Uncertainty
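The core comparison in entry 19, checking whether TPS doses agree with a Monte Carlo reference within the MC Type A (statistical) uncertainty, can be illustrated with a toy voxel-wise test. The grids, the 1% relative uncertainty, and the 2-sigma tolerance below are all assumed values, not the paper's actual data or tool.

```python
# Toy comparison of a TPS dose grid against a Monte Carlo reference:
# flag voxels whose dose difference exceeds k standard deviations of the
# MC Type A uncertainty (all numbers here are illustrative).
import numpy as np

def flag_disagreement(dose_tps, dose_mc, mc_rel_unc, k=2.0):
    """Boolean mask of voxels with |TPS - MC| > k * sigma_MC."""
    sigma = dose_mc * mc_rel_unc        # absolute 1-sigma MC uncertainty
    return np.abs(dose_tps - dose_mc) > k * sigma

rng = np.random.default_rng(1)
dose_mc = rng.uniform(0.5, 2.0, size=(8, 8, 8))               # toy grid, Gy
dose_tps = dose_mc * (1.0 + rng.normal(0.0, 0.005, dose_mc.shape))

mask = flag_disagreement(dose_tps, dose_mc, mc_rel_unc=0.01, k=2.0)
fraction_flagged = mask.mean()   # fraction of disagreeing voxels
```

A real commissioning test would add the spatial context the paper emphasizes, e.g. reporting where flagged voxels lie relative to the source, boundaries, and heterogeneities.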
20.
Int J Med Robot ; 12(3): 387-98, 2016 Sep.
Article in English | MEDLINE | ID: mdl-26415583

ABSTRACT

BACKGROUND: Despite the significant progress in hand gesture analysis for surgical skills assessment, video-based analysis has not received much attention. In this study we investigate the application of various feature detector-descriptors and temporal modeling techniques for laparoscopic skills assessment. METHODS: Two different setups were designed: static and dynamic video-histogram analysis. Four well-known feature detection-extraction methods were investigated: SIFT, SURF, STAR-BRIEF and STIP-HOG. For the dynamic setup, two temporal models were employed (LDS and GMMAR models). Each method was evaluated for its ability to classify experts and novices on peg transfer and knot tying. RESULTS: STIP-HOG yielded the best performance (static: 74-79%; dynamic: 80-89%). The temporal models had equivalent performance. Important differences were found between the two groups with respect to the underlying dynamics of the video-histogram sequences. CONCLUSIONS: Temporal modeling of feature histograms extracted from laparoscopic training videos provides information about the skill level and motion pattern of the operator. Copyright © 2015 John Wiley & Sons, Ltd.


Subjects
Clinical Competence, Laparoscopy/methods, Video Recording, Humans, Laparoscopy/instrumentation
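The static video-histogram setup of entry 20 follows a standard bag-of-features pattern: quantize local descriptors against a learned codebook, represent each video as a normalized word histogram, and classify with an SVM. The sketch below illustrates that pattern under stated assumptions: real feature extraction (SIFT/SURF/STIP-HOG) is replaced by synthetic descriptors, and the codebook size, classifier, and data are all hypothetical.

```python
# Hedged sketch of a bag-of-features skill classifier (synthetic data):
# each video -> normalized histogram of quantized descriptors -> SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def make_video(skill):
    """Synthetic stand-in for one video's local feature descriptors."""
    center = 0.0 if skill == "expert" else 2.0
    return rng.normal(center, 1.0, size=(200, 8))

def video_histogram(descriptors, codebook):
    """Quantize descriptors against the codebook; return a normalized histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

videos = ([make_video("expert") for _ in range(10)]
          + [make_video("novice") for _ in range(10)])
labels = np.array([1] * 10 + [0] * 10)

# Learn a visual codebook from all descriptors, then build one histogram per video.
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(videos))
X = np.array([video_histogram(v, codebook) for v in videos])

clf = SVC(kernel="linear").fit(X, labels)
train_acc = clf.score(X, labels)   # training accuracy on this toy data only
```

The paper's dynamic setup would additionally model each video's histogram *sequence* over time (LDS or GMMAR), rather than a single pooled histogram as here.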