Results 1 - 17 of 17
1.
Surg Innov ; 26(1): 5-20, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30270757

ABSTRACT

Orthognathic surgery belongs to the scope of maxillofacial surgery. It treats dentofacial deformities, i.e., discrepancies between the facial bones (upper and lower jaws). Such impairment affects chewing, talking, and breathing and can ultimately result in the loss of teeth. Orthognathic surgery restores facial harmony and dental occlusion through bone cutting, repositioning, and fixation. However, in routine practice, we face the limitations of conventional tools and the lack of intraoperative assistance. These limitations occur at every step of the surgical workflow: preoperative planning, simulation, and intraoperative navigation. The aim of this research was to provide novel tools to improve simulation and navigation. We first developed a semiautomated segmentation pipeline allowing accurate and time-efficient patient-specific 3D modeling from the computed tomography scans required for surgical planning. This step improved processing time by a factor of 6 compared with interactive segmentation, with a 1.5-mm distance error. Next, we developed software to simulate the postoperative outcome on facial soft tissues. Volume meshes were processed from segmented DICOM images, and the open-source Bullet physics engine was used together with a mass-spring model to reach a postoperative simulation accuracy of <1 mm. Our toolset was completed by the development of a real-time navigation system using minimally invasive electromagnetic sensors. This navigation system featured a novel, user-friendly interface based on augmented virtuality that improved surgical accuracy and operative time, especially for trainee surgeons, thereby demonstrating its educational benefits. The resulting software suite could enhance operative accuracy and surgeon education for improved patient care.
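A mass-spring model of the kind used here for the soft-tissue simulation advances node positions by accumulating spring forces and integrating them; a minimal explicit-Euler sketch follows (all names, parameters, and the integration scheme are illustrative, not taken from the paper):

```python
import numpy as np

def mass_spring_step(pos, vel, springs, rest_len, k, mass, dt, damping=0.98):
    """One explicit Euler step of a mass-spring system.

    pos, vel : (N, 3) arrays of node positions and velocities.
    springs  : (M, 2) integer array of node index pairs.
    rest_len : (M,) rest lengths; k: spring stiffness; mass: node mass.
    """
    force = np.zeros_like(pos)
    d = pos[springs[:, 1]] - pos[springs[:, 0]]      # spring vectors
    length = np.linalg.norm(d, axis=1)
    # Hooke's law: force magnitude proportional to elongation, along the spring
    f = (k * (length - rest_len) / np.maximum(length, 1e-9))[:, None] * d
    np.add.at(force, springs[:, 0], f)
    np.add.at(force, springs[:, 1], -f)
    vel = damping * (vel + dt * force / mass)
    return pos + dt * vel, vel
```

A stretched spring pulls its two endpoints back toward each other at each step, which is the basic mechanism a physics engine such as Bullet iterates until the mesh settles.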


Subject(s)
Computer Simulation , Imaging, Three-Dimensional , Orthognathic Surgical Procedures/methods , Patient-Specific Modeling , Software , Surgery, Computer-Assisted/methods , France , Hospitals, University , Humans , Maxillofacial Abnormalities/diagnostic imaging , Maxillofacial Abnormalities/surgery , Orthognathic Surgery/standards , Orthognathic Surgery/trends , Orthognathic Surgical Procedures/instrumentation , Sensitivity and Specificity
2.
Med Image Anal ; 37: 66-90, 2017 04.
Article in English | MEDLINE | ID: mdl-28160692

ABSTRACT

This article establishes a comprehensive review of all the different methods proposed by the literature concerning augmented reality in intra-abdominal minimally invasive surgery (also known as laparoscopic surgery). A solid background of surgical augmented reality is first provided in order to support the survey. Then, the various methods of laparoscopic augmented reality as well as their key tasks are categorized in order to better grasp the current landscape of the field. Finally, the various issues gathered from these reviewed approaches are organized in order to outline the remaining challenges of augmented reality in laparoscopic surgery.


Subject(s)
Laparoscopy/methods , Laparoscopy/trends , Algorithms , Animals , Humans , Reproducibility of Results
3.
Int J Comput Assist Radiol Surg ; 12(1): 1-11, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27376720

ABSTRACT

PURPOSE: An augmented reality system to visualize a 3D preoperative anatomical model on the intra-operative patient is proposed. The only hardware required is a commercial tablet-PC equipped with a camera; no external tracking device or artificial landmarks on the patient are needed. METHODS: We resort to visual SLAM to provide markerless, real-time tablet-PC camera localization with respect to the patient. The preoperative model is registered to the patient through 4-6 anchor points, which correspond to anatomical references selected on the tablet-PC screen at the beginning of the procedure. RESULTS: Accurate, real-time preoperative model alignment (approximately 5-mm mean FRE and TRE) was achieved, even when the anchors were not visible in the current field of view. The system has been experimentally validated on human volunteers, in vivo pigs, and a phantom. CONCLUSIONS: The proposed system can be smoothly integrated into the surgical workflow because it: (1) operates in real time, (2) requires minimal additional hardware (only a tablet-PC with a camera), (3) is robust to occlusion, and (4) requires minimal interaction from the medical staff.
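Registering a preoperative model through a handful of anchor points is classically done with a closed-form least-squares rigid fit (the Kabsch algorithm); the sketch below shows that standard step, not necessarily the authors' exact formulation:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    via the Kabsch algorithm. src, dst: (N, 3) corresponding anchor points.
    Returns R (3x3 rotation) and t (3,) such that dst ~= src @ R.T + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if the determinant is negative
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

With 4-6 well-spread anchors, this gives a unique rigid alignment in one SVD, which is why so few points suffice to initialize the model on the patient.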


Subject(s)
Imaging, Three-Dimensional/methods , Phantoms, Imaging , Surgery, Computer-Assisted/methods , Anatomic Landmarks , Animals , Computers, Handheld , Humans , Models, Anatomic , Swine
4.
Med Image Anal ; 30: 130-143, 2016 May.
Article in English | MEDLINE | ID: mdl-26925804

ABSTRACT

The use of augmented reality in minimally invasive surgery has been the subject of much research for more than a decade. The endoscopic view of the surgical scene is typically augmented with a 3D model extracted from a preoperative acquisition. However, the organs of interest often present major changes in shape and location because of the pneumoperitoneum and patient displacement. There have been numerous attempts to compensate for this distortion between the pre- and intraoperative states. Some have attempted to recover the visible surface of the organ through image analysis and register it to the preoperative data, but this has proven insufficiently robust and may be problematic with large organs. A second approach is to introduce an intraoperative 3D imaging system as a transition. Hybrid operating rooms are becoming more and more popular, so this seems to be a viable solution, but current techniques require yet another external and constraining piece of apparatus such as an optical tracking system to determine the relationship between the intraoperative images and the endoscopic view. In this article, we propose a new approach to automatically register the reconstruction from an intraoperative CT acquisition with the static endoscopic view, by locating the endoscope tip in the volume data. We first describe our method to localize the endoscope orientation in the intraoperative image using standard image processing algorithms. Secondly, we highlight that the axis of the endoscope needs a specific calibration process to ensure proper registration accuracy. In the last section, we present quantitative and qualitative results proving the feasibility and the clinical potential of our approach.


Subject(s)
Endoscopes , Laparoscopy/instrumentation , Laparoscopy/methods , Surgery, Computer-Assisted/instrumentation , Surgery, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Equipment Design , Equipment Failure Analysis , Humans , Intraoperative Care/instrumentation , Intraoperative Care/methods , Phantoms, Imaging , Reproducibility of Results , Sensitivity and Specificity , User-Computer Interface
5.
IEEE Trans Biomed Eng ; 63(9): 1862-1873, 2016 09.
Article in English | MEDLINE | ID: mdl-26625405

ABSTRACT

Barrett's oesophagus, a premalignant condition of the oesophagus, has been on the rise in recent years. The standard diagnostic protocol for Barrett's involves obtaining biopsies at suspicious regions along the oesophagus. The localization and tracking of these biopsy sites inter-operatively (i.e., across interventions) poses a significant challenge for providing targeted treatments and tracking disease progression. This paper proposes an approach to provide guided navigation and relocalization of the biopsy sites using an electromagnetic tracking system. What distinguishes our approach from existing ones is the integration of an electromagnetic sensor at the flexible endoscope tip, so that the endoscopic camera depth inside the oesophagus can be computed in real time, allowing an image from a previous exploration at the same depth to be retrieved and displayed. We first describe our system setup and methodology for inter-operative registration. We then present three incremental experiments. The first, on synthetic data with a realistic noise model, analyzes the error bounds of our system. The second, on in vivo pig data, uses an optical tracking system to provide a pseudo ground truth; the accuracy obtained was consistent with the synthetic experiments despite the uncertainty introduced by breathing motion, and remained within the acceptable error margin according to medical experts. Finally, a third experiment used pig data to simulate a real biopsy site relocalization task and was evaluated by ten gastro-intestinal experts. It clearly demonstrated the benefit of our system for assisted guidance, improving the biopsy site retrieval rate from 47.5% to 94%.
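The core retrieval step described above, displaying the frame from a previous exploration recorded at the depth closest to the current sensor reading, can be sketched as follows (function name and data layout are illustrative, not the authors' implementation):

```python
import bisect

def nearest_frame(depths, frames, query_depth):
    """Return the frame from a previous exploration whose recorded
    insertion depth is closest to the current sensor depth.
    `depths` must be sorted ascending; `frames` is aligned with `depths`."""
    i = bisect.bisect_left(depths, query_depth)
    # The nearest recorded depth is either just below or just above the query
    candidates = [j for j in (i - 1, i) if 0 <= j < len(depths)]
    best = min(candidates, key=lambda j: abs(depths[j] - query_depth))
    return frames[best]
```

Because the reference exploration is indexed by depth once, each lookup is a binary search and easily runs at video rate.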


Subject(s)
Esophagoscopes , Esophagus/pathology , Esophagus/surgery , Image-Guided Biopsy/instrumentation , Magnetics/instrumentation , Natural Orifice Endoscopic Surgery/instrumentation , Animals , Equipment Design , Equipment Failure Analysis , Esophagus/diagnostic imaging , Image-Guided Biopsy/methods , Microscopy, Video/instrumentation , Reproducibility of Results , Sensitivity and Specificity , Swine
7.
J Craniomaxillofac Surg ; 43(9): 1723-30, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26364761

ABSTRACT

Appropriate positioning of the maxilla is critical in orthognathic surgery. As opposed to splint-based positioning, navigation systems are versatile and appropriate for assessing the vertical dimension. Bulk and disruption of the line of sight are drawbacks of optical navigation systems. Our aim was to develop and assess a novel navigation system based on electromagnetic tracking of the maxilla, including real-time registration of head movements. Since the software interface has been shown to greatly influence the accuracy of the procedure, we purposely designed and evaluated an original, user-friendly interface. Twelve surgeons navigated the phantom osteotomized maxilla to eight given target positions using the software we developed. Time and accuracy (translational error and angular error) were compared between a conventional and a navigated session. A questionnaire provided qualitative evaluation. Our system clearly reduced the variability in time and accuracy among operators. Accuracy improved for all surgeons (mean translational error difference = 1.11 mm, mean angular error difference = 1.32°). Operative time decreased for trainees; they would therefore benefit from such a system, which could also serve educational purposes. The majority of surgeons, who strongly agreed that such a navigation system would prove very helpful in complex deformities, also stated that it would be helpful in everyday orthognathic procedures.


Subject(s)
Electromagnetic Phenomena , Maxilla/surgery , Orthognathic Surgical Procedures/methods , Surgery, Computer-Assisted/methods , Humans , Imaging, Three-Dimensional/methods , Models, Anatomic , Software , User-Computer Interface
8.
IEEE Comput Graph Appl ; 35(5): 22-33, 2015.
Article in English | MEDLINE | ID: mdl-26186769

ABSTRACT

Marker tracking is used in numerous applications. Depending on the context and its constraints, tracking accuracy can be a crucial component of the application. This article first explains that the tracking accuracy depends on the illumination, which is not controlled in most applications. In particular, the authors show how corner detection can shift several pixels when lighting or background context change, even if the camera and the marker are static. Based on cross-ratio invariance, the proposed method helps reestimate the corner extraction so that the marker model's cross ratio corresponds to the one computed from the extracted corners in the image. The authors show on real data that their approach improves tracking accuracy, particularly along the camera depth axis, up to several millimeters, depending on the marker depth.
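The invariant at the heart of the method above is the cross-ratio of four collinear points, which is preserved by any projective transform (and hence by perspective imaging). A minimal illustration of the quantity being matched between the marker model and the extracted corners:

```python
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross-ratio of four collinear 2D points, invariant under projective
    transforms: (|p1 p3| * |p2 p4|) / (|p2 p3| * |p1 p4|)."""
    d = lambda a, b: np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
    return (d(p1, p3) * d(p2, p4)) / (d(p2, p3) * d(p1, p4))
```

Because the cross-ratio computed from the marker model is known exactly, any deviation in the value computed from extracted corners signals a lighting-induced detection shift that can then be corrected, which is the principle the article exploits.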

9.
Med Image Comput Comput Assist Interv ; 17(Pt 1): 423-31, 2014.
Article in English | MEDLINE | ID: mdl-25333146

ABSTRACT

Augmented reality for soft tissue laparoscopic surgery is a growing topic of interest in the medical community and has potential application in intra-operative planning and image guidance. Delivery of such systems to the operating room remains complex, with theoretical challenges related to tissue deformation and the practical limitations of imaging equipment. Current research in this area generally solves only part of the registration pipeline, relies on fiducials or manual model alignment, or assumes that tissue is static. This paper proposes a novel augmented reality framework for intra-operative planning: the approach co-registers pre-operative CT with stereo laparoscopic images using cone-beam CT and fluoroscopy as bridging modalities. It does not require fiducials or manual alignment, and it compensates for tissue deformation from insufflation and respiration while allowing the laparoscope to be navigated. The paper's theoretical and practical contributions are validated using simulated, phantom, ex vivo, in vivo and non-medical data.


Subject(s)
Imaging, Three-Dimensional/methods , Laparoscopy/methods , Robotics/methods , Surgery, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , User-Computer Interface , Algorithms , Computer Graphics , Computer Simulation , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Models, Biological , Preoperative Care/methods
10.
Hepatobiliary Surg Nutr ; 3(2): 73-81, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24812598

ABSTRACT

BACKGROUND: Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, it adds difficulty that can be reduced through computer technology. METHODS: From a patient's medical image (US, computed tomography (CT), or MRI), we have developed an Augmented Reality (AR) system that extends the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of the anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools combining direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. RESULTS: From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were carried out, demonstrating the accuracy of the 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures were performed, illustrating the potential clinical benefit of such assistance in terms of safety, but also the current limits that automatic augmented reality will overcome. CONCLUSIONS: Virtual patient modeling should become mandatory for certain interventions, which remain to be defined, such as liver surgery. Augmented reality is clearly the next step in surgical instrumentation but currently remains limited due to the complexity of organ deformations during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue thanks to the development of the hybrid OR.

11.
World J Surg ; 37(7): 1618-25, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23558758

ABSTRACT

BACKGROUND: The aim of this study was to assess the accuracy of a novel imaging modality, three-dimensional (3D) metabolic and radiologic gathered evaluation (MeRGE), for localizing parathyroid adenomas (PAs). METHODS: Consecutive patients presenting with primary hyperparathyroidism who underwent both thin-slice cervical computed tomography (CT) and (99m)Tc-sestamibi (MIBI) scanning were included. 3D-CT reconstruction was obtained using VR-RENDER, which was used to perform 3D virtual neck exploration (3D-VNE). The MIBI scan was then fused with the 3D reconstruction to obtain 3D-MeRGE. Sensitivity, specificity, and accuracy were assessed. Parathyroid gland volume and preoperative parathormone (PTH) levels were analyzed as predictive factors of correct localization (i.e., correct quadrant). RESULTS: A total of 108 cervical quadrants (27 patients) were analyzed. Sensitivities were 79.31, 75.86, 65.51, and 58.61 % with 3D-MeRGE, 3D-VNE, MIBI, and CT, respectively. Specificity was highest with CT (94.93 %) followed by 3D-VNE (92.4 %). MIBI and 3D-MeRGE had the same specificity (88.6 %). 3D-MeRGE and 3D-VNE achieved higher accuracy than MIBI or CT alone. Mean PTH values were significantly higher in patients with lesions that were correctly identified (true positive, TP) than in those whose lesions were missed (false negative, FN) with 3D-VNE (219.60 ± 212.77 vs. 98.75 ± 12.76 pg/ml; p = 0.01) and 3D-MeRGE (217.69 ± 213.76 vs. 09.75 ± 20.48 pg/ml; p = 0.02). The mean parathyroid gland volume difference between TP and FN was statistically significant with all modalities except CT. CONCLUSIONS: 3D-MeRGE and 3D-VNE showed high accuracy for localization of PAs. 3D-MeRGE performed better than MIBI or CT alone for detecting small adenomas and those with a low PTH level.
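For reference, the per-quadrant sensitivity and specificity figures reported above are the usual ratios over true/false positives and negatives; the counts in the example below are illustrative, not the study's actual breakdown:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)
```

With 108 quadrants split between diseased and healthy, each modality's figures come from counts of this form.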


Subject(s)
Adenoma/diagnosis , Hyperparathyroidism, Primary/etiology , Imaging, Three-Dimensional/methods , Parathyroid Neoplasms/diagnosis , Radiopharmaceuticals , Technetium Tc 99m Sestamibi , Tomography, X-Ray Computed , Adenoma/complications , Adenoma/surgery , Adult , Aged , Female , Follow-Up Studies , Humans , Hyperparathyroidism, Primary/surgery , Male , Middle Aged , Parathyroid Neoplasms/complications , Parathyroid Neoplasms/surgery , Parathyroidectomy , Predictive Value of Tests , Preoperative Care , Prospective Studies , Sensitivity and Specificity , Treatment Outcome
12.
IEEE Trans Biomed Eng ; 60(8): 2193-204, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23475327

ABSTRACT

The development of imaging devices adapted to small animals has opened the way to image-guided procedures in biomedical research. In this paper, we focus on automated procedures to study the effects of the recurrent administration of substances to the same animal over time. A dedicated system and the associated workflow have been designed to percutaneously position a needle into the abdominal organs of mice. Every step of the procedure has been automated: the camera calibration, the needle access planning, the robotized needle positioning, and the respiratory-gated needle insertion. Specific devices have been developed for the registration, the animal binding under anesthesia, and the skin puncture. Among the presented results, the system accuracy is particularly emphasized, both in vitro using gelose phantoms and in vivo by injecting substances into various abdominal organs. The study shows that robotic assistance could be routinely used in biomedical research laboratories to improve existing procedures, allowing automated accurate treatments and limited animal sacrifices.
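Respiratory gating, in its simplest form, allows the robot to insert the needle only while the breathing signal sits in a stable window (e.g., an end-exhalation plateau); a minimal sketch of such a trigger, with a threshold scheme assumed rather than taken from the paper:

```python
def respiratory_gate(samples, lo, hi, hold=5):
    """Return True when the last `hold` respiratory-signal samples all lie
    inside the gating window [lo, hi], indicating a stable plateau during
    which the robot may perform the needle insertion."""
    recent = samples[-hold:]
    return len(recent) == hold and all(lo <= s <= hi for s in recent)
```

Requiring several consecutive in-window samples, rather than a single one, prevents triggering on a transient crossing of the window.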


Subject(s)
Biopsy, Needle/instrumentation , Biopsy, Needle/veterinary , Image-Guided Biopsy/instrumentation , Image-Guided Biopsy/veterinary , Robotics/instrumentation , Video Recording/instrumentation , Animals , Equipment Design , Equipment Failure Analysis , Miniaturization , Reproducibility of Results , Sensitivity and Specificity
13.
Article in English | MEDLINE | ID: mdl-24579117

ABSTRACT

Minimally invasive laparoscopic surgery is widely used for the treatment of cancer and other diseases. During the procedure, gas insufflation is used to create space for laparoscopic tools and operation. Insufflation causes the organs and abdominal wall to deform significantly. Due to this large deformation, the benefit of surgical plans, which are typically based on pre-operative images, is limited for real-time navigation. In some recent work, intra-operative images, such as cone-beam CT or interventional CT, have been introduced to provide updated volumetric information after insufflation. Other works in this area have focused on simulating gas insufflation using only the pre-operative images to estimate deformation. This paper proposes a novel registration method for pre- and intra-operative 3D image fusion for laparoscopic surgery. In this approach, the deformation of the pre-operative images is driven by a biomechanical model of the insufflation process. The proposed method was validated on five synthetic data sets generated from clinical images and on three pairs of in vivo CT scans acquired from two pigs before and after insufflation. The results show that the proposed method achieved high accuracy for both the synthetic and the real insufflation data.


Subject(s)
Imaging, Three-Dimensional/methods , Laparoscopy/methods , Models, Biological , Pneumoradiography/methods , Subtraction Technique , Surgery, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Animals , Computer Simulation , Humans , Image Enhancement/methods , Multimodal Imaging/methods , Reproducibility of Results , Sensitivity and Specificity , Swine
14.
Article in English | MEDLINE | ID: mdl-24505688

ABSTRACT

The screening of oesophageal adenocarcinoma involves obtaining biopsies at different regions along the oesophagus. The localization and tracking of these biopsy sites inter-operatively poses a significant challenge for providing targeted treatments. This paper presents a novel framework for providing guided navigation to the gastro-intestinal specialist for accurate re-positioning of the endoscope at previously targeted sites. Firstly, we explain our approach to applying electromagnetic tracking to achieve this objective. Then, we show on three in-vivo porcine interventions that our system can provide accurate guidance information, which was qualitatively evaluated by five experts.


Subject(s)
Adenocarcinoma/pathology , Capsule Endoscopy/methods , Esophageal Neoplasms/pathology , Image-Guided Biopsy/methods , Magnetics/methods , Subtraction Technique , Humans , Observer Variation , Signal Processing, Computer-Assisted
15.
Surg Oncol ; 20(3): 189-201, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21802281

ABSTRACT

Minimally invasive surgery represents one of the main evolutions of surgical techniques, aimed at providing a greater benefit to the patient. However, minimally invasive surgery increases the operative difficulty, since depth perception is usually dramatically reduced, the field of view is limited, and the sense of touch is transmitted through an instrument. These drawbacks can now be reduced by computer technology guiding the surgical gesture. Indeed, from a patient's medical image (US, CT or MRI), Augmented Reality (AR) can extend the surgeon's intra-operative vision by providing a virtual transparency of the patient. AR is based on two main processes: the 3D visualization of the anatomical or pathological structures appearing in the medical image, and the registration of this visualization on the real patient. 3D visualization can be performed directly from the medical image, without a pre-processing step, thanks to volume rendering; but better results are obtained with surface rendering after organ and pathology delineation and 3D modelling. Registration can be performed interactively or automatically. Several interactive systems have been developed and applied to humans, demonstrating the benefit of AR in surgical oncology. This experience also reveals the currently limited interactivity due to soft organ movements and to interactions between surgical instruments and organs. Although current automatic AR systems show the feasibility of such a system, they still rely on specific and expensive equipment that is not available in clinical routine. Moreover, they are not robust enough, owing to the high complexity of developing a real-time registration that takes organ deformation and human movement into account. However, the latest results of automatic AR systems are extremely encouraging and show that AR will become a standard requirement for future computer-assisted surgical oncology. In this article, we explain the concept of AR and its principles. We then review the existing interactive and automatic AR systems in digestive surgical oncology, highlighting their benefits and limitations. Finally, we discuss the future evolutions and the issues that still have to be tackled so that this technology can be seamlessly integrated into the operating room.


Subject(s)
Diagnostic Imaging , General Surgery , Laparoscopy , Medical Oncology , Minimally Invasive Surgical Procedures , Neoplasms/surgery , Computer Simulation , Humans , Image Processing, Computer-Assisted , Neoplasms/pathology , Software
16.
Comput Methods Programs Biomed ; 100(2): 149-57, 2010 Nov.
Article in English | MEDLINE | ID: mdl-20371130

ABSTRACT

This article presents a patient-dependent method for real-time predictive simulation of abdominal organ positions during free breathing. The method, which considers the influence of both abdominal and thoracic breathing, requires tracking of the patient's skin and a model of the patient-specific change in diaphragm shape. From a measurement of the kinematics of the abdominal viscera during free breathing, we evaluate, through a finite element analysis, the stress field sustained by the organs for a hyperelastic mechanical behaviour using large-strain theory. From this analysis, we deduce an in vivo Poisson's ratio and a homogeneous bulk modulus for the liver and kidneys, and compare them with the in vitro values available in the literature.
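For context, the in vivo Poisson's ratio and bulk modulus deduced here are linked by the standard isotropic elasticity identities (textbook relations, not equations from the paper):

```latex
% Standard isotropic linear-elasticity relations: bulk modulus K and shear
% modulus \mu in terms of Young's modulus E and Poisson's ratio \nu.
K = \frac{E}{3(1 - 2\nu)}, \qquad \mu = \frac{E}{2(1 + \nu)}
% As \nu \to 0.5 (nearly incompressible soft tissue), K \to \infty, which is
% why an in vivo estimate of \nu strongly constrains the bulk modulus.
```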


Subject(s)
Abdomen/physiology , Kidney/anatomy & histology , Liver/anatomy & histology , Respiration , Abdomen/anatomy & histology , Biomechanical Phenomena , Humans , Kidney/diagnostic imaging , Liver/diagnostic imaging , Tomography, X-Ray Computed
17.
Inf Process Med Imaging ; 19: 443-55, 2005.
Article in English | MEDLINE | ID: mdl-17354716

ABSTRACT

In this paper, we propose an original and efficient tree matching algorithm for intra-patient hepatic vascular system registration. Vascular systems are segmented from CT-scan images acquired at different times, and then modeled as trees. The goal of this algorithm is to find common bifurcations (nodes) and vessels (edges) in both trees. Starting from the tree root, edges and nodes are iteratively matched. The algorithm works on a set of match solutions that are updated to keep the best matches thanks to a quality criterion. It is robust against topological modifications due to segmentation failures and against strong deformations. Finally, this algorithm is validated on a large synthetic database containing cases with various deformation and segmentation problems.
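The iterative matching over a bounded set of candidate solutions described above is in the spirit of a beam search from the root down; the sketch below illustrates the idea with toy data structures and scoring, and is not the authors' algorithm or quality criterion:

```python
import heapq
import itertools

def match_trees(root1, root2, kids1, kids2, node_score, beam_width=8):
    """Root-down beam search over partial node matchings of two rooted trees.

    kids1/kids2 map a node to its list of children; node_score(a, b) is the
    similarity gained by matching node a to node b. Keeps only the
    `beam_width` best partial matchings at each step and returns the
    best-scoring matching found, with its score.
    """
    # state = (score, matching dict, frontier of matched pairs left to expand)
    states = [(node_score(root1, root2), {root1: root2}, [(root1, root2)])]
    best = states[0]
    while states:
        next_states = []
        for total, matching, frontier in states:
            if not frontier:
                continue  # complete matching; already recorded in `best`
            (a, b), rest = frontier[0], frontier[1:]
            ca, cb = kids1.get(a, []), kids2.get(b, [])
            # Try every injective assignment of a's children onto b's children
            for perm in itertools.permutations(cb, min(len(ca), len(cb))):
                m, fr, s = dict(matching), list(rest), total
                for x, y in zip(ca, perm):
                    m[x] = y
                    s += node_score(x, y)
                    fr.append((x, y))
                next_states.append((s, m, fr))
        if not next_states:
            break
        states = heapq.nlargest(beam_width, next_states, key=lambda st: st[0])
        best = max(states + [best], key=lambda st: st[0])
    return best[1], best[0]
```

Keeping a bounded set of solutions rather than a single greedy match is what lets this family of methods survive missing or spurious bifurcations from segmentation failures.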


Subject(s)
Hepatic Artery/diagnostic imaging , Liver/blood supply , Liver/diagnostic imaging , Pattern Recognition, Automated/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Subtraction Technique , Tomography, X-Ray Computed/methods , Algorithms , Artificial Intelligence , Humans , Imaging, Three-Dimensional/methods , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity