Results 1 - 20 of 76
1.
J Gynecol Obstet Hum Reprod ; 52(1): 102500, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36351538

ABSTRACT

Deep infiltrating pelvic endometriosis and its surgical management are associated with a risk of major postoperative complications. Magnetic resonance imaging (MRI) is recommended preoperatively to obtain the most precise mapping of the extent of endometriotic lesions. The aim of this work was to assess the feasibility and clinical interest of 3D modeling by surface rendering as a preoperative planning tool in a patient with deep infiltrating pelvic endometriosis. We report on a 42-year-old patient with a history of endometriosis and persistent pain who underwent preoperative MRI consistent with deep infiltrating endometriosis. A 3D model of the deep infiltrating endometriosis was generated from the MRI and retrospectively compared with the intraoperative findings. The nodule's location and its relationship to the uterus and the rectum were clearly defined by the 3D model and correlated with the surgical findings. Virtual reality based on 3D models could be an interesting tool to assist in the preoperative planning of complex surgeries.
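
A minimal sketch of the kind of surface-rendering step described above: converting a binary segmentation of the lesion (derived from the MRI) into a triangle mesh with marching cubes. The mask, its size, and the 1 mm spacing are illustrative assumptions; the abstract does not name the modeling software that was actually used.

```python
import numpy as np
from skimage import measure

# Hypothetical binary segmentation mask of the nodule (1 = lesion voxel),
# assumed here to be resampled to 1 mm isotropic spacing.
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:40, 25:45, 30:50] = 1

# Marching cubes turns the voxel mask into a triangle surface (surface
# rendering), which can then be exported to a 3D or VR viewer.
verts, faces, normals, _ = measure.marching_cubes(mask, level=0.5, spacing=(1.0, 1.0, 1.0))
print(f"{verts.shape[0]} vertices, {faces.shape[0]} triangles")
```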


Subject(s)
Endometriosis , Virtual Reality , Female , Humans , Adult , Endometriosis/diagnostic imaging , Endometriosis/surgery , Endometriosis/complications , Retrospective Studies , Feasibility Studies , Magnetic Resonance Imaging/methods
2.
Int J Comput Assist Radiol Surg ; 17(1): 129-139, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34750733

ABSTRACT

PURPOSE: Fully convolutional networks (FCNs) are the most popular models for medical image segmentation. However, they do not explicitly integrate spatial organ positions, which can be crucial for proper labeling in challenging contexts. METHODS: In this work, we propose a method that combines a model representing prior probabilities of an organ's position in 3D with visual FCN predictions by means of a generalized prior-driven prediction function. The prior is also used in a self-labeling process to handle low-data regimes, in order to improve the quality of the pseudo-label selection. RESULTS: Experiments carried out on CT scans from the public TCIA pancreas segmentation dataset reveal that the resulting STIPPLE model can significantly improve performance compared to the FCN baseline, especially with few training images. We also show that STIPPLE outperforms state-of-the-art semi-supervised segmentation methods by leveraging the spatial prior information. CONCLUSIONS: STIPPLE provides a segmentation method that is effective with few labeled examples, which is crucial in the medical domain. It offers an intuitive way to incorporate absolute position information by mimicking expert annotators.
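
The abstract does not give the exact form of the generalized prior-driven prediction function, so the multiplicative fusion below, the alpha weight, and the function name combine_with_prior are illustrative assumptions; the sketch only shows the general idea of weighting FCN probabilities by a spatial prior.

```python
import numpy as np

def combine_with_prior(fcn_probs, prior_probs, alpha=1.0, eps=1e-8):
    """Fuse FCN softmax probabilities with a spatial prior (illustrative rule).

    fcn_probs:   (C, D, H, W) softmax output of the segmentation network
    prior_probs: (C, D, H, W) prior probability of each class at each voxel
    alpha:       weight of the prior (alpha=0 recovers the plain FCN output)
    """
    fused = fcn_probs * (prior_probs + eps) ** alpha
    return fused / fused.sum(axis=0, keepdims=True)

# Toy example: 2 classes (background, pancreas) on a tiny 3D grid.
rng = np.random.default_rng(0)
fcn = rng.dirichlet(np.ones(2), size=(4, 4, 4)).transpose(3, 0, 1, 2)
prior = np.stack([np.full((4, 4, 4), 0.9), np.full((4, 4, 4), 0.1)])
labels = combine_with_prior(fcn, prior).argmax(axis=0)
```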


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Pancreas/diagnostic imaging , Tomography, X-Ray Computed
3.
Comput Med Imaging Graph ; 91: 101938, 2021 07.
Article in English | MEDLINE | ID: mdl-34153879

ABSTRACT

Training deep ConvNets requires large labeled datasets. However, collecting pixel-level labels for medical image segmentation is very expensive and requires a high level of expertise. In addition, most existing segmentation masks provided by clinical experts focus on specific anatomical structures. In this paper, we propose a method dedicated to handling such partially labeled medical image datasets. We propose a strategy to identify pixels for which labels are correct, and to train fully convolutional neural networks with a multi-label loss adapted to this context. In addition, we introduce an iterative confidence self-training approach inspired by curriculum learning to relabel missing pixel labels; it relies on selecting the most confident predictions with a specifically designed confidence network that learns an uncertainty measure, which is leveraged in our relabeling process. Our approach, INERRANT (Iterative coNfidencE Relabeling of paRtial ANnoTations), is thoroughly evaluated on two public datasets (TCIA and LiTS) and one internal dataset with seven abdominal organ classes. We show that INERRANT robustly deals with partial labels, performing similarly to a model trained on all labels even for large missing-label proportions. We also highlight the importance of our iterative learning scheme and the proposed confidence measure for optimal performance. Finally, we show a practical use case where a limited number of completely labeled data are enriched by publicly available but partially labeled data.
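
A minimal sketch of training on partially labeled data: a per-class loss averaged only over pixels whose labels are known. The paper's exact multi-label loss is not given in the abstract, so this masked binary cross-entropy and the names masked_bce and known are illustrative assumptions.

```python
import numpy as np

def masked_bce(probs, targets, known, eps=1e-7):
    """Binary cross-entropy averaged only over annotated (known) entries.

    probs, targets, known: arrays of shape (C, H, W); known is 1 where the
    class label was provided by an annotator and 0 where it is missing.
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    bce = -(targets * np.log(probs) + (1 - targets) * np.log(1 - probs))
    return (bce * known).sum() / max(known.sum(), 1)

# Toy example: 3 organ classes, only classes 0 and 2 annotated on this image.
rng = np.random.default_rng(1)
probs = rng.uniform(size=(3, 8, 8))
targets = (rng.uniform(size=(3, 8, 8)) > 0.5).astype(float)
known = np.zeros((3, 8, 8))
known[[0, 2]] = 1.0
print(masked_bce(probs, targets, known))
```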


Subject(s)
Learning , Neural Networks, Computer
4.
Anaesth Crit Care Pain Med ; 40(1): 100780, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33197638

ABSTRACT

OBJECTIVE: Ground-glass opacities are the most frequent radiologic features of COVID-19 patients. We aimed to determine the feasibility of automated lung volume measurements, including ground-glass volumes, on the CT scans of suspected COVID-19 patients. Our goal was to create an automated and quantitative measure of ground-glass opacities from lung CT images that could be used clinically for diagnosis, triage and research. DESIGN: Single-centre, retrospective, observational study. MEASUREMENTS: Demographic data, respiratory support treatment (synthesised in the maximal respiratory severity score) and CT images were collected. The volume of abnormal lung parenchyma was measured with conventional semi-automatic software and with a novel automated algorithm based on voxel X-ray attenuation. We looked for the relationship between the automated and semi-automated evaluations. The association between the ground-glass opacity volume and the maximal respiratory severity score was assessed. MAIN RESULTS: Thirty-seven patients were included in the main outcome analysis. The mean durations of the automated and semi-automated volume measurement processes were 15 (2) and 93 (41) min, respectively (p = 8.05 × 10⁻⁸). The intraclass correlation coefficients between the semi-automated and automated measurements of ground-glass opacities and of restricted normally aerated lung were both greater than 0.99. The association between the automatically measured lung volumes and the maximal clinical severity score was statistically significant for the restricted normally aerated volumes (p = 0.0097, effect size: -385 mL) and for the ratio of ground-glass opacity to restricted normally aerated volumes (p = 0.027, effect size: 3.3). CONCLUSION: The feasibility and preliminary validity of automated impaired lung volume measurements in a high-density COVID-19 cluster were confirmed by our results.
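
A minimal sketch of attenuation-based lung volume measurement of the kind described above, assuming a lung mask is already available. The HU ranges, the function name lung_volumes_ml, and the toy data are illustrative assumptions, not the thresholds or algorithm used in the study.

```python
import numpy as np

def lung_volumes_ml(hu, lung_mask, voxel_volume_mm3):
    """Return (normally_aerated_ml, ground_glass_ml) from a CT volume in HU."""
    lung = hu[lung_mask]
    normal = ((lung >= -900) & (lung < -500)).sum()  # assumed "normally aerated" range
    ggo = ((lung >= -500) & (lung < -100)).sum()     # assumed "ground-glass" range
    to_ml = voxel_volume_mm3 / 1000.0
    return normal * to_ml, ggo * to_ml

# Toy example: random HU values inside a fake lung mask, 1 x 1 x 1 mm voxels.
rng = np.random.default_rng(2)
ct = rng.integers(-1000, 100, size=(32, 32, 32))
mask = np.ones_like(ct, dtype=bool)
print(lung_volumes_ml(ct, mask, voxel_volume_mm3=1.0))
```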


Subject(s)
COVID-19/diagnostic imaging , Lung Volume Measurements/methods , Lung/diagnostic imaging , Tomography, X-Ray Computed/methods , Algorithms , Automation , Feasibility Studies , Female , Humans , Male , Middle Aged , Reproducibility of Results , Retrospective Studies , Severity of Illness Index , Software , Supine Position , Time Factors , Treatment Outcome , Triage
6.
Ann Surg Open ; 1(2): e021, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33392607

ABSTRACT

OBJECTIVE: To develop consensus definitions of image-guided surgery, computer-assisted surgery, hybrid operating room, and surgical navigation systems. SUMMARY BACKGROUND DATA: The use of minimally invasive procedures has increased tremendously over the past 2 decades, but terminology related to image-guided minimally invasive procedures has not been standardized, which is a barrier to clear communication. METHODS: Experts in image-guided techniques and specialized engineers were invited to engage in a systematic process to develop consensus definitions of the key terms listed above. The process was designed following a review of common consensus-development methodologies and included participation in 4 online surveys and a post-survey face-to-face panel meeting held in Strasbourg, France. RESULTS: The experts settled on the terms computer-assisted surgery and intervention, image-guided surgery and intervention, hybrid operating room, and guidance systems, and agreed on definitions of these terms, with consensus rates of more than 80% for each term. The methodology used proved to be a compelling strategy to overcome the current difficulties related to data growth rates and technological convergence in this field. CONCLUSIONS: Our multidisciplinary collaborative approach resulted in consensus definitions that may improve communication, knowledge transfer, collaboration, and research in the rapidly changing field of image-guided minimally invasive techniques.

7.
Arch Esp Urol ; 72(3): 347-352, 2019 04.
Article in English | MEDLINE | ID: mdl-30945662

ABSTRACT

OBJECTIVE: We report a single-center experience of virtual surgical planning to demonstrate its interest and perspectives in pediatric urology. METHOD: From 2004 to April 2017, 4 patients were analyzed before intervention at our institution. All patients had undergone a low-dose CT scan. The acquisition was then processed with surface rendering software. Pre-, intra- and postoperative outcomes were retrospectively collected. RESULTS: 4 patients were operated on from 2004 to April 2017: two for oncological pathologies and two for congenital malformations. Mean age at intervention was 61 months (21-156 months). Two interventions were performed laparoscopically, with one conversion. Mean operative time was 135 min (80-180 min). There were no complications. CONCLUSION: 3D surgical planning should be mandatory in pediatric urology to perform the safest, most accurate and most effective surgery possible.


Subject(s)
Imaging, Three-Dimensional , Urologic Surgical Procedures , Urology , Child , Child, Preschool , Humans , Retrospective Studies , Software , Tomography, X-Ray Computed
9.
Surg Innov ; 26(1): 5-20, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30270757

ABSTRACT

Orthognathic surgery belongs to the scope of maxillofacial surgery. It treats dentofacial deformities consisting of a discrepancy between the facial bones (upper and lower jaws). Such impairment affects chewing, talking, and breathing and can ultimately result in the loss of teeth. Orthognathic surgery restores facial harmony and dental occlusion through bone cutting, repositioning, and fixation. However, in routine practice, we face the limitations of conventional tools and the lack of intraoperative assistance. These limitations occur at every step of the surgical workflow: preoperative planning, simulation, and intraoperative navigation. The aim of this research was to provide novel tools to improve simulation and navigation. We first developed a semiautomated segmentation pipeline allowing accurate and time-efficient patient-specific 3D modeling from computed tomography scans, which is mandatory for surgical planning. This step improved processing time by a factor of 6 compared with interactive segmentation, with a 1.5-mm distance error. Next, we developed software to simulate the postoperative outcome on facial soft tissues. Volume meshes were processed from segmented DICOM images, and the Bullet open-source mechanical engine was used together with a mass-spring model to reach a postoperative simulation accuracy <1 mm. Our toolset was completed by the development of a real-time navigation system using minimally invasive electromagnetic sensors. This navigation system featured a novel user-friendly interface based on augmented virtuality that improved surgical accuracy and operative time, especially for trainee surgeons, therefore demonstrating its educational benefits. The resulting software suite could enhance operative accuracy and surgeon education for improved patient care.
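
To illustrate the mass-spring soft-tissue model named above, here is a minimal sketch of one integration step of a generic mass-spring system in NumPy. The authors used the Bullet engine; this standalone version, its parameters (k, damping, dt), and the toy two-node example are illustrative assumptions only.

```python
import numpy as np

def mass_spring_step(pos, vel, springs, rest_len, k, mass, damping, dt):
    """pos, vel: (N, 3); springs: (M, 2) index pairs; rest_len: (M,)."""
    forces = np.zeros_like(pos)
    d = pos[springs[:, 1]] - pos[springs[:, 0]]           # spring vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    f = k * (length - rest_len[:, None]) * d / np.maximum(length, 1e-9)
    np.add.at(forces, springs[:, 0], f)                   # stretched spring pulls endpoints together
    np.add.at(forces, springs[:, 1], -f)
    vel = (vel + dt * forces / mass) * damping             # semi-implicit Euler with velocity damping
    return pos + dt * vel, vel

# Toy example: two nodes connected by one spring, stretched beyond its rest length.
pos = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
vel = np.zeros_like(pos)
springs = np.array([[0, 1]])
rest = np.array([1.0])
pos, vel = mass_spring_step(pos, vel, springs, rest, k=10.0, mass=1.0, damping=0.98, dt=0.01)
```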


Subject(s)
Computer Simulation , Imaging, Three-Dimensional , Orthognathic Surgical Procedures/methods , Patient-Specific Modeling , Software , Surgery, Computer-Assisted/methods , France , Hospitals, University , Humans , Maxillofacial Abnormalities/diagnostic imaging , Maxillofacial Abnormalities/surgery , Orthognathic Surgery/standards , Orthognathic Surgery/trends , Orthognathic Surgical Procedures/instrumentation , Sensitivity and Specificity
10.
IEEE Trans Med Imaging ; 38(1): 79-89, 2019 01.
Article in English | MEDLINE | ID: mdl-30010552

ABSTRACT

Contemporary endoscopic simultaneous localization and mapping (SLAM) methods accurately compute endoscope poses; however, they only provide a sparse 3-D reconstruction that poorly describes the surgical scene. We propose a novel dense SLAM method whose qualities are: 1) monocular, requiring only RGB images of a handheld monocular endoscope; 2) fast, providing endoscope positional tracking and 3-D scene reconstruction, running in parallel threads; 3) dense, yielding an accurate dense reconstruction; 4) robust to the severe illumination changes, poor texture and small deformations that are typical in endoscopy; and 5) self-contained, without needing any fiducials or external tracking devices, so it can be smoothly integrated into the surgical workflow. It works as follows. First, the system segments clusters of video frames according to parallax criteria, and accurate cluster-frame poses are estimated using the sparse SLAM feature matches. Next, dense matches between cluster frames are computed in parallel by a variational approach that combines zero-mean normalized cross-correlation and a gradient Huber-norm regularizer. This combination copes with challenging lighting and textures at an affordable time budget on a modern GPU. It can outperform pure stereo reconstructions, because the frame clusters can provide larger parallax from the endoscope's motion. We provide an extensive experimental validation on real sequences of the porcine abdominal cavity, both in vivo and ex vivo. We also show a qualitative evaluation on human liver. In addition, we show a comparison with other dense SLAM methods, showing the performance gain in terms of accuracy, density, and computation time.
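
A minimal sketch of the photometric term named above, zero-mean normalized cross-correlation (ZNCC) between two image patches. The variational optimization with the gradient Huber-norm regularizer is not reproduced; the function name and toy patches are illustrative assumptions.

```python
import numpy as np

def zncc(a, b, eps=1e-9):
    """ZNCC in [-1, 1]; invariant to affine illumination changes between patches."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

# Toy example: a patch compared with a brightened, offset copy of itself scores ~1.0.
rng = np.random.default_rng(3)
patch = rng.uniform(size=(7, 7))
print(zncc(patch, 1.5 * patch + 20.0))
```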


Subject(s)
Augmented Reality , Endoscopy/methods , Imaging, Three-Dimensional/methods , Abdominal Cavity/diagnostic imaging , Algorithms , Animals , Humans , Liver/diagnostic imaging , Swine
11.
Surg Oncol Clin N Am ; 28(1): 31-44, 2019 01.
Article in English | MEDLINE | ID: mdl-30414680

ABSTRACT

Virtual reality (VR) and augmented reality (AR) in complex surgery are evolving technologies enabling improved preoperative planning and intraoperative navigation. The basis of these technologies is a computer-based generation of a patient-specific 3-dimensional model from Digital Imaging and Communications in Medicine (DICOM) data. This article provides a state-of-the-art overview on the clinical use of this technology with a specific focus on hepatic surgery. Although VR and AR are still in an evolving stage with only some clinical application today, these technologies have the potential to become a key factor in improving preoperative and intraoperative decision making.


Subject(s)
Liver Neoplasms/diagnostic imaging , Liver Neoplasms/surgery , Minimally Invasive Surgical Procedures/methods , Surgery, Computer-Assisted/methods , Humans , Imaging, Three-Dimensional/methods , Tomography, X-Ray Computed/methods
12.
Surg Endosc ; 32(8): 3697-3705, 2018 08.
Article in English | MEDLINE | ID: mdl-29725766

ABSTRACT

BACKGROUND: The aim of this study was to categorize splenic artery and vein configurations and to examine their influence on suprapancreatic lymph node (LN) dissection in laparoscopic gastrectomy. METHODS: Digital Imaging and Communications in Medicine images from 169 advanced cancer patients who underwent laparoscopic gastrectomy with D2 dissection were used to reconstruct the perigastric vessels in 3D using a volume rendering program (VP Planning®). Splenic artery and vein configurations were classified according to the relative position of their lowest part with respect to the pancreas. The number of resected LNs and surgical outcomes were analyzed. RESULTS: The splenic artery was categorized as superficial (36.7%), middle (49.1%), or concealed (14.2%), and the splenic vein was categorized as superior (6.5%), middle (42.0%), or inferior to the pancreas (51.5%). The number of resected LNs around the proximal half of the splenic artery (#11p) and the proportion of splenic veins located inferior to the pancreas were significantly higher for splenic arteries of the concealed type. LN metastasis at station #7 was an independent risk factor for LN metastasis at station #11p (p = 0.010). Concealed types showed a tendency towards longer operating times, more blood loss, longer hospital stays, and higher postoperative morbidity. CONCLUSION: Concealed types of splenic artery are associated with increased difficulty in the dissection of LN station #11p around the splenic artery. A 3D volume rendering program is a useful tool to rapidly and intuitively identify individual anatomical variations, to plan a tailored surgical strategy, and to predict potential challenges.


Subject(s)
Gastrectomy/methods , Laparoscopy , Lymph Node Excision/methods , Splenic Artery/diagnostic imaging , Aged , Female , Humans , Imaging, Three-Dimensional , Male , Software , Splenic Artery/anatomy & histology , Splenic Vein/anatomy & histology , Splenic Vein/diagnostic imaging , Stomach Neoplasms/surgery , Tomography, X-Ray Computed
13.
Dis Colon Rectum ; 61(6): 719-723, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29722730

ABSTRACT

BACKGROUND: Medical software can build a digital clone of the patient with 3-dimensional reconstruction of Digital Imaging and Communications in Medicine images. The virtual clone can be manipulated (rotations, zooms, etc.), and the various organs can be selectively displayed or hidden to facilitate virtual reality preoperative surgical exploration and planning. OBJECTIVE: We present preliminary cases showing the potential interest of virtual reality in colorectal surgery for both diverticular disease and colonic neoplasms. DESIGN: This was a single-center feasibility study. SETTINGS: The study was conducted at a tertiary care institution. PATIENTS: Two patients underwent a laparoscopic left hemicolectomy for diverticular disease, and 1 patient underwent a laparoscopic right hemicolectomy for cancer. The 3-dimensional virtual models were obtained from preoperative CT scans. The virtual model was used to perform preoperative exploration and planning. Intraoperatively, one of the surgeons manipulated the virtual reality model using the touch screen of a tablet, and the model was interactively displayed to the surgical team. MAIN OUTCOME MEASURES: The main outcome was the evaluation of the precision of virtual reality in colorectal surgery planning and exploration. RESULTS: In 1 patient undergoing laparoscopic left hemicolectomy, an abnormal origin of the left colic artery, arising as an extremely short common trunk from the inferior mesenteric artery, was clearly seen in the virtual reality model. This finding had been missed by the radiologist on the CT scan. The precise identification of this vascular variant enabled a safe and adequate surgery. In the remaining cases, the virtual reality model helped to precisely assess the vascular anatomy, providing key landmarks for a safer dissection. LIMITATIONS: A larger sample size would be necessary to definitively assess the efficacy of virtual reality in colorectal surgery. CONCLUSIONS: Virtual reality can provide an enhanced understanding of crucial anatomical details, both preoperatively and intraoperatively, which could contribute to improved safety in colorectal surgery.


Subject(s)
Colonic Neoplasms/surgery , Colorectal Surgery/instrumentation , Diverticular Diseases/surgery , Virtual Reality , Adult , Colectomy/methods , Colorectal Surgery/methods , Female , Humans , Imaging, Three-Dimensional , Intraoperative Care/instrumentation , Laparoscopy/methods , Male , Mesenteric Artery, Inferior/diagnostic imaging , Mesenteric Artery, Inferior/surgery , Middle Aged , Preoperative Care/instrumentation , Surgery, Computer-Assisted/methods , Tomography, X-Ray Computed/statistics & numerical data , User-Computer Interface
15.
Ann Surg ; 266(5): 890-897, 2017 11.
Article in English | MEDLINE | ID: mdl-28742709

ABSTRACT

OBJECTIVE: We aimed to prospectively evaluate NIR-C, VR-AR, and x-ray intraoperative cholangiography (IOC) during robotic cholecystectomy. BACKGROUND: Near-infrared cholangiography (NIR-C) provides real-time, radiation-free biliary anatomy enhancement. Three-dimensional virtual reality (VR) biliary anatomy models can be obtained via software manipulation of magnetic resonance cholangiopancreatography, enabling preoperative VR exploration and intraoperative augmented reality (AR) navigation. METHODS: Fifty-eight patients were scheduled for cholecystectomy for gallbladder lithiasis. VR surgical planning was performed on virtual models. At anesthesia induction, indocyanine green was injected intravenously. AR navigation was obtained by overlaying the virtual model onto real-time images. Before and after Calot triangle dissection, NIR-C was obtained by turning the camera to NIR mode. Finally, an IOC was performed. The performance of the 3 modalities was evaluated and image quality was assessed with a Likert-scale questionnaire. RESULTS: Three-dimensional VR planning enabled the identification of 12 anatomical variants in 8 patients, of which only 7 were correctly reported by the radiologists (P = 0.037). A dangerous variant identified at VR prompted a "fundus first" approach. The cystic-common bile duct junction was visualized before Calot triangle dissection at VR in 100% of cases, at NIR-C in 98.15%, and at IOC in 96.15%. Mean time to obtain relevant images was shorter with NIR-C versus AR (P = 0.008) and versus IOC (P = 0.00000003). Image quality scores were lower with NIR-C versus AR (P = 0.018) and versus IOC (P < 0.0001). CONCLUSIONS: This high-tech protocol illustrates the multimodal imaging of biliary anatomy towards precision cholecystectomy. These visualization techniques could complement one another to reduce the likelihood of biliary injuries (NCT01881399).


Subject(s)
Cholecystectomy/methods , Cholecystolithiasis/surgery , Robotic Surgical Procedures/methods , Surgery, Computer-Assisted/methods , Adult , Aged , Cholangiography , Cholangiopancreatography, Magnetic Resonance , Female , Humans , Imaging, Three-Dimensional , Male , Middle Aged , Models, Anatomic , Optical Imaging , Preoperative Care/methods , Prospective Studies , Radiography, Interventional , Spectroscopy, Near-Infrared , Treatment Outcome , User-Computer Interface
16.
Biomed Mater Eng ; 28(s1): S107-S111, 2017.
Article in English | MEDLINE | ID: mdl-28372285

ABSTRACT

Minimally invasive surgery restricts the surgeon's information to a two-dimensional digital representation, without the corresponding physical information obtained in open surgery. To overcome these drawbacks, real-time augmented reality interfaces including the true mechanical behaviour of organs, which depends on their internal microstructure, need to be developed. For the case of tumour resection, we present here a finite element numerical study of the mechanical behaviour of the liver, including the effects of its own vascularisation, using numerical indentation tests in order to extract the corresponding macroscopic behaviour. The obtained numerical results show excellent correlation of the corresponding force-displacement curves when compared with macroscopic experimental data available in the literature.


Subject(s)
Computer Simulation , Imaging, Three-Dimensional/methods , Liver/blood supply , Liver/surgery , Minimally Invasive Surgical Procedures/methods , Models, Anatomic , Biomechanical Phenomena , Finite Element Analysis , Humans , Liver/anatomy & histology
17.
Med Image Anal ; 37: 66-90, 2017 04.
Article in English | MEDLINE | ID: mdl-28160692

ABSTRACT

This article provides a comprehensive review of the different methods proposed in the literature concerning augmented reality in intra-abdominal minimally invasive surgery (also known as laparoscopic surgery). A solid background of surgical augmented reality is first provided in order to support the survey. Then, the various methods of laparoscopic augmented reality, as well as their key tasks, are categorized in order to better grasp the current landscape of the field. Finally, the various issues gathered from these reviewed approaches are organized in order to outline the remaining challenges of augmented reality in laparoscopic surgery.


Subject(s)
Laparoscopy/methods , Laparoscopy/trends , Algorithms , Animals , Humans , Reproducibility of Results
18.
Int J Comput Assist Radiol Surg ; 12(9): 1543-1559, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28097603

ABSTRACT

PURPOSE: We aim to develop a framework for the validation of a subject-specific multi-physics model of liver tumor radiofrequency ablation (RFA). METHODS: The RFA computation becomes subject-specific after several levels of personalization: geometrical and biophysical (hemodynamics, heat transfer and an extended cellular necrosis model). We present a comprehensive experimental setup combining multimodal, pre- and postoperative anatomical and functional images, as well as the interventional monitoring of intra-operative signals: the temperature and delivered power. RESULTS: To exploit this dataset, an efficient processing pipeline is introduced, which copes with image noise, variable resolution and anisotropy. The validation study includes twelve ablations from five healthy pig livers: a mean point-to-mesh error between the predicted and actual ablation extents of 5.3 ± 3.6 mm is achieved. CONCLUSION: This enables an end-to-end preclinical validation framework that considers the available dataset.
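
A minimal sketch of the kind of surface-distance metric reported above, approximated here as a symmetric nearest-neighbour distance between two surface point sets. The paper's exact point-to-mesh computation (against triangle faces) is not reproduced; the function name and the sphere-sampling toy data are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(pts_a, pts_b):
    """Mean of the two directed nearest-neighbour distances between point sets (N, 3)."""
    d_ab = cKDTree(pts_b).query(pts_a)[0]
    d_ba = cKDTree(pts_a).query(pts_b)[0]
    return 0.5 * (d_ab.mean() + d_ba.mean())

# Toy example: two dense samplings of the same unit sphere give a small error.
rng = np.random.default_rng(4)
a = rng.normal(size=(500, 3)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal(size=(500, 3)); b /= np.linalg.norm(b, axis=1, keepdims=True)
print(mean_surface_distance(a, b))
```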


Subject(s)
Catheter Ablation/methods , Liver Neoplasms/surgery , Liver/surgery , Animals , Hemodynamics , Models, Animal , Necrosis/surgery , Swine
19.
Int J Comput Assist Radiol Surg ; 12(1): 1-11, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27376720

ABSTRACT

PURPOSE: An augmented reality system to visualize a 3D preoperative anatomical model on the intra-operative patient is proposed. The only hardware requirement is a commercial tablet-PC equipped with a camera. Thus, no external tracking device or artificial landmarks on the patient are required. METHODS: We resort to visual SLAM to provide markerless real-time tablet-PC camera localization with respect to the patient. The preoperative model is registered to the patient through 4-6 anchor points. The anchors correspond to anatomical references selected on the tablet-PC screen at the beginning of the procedure. RESULTS: Accurate and real-time preoperative model alignment (approximately 5-mm mean FRE and TRE) was achieved, even when the anchors were not visible in the current field of view. The system has been experimentally validated on human volunteers, in vivo pigs and a phantom. CONCLUSIONS: The proposed system can be smoothly integrated into the surgical workflow because it: (1) operates in real time, (2) requires minimal additional hardware (only a tablet-PC with a camera), (3) is robust to occlusion, and (4) requires minimal interaction from the medical staff.
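
A minimal sketch of point-based rigid registration from a handful of anchor points (Kabsch/Procrustes) and of the fiducial registration error (FRE) reported above. This only illustrates the registration and metric; it is not the authors' SLAM pipeline, and the toy anchors and transform are illustrative assumptions.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst, both (N, 3)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

# Toy example: 5 anchor points under a known rigid motion; FRE should be ~0.
rng = np.random.default_rng(5)
anchors = rng.uniform(size=(5, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
observed = anchors @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_register(anchors, observed)
fre = np.linalg.norm(anchors @ R.T + t - observed, axis=1).mean()
print(f"FRE = {fre:.2e}")
```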


Subject(s)
Imaging, Three-Dimensional/methods , Phantoms, Imaging , Surgery, Computer-Assisted/methods , Anatomic Landmarks , Animals , Computers, Handheld , Humans , Models, Anatomic , Swine
20.
Surg Endosc ; 31(3): 1451-1460, 2017 03.
Article in English | MEDLINE | ID: mdl-27495341

ABSTRACT

BACKGROUND: Intraoperative liver segmentation can be obtained by means of percutaneous intra-portal injection of a fluorophore and illumination with a near-infrared light source. However, the percutaneous approach is challenging in the minimally invasive setting. We aimed to evaluate the feasibility of fluorescence liver segmentation by superselective intra-hepatic arterial injection of indocyanine green (ICG). MATERIALS AND METHODS: Eight pigs (mean weight: 26.01 ± 5.21 kg) were involved. Procedures were performed in a hybrid experimental operative suite equipped with the Artis Zeego® multiaxis robotic angiography system. A pneumoperitoneum was established and four laparoscopic ports were introduced. The celiac trunk was catheterized, and a microcatheter was advanced into different segmental hepatic artery branches. A near-infrared laparoscope (D-Light P, Karl Storz) was used to detect the fluorescent signal. To assess the correspondence between the arterial-based fluorescence demarcation and liver volume, metallic markers were placed along the fluorescent border, followed by 3D CT scanning after injecting intra-arterial radiological contrast (n = 3). To assess the correspondence between arterial and portal supplies, percutaneous intra-portal angiography and intra-arterial angiography were performed simultaneously (n = 1). RESULTS: A bright fluorescence signal enhancing the demarcation of the target segments was obtained from 0.1 mg/mL, within a matter of seconds. The correspondence between the volume of hepatic segments and arterial territories was confirmed by CT angiography. Higher background fluorescence noise was found after positive staining by intra-portal ICG injection, due to parenchymal accumulation and porto-systemic shunting. CONCLUSIONS: In this experimental setting, intra-hepatic arterial ICG injection rapidly highlighted the borders of the hepatic target segments, with a better signal-to-background ratio than portal vein injection.


Subject(s)
Angiography/methods , Coloring Agents , Fluorescent Dyes , Hepatic Artery/diagnostic imaging , Indocyanine Green , Liver/diagnostic imaging , Optical Imaging/methods , Animals , Feasibility Studies , Infrared Rays , Injections, Intra-Arterial , Intraoperative Care/methods , Liver/blood supply , Portal Vein , Staining and Labeling , Sus scrofa , Swine