Results 1 - 15 of 15
1.
IEEE Trans Med Imaging ; PP, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38865220

ABSTRACT

Minimally invasive surgery (MIS) remains technically demanding due to the difficulty of tracking hidden critical structures within the moving anatomy of the patient. In this study, we propose a soft tissue deformation tracking augmented reality (AR) navigation pipeline for laparoscopic surgery of the kidneys. The proposed navigation pipeline addresses two main sub-problems: initial registration and deformation tracking. Our method utilizes preoperative MR or CT data and binocular laparoscopes without any additional interventional hardware. The initial registration is resolved through a probabilistic rigid registration algorithm and elastic compensation based on dense point cloud reconstruction. For deformation tracking, the sparse feature point displacement vector field continuously provides temporal boundary conditions for the biomechanical model. To enhance the accuracy of the displacement vector field, a novel feature point selection strategy based on deep learning is proposed. Moreover, an ex-vivo experimental method for internal structure error assessment is presented. The ex-vivo experiments indicate an external surface reprojection error of 4.07 ± 2.17 mm and a maximum mean absolute error for internal structures of 2.98 mm. In-vivo experiments indicate mean absolute errors of 3.28 ± 0.40 mm and 1.90 ± 0.24 mm, respectively. The combined qualitative and quantitative findings indicate the potential of our AR-assisted navigation system to improve the clinical application of laparoscopic kidney surgery.

2.
IEEE Trans Biomed Eng ; PP, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38683702

ABSTRACT

OBJECTIVE: Intraoperative liver deformation poses a considerable challenge during liver surgery, causing significant errors in image-guided surgical navigation systems. This study addresses a critical non-rigid registration problem in liver surgery: the alignment of intrahepatic vascular trees. The goal is to deform the complete vascular shape extracted from a preoperative Computed Tomography (CT) volume, aligning it with sparse vascular contour points obtained from intraoperative ultrasound (iUS) images. Challenges arise due to the intricate nature of slender vascular branches, causing existing methods to struggle with accuracy and vascular self-intersection. METHODS: We present a novel non-rigid sparse-dense registration pipeline structured in a coarse-to-fine fashion. In the initial coarse registration stage, we introduce a parametrized deformation graph and a Welsch function-based error metric to enhance the convergence and robustness of non-rigid registration. For the fine registration stage, we propose an automatic curvature-based algorithm to detect and eliminate overlapping regions. Subsequently, we generate the complete vascular shape using posterior computation of a Gaussian Process Shape Model. RESULTS: Experimental results using simulated data demonstrate the accuracy and robustness of our proposed method. Evaluation results on the target registration error of tumors highlight the clinical significance of our method in tumor location computation. Comparative analysis against related methods reveals the superior accuracy and competitive efficiency of our approach. Moreover, ex vivo swine liver experiments and clinical experiments were conducted to evaluate the method's performance. CONCLUSION: The experimental results emphasize the accurate and robust performance of our proposed method. SIGNIFICANCE: Our proposed non-rigid registration method holds significant application potential in clinical practice.
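The Welsch error metric named in METHODS is a standard robust penalty that saturates for large residuals, which is what lets a coarse registration tolerate outlier correspondences. A common parametrization is sketched below; this is an illustration only, and the paper's exact form and weighting scheme may differ:

```python
import numpy as np

def welsch(residuals, nu=1.0):
    """Welsch robust penalty: psi(r) = (nu^2 / 2) * (1 - exp(-r^2 / nu^2)).

    Grows like r^2 / 2 for small residuals but saturates at nu^2 / 2,
    so outlier correspondences cannot dominate the registration energy.
    """
    r2 = np.asarray(residuals, dtype=float) ** 2
    return 0.5 * nu**2 * (1.0 - np.exp(-r2 / nu**2))
```

Annealing `nu` from large to small recovers near-least-squares behavior early and strong outlier rejection late, a common schedule in Welsch-based registration.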

3.
Comput Methods Programs Biomed ; 244: 107995, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38157826

ABSTRACT

BACKGROUND AND OBJECTIVE: With the urgent demands for rapid and precise localization of pulmonary nodules in procedures such as transthoracic puncture biopsy and thoracoscopic surgery, many surgical navigation and robotic systems are applied in the clinical practice of thoracic operations. However, currently available positioning methods have certain limitations, including high radiation exposure, large errors from respiratory motion, and complicated, time-consuming procedures. METHODS: To address these issues, a preoperative computed tomography (CT) image-guided robotic system for transthoracic puncture was proposed in this study. Firstly, an algorithm for puncture path planning based on constraints from clinical knowledge was developed. This algorithm enables the calculation of Pareto optimal solutions for multiple clinical targets concerning puncture angle, puncture length, and distance from hazardous areas. Secondly, to eliminate intraoperative radiation exposure, a fast registration method based on preoperative CT and gated respiration compensation was proposed. The registration process can be completed by directly selecting points on the skin near the sternum using a hand-held probe. Gating detection and joint optimization algorithms are then performed on the collected point cloud data to compensate for errors from respiratory motion. Thirdly, to enhance accuracy and intraoperative safety, the puncture guide was utilized as an end effector to restrict the movement of the optically tracked needle, so that risky actions involving patient contact are strictly limited. RESULTS: The proposed system was evaluated through phantom experiments on our custom-designed simulation test platform for patient respiratory motion to assess its accuracy and feasibility. The results demonstrated an average target point error (TPE) of 2.46 ± 0.68 mm and an angle error (AE) of 1.49 ± 0.45° for the robotic system.
CONCLUSIONS: In conclusion, our proposed system ensures accuracy, surgical efficiency, and safety while also reducing needle insertions and radiation exposure in transthoracic puncture procedures, thus offering substantial potential for clinical application.
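The Pareto optimal path planning described above amounts to non-dominated filtering over candidate paths. The sketch below illustrates only the general idea; the function name and the negation of the hazard distance are illustrative assumptions, not the paper's implementation:

```python
def pareto_front(paths):
    """Return the non-dominated candidate puncture paths.

    Each path is a tuple of objectives to MINIMIZE, e.g.
    (puncture_angle, puncture_length, -distance_from_hazard);
    the hazard distance is negated because larger is better.
    A path is dominated if some other path is no worse in every
    objective and strictly better in at least one.
    """
    front = []
    for p in paths:
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p)))
            and any(q[k] < p[k] for k in range(len(p)))
            for q in paths
        )
        if not dominated:
            front.append(p)
    return front
```

The surviving front is what a planner would present to the surgeon for the final trade-off decision.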


Subject(s)
Robotic Surgical Procedures , Surgery, Computer-Assisted , Humans , Robotic Surgical Procedures/methods , Biopsy, Needle , Surgery, Computer-Assisted/methods , Punctures , Algorithms
4.
Comput Biol Med ; 166: 107560, 2023 Oct 11.
Article in English | MEDLINE | ID: mdl-37847946

ABSTRACT

BACKGROUND: The key to successful dental implant surgery is to place the implants accurately along the pre-operatively planned paths. The application of surgical navigation systems can significantly improve the safety and accuracy of implantation. However, the frequent shifting of the surgeon's view between the surgical site and the computer screen causes difficulties, which mixed-reality technology is expected to solve: by wearing a HoloLens device, the surgeon can see the virtual three-dimensional (3D) image aligned with the actual surgical site in the same field of view. METHODS: This study utilized mixed reality technology to enhance dental implant surgery navigation. Our first step was reconstructing a virtual 3D model from pre-operative cone-beam CT (CBCT) images. We then obtained the relative position between objects using the navigation device and the HoloLens camera. Via virtual-actual registration algorithms, the transformation matrices between the HoloLens devices and the navigation tracker were acquired through HoloLens-tracker registration, and the transformation matrices between the virtual model and the patient phantom through image-phantom registration. In addition, a surgical drill calibration algorithm assisted in acquiring the transformation matrices between the surgical drill and the patient phantom. These algorithms allow real-time tracking of the surgical drill's location and orientation relative to the patient phantom under the navigation device. With the aid of the HoloLens 2, virtual 3D images and actual patient phantoms can be aligned accurately, providing surgeons with a clear visualization of the implant path. RESULTS: Phantom experiments were conducted using 30 patient phantoms, with a total of 102 dental implants inserted.
Comparisons between the actual implant paths and the pre-operatively planned implant paths showed that our system achieved a coronal deviation of 1.507 ± 0.155 mm, an apical deviation of 1.542 ± 0.143 mm, and an angular deviation of 3.468 ± 0.339°. The deviation was not significantly different from that of navigation-guided dental implant placement but better than that of freehand dental implant placement. CONCLUSION: Our proposed system realizes the integration of the pre-operatively planned dental implant paths and the patient phantom, which helps surgeons achieve adequate accuracy in traditional dental implant surgery. Furthermore, this system is expected to be applicable to animal and cadaveric experiments in further studies.

5.
Article in English | MEDLINE | ID: mdl-37204961

ABSTRACT

Orthodontic treatment is a lengthy process that requires regular in-person dental monitoring, making remote dental monitoring a viable alternative when face-to-face consultation is not possible. In this study, we propose an improved 3D teeth reconstruction framework that automatically restores the shape, arrangement, and dental occlusion of upper and lower teeth from five intra-oral photographs to aid orthodontists in visualizing the condition of patients in virtual consultations. The framework comprises a parametric model that leverages statistical shape modeling to describe the shape and arrangement of teeth, a modified U-net that extracts teeth contours from intra-oral images, and an iterative process that alternates between finding point correspondences and optimizing a compound loss function to fit the parametric teeth model to predicted teeth contours. We perform a five-fold cross-validation on a dataset of 95 orthodontic cases and report an average Chamfer distance of 1.0121 mm² and an average Dice similarity coefficient of 0.7672 on all the test samples in the cross-validation, demonstrating a significant improvement compared with the previous work. Our teeth reconstruction framework provides a feasible solution for visualizing 3D teeth models in remote orthodontic consultations.
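The two evaluation metrics reported above are standard and easy to state precisely. A minimal sketch follows; note that the Chamfer distance sums squared nearest-neighbour distances, which is why it carries units of mm², though the exact normalization used in the paper may differ:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N,3) and q (M,3):
    mean squared nearest-neighbour distance, summed over both directions."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())
```

Chamfer distance measures surface-to-surface agreement of the reconstructed teeth, while Dice measures volumetric/areal overlap of the segmentations.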

6.
IEEE Trans Med Imaging ; 42(9): 2751-2762, 2023 09.
Article in English | MEDLINE | ID: mdl-37030821

ABSTRACT

Pelvic fracture is a severe trauma with a high rate of morbidity and mortality. Accurate and automatic diagnosis and surgical planning of pelvic fracture require effective identification and localization of the fracture zones. This is a challenging task due to the complexity of pelvic fractures, which often exhibit multiple fragments and sites, large fragment size differences, and irregular morphology. We have developed a novel two-stage method for the automatic identification and localization of complex pelvic fractures. Our method is unique in that it allows us to combine the symmetry properties of the pelvic anatomy and capture the symmetric feature differences caused by the fracture on both the left and right sides, thereby overcoming the limitations of existing methods which consider only image or geometric features. It implements supervised contrastive learning with a novel Siamese deep neural network, which consists of two weight-shared branches with a structural attention mechanism, to minimize the confusion of local complex structures of the pelvic bones with the fracture zones. A structure-focused attention (SFA) module is designed to capture the spatial structural features and enhance the recognition of fracture zones. Comprehensive experiments on 103 clinical CT scans from the publicly available dataset CTPelvic1K show that our method achieves a mean accuracy and sensitivity of 0.92 and 0.93, which are superior to those reported with three SOTA contrastive learning methods and five advanced classification networks, demonstrating the effectiveness of identifying and localizing various types of complex pelvic fractures from clinical CT images.
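The contrastive objective driving the weight-shared Siamese branches can be illustrated with the classic pairwise contrastive loss (Hadsell-style). This sketch is illustrative only and is not the specific supervised contrastive loss used in the paper:

```python
import numpy as np

def contrastive_loss(dists, same_label, margin=1.0):
    """Classic pairwise contrastive loss for a Siamese network.

    dists: embedding distances between paired left/right-side features;
    same_label: True where the pair should match (no fracture asymmetry).
    Matching pairs are pulled together; mismatching pairs are pushed
    beyond `margin`.
    """
    d = np.asarray(dists, dtype=float)
    same = np.asarray(same_label, dtype=bool)
    pos = d ** 2                            # pull matching pairs together
    neg = np.maximum(margin - d, 0.0) ** 2  # push mismatches past the margin
    return np.where(same, pos, neg).mean()
```

Under this kind of objective, symmetric healthy regions map to nearby embeddings while fracture-induced asymmetries are driven apart, which is the intuition behind the left/right comparison described above.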


Subject(s)
Fractures, Bone , Pelvic Bones , Humans , Fractures, Bone/diagnostic imaging , Fractures, Bone/surgery , Pelvic Bones/diagnostic imaging , Pelvic Bones/injuries , Tomography, X-Ray Computed , Neural Networks, Computer
7.
Int J Comput Assist Radiol Surg ; 18(9): 1715-1724, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37031310

ABSTRACT

PURPOSE: The treatment of pelvic and acetabular fractures remains technically demanding, and traditional surgical navigation systems suffer from hand-eye mis-coordination. This paper describes a multi-view interactive virtual-physical registration method to enhance the surgeon's depth perception and a mixed reality (MR)-based surgical navigation system for pelvic and acetabular fracture fixation. METHODS: First, the pelvic structure is reconstructed by segmentation in a preoperative CT scan, and an insertion path for the percutaneous LC-II screw is computed. A custom hand-held registration cube is used for virtual-physical registration. Three strategies are proposed to improve the surgeon's depth perception: vertices alignment, tremble compensation, and multi-view averaging. During navigation, distance and angular deviation visual cues are updated to help the surgeon with the guide wire insertion. The methods have been integrated into an MR module in a surgical navigation system. RESULTS: Phantom experiments were conducted. Ablation experimental results demonstrated the effectiveness of each strategy in the virtual-physical registration method. The proposed method achieved the best accuracy in comparison with related works. For percutaneous guide wire placement, our system achieved a mean bony entry point error of 2.76 ± 1.31 mm, a mean bony exit point error of 4.13 ± 1.74 mm, and a mean angular deviation of 3.04 ± 1.22°. CONCLUSIONS: The proposed method can improve the virtual-physical fusion accuracy. The developed MR-based surgical navigation system has clinical application potential. Cadaver and clinical experiments will be conducted in the future.


Subject(s)
Augmented Reality , Spinal Fractures , Surgery, Computer-Assisted , Humans , Surgery, Computer-Assisted/methods , Pelvis/surgery , Fracture Fixation, Internal/methods
8.
Phys Med Biol ; 68(2)2023 01 05.
Article in English | MEDLINE | ID: mdl-36595258

ABSTRACT

Orthopedic surgery remains technically demanding due to the complex anatomical structures and cumbersome surgical procedures. The introduction of image-guided orthopedic surgery (IGOS) has significantly decreased the surgical risk and improved the operation results. This review focuses on the application of recent advances in artificial intelligence (AI), deep learning (DL), augmented reality (AR) and robotics in image-guided spine surgery, joint arthroplasty, fracture reduction and bone tumor resection. For the pre-operative stage, key technologies of AI and DL based medical image segmentation, 3D visualization and surgical planning procedures are systematically reviewed. For the intra-operative stage, the development of novel image registration, surgical tool calibration and real-time navigation are reviewed. Furthermore, the combination of the surgical navigation system with AR and robotic technology is also discussed. Finally, the current issues and prospects of the IGOS system are discussed, with the goal of establishing a reference and providing guidance for surgeons, engineers, and researchers involved in the research and development of this area.


Subject(s)
Orthopedic Procedures , Robotics , Surgery, Computer-Assisted , Artificial Intelligence , Surgery, Computer-Assisted/methods
9.
Int J Comput Assist Radiol Surg ; 17(12): 2291-2303, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36166164

ABSTRACT

PURPOSE: Free fibula flap is the gold standard for the treatment of mandibular defects. However, the existing preoperative planning protocol is cumbersome to execute, costly to learn, and poorly collaborative with the robot-assisted cutting of the fibular osteotomy plane. METHODS: A surgical planning system for robot-assisted mandibular reconstruction with fibula free flap is proposed in this study. A fibular osteotomy planning algorithm is presented so that the virtual surgical planning of the fibular osteotomy segments can be obtained automatically from selected mandibular anatomical landmarks. The planned osteotomy planes are then converted into the motion path of the robotic arm, and the automatic fibula osteotomy is completed under optical navigation. RESULTS: Surgical planning was performed on 35 patients to verify the feasibility of our system's virtual surgical planning module, with an average time of 13 min. Phantom experiments were performed to evaluate the reliability and stability of this system. The average distance and angular deviations of the osteotomy planes are 1.04 ± 0.68 mm and 1.56 ± 1.10°, respectively. CONCLUSIONS: Our system can achieve not only precise and convenient preoperative planning, but also a safe and reliable osteotomy trajectory. The clinical applications of our system for mandibular reconstruction surgery are expected soon.


Subject(s)
Free Tissue Flaps , Mandibular Reconstruction , Robotic Surgical Procedures , Surgery, Computer-Assisted , Humans , Mandibular Reconstruction/methods , Free Tissue Flaps/surgery , Reproducibility of Results , Surgery, Computer-Assisted/methods , Mandible/diagnostic imaging , Mandible/surgery
10.
Med Phys ; 49(8): 5268-5282, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35506596

ABSTRACT

PURPOSE: Precise determination of the target is an essential procedure in prostate interventions, such as prostate biopsy, lesion detection, and targeted therapy. However, prostate delineation may be difficult in some cases due to tissue ambiguity or the lack of a partial anatomical boundary. In this study, we propose a novel supervised registration-based algorithm for precise prostate segmentation, which combines a convolutional neural network (CNN) with a statistical shape model (SSM). METHODS: The proposed network mainly consists of two branches. One, called the SSM-Net branch, was exploited to predict the shape transform matrix, shape control parameters, and shape fine-tuning vector for the generation of the prostate boundary. Furthermore, according to the inferred boundary, a normalized distance map was calculated as the output of SSM-Net. Another branch, named ResU-Net, was employed to predict a probability label map from the input images at the same time. Integrating the output of these two branches, the optimal weighted sum of the distance map and the probability map was regarded as the prostate segmentation. RESULTS: Two public data sets, PROMISE12 and NCI-ISBI 2013, were utilized to evaluate the performance of the proposed algorithm. The results demonstrated that the segmentation algorithm achieved the best performance with an SSM of 9500 nodes, which obtained a Dice coefficient of 0.907 and an average surface distance of 1.85 mm. Compared with other methods, our algorithm delineates the prostate region more accurately and efficiently. In addition, we verified the impact of model elasticity augmentation and the fine-tuning term on the network segmentation capability. As a result, both factors improved the delineation accuracy, with the Dice coefficient increased by 10% and 7%, respectively. CONCLUSIONS: Our segmentation method has the potential to be an effective and robust approach for prostate segmentation.
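The final fusion step described above, combining the SSM-Net distance map with the ResU-Net probability map, reduces to a weighted sum followed by thresholding. A minimal sketch, assuming a fixed mixing weight and threshold (in the paper the weighting is optimized; the function and parameter names here are hypothetical):

```python
import numpy as np

def fuse_segmentation(dist_map, prob_map, w=0.5, thresh=0.5):
    """Weighted fusion of a normalized distance map and a probability map.

    Both maps are assumed normalized to [0, 1], with higher values
    indicating the prostate region; the fused map is thresholded to
    produce the final binary segmentation.
    """
    fused = w * dist_map + (1.0 - w) * prob_map
    return fused >= thresh
```

The appeal of this design is that the shape-model branch regularizes the pixel-wise branch: implausible boundaries from ResU-Net are suppressed wherever the SSM-derived distance map disagrees.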


Subject(s)
Imaging, Three-Dimensional , Prostate , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Male , Models, Statistical , Neural Networks, Computer , Prostate/diagnostic imaging
11.
IEEE Trans Biomed Eng ; 69(8): 2593-2603, 2022 08.
Article in English | MEDLINE | ID: mdl-35157575

ABSTRACT

OBJECTIVE: Cervical pedicle screw (CPS) placement surgery remains technically demanding due to the complicated anatomy with neurovascular structures. State-of-the-art surgical navigation or robotic systems still suffer from the problems of hand-eye coordination and soft tissue deformation. In this study, we aim at tracking the intraoperative soft tissue deformation, constructing a virtual-physical fusion surgical scene, and integrating them into the robotic system for CPS placement surgery. METHODS: Firstly, we propose a real-time deformation computation method based on a prior shape model and intraoperative partial information acquired from ultrasound images. According to the generated posterior shape, the structure representation of the deformed target tissue is updated continuously. Secondly, a hand tremble compensation method is proposed to improve the accuracy and robustness of the virtual-physical calibration procedure, and a mixed reality based surgical scene is further constructed for CPS placement surgery. Thirdly, we integrate the soft tissue deformation method and the virtual-physical fusion method into our previously proposed surgical robotic system, and the surgical workflow for CPS placement surgery is introduced. RESULTS: We conducted phantom and animal experiments to evaluate the feasibility and accuracy of the proposed system. Our system yielded a mean surface distance error of 1.52 ± 0.43 mm for soft tissue deformation computing, and an average distance deviation of 1.04 ± 0.27 mm for CPS placement. CONCLUSION: Results demonstrate that our system has tremendous clinical application potential. SIGNIFICANCE: Our proposed system promotes the efficiency and safety of CPS placement surgery.
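The posterior-shape computation sketched above, conditioning a prior shape model on the partial structure visible in ultrasound, follows the standard Gaussian conditioning rule. An illustrative sketch over a vectorized Gaussian shape model; the paper's actual model and noise handling may differ:

```python
import numpy as np

def posterior_shape(mu, cov, obs_idx, obs_vals, noise=1e-6):
    """Condition a Gaussian shape model N(mu, cov) on observed coordinates.

    mu: (n,) mean shape vector; cov: (n, n) shape covariance;
    obs_idx: indices of the coordinates observed intraoperatively;
    obs_vals: their observed values. Returns the posterior mean shape,
    i.e. the most likely completion of the partially observed anatomy.
    """
    o = np.asarray(obs_idx)
    h = np.setdiff1d(np.arange(len(mu)), o)          # hidden coordinates
    K_oo = cov[np.ix_(o, o)] + noise * np.eye(len(o))  # observed-observed block
    K_ho = cov[np.ix_(h, o)]                           # hidden-observed block
    post = np.array(mu, dtype=float)
    post[o] = obs_vals
    post[h] = mu[h] + K_ho @ np.linalg.solve(K_oo, obs_vals - mu[o])
    return post
```

Because the update is a closed-form linear solve, it is cheap enough to run every time a new ultrasound frame arrives, which is what makes real-time deformation tracking feasible.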


Subject(s)
Augmented Reality , Pedicle Screws , Robotic Surgical Procedures , Spinal Fusion , Surgery, Computer-Assisted , Animals , Cervical Vertebrae/diagnostic imaging , Cervical Vertebrae/surgery , Spinal Fusion/methods , Surgery, Computer-Assisted/methods
12.
Comput Methods Programs Biomed ; 209: 106326, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34433127

ABSTRACT

BACKGROUND: The accurate distal locking of intramedullary (IM) nails is a clinical challenge for surgeons. Although many navigation systems have been developed, a real-time guidance method that is free of radiation exposure, convenient to use, and cost-effective has not been proposed. METHODS: This paper aims to develop an electromagnetic navigation system named TianXuan-MDTS that provides surgeons with a practical surgical solution. A registration method using external landmarks for IM nails and a calibration algorithm for guiders were proposed. A puncture experiment, model experiments measured with 3D Slicer, and cadaver experiments (2 cadaveric leg specimens and 6 drilling operations) were conducted to evaluate its performance and stability. RESULTS: The registration deviation (target registration error, TRE) was 1.05 ± 0.13 mm. In the puncture experiment, a success rate of 96% was achieved within 45.94 s. TianXuan-MDTS was evaluated on 3 tibia models. The results demonstrated that all 9 screw holes were successfully prepared (a rate of 100%) in 91.67 s, with entry point, end point, and angular deviations of 1.60 ± 0.20 mm, 1.47 ± 0.18 mm, and 3.10 ± 0.84°, respectively. Postoperative fluoroscopy in the cadaver experiments showed that all drills were in the distal locking holes, a success rate of 100%, with an average time of 143.17 ± 18.27 s. CONCLUSIONS: The experimental results indicate that our system, with its novel registration and calibration methods, could serve as a feasible and promising tool to assist surgeons during distal locking.


Subject(s)
Fracture Fixation, Intramedullary , Surgery, Computer-Assisted , Bone Nails , Electromagnetic Phenomena , Fluoroscopy , Humans
13.
Comput Biol Med ; 133: 104402, 2021 06.
Article in English | MEDLINE | ID: mdl-33895460

ABSTRACT

BACKGROUND AND OBJECTIVE: The distal interlocking of intramedullary nail remains a technically demanding procedure. Existing augmented reality based solutions still suffer from hand-eye coordination problem, prolonged operation time, and inadequate resolution. In this study, an augmented reality based navigation system for distal interlocking of intramedullary nail is developed using Microsoft HoloLens 2, the state-of-the-art optical see-through head-mounted display. METHODS: A customized registration cube is designed to assist surgeons with better depth perception when performing registration procedures. During drilling, surgeons can obtain accurate and in-situ visualization of intramedullary nail and drilling path, and dynamic navigation is enabled. An intraoperative warning system is proposed to provide intuitive feedback of real-time deviations and electromagnetic disturbances. RESULTS: The preclinical phantom experiment showed that the reprojection errors along the X, Y, and Z axes were 1.55 ± 0.27 mm, 1.71 ± 0.40 mm, and 2.84 ± 0.78 mm, respectively. The end-to-end evaluation method indicated the distance error was 1.61 ± 0.44 mm, and the 3D angle error was 1.46 ± 0.46°. A cadaver experiment was also conducted to evaluate the feasibility of the system. CONCLUSION: Our system has potential advantages over the 2D-screen based navigation system and the pointing device based navigation system in terms of accuracy and time consumption, and has tremendous application prospects.


Subject(s)
Augmented Reality , Fracture Fixation, Intramedullary , Surgery, Computer-Assisted , Internal Fixators , Phantoms, Imaging
14.
Front Surg ; 8: 719985, 2021.
Article in English | MEDLINE | ID: mdl-35174201

ABSTRACT

OBJECTIVE: To realize three-dimensional visual output of surgical navigation information by studying the cross-linking of mixed reality display devices and high-precision optical navigators. METHODS: Quaternion-based point alignment algorithms were applied to realize the positioning configuration of the mixed reality display device and the high-precision optical navigator, together with real-time patient tracking and calibration; based on open-source SDKs and development tools, a mixed reality surgical system based on visual positioning and tracking was developed. In this study, four patients were selected for mixed reality-assisted tumor resection and reconstruction and re-examined 1 month after the operation. We reconstructed the postoperative CT, used 3DMeshMetric to form the error distribution map, and completed the error analysis and quality control. RESULTS: The cross-linking of the mixed reality display equipment and the high-precision optical navigator was realized; a digital maxillofacial surgery system based on mixed reality technology was developed; and mixed reality-assisted tumor resection and reconstruction were successfully implemented in 4 cases. CONCLUSIONS: The maxillofacial digital surgery system based on mixed reality technology can superimpose and display three-dimensional navigation information in the surgeon's field of vision. Moreover, it solves the problems of visual conversion and spatial conversion in existing navigation systems. It improves the efficiency of digitally assisted surgery, effectively reduces the surgeon's dependence on spatial experience and imagination, and protects important anatomical structures during surgery. It has significant clinical application value and potential.
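Quaternion-based point alignment of the kind named in METHODS is classically solved with Horn's closed-form method: the optimal rotation is the top eigenvector of a 4x4 matrix built from the point correlations. The sketch below illustrates the classical algorithm, not the system's actual code:

```python
import numpy as np

def quaternion_align(src, dst):
    """Horn's quaternion method: find rotation R and translation t
    minimizing sum ||R @ src_i + t - dst_i||^2 over paired points."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)  # center both sets
    S = src_c.T @ dst_c  # correlation matrix of centered correspondences
    # Symmetric 4x4 matrix whose top eigenvector is the optimal quaternion
    K = np.array([
        [S[0,0]+S[1,1]+S[2,2], S[1,2]-S[2,1],        S[2,0]-S[0,2],        S[0,1]-S[1,0]],
        [S[1,2]-S[2,1],        S[0,0]-S[1,1]-S[2,2], S[0,1]+S[1,0],        S[2,0]+S[0,2]],
        [S[2,0]-S[0,2],        S[0,1]+S[1,0],        S[1,1]-S[0,0]-S[2,2], S[1,2]+S[2,1]],
        [S[0,1]-S[1,0],        S[2,0]+S[0,2],        S[1,2]+S[2,1],        S[2,2]-S[0,0]-S[1,1]],
    ])
    w, v = np.linalg.eigh(K)           # eigenvalues in ascending order
    qw, qx, qy, qz = v[:, -1]          # quaternion (w, x, y, z), largest eigenvalue
    R = np.array([
        [1-2*(qy*qy+qz*qz), 2*(qx*qy-qz*qw),   2*(qx*qz+qy*qw)],
        [2*(qx*qy+qz*qw),   1-2*(qx*qx+qz*qz), 2*(qy*qz-qx*qw)],
        [2*(qx*qz-qy*qw),   2*(qy*qz+qx*qw),   1-2*(qx*qx+qy*qy)],
    ])
    t = dst.mean(0) - R @ src.mean(0)
    return R, t
```

Given at least three non-collinear paired fiducials, the resulting `R` and `t` could, for example, map the optical navigator's coordinate frame into the display device's frame, which is the positioning configuration step the abstract refers to.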

15.
Expert Rev Med Devices ; 18(1): 47-62, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33283563

ABSTRACT

Background: Research shows that the apprenticeship model, which is the gold standard for training surgical residents, is obsolete. For that reason, there is a continuing effort toward the development of high-fidelity surgical simulators to replace the apprenticeship model. Applying Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in surgical simulators increases the fidelity, level of immersion, and overall experience of these simulators. Areas covered: The objective of this review is to provide a comprehensive overview of the application of VR, AR, and MR for distinct surgical disciplines, including maxillofacial surgery and neurosurgery. The current developments in these areas, as well as potential future directions, are discussed. Expert opinion: The key components for incorporating VR into surgical simulators are visual and haptic rendering. These components ensure that the user is completely immersed in the virtual environment and can interact in the same way as in the physical world. The key components for the application of AR and MR in surgical simulators include the tracking system as well as the visual rendering. The advantages of these surgical simulators are the ability to perform user evaluations and to increase the training frequency of surgical residents.


Subject(s)
Augmented Reality , Surgery, Computer-Assisted , Virtual Reality , Humans , Surgical Procedures, Operative , Touch Perception , Visual Perception