1.
IEEE Trans Med Imaging ; PP: 2024 May 28.
Article in English | MEDLINE | ID: mdl-38805326

ABSTRACT

Accurate reconstruction of 4D critical organs supports visual guidance in X-ray image-guided interventional procedures. Current methods estimate intraoperative dynamic meshes by refining a static initial organ mesh using the semantic information in single-frame X-ray images. However, these methods fail to reconstruct an accurate and smooth organ sequence because the initial mesh and the X-ray images reflect distinct respiratory patterns. To overcome this limitation, we propose a novel dual-stage complementary 4D organ reconstruction (DSC-Recon) model that recovers dynamic organ meshes by exploiting preoperative and intraoperative data with different respiratory patterns. DSC-Recon is structured as a dual-stage framework: 1) the first stage designs a flexible interpolation network, applicable to multiple respiratory patterns, that can generate dynamic shape sequences between any pair of preoperative 3D meshes segmented from CT scans; 2) the second stage presents a deformation network that takes the generated shape sequence as the initial prior and exploits discriminative features (i.e., target organ areas and meaningful motion information) in the intraoperative X-ray images, predicting the deformed mesh through a designed feature mapping pipeline integrated into the initialized shape refinement process. Experiments on simulated and clinical datasets demonstrate the superiority of our method over state-of-the-art methods in both quantitative and qualitative terms.
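As a rough illustration of the stage-1 prior described above, the following minimal Python sketch generates an intermediate shape sequence between two corresponded preoperative meshes by linear vertex blending. DSC-Recon learns this interpolation with a network; the linear blend, the toy vertex arrays, and the one-to-one vertex correspondence here are simplifying assumptions used only to show the input/output interface.

```python
# Minimal sketch: an intermediate shape sequence between two preoperative
# organ meshes (e.g., end-inhale and end-exhale CT segmentations).
# Assumption: both meshes share topology (one-to-one vertex correspondence).
import numpy as np

def interpolate_mesh_sequence(verts_a: np.ndarray,
                              verts_b: np.ndarray,
                              num_frames: int) -> np.ndarray:
    """Return (num_frames, V, 3) vertex positions blending mesh A into mesh B."""
    assert verts_a.shape == verts_b.shape, "meshes must share topology"
    ts = np.linspace(0.0, 1.0, num_frames)  # respiratory phase in [0, 1]
    return np.stack([(1 - t) * verts_a + t * verts_b for t in ts])

# Toy example: a 10-frame prior sequence between two 5000-vertex meshes.
seq = interpolate_mesh_sequence(np.zeros((5000, 3)), np.ones((5000, 3)), 10)
print(seq.shape)  # (10, 5000, 3)
```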

2.
IEEE Trans Biomed Eng ; 71(2): 700-711, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38241137

ABSTRACT

OBJECTIVE: Biliary interventional procedures require physicians to track the interventional instrument tip (Tip) precisely under X-ray imaging. However, Tip positioning relies heavily on the physician's experience because of the limitations of X-ray imaging and respiratory interference, which can lead to biliary damage, prolonged operation time, and increased X-ray radiation exposure. METHODS: We construct an augmented reality (AR) navigation system for biliary interventional procedures comprising system calibration, respiratory motion correction, and fusion navigation. First, the magnetic and 3D computed tomography (CT) coordinate systems are aligned through system calibration. Second, a respiratory motion correction method based on manifold regularization is proposed to correct the misalignment between the two coordinate systems caused by respiratory motion. Third, the virtual biliary tract, liver, and Tip derived from CT are overlaid onto the corresponding positions on the patient for dynamic virtual-real fusion. RESULTS: Our system achieved average alignment errors of 0.75 ± 0.17 mm on phantoms and 2.79 ± 0.46 mm on patients. Navigation experiments on phantoms achieved an average Tip positioning error of 0.98 ± 0.15 mm and an average fusion error of 1.67 ± 0.34 mm after correction. CONCLUSION: Our system automatically registers the Tip to its corresponding location in CT and dynamically overlays the 3D virtual model onto the patient, providing accurate and intuitive AR navigation. SIGNIFICANCE: This study demonstrates the clinical potential of our system for assisting physicians during biliary interventional procedures. The system enables dynamic visualization of the virtual model on the patient, reducing reliance on contrast agents and X-ray exposure.
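The system-calibration step aligns points measured in the magnetic tracker frame with the same fiducials located in CT coordinates. A standard SVD-based (Kabsch) rigid fit, sketched below in Python, is one conventional way to compute such an alignment; it is an illustrative stand-in, not the paper's exact procedure, and omits the manifold-regularized respiratory correction entirely.

```python
# Sketch of rigid point-based calibration between two coordinate systems.
import numpy as np

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Least-squares R, t such that dst_i ~ R @ src_i + t for corresponding rows."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy fiducials: rows are corresponding points in each coordinate system.
tracker_pts = np.random.rand(6, 3)                    # magnetic-tracker frame
ct_pts = tracker_pts + np.array([10.0, -5.0, 2.5])    # same points in CT frame
R, t = rigid_fit(tracker_pts, ct_pts)
print(np.abs(R @ tracker_pts.T + t[:, None] - ct_pts.T).max())  # residual ~ 0
```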


Subject(s)
Augmented Reality; Surgery, Computer-Assisted; Humans; Imaging, Three-Dimensional; Liver; Phantoms, Imaging; Tomography, X-Ray Computed/methods; Surgery, Computer-Assisted/methods
3.
Comput Biol Med ; 169: 107766, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38150885

ABSTRACT

Automatic vessel segmentation is a critical area of research in medical image analysis, as it can greatly assist doctors in accurately and efficiently diagnosing vascular diseases. However, accurately extracting the complete vessel structure from images remains challenging due to issues such as uneven contrast and background noise. Existing methods primarily focus on classifying individual pixels and often fail to consider vessel features and morphology. As a result, they often produce fragmented segmentations and misidentify vessel-like background noise, leading to missing segments and outlier points in the overall result. To address these issues, this paper proposes the progressive edge information aggregation network for vessel segmentation (PEA-Net). The proposed method consists of several key components. First, a dual-stream receptive field encoder (DRE) is introduced to preserve fine structural features and mitigate false positive predictions caused by background noise; this is achieved by combining vessel morphological features obtained at different receptive field sizes. Second, a progressive complementary fusion (PCF) module is designed to enhance fine vessel detection and improve connectivity; this module complements the decoding path by combining features from previous iterations with those of the DRE, incorporating nonsalient information. Additionally, segmentation-edge decoupling enhancement (SDE) modules are employed as decoders to integrate upsampling features with the nonsalient information provided by the PCF, enhancing both edge and segmentation information. The features in the skip connections and the decoding path are iteratively updated to progressively aggregate fine structural information, thereby optimizing segmentation results and reducing topological disconnections. Experimental results on multiple datasets demonstrate that PEA-Net achieves optimal performance on both pixel-level and topology-level metrics.
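To make the dual-stream receptive-field idea concrete, here is an illustrative PyTorch sketch of a block that fuses a small-receptive-field branch (for thin vessels) with a dilated branch (for wider context). The channel counts and dilation rate are assumptions for illustration, not the published PEA-Net configuration.

```python
# Illustrative dual-stream block: two parallel conv branches with different
# receptive fields, concatenated and fused by a 1x1 convolution.
import torch
import torch.nn as nn

class DualStreamBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.fine = nn.Sequential(    # small receptive field: fine vessel detail
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.coarse = nn.Sequential(  # dilated conv: wider morphological context
            nn.Conv2d(in_ch, out_ch, 3, padding=3, dilation=3), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([self.fine(x), self.coarse(x)], dim=1))

block = DualStreamBlock(1, 32)
print(block(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```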


Subject(s)
Benchmarking; Image Processing, Computer-Assisted
4.
Phys Med Biol ; 68(17), 2023 Aug 22.
Article in English | MEDLINE | ID: mdl-37549676

ABSTRACT

OBJECTIVE: In computer-assisted minimally invasive surgery, the intraoperative X-ray image is enhanced by overlaying it with a preoperative CT volume to improve the visualization of vital anatomical structures. Accurate and robust 3D/2D registration of the CT volume and the X-ray image is therefore highly desirable in clinical practice. However, previous registration methods are sensitive to initial misalignment and prone to local minima, resulting in low accuracy and poor robustness. APPROACH: To improve registration performance, we propose a novel CT/X-ray image registration agent (CT2X-IRA) within a task-driven deep reinforcement learning framework that contains three key strategies: (1) a multi-scale-stride learning mechanism provides multi-scale feature representation and a flexible action step size, establishing fast and globally optimal convergence of the registration task; (2) a domain adaptation module reduces the domain gap between the X-ray image and the digitally reconstructed radiograph projected from the CT volume, decreasing the sensitivity and uncertainty of the similarity measurement; (3) a weighted reward function guides CT2X-IRA in searching for the optimal transformation parameters, improving the estimation accuracy of out-of-plane transformation parameters under large initial misalignment. MAIN RESULTS: We evaluated the proposed CT2X-IRA on both public and private clinical datasets, achieving target registration errors of 2.13 mm and 2.33 mm with computation times of 1.5 s and 1.1 s, respectively, demonstrating an accurate and fast workflow for rigid CT/X-ray image registration. SIGNIFICANCE: CT2X-IRA achieves accurate and robust 3D/2D registration of CT and X-ray images, suggesting its potential significance in clinical applications.
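The weighted-reward idea can be illustrated with a short sketch: reward the per-step decrease in pose error, weighting out-of-plane parameters more heavily because they are hardest to infer from a single projection. The parameter ordering and weight values below are assumptions for illustration, not the paper's actual reward design.

```python
# Sketch of a weighted reward for a 3D/2D registration agent.
import numpy as np

# pose = (tx, ty, tz, rx, ry, rz); assume tz (depth), rx, ry are out-of-plane
# and therefore weighted more heavily (weights are illustrative).
WEIGHTS = np.array([1.0, 1.0, 2.0, 2.0, 2.0, 1.0])

def weighted_reward(pose_prev, pose_next, pose_true) -> float:
    """Positive when the agent's action reduced the weighted pose error."""
    err_prev = WEIGHTS @ np.abs(np.asarray(pose_prev) - pose_true)
    err_next = WEIGHTS @ np.abs(np.asarray(pose_next) - pose_true)
    return float(err_prev - err_next)

# Moving depth tz from 4 to 2 toward the true pose earns reward 2 * 2 = 4.
print(weighted_reward([1, 0, 4, 0, 0, 0], [1, 0, 2, 0, 0, 0], np.zeros(6)))  # 4.0
```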


Subject(s)
Algorithms; Imaging, Three-Dimensional; X-Rays; Imaging, Three-Dimensional/methods; Tomography, X-Ray Computed/methods; Radiography; Image Processing, Computer-Assisted
5.
World J Gastroenterol ; 29(20): 3157-3167, 2023 May 28.
Article in English | MEDLINE | ID: mdl-37346159

ABSTRACT

BACKGROUND: Three-dimensional (3D) imaging has been shown to allow easier identification of bile duct anatomy and to provide intraoperative guidance for endoscopic retrograde cholangiopancreatography (ERCP), reducing the radiation dose and procedure time while improving safety. However, current 3D biliary imaging does not fuse well in real time with intraoperative imaging, which is needed to overcome intraoperative respiratory motion and guide navigation. AIM: To explore the feasibility of real-time continuous image-guided ERCP. METHODS: We selected two 3D-printed abdominal biliary tract models with different structures to simulate different patients. The ERCP environment was simulated in a biliary phantom experiment to develop a navigation system, which was then tested in patients. In addition, based on estimation of each patient's respiratory motion, preoperative 3D biliary imaging from computed tomography of 18 patients with cholelithiasis was registered and fused in real time with the 2D fluoroscopic sequence generated by the C-arm unit during ERCP. RESULTS: Continuous image-guided ERCP was applied in the biliary phantom with a registration error of 0.46 ± 0.13 mm and a tracking error of 0.64 ± 0.24 mm. After estimating the respiratory motion, 3D/2D registration accurately transformed the preoperative 3D biliary images to each image in the X-ray sequence in real time in all 18 patients, with an average fusion rate of 88%. CONCLUSION: Continuous image-guided ERCP may be an effective approach to assist the operator and reduce the use of X-rays and contrast agents.
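A minimal sketch of the per-frame fusion step follows: apply an estimated respiratory displacement to the preoperative 3D biliary model, then project it into the C-arm image for overlay. The one-dimensional sinusoidal craniocaudal motion model and the pinhole projection below are simplifying assumptions, not the study's registration method.

```python
# Sketch: overlay a respiration-corrected 3D model onto a 2D fluoro frame.
import numpy as np

def project(points_3d: np.ndarray, focal: float = 1000.0) -> np.ndarray:
    """Pinhole projection of (N, 3) camera-frame points (mm) to (N, 2) pixels."""
    return focal * points_3d[:, :2] / points_3d[:, 2:3]

def fuse_frame(model_pts: np.ndarray, resp_phase: float,
               amplitude_mm: float = 12.0) -> np.ndarray:
    """Shift the model by an assumed sinusoidal craniocaudal offset, then project."""
    offset = np.array([0.0, amplitude_mm * np.sin(2 * np.pi * resp_phase), 0.0])
    return project(model_pts + offset)

pts = np.array([[0.0, 0.0, 800.0], [10.0, 5.0, 820.0]])   # toy biliary model points
overlay = fuse_frame(pts, resp_phase=0.25)  # 2D overlay for this X-ray frame
print(overlay)
```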


Subject(s)
Biliary Tract; Cholangiopancreatography, Endoscopic Retrograde; Humans; Cholangiopancreatography, Endoscopic Retrograde/adverse effects; Biliary Tract/diagnostic imaging; Bile Ducts/diagnostic imaging; Bile Ducts/surgery; Contrast Media; Fluoroscopy
6.
Comput Biol Med ; 148: 105826, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35810696

ABSTRACT

BACKGROUND: Marker-based augmented reality (AR) calibration methods for surgical navigation often require a second computed tomography (CT) scan of the patient, and their clinical application is limited by high manufacturing costs and low accuracy. METHODS: This work introduces a novel AR calibration framework that combines a Microsoft HoloLens device with a single-camera registration module for surgical navigation. In this framework, a camera gathers multi-view images of the patient for 3D reconstruction. A shape-feature-matching-based search method is proposed to adjust the size of the reconstructed model. A double-clustering-based 3D point cloud segmentation method and a 3D line segment detection method are also proposed to extract the corner points of the image marker, which serve as its registration data. A feature triangulation iteration-based registration method is proposed to quickly and accurately calibrate the pose relationship between the image marker and the patient in virtual and real space. The registered patient model is wirelessly transmitted to the HoloLens device to display the AR scene. RESULTS: The proposed approach was evaluated in accuracy verification experiments on phantoms and volunteers and compared with six state-of-the-art AR calibration methods. It achieved average fusion errors of 0.70 ± 0.16 mm and 0.91 ± 0.13 mm in the phantom and volunteer experiments, respectively, the highest fusion accuracy among all compared methods. A volunteer liver puncture clinical simulation experiment further demonstrated clinical feasibility. CONCLUSIONS: Our experiments prove the effectiveness of the proposed AR calibration method and reveal considerable potential for improving surgical performance.
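Because a single-camera multi-view reconstruction is scale-ambiguous, the reconstructed patient model must be rescaled to match the CT-derived model. The sketch below estimates that scale factor by matching mean centroid distances between the two point clouds; it is a simple stand-in for the paper's shape-feature-matching search, not the published method.

```python
# Sketch: recover the metric scale of a scale-ambiguous reconstruction by
# comparing point spread against the CT model (assumed already corresponded
# in shape, differing only by scale).
import numpy as np

def estimate_scale(recon_pts: np.ndarray, ct_pts: np.ndarray) -> float:
    """Ratio of mean distance-to-centroid between the two point clouds."""
    spread = lambda p: np.linalg.norm(p - p.mean(axis=0), axis=1).mean()
    return spread(ct_pts) / spread(recon_pts)

recon = np.random.rand(500, 3)   # arbitrary-scale multi-view reconstruction
ct = recon * 37.5                # toy CT model in millimetres
recon_scaled = recon * estimate_scale(recon, ct)
print(np.allclose(recon_scaled, ct))  # True
```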


Subject(s)
Augmented Reality; Surgery, Computer-Assisted; Calibration; Humans; Imaging, Three-Dimensional; Phantoms, Imaging