Results 1 - 20 of 23
1.
Article in English | MEDLINE | ID: mdl-38867107

ABSTRACT

PURPOSE: Fluorescence imaging-guided surgery has been used in oncology. However, for tiny tumors, current imaging probes still struggle to achieve high-contrast imaging, which can lead to incomplete resection. In this study, we achieved precise surgical resection of tiny metastatic cancers by constructing an engineered erythrocyte membrane-camouflaged bioprobe (AR-M@HMSN@P). METHODS: AR-M@HMSN@P combined the properties of an aggregation-induced emission luminogen (AIEgen) named PF3-PPh3 (P) with a functional erythrocyte membrane modified by a modular peptide (AR). AR was composed of an asymmetric tripodal pentapeptide scaffold (GGKGG) with three appended modules: a KPSSPPEE (A6) peptide, an RRRR (R4) peptide, and cholesterol. To verify the specificity of the probe in vitro, SKOV3 cells overexpressing CD44 were used as the positive group, and HLF cells with low CD44 expression served as the control group. AR-M@HMSN@P fluorescence imaging was used to guide the surgical removal of micro-metastatic lesions. RESULTS: In vivo, clearance of AR-M@HMSN@P by the immune system was reduced owing to the natural properties inherited from erythrocytes. Meanwhile, the A6 peptide on AR-M@HMSN@P specifically targeted CD44 on ovarian cancer cells, and the electrostatic attraction between the R4 peptide and the cell membrane reinforced this targeting. Benefiting from these combined effects, AR-M@HMSN@P achieved ultra-precise tumor imaging with a signal-to-noise ratio (SNR) of 15.2, making it possible to resect tumors < 1 mm under imaging guidance. CONCLUSION: We have successfully designed an engineered fluorescent imaging bioprobe (AR-M@HMSN@P) that targets CD44-overexpressing ovarian cancers for precise imaging and guides the resection of tiny tumors. Notably, this work holds significant promise for developing biomimetic probes for clinical imaging-guided precision cancer surgery by exploiting externally specified functional modifications.

2.
Eur J Med Chem ; 271: 116452, 2024 May 05.
Article in English | MEDLINE | ID: mdl-38685142

ABSTRACT

Despite advancements in colorectal cancer (CRC) treatment, the prognosis remains unfavorable for patients with distant liver metastasis. Fluorescence molecular imaging with specific probes is increasingly used for real-time guidance of CRC surgical resection and treatment planning. Here, we demonstrate the targeted imaging capacity of an MPA-PEG4-N3-Ang II probe labeled with a near-infrared (NIR) fluorescent dye and targeting the angiotensin II (Ang II) type 1 receptor (AGTR1), which is significantly upregulated in CRC. MPA-PEG4-N3-Ang II was highly selective and specific for tumor cells in vitro and tumors in vivo in a mouse CRC xenograft model. The favorable ex vivo imaging and in vivo biodistribution of MPA-PEG4-N3-Ang II afforded tumor-specific accumulation with low background and tumor-to-colorectal contrast values >10 in multiple subcutaneous CRC models at 8 h after injection. Biodistribution analysis confirmed the probe's high uptake in HT29 and HCT116 orthotopic and liver metastatic models of CRC, with tumor-to-colorectal and tumor-to-liver signal-to-noise ratio (SNR) values of 5.8 ± 0.6 and 5.3 ± 0.7, and 2.7 ± 0.5 and 2.6 ± 0.5, respectively, enabling high-contrast intraoperative tumor visualization for surgical navigation. Given its rapid tumor targeting, precise tumor boundary delineation, durable tumor retention, and supporting docking studies, MPA-PEG4-N3-Ang II is a promising high-contrast imaging agent for the clinical detection of CRC.
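Tumor-to-background contrast values such as those reported above are typically computed from mean region-of-interest (ROI) intensities in the fluorescence images. The following minimal Python sketch illustrates the calculation with hypothetical ROI masks and a synthetic image; it is a generic illustration, not the study's analysis code:

```python
import numpy as np

def contrast_ratio(image, tumor_mask, background_mask):
    """Mean tumor fluorescence divided by mean background fluorescence.

    image           : 2D array of fluorescence intensities
    tumor_mask      : boolean array marking the tumor ROI
    background_mask : boolean array marking adjacent normal tissue (e.g. colon, liver)
    """
    tumor_signal = image[tumor_mask].mean()
    background_signal = image[background_mask].mean()
    return tumor_signal / background_signal

# Example with synthetic data: a bright 20x20 "tumor" on a dim background
img = np.random.normal(loc=10.0, scale=1.0, size=(128, 128))
img[40:60, 40:60] += 100.0
tumor = np.zeros_like(img, dtype=bool)
tumor[40:60, 40:60] = True
background = ~tumor
print(f"tumor-to-background ratio: {contrast_ratio(img, tumor, background):.1f}")
```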


Subject(s)
Colorectal Neoplasms , Liver Neoplasms , Molecular Probes , Optical Imaging , Receptor, Angiotensin, Type 1 , Animals , Colorectal Neoplasms/pathology , Humans , Mice , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/secondary , Molecular Probes/chemistry , Molecular Probes/chemical synthesis , Molecular Probes/pharmacokinetics , Receptor, Angiotensin, Type 1/metabolism , Fluorescent Dyes/chemistry , Fluorescent Dyes/chemical synthesis , Molecular Structure , Tissue Distribution , Mice, Nude
3.
J Clin Med ; 13(7)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38610801

ABSTRACT

Intraoperative navigation is critical during spine surgery to ensure accurate instrumentation placement. From the early era of fluoroscopy to current advances in robotics, spinal navigation has continued to evolve. By understanding the variations in system protocols and their respective usage in the operating room, the surgeon can use the various image guidance options more effectively and maximize their potential. At the same time, maintaining navigation accuracy throughout the procedure is of the utmost importance, which can be confirmed intraoperatively by using an internal fiducial marker, as demonstrated herein. This technology can reduce the need for revision surgeries, minimize postoperative complications, and enhance the overall efficiency of operating rooms.

4.
Int J Med Robot ; : e2612, 2023 Dec 19.
Article in English | MEDLINE | ID: mdl-38113328

ABSTRACT

BACKGROUND: In order to provide accurate and reliable image guidance for augmented reality (AR) spinal surgery navigation, a spatial registration method has been proposed. METHODS: In the AR spinal surgery navigation system, grayscale-based 2D/3D registration technology was used to register preoperative computed tomography (CT) images with intraoperative X-ray images to complete the spatial registration, after which the fusion of the virtual image with the real spine was realised. RESULTS: In the image registration experiment, the success rate of spine model registration was 90%. In the spinal model verification experiment, the surface registration error of the spinal model ranged from 0.361 to 0.612 mm, and the overall mean surface registration error was 0.501 mm. CONCLUSION: The spatial registration method based on 2D/3D registration technology can be used in AR spinal surgery navigation systems and is highly accurate and minimally invasive.
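Grayscale-based 2D/3D registration of this kind typically renders digitally reconstructed radiographs (DRRs) from the CT volume at candidate poses and scores them against the intraoperative X-ray with an intensity similarity metric. A minimal sketch of one common metric, normalized cross-correlation, is shown below (the abstract does not state which metric this system uses; both images are assumed to be NumPy arrays of the same size):

```python
import numpy as np

def normalized_cross_correlation(drr, xray):
    """NCC in [-1, 1] between a DRR rendered at a candidate pose and the X-ray."""
    a = drr.astype(float) - drr.mean()
    b = xray.astype(float) - xray.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# In a registration loop, an optimizer perturbs the 6-DoF pose, re-renders the DRR,
# and keeps the pose that maximizes the NCC score against the fixed X-ray image.
```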

5.
Nan Fang Yi Ke Da Xue Xue Bao ; 43(9): 1636-1643, 2023 Sep 20.
Article in Chinese | MEDLINE | ID: mdl-37814880

ABSTRACT

OBJECTIVE: To establish a 3D/2D registration method for preoperative CT and intraoperative X-ray images in image-guided spine surgery. METHODS: We propose a 3D/2D registration algorithm based on 3D image reconstruction. The algorithm reconstructs a 3D image from two orthogonal-view 2D X-ray images, thus converting the problem into 3D/3D registration. By constructing an end-to-end framework that combines the reconstruction and registration tasks, the geodesic distance is measured in the 3D manifold space to complete the registration. RESULTS: We conducted experiments on the public dataset CTSpine1k. Tests on two test sets with different initial registration errors showed that, for data with small initial errors, the proposed algorithm achieved a rotation estimation error of 0.115 ± 0.095° and a translation estimation error of 0.144 ± 0.124 mm; for data with larger initial errors, a rotation estimation error of 0.792 ± 0.659° and a translation estimation error of 0.867 ± 0.701 mm were achieved. CONCLUSION: The proposed method achieves robust and accurate 3D/2D registration at a speed that meets real-time requirements, improving the performance of spine surgery navigation.
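Rotation estimation errors like those above are commonly reported as the geodesic distance on SO(3) between the estimated and ground-truth rotations, with translation error as a Euclidean norm. A minimal sketch of that evaluation follows; this is an assumed convention for illustration, since the abstract does not spell out the exact error definition:

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Geodesic distance on SO(3) between two 3x3 rotation matrices, in degrees."""
    R_delta = R_est @ R_gt.T
    # Clip to guard against numerical drift outside [-1, 1]
    cos_angle = np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_angle)))

def translation_error_mm(t_est, t_gt):
    """Euclidean distance between estimated and ground-truth translations (mm)."""
    return float(np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt)))
```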


Subject(s)
Algorithms , Imaging, Three-Dimensional , X-Rays , Imaging, Three-Dimensional/methods
6.
Int J Comput Assist Radiol Surg ; 18(7): 1135-1142, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37160580

ABSTRACT

PURPOSE: Recent advances in computer vision and machine learning have resulted in endoscopic video-based solutions for dense reconstruction of the anatomy. To use these systems effectively in surgical navigation, a reliable image-based technique is required to continuously track the endoscopic camera's position within the anatomy, despite frequent removal and re-insertion. In this work, we investigate the use of recent learning-based keypoint descriptors for six degree-of-freedom camera pose estimation in intraoperative endoscopic sequences and under changes in anatomy due to surgical resection. METHODS: Our method employs a dense structure-from-motion (SfM) reconstruction of the preoperative anatomy, obtained with a state-of-the-art patient-specific learning-based descriptor. During the reconstruction step, each estimated 3D point is associated with a descriptor. This information is used in the intraoperative sequences to establish 2D-3D correspondences for Perspective-n-Point (PnP) camera pose estimation. We evaluate this method on six intraoperative sequences that include anatomical modifications, obtained from two cadaveric subjects. RESULTS: This approach led to translation and rotation errors of 3.9 mm and 0.2 radians, respectively, with 21.86% of cameras localized, averaged over the six sequences. Compared with an additional learning-based descriptor (HardNet++), the selected descriptor achieves a higher percentage of localized cameras with similar pose estimation performance. We further discuss potential sources of error and limitations of the proposed approach. CONCLUSION: Patient-specific learning-based descriptors can relocalize images that are well distributed across the inspected anatomy, even where the anatomy is modified. However, camera relocalization in endoscopic sequences remains a persistently challenging problem, and future research is necessary to increase the robustness and accuracy of this technique.
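Once descriptor matching yields 2D-3D correspondences between an intraoperative frame and the SfM point cloud, the camera pose can be recovered with a standard RANSAC-based PnP solver. The sketch below uses OpenCV for that step; the variable names and parameter values are hypothetical and the authors' actual pipeline may differ:

```python
import cv2
import numpy as np

# Assumed inputs from descriptor matching:
#   points_3d : (N, 3) array of SfM points whose descriptors matched the current frame
#   points_2d : (N, 2) array of the corresponding keypoint locations in the frame
#   K         : (3, 3) endoscope camera intrinsic matrix
#   dist      : distortion coefficients (or None if images are pre-undistorted)

def estimate_camera_pose(points_3d, points_2d, K, dist=None):
    """6-DoF camera pose from 2D-3D correspondences via RANSAC PnP."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32),
        points_2d.astype(np.float32),
        K, dist,
        reprojectionError=4.0,   # pixels; tolerance for counting inliers
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec, inliers
```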


Subject(s)
Endoscopy , Surgery, Computer-Assisted , Humans , Endoscopy/methods , Rotation
7.
Phys Med Biol ; 68(13)2023 06 21.
Article in English | MEDLINE | ID: mdl-37141893

ABSTRACT

Objective. One of the essential technologies in various image-guided spine surgeries is the rigid registration of 3D preoperative CT and 2D intraoperative X-ray images. The 3D/2D registration problem comprises two essential tasks: establishing dimensional correspondence and estimating the 3D pose. Most existing methods project the 3D data to 2D to establish dimensional correspondence, which makes the pose parameters difficult to estimate because of the loss of spatial information. This work aims to develop a reconstruction-based 3D/2D registration method for spine surgery navigation. Approach. A novel segmentation-guided 3D/2D registration (SGReg) method for orthogonal X-ray and CT images was proposed based on reconstruction. SGReg consists of a bi-path segmentation network and an inter-path multi-scale pose estimation module. The X-ray segmentation path in the bi-path segmentation network reconstructs 3D spatial information from 2D orthogonal X-ray images into segmentation masks; meanwhile, the CT segmentation path predicts segmentation masks from 3D CT images, thereby bringing the 3D and 2D data into dimensional correspondence. In the inter-path multi-scale pose estimation module, features from the two segmentation paths are integrated, and the pose parameters are directly regressed under the guidance of coordinate information. Main results. We evaluated SGReg on the public dataset CTSpine1k and compared its registration performance with other methods. SGReg achieved considerable improvement over the other methods, with great robustness. Significance. We have proposed an end-to-end 3D/2D registration framework named SGReg. Based on the idea of reconstruction, SGReg unifies dimensional correspondence establishment and direct pose estimation in 3D space within a single framework, showing significant potential in spine surgery navigation.


Subject(s)
Algorithms , Tomography, X-Ray Computed , Tomography, X-Ray Computed/methods , X-Rays , Radiography , Imaging, Three-Dimensional/methods , Spine/diagnostic imaging , Spine/surgery
8.
Int J Comput Assist Radiol Surg ; 18(12): 2155-2166, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36892722

ABSTRACT

PURPOSE: Minimally invasive total hip arthroplasty (MITHA) is a treatment for hip arthritis that causes less tissue trauma and blood loss and allows shorter recovery times. However, the limited incision makes it difficult for surgeons to perceive the instruments' location and orientation. Computer-assisted navigation systems can help improve the medical outcome of MITHA. Directly applying existing navigation systems to MITHA, however, suffers from bulky fiducial markers, severe feature loss, confusion when tracking multiple instruments, and radiation exposure. To tackle these problems, we propose an image-guided navigation system for MITHA that uses a novel position-sensing marker. METHODS: A position-sensing marker is proposed to serve as the fiducial marker, with high-density, multi-fold ID tags. It yields a smaller feature span and provides a unique ID for each feature, overcoming the problems of bulky fiducial markers and multi-instrument tracking confusion. The marker can be recognized even when a large part of the locating features is obscured. To eliminate intraoperative radiation exposure, we propose a point-based method that achieves patient-image registration from anatomical landmarks. RESULTS: Quantitative experiments were conducted to evaluate the feasibility of our system. Instrument positioning accuracy was 0.33 ± 0.18 mm, and patient-image registration accuracy was 0.79 ± 0.15 mm. Qualitative experiments were also performed, verifying that the system can be used in a compact surgical volume and can handle severe feature loss and tracking confusion. In addition, our system does not require any intraoperative medical scans. CONCLUSION: Experimental results indicate that the proposed system can assist surgeons without larger space occupation, radiation exposure, or extra incisions, showing its potential application value in MITHA.
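Point-based patient-image registration from paired anatomical landmarks is classically solved in closed form with an SVD-based rigid alignment (Kabsch/Horn). The sketch below shows that computation and the resulting fiducial registration error; it is a generic illustration under the stated assumptions, not the authors' implementation:

```python
import numpy as np

def rigid_register(landmarks_image, landmarks_patient):
    """Closed-form rigid transform (R, t) mapping image-space landmarks to patient space."""
    P = np.asarray(landmarks_image, dtype=float)    # (N, 3) landmarks in CT/image space
    Q = np.asarray(landmarks_patient, dtype=float)  # (N, 3) same landmarks digitized on the patient
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)             # SVD of the cross-covariance matrix
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

def fiducial_registration_error(R, t, landmarks_image, landmarks_patient):
    """RMS distance between transformed image landmarks and patient landmarks (mm)."""
    mapped = (R @ np.asarray(landmarks_image, dtype=float).T).T + t
    return float(np.sqrt(((mapped - np.asarray(landmarks_patient)) ** 2).sum(axis=1).mean()))
```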


Subject(s)
Arthroplasty, Replacement, Hip , Surgery, Computer-Assisted , Humans , Surgery, Computer-Assisted/methods , Algorithms , Tomography, X-Ray Computed/methods , Phantoms, Imaging
9.
Mater Today Bio ; 16: 100397, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36081578

ABSTRACT

In vivo fluorescence imaging with new contrast agents emitting in the short-wavelength infrared region (NIR II, 1000-1700 nm) offers unprecedented advantages in imaging sensitivity and spatial resolution over traditional near-infrared (NIR) light. Recently, Nd-based rare-earth nanocrystals have attracted considerable attention because of the high quantum yield (∼40%) of their NIR II emission. However, undesirable capture by the reticuloendothelial system produces a strong background signal that hampers tumor discrimination. Here, a GSH-sensitive mesoporous silica shell incorporating tetrasulfide bonds was coated onto the surface of Nd-based down-conversion nanocrystals (DCNPs) to completely quench their fluorescence. After RGD conjugation on the silica surface, the NIR II contrast agents could actively target liver tumors. The tetrasulfide bonds are then cleaved as the silica framework decomposes in the cytoplasm under high GSH concentration, resulting in an explosive recovery of NIR II fluorescence. Benefiting from this specific response to the tumor microenvironment, the NIR II signal in other organs was markedly reduced, while the signal-to-background ratio was prominently enhanced in tumors. Solid liver tumors were then successfully resected under the guidance of our GSH-responsive NIR II fluorescence imaging, with no recurrence 20 days after surgery. Combined with its negligible side effects, the Nd-based nanoprobe vastly improved the imaging resolution of the tumor margin, opening a paradigm for NIR II fluorescence imaging-guided surgery.

10.
Talanta ; 250: 123715, 2022 Dec 01.
Article in English | MEDLINE | ID: mdl-35868149

ABSTRACT

Breast cancer is a highly lethal and aggressive form of cancer. Early-stage diagnosis and intraoperative guidance are important endeavors for reducing associated morbidity and mortality among breast cancer patients. Epithelial cell adhesion molecule (EpCAM) is aberrantly expressed in the majority of breast carcinomas, making it an attractive imaging biomarker. Herein, we have designed novel EpCAM-targeting peptides (denoted YQ-S) for precise breast carcinoma detection. Flow cytometry on different cell lines showed that the designed peptide YQ-S2 had greater binding affinity than YQ-S1 and the previously reported peptide SNF, with binding correlating positively with EpCAM expression. In addition, YQ-S2 displayed an ideal biosafety profile with no evidence of acute toxicity. Thus, YQ-S2 was chosen to represent YQ-S. By conjugation to the near-infrared fluorescent dye MPA, we further developed the EpCAM-targeting probe YQ-S2-MPA for real-time imaging and fluorescence-guided resection of breast cancer tumors. In vivo imaging of the MCF-7 tumor-bearing model demonstrated higher tumor uptake of YQ-S2-MPA than of SNF-MPA. The maximum tumor-to-normal tissue signal ratio of YQ-S2-MPA was 5.1, about 2 times that of SNF-MPA. Meanwhile, metastatic lesions in 4T1 lung metastasis and lymph node metastasis (LNM) mouse models were successfully detected with this imaging system. Notably, YQ-S2-MPA performed excellently in surgical navigation studies in the preclinical models. Moreover, we used 99mTc-HYNIC-YQ-S2 to successfully localize EpCAM-positive tumors. These data prove that YQ-S2 can accurately distinguish EpCAM-positive orthotopic and metastatic tumors from surrounding normal tissues and has clinical potential as a surgical navigation probe.


Subject(s)
Fluorescent Dyes , Neoplasms , Animals , Cell Line , Cell Line, Tumor , Epithelial Cell Adhesion Molecule/metabolism , Mice , Multimodal Imaging
11.
Elife ; 11, 2022 05 20.
Article in English | MEDLINE | ID: mdl-35594135

ABSTRACT

Background: Deep brain stimulation (DBS) electrode implant trajectories are stereotactically defined using preoperative neuroimaging. To validate the correct trajectory, microelectrode recordings (MERs) or local field potential recordings can be used to extend neuroanatomical information (defined by MRI) with neurophysiological activity patterns recorded from micro- and macroelectrodes probing the surgical target site. Currently, these two sources of information (imaging vs. electrophysiology) are analyzed separately, while means to fuse both data streams have not been introduced. Methods: Here, we present a tool that integrates resources from stereotactic planning, neuroimaging, MER, and high-resolution atlas data to create a real-time visualization of the implant trajectory. We validate the tool based on a retrospective cohort of DBS patients (N = 52) offline and present single-use cases of the real-time platform. Results: We establish an open-source software tool for multimodal data visualization and analysis during DBS surgery. We show a general correspondence between features derived from neuroimaging and electrophysiological recordings and present examples that demonstrate the functionality of the tool. Conclusions: This novel software platform for multimodal data visualization and analysis bears translational potential to improve accuracy of DBS surgery. The toolbox is made openly available and is extendable to integrate with additional software packages. Funding: Deutsche Forschungsgesellschaft (410169619, 424778381), Deutsches Zentrum für Luft- und Raumfahrt (DynaSti), National Institutes of Health (2R01 MH113929), and Foundation for OCD Research (FFOR).


Deep brain stimulation is an established therapy for patients with Parkinson's disease and an emerging option for other neurological conditions. Electrodes are implanted deep in the brain to stimulate precise brain regions and control abnormal brain activity in those areas. The most common target for Parkinson's disease, for instance, is a structure called the subthalamic nucleus, which sits at the base of the brain, just above the brain stem. To ensure electrodes are placed correctly, surgeons use various sources of information to characterize the patient's brain anatomy and decide on an implant site. These data include brain scans taken before surgery and recordings of brain activity taken during surgery to confirm the intended implant site. Sometimes, the brain activity signals from this last confirmation step may slightly alter surgical plans. It represents one of many challenges for clinical teams: to analyse, assimilate, and communicate data as it is collected during the procedure. Oxenford et al. developed a software pipeline to aggregate the data surgeons use to implant electrodes. The open-source platform, dubbed Lead-OR, visualises imaging data and brain activity recordings (termed electrophysiology data) in real time. The current set-up integrates with commercial tools and existing software for surgical planning. Oxenford et al. tested Lead-OR on data gathered retrospectively from 32 patients with Parkinson's who had electrodes implanted in their subthalamic nucleus. The platform showed good agreement between imaging and electrophysiology data, although there were some unavoidable discrepancies, arising from limitations in the imaging pipeline and from the surgical procedure. Lead-OR was also able to correct for brain shift, which is where the brain moves ever so slightly in the skull. With further validation, this proof-of-concept software could serve as a useful decision-making tool for surgical teams implanting electrodes for deep brain stimulation. In time, if implemented, its use could improve the accuracy of electrode placement, translating into better surgical outcomes for patients. It also has the potential to integrate forthcoming ultra-high-resolution data from current brain mapping projects, and other commercial surgical planning tools.


Subject(s)
Deep Brain Stimulation , Deep Brain Stimulation/methods , Electrodes, Implanted , Humans , Magnetic Resonance Imaging/methods , Microelectrodes , Neuroimaging/methods , Retrospective Studies
12.
Int J Comput Assist Radiol Surg ; 17(9): 1663-1672, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35588339

ABSTRACT

PURPOSE: Ultrasound-based navigation is a promising method in breast-conserving surgery, but tumor contouring often requires a radiologist at the time of surgery. Our goal is to develop a real-time, automatic, neural network-based tumor contouring process for intraoperative guidance. Segmentation accuracy is evaluated by both pixel-based metrics and expert visual rating. METHODS: This retrospective study includes 7318 intraoperative ultrasound images acquired from 33 breast cancer patients, randomly split 80:20 between training and testing. We implement a u-net architecture to label each pixel on ultrasound images as either tumor or healthy breast tissue. Quantitative metrics are calculated to evaluate the model's accuracy. Contour quality and usability are also assessed by fellowship-trained breast radiologists and surgical oncologists. Additionally, the viability of using our u-net model in an existing surgical navigation system is evaluated by measuring the segmentation frame rate. RESULTS: The mean Dice similarity coefficient of our u-net model is 0.78, with an area under the receiver-operating characteristic curve of 0.94, sensitivity of 0.95, and specificity of 0.67. Expert visual ratings are positive, with 93% of responses rating tumor contour quality at or above 7/10, and 75% of responses rating contour quality at or above 8/10. Real-time tumor segmentation achieved a frame rate of 16 frames per second, sufficient for clinical use. CONCLUSION: Neural networks trained with intraoperative ultrasound images provide consistent tumor segmentations that are well received by clinicians. These findings suggest that neural networks are a promising adjunct to alleviate radiologist workload and to improve efficiency in breast-conserving surgery navigation systems.
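The reported pixel-based metrics can all be derived from the confusion counts between the predicted and ground-truth binary masks. A minimal sketch of that evaluation is shown below as a generic illustration, not the authors' code:

```python
import numpy as np

def segmentation_metrics(pred_mask, gt_mask):
    """Dice coefficient, sensitivity, and specificity for binary segmentation masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn + 1e-12)
    sensitivity = tp / (tp + fn + 1e-12)   # fraction of tumor pixels detected
    specificity = tn / (tn + fp + 1e-12)   # fraction of healthy pixels correctly excluded
    return dice, sensitivity, specificity
```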


Subject(s)
Breast Neoplasms , Mastectomy, Segmental , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/surgery , Female , Humans , Image Processing, Computer-Assisted/methods , Retrospective Studies , Ultrasonography, Interventional
13.
Minim Invasive Ther Allied Technol ; 31(7): 981-991, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35337249

ABSTRACT

PURPOSE: In this study, we aimed to examine the effectiveness of augmented reality (AR)-assisted technology in laparoscopic partial nephrectomy (LPN) compared with the conventional technique. MATERIAL AND METHODS: We performed a systematic search of the Cochrane Library, PubMed, Embase, Web of Science, ScienceDirect, and Wanfang for eligible studies published up to 30 September 2021. Literature quality was independently evaluated by two investigators, and all statistical analyses were performed using STATA version 12.0 (STATA Corporation, Houston, TX, USA). According to the different surgical techniques and surgical risks, the included articles were divided into two subgroups. RESULTS: The reviewed studies comprised five retrospective comparative studies (RCS), two prospective controlled studies (PCS), and one randomized controlled trial (RCT), including 548 patients in total. AR-assisted LPN had the following advantages over conventional nephrectomy: shorter procedure times and lower intraoperative blood loss. Additionally, there were no statistically significant differences in positive-margin rate, warm ischemia time, complications, eGFR decline, or length of stay between the two techniques. CONCLUSION: Compared with the conventional technique, AR-assisted LPN appears to be a preferable treatment. However, a high degree of statistical heterogeneity was observed in the meta-analysis. Well-designed prospective trials and large-scale RCTs are needed to confirm the findings of this analysis.
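Statistical heterogeneity in a meta-analysis of this kind is commonly quantified with Cochran's Q and the I² statistic. The following minimal worked sketch uses a fixed-effect pooling with hypothetical effect sizes purely for illustration; it does not reproduce data from this review:

```python
import numpy as np

def heterogeneity(effects, variances):
    """Cochran's Q and I^2 (%) from per-study effect sizes and their variances."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = (weights * effects).sum() / weights.sum()   # fixed-effect pooled estimate
    Q = (weights * (effects - pooled) ** 2).sum()
    df = len(effects) - 1
    I2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0
    return Q, I2

# Hypothetical mean-difference estimates (e.g., operative time in minutes) from 5 studies
Q, I2 = heterogeneity([-20.0, -35.0, -5.0, -28.0, -12.0], [25.0, 30.0, 20.0, 40.0, 22.0])
print(f"Q = {Q:.1f}, I^2 = {I2:.0f}%")
```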


Subject(s)
Augmented Reality , Kidney Neoplasms , Laparoscopy , Humans , Kidney Neoplasms/surgery , Laparoscopy/methods , Nephrectomy/methods , Technology , Treatment Outcome , Warm Ischemia
14.
Adv Sci (Weinh) ; 9(7): e2104935, 2022 03.
Article in English | MEDLINE | ID: mdl-35023300

ABSTRACT

Surgeons face challenges in intraoperatively defining the margins of brain tumors because of their infiltrative nature. Extracellular acidosis caused by metabolic reprogramming of cancer cells is a reliable marker of tumor-infiltrated regions. Although acidic-margin-guided surgery shows promise in improving surgical prognosis, its clinical translation is delayed by the need to have exogenous probes approved by drug regulatory authorities. Here, an intelligent surface-enhanced Raman scattering (SERS) navigation system that delineates glioma acidic margins without administration of exogenous probes is reported. With the assistance of this system, metabolites at the tumor cutting edges can be nondestructively transferred within a water droplet to a pH-sensitive SERS chip. A custom deep learning model automatically processes the Raman spectra collected from the SERS chip and rapidly delineates the pH map of the tumor resection bed. Acidity-correlated cancer cell density and proliferation levels are demonstrated at tumor cutting edges in animal models and in excised tissues from glioma patients. The overall survival of animal models after SERS-system-guided surgery is significantly longer than with the conventional strategy used in clinical practice. This SERS system holds promise for accelerating the clinical translation of acidic-margin-guided surgery for solid tumors with an infiltrative nature.


Subject(s)
Acidosis , Brain Neoplasms , Glioma , Animals , Brain Neoplasms/surgery , Glioma/pathology , Glioma/surgery , Humans , Margins of Excision , Spectrum Analysis, Raman
15.
Article in Chinese | WPRIM (Western Pacific) | ID: wpr-904734

ABSTRACT

Objective: To explore the clinical value of mixed reality technology for locating perforator vessels and assisting perforator dissection when harvesting anterolateral thigh flaps. Methods: Six patients who needed anterolateral thigh flap repair after resection of oral and maxillofacial tumors were recruited from the Department of Oral and Maxillofacial Surgery of Nanchong Central Hospital from January 2020 to January 2021. Before surgery, CT angiography data of the patients' lower limbs, acquired with calibration points in place, were imported into a data workstation to perform 3D reconstruction of the perforator vessels and surrounding thigh tissues, and the reconstruction results were imported into Microsoft HoloLens 2 glasses. During the operation, calibration was performed at the calibration points of the operative area so that the preoperative reconstruction was superimposed onto the operative field through the Microsoft HoloLens 2 glasses. The clinical value of mixed reality technology for locating perforator vessels and assisting dissection of the anterolateral thigh perforator flap was assessed in six respects: whether the perforator vessels were reconstructed preoperatively, the intraoperative calibration time, whether the actual position of the perforating vessels at the point where they pierce the fascia lata deviated from the preoperative reconstruction by more than 1 cm, the time required to harvest the flap, whether the actual course of the perforator vessels was consistent with the reconstruction, and whether the postoperative flap survived. Results: The position and course of the perforating vessels were successfully reconstructed in all 6 cases before the operation. The actual intraoperative course of the perforating vessels was consistent with the reconstruction, and the deviation between the actual position of the perforating points and the preoperative reconstruction was within 1 cm, meeting the requirements for assisting harvest of the anterolateral thigh flap. The average flap harvest time was (70.50 ± 7.20) min, and the average calibration time was (13.33 ± 5.50) min. All flaps survived. Conclusions: Mixed reality technology projects the reconstruction of the anterolateral thigh perforator vessels directly onto the operative field, providing a new method for assisting localization and dissection of anterolateral thigh flap perforator vessels and reducing the possibility of injury to the perforators.

16.
Proc Inst Mech Eng H ; 235(12): 1386-1398, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34304631

ABSTRACT

Image-guided therapies have been on the rise in recent years, as they can achieve higher accuracy and are less invasive than traditional methods. By combining augmented reality technology with image-guided therapy, more organs and tissues can be observed by surgeons, improving surgical accuracy. In this review, 233 publications (dated from 2015 to 2020) on the design and application of augmented reality-based systems for image-guided therapy, including both research prototypes and commercial products, were considered. Based on their functions and applications, sixteen studies were selected. The engineering specifications and applications were analyzed and summarized for each study. Finally, future directions and existing challenges in the field were summarized and discussed.


Subject(s)
Augmented Reality , Humans , Radiology, Interventional , Surgery, Computer-Assisted , Technology
17.
Eur J Med Chem ; 219: 113440, 2021 Jul 05.
Article in English | MEDLINE | ID: mdl-33892274

ABSTRACT

Breast cancer is the most dangerous of the malignant tumors that threaten women's lives and health. Surgical resection can effectively prolong the survival of patients with early breast cancer. Insulin-like growth factor type 1 receptor (IGF1R) is a member of the large family of receptor tyrosine kinases and is significantly overexpressed in breast cancer cells, making it an ideal biomarker for the diagnosis and surgical navigation of breast cancer. Herein, we developed a series of IGF1R-targeted probes (YQ-L) for fluorescence imaging in breast cancer based on a drug-repositioning strategy. YQ-L exhibited specific IGF1R binding both in vitro and in vivo; in particular, probe 5d showed higher tumor uptake with a high tumor/normal ratio in the MCF-7 tumor-bearing mouse. The maximum T/N ratio of probe 5d was 4.9, about 3 times that of indocyanine green (ICG). Meanwhile, probe 5d displayed more favorable in vivo pharmacokinetic properties than ICG, with less hepatic and intestinal uptake. The convenient preparation, excellent IGF1R specificity in breast cancer, rapid clearance from normal organs, and good biosafety profile of probe 5d warrant further investigation toward clinical translation for detection and surgical navigation of breast cancer.


Subject(s)
Breast Neoplasms/diagnosis , Fluorescent Dyes/chemistry , Receptor, IGF Type 1/metabolism , Animals , Breast Neoplasms/pathology , Cell Line, Tumor , Female , Fluorescent Dyes/metabolism , Gene Expression , Humans , Mice , Mice, Nude , Optical Imaging , Protein Binding , Receptor, IGF Type 1/chemistry , Receptor, IGF Type 1/genetics , Tissue Distribution , Transplantation, Heterologous
18.
Front Oncol ; 11: 638327, 2021.
Article in English | MEDLINE | ID: mdl-33718233

ABSTRACT

Fluorescence-guided surgery has improved the treatment of malignant visceral tumors, including hepatobiliary and pancreatic neoplasms. In both open and minimally invasive surgery, optical imaging using near-infrared (NIR) fluorescence is used to assess anatomy and function in real time. Here, we review a variety of publications on clinical applications of NIR fluorescence imaging in liver surgery. We have developed a novel nanoparticle (indocyanine green lactosome) that is biocompatible and can be used for imaging cancer tissues and as a drug delivery system. The particles are stable in blood, with a half-life of approximately 10-20 h. Particles labeled with a NIR fluorescent agent have been delivered to cancer tissues in animals via the enhanced permeability and retention effect. Furthermore, this article reviews recent developments in photodynamic therapy with NIR fluorescence imaging, which may contribute to and accelerate innovative treatments for liver tumors.

19.
J Endourol ; 33(8): 641-646, 2019 08.
Article in English | MEDLINE | ID: mdl-30565487

ABSTRACT

Purpose: To evaluate the feasibility and effectiveness of navigation with the intelligent/interactive qualitative and quantitative analysis (IQQA) three-dimensional (3D) reconstruction technique in laparoscopic or robot-assisted partial nephrectomy (LPN or RAPN) for renal hilar tumors. Patients and Methods: The study retrospectively reviewed 26 patients with hilar tumors treated from February 2016 to February 2018. The IQQA 3D reconstruction technique was applied for navigation and resection of the tumors. Relevant clinical parameters and surgical outcomes were recorded. Results: All 26 LPN or RAPN procedures were completed without conversion to a hand-assisted or open approach. Under IQQA navigation, all tumors were located precisely on the first attempt during surgery. The mean operative time was 142 minutes (142 ± 35), with a mean warm ischemia time of 24.3 minutes (24.3 ± 9.5). The estimated blood loss was 156 mL (156 ± 112). No intraoperative complications occurred. Two patients suffered postoperative complications. All patients had negative margins on final pathological examination. At a mean follow-up of 3 months, the mean glomerular filtration rate was 22.5 mL/min (22.5 ± 7.1), without tumor recurrence. Conclusions: With features such as accurate localization, complete resection, and fewer perioperative complications, navigation with the IQQA 3D reconstruction technique in partial nephrectomy represents a safe and effective procedure for hilar tumors.


Subject(s)
Carcinoma, Renal Cell/surgery , Kidney Neoplasms/surgery , Laparoscopy/methods , Nephrectomy/methods , Robotic Surgical Procedures/methods , Surgery, Computer-Assisted/methods , Adult , Aged , Blood Loss, Surgical , Carcinoma, Renal Cell/diagnostic imaging , Carcinoma, Renal Cell/pathology , Female , Glomerular Filtration Rate , Humans , Imaging, Three-Dimensional , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/pathology , Male , Middle Aged , Multidetector Computed Tomography , Operative Time , Postoperative Complications/epidemiology , Retrospective Studies , Warm Ischemia/statistics & numerical data
20.
ACS Nano ; 12(4): 3629-3637, 2018 04 24.
Article in English | MEDLINE | ID: mdl-29595962

ABSTRACT

Distinguishing tumor cells from normal cells holds the key to precision diagnosis and effective intervention in cancer. The fundamental difficulties, however, are the heterogeneity of tumor cells and the lack of truly specific and ideally universal cancer biomarkers. Here, we report a concept of tumor cell detection that bypasses the specific genotypic and phenotypic features of different tumor cell types and directly targets the hallmark of cancer: uncontrollable growth. Combining spherical nucleic acids (SNAs) with exquisitely engineered molecular beacons (SNA beacons, dubbed the SNAB technology) makes it possible to identify tumor cells from normal cells based on the molecular phenotype of telomerase activity, largely bypassing the heterogeneity problem of cancers. Owing to the cell-entry capability of SNAs, the SNAB probe readily achieves tumor cell detection across multiple platforms, ranging from solution-based assays to single-cell imaging and in vivo solid tumor imaging (unlike PCR, which is restricted to cell lysates). We envision that the SNAB technology will impact cancer diagnosis, therapeutic response assessment, and image-guided surgery.


Subject(s)
Neoplasms/diagnostic imaging , Nucleic Acids/chemistry , Telomerase/chemistry , Animals , Cells, Cultured , Humans , Mice , Mice, Nude , Neoplasms/metabolism , Nucleic Acids/metabolism , Optical Imaging , Telomerase/metabolism