Results 1 - 20 of 897
1.
J Med Syst ; 48(1): 66, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38976137

ABSTRACT

Three-dimensional (3D) printing has gained popularity across various domains but remains less integrated into medical surgery due to its complexity. Existing literature primarily discusses specific applications, with limited detailed guidance on the entire process. The methodological details of converting Computed Tomography (CT) images into 3D models are often found in amateur 3D printing forums rather than the scientific literature. To address this gap, we present a comprehensive methodology for converting CT images of bone fractures into 3D-printed models. This involves converting files from Digital Imaging and Communications in Medicine (DICOM) format to stereolithography (STL) format, processing the 3D model, and preparing it for printing. Our methodology outlines step-by-step guidelines, time estimates, and software recommendations, prioritizing free open-source tools. We also share our practical experience and outcomes, including the successful creation of 72 models for surgical planning, patient education, and teaching. Although there are challenges associated with utilizing 3D printing in surgery, such as the requirement for specialized expertise and equipment, the advantages in surgical planning, patient education, and improved outcomes are evident. Further studies are warranted to refine and standardize these methodologies for broader adoption in medical practice.
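The CT-to-STL conversion described above can be sketched with free open-source Python tools. The snippet below is a minimal illustration only, not the authors' workflow: the libraries (SimpleITK, scikit-image, numpy-stl), the input folder name, and the bone threshold of roughly 300 HU are assumptions, and in practice the resulting mesh is cleaned and prepared in slicing software before printing.

```python
# Minimal sketch of a CT-to-STL conversion using open-source tools
# (SimpleITK, scikit-image, numpy-stl). The folder name and the bone
# threshold (~300 HU) are illustrative assumptions, not the authors' settings.
import numpy as np
import SimpleITK as sitk
from skimage import measure
from stl import mesh

# 1. Read the DICOM series into a 3D volume (Hounsfield units).
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("ct_dicom_folder"))
image = reader.Execute()
volume = sitk.GetArrayFromImage(image)          # (slices, rows, cols)
spacing = image.GetSpacing()[::-1]              # reorder to match array axes

# 2. Extract the bone surface with marching cubes at a bone-like threshold.
verts, faces, _, _ = measure.marching_cubes(volume, level=300, spacing=spacing)

# 3. Write the triangle mesh to an STL file for slicing and printing.
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
surface.vectors[:] = verts[faces]               # (n_faces, 3, 3) triangle vertices
surface.save("fracture_model.stl")
```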


Subject(s)
Fractures, Bone , Printing, Three-Dimensional , Tomography, X-Ray Computed , Humans , Fractures, Bone/diagnostic imaging , Fractures, Bone/surgery , Tomography, X-Ray Computed/methods , Imaging, Three-Dimensional/methods , Traumatology , Radiology Information Systems/organization & administration , Models, Anatomic
2.
Surg Endosc ; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38913120

ABSTRACT

INTRODUCTION: Communication is fundamental to effective surgical coaching. This can be challenging during image-guided procedures, where coaches and trainees need to articulate technical details on a monitor. Telestration devices that annotate monitors remotely could overcome these limitations and enhance the coaching experience. This study aims to evaluate the value of a novel telestration device in surgical coaching. METHODS: A randomized controlled trial was designed. All participants watched a video demonstrating the task, followed by a baseline performance assessment and randomization into either a control group (conventional verbal coaching without telestration) or a telestration group (verbal coaching with telestration). Coaching for a simulated laparoscopic small bowel anastomosis on a dry lab model was provided by a faculty surgeon. Following the coaching session, participants underwent a post-coaching performance assessment of the same task. Assessments were recorded and rated by blinded reviewers using a modified Global Rating Scale of the Objective Structured Assessment of Technical Skills (OSATS). Coaching sessions were also recorded and compared in terms of mentoring moments, guidance misinterpretations, questions/clarifications by trainees, and task completion time. A 5-point Likert scale was administered to obtain feedback. RESULTS: Twenty-four residents participated (control group 13, telestration group 11). Improvements in some elements of the OSATS scale were noted in the telestration group, but there was no statistically significant difference in the overall score between the two groups. Mentoring moments were more frequent in the telestration group. In the telestration group, 55% of participants felt comfortable that they could perform the task independently, compared with only 8% in the control group, and 82% would recommend the use of telestration tools for coaching. CONCLUSION: This novel telestration device showed educational value mainly in the non-technical aspects of the interaction, enhancing the coaching experience through improved communication and more mentoring moments between coach and trainee.

3.
Med Phys ; 51(7): 4554-4566, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38856158

ABSTRACT

BACKGROUND: Image-to-patient registration aligns preoperative images to intra-operative anatomical structures and is a critical step in image-guided surgery (IGS). The accuracy and speed of this step significantly influence the performance of IGS systems. Rigid registration based on paired points has been widely used in IGS, but studies have shown its limitations in terms of cost, accuracy, and registration time. Therefore, rigid registration of point clouds representing human anatomical surfaces has become an alternative approach to image-to-patient registration in IGS systems. PURPOSE: We propose a novel correspondence-based rigid point cloud registration method that achieves global registration without the need for pose initialization. The proposed method is less sensitive to outliers than the widely used RANSAC-based registration methods and achieves high accuracy at high speed, making it particularly suitable for image-to-patient registration in IGS. METHODS: We use the rotation axis and angle to represent the rigid spatial transformation between two coordinate systems. Given a set of correspondences between two point clouds in two coordinate systems, we first construct a 3D correspondence cloud (CC) from the inlier correspondences and show that the CC lies on a plane whose normal is the rotation axis between the two point clouds. Thus, the rotation axis can be estimated by fitting this plane. We further show that when the normals of a pair of corresponding points are projected onto this plane, the angle between the projected normals equals the rotation angle. Therefore, the rotation angle can be estimated from an angle histogram. Moreover, this two-stage estimation also produces a high-quality correspondence subset with a high inlier rate. With the estimated rotation axis, rotation angle, and correspondence subset, the spatial transformation can be computed directly, or estimated with RANSAC in a fast and robust way within only 100 iterations. RESULTS: To validate the performance of the proposed registration method, we conducted experiments on the CT-Skull dataset. We first performed a simulation experiment in which the initial inlier rate of the correspondence set was controlled, and the results showed that the proposed method effectively obtains a correspondence subset with a much higher inlier rate. We then compared our method with traditional approaches such as ICP, Go-ICP, and RANSAC, as well as recently proposed methods such as TEASER, SC2-PCR, and MAC. Our method outperformed all traditional methods in terms of registration accuracy and speed. While achieving registration accuracy comparable to that of the recently proposed methods, our method was markedly faster, almost three times faster than TEASER. CONCLUSIONS: Experiments on the CT-Skull dataset demonstrate that the proposed method effectively obtains a high-quality correspondence subset with a high inlier rate, and that a small RANSAC of only 100 iterations is sufficient to estimate the optimal transformation for point cloud registration. Our method achieves higher registration accuracy and faster speed than existing widely used methods, demonstrating great potential for image-to-patient registration, where a rigid spatial transformation is needed to align preoperative images to the intra-operative patient anatomy.
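The key geometric fact exploited by this method, namely that for a rigid transform q = Rp + t the correspondence cloud c = q - p lies on a plane whose normal is the rotation axis, can be illustrated with a short NumPy sketch. The sketch below is a simplified illustration under strong assumptions (outlier-free correspondences, centred point pairs used in place of point normals, a median in place of the angle histogram); it is not the authors' implementation, which additionally extracts a high-inlier correspondence subset and runs a small RANSAC.

```python
# Simplified sketch of the correspondence-cloud idea, assuming outlier-free
# correspondences; not the authors' implementation.
import numpy as np

def estimate_rigid_transform(P, Q):
    """P, Q: (N, 3) corresponding points with q_i = R @ p_i + t."""
    # 1. Correspondence cloud; its best-fit plane normal is the rotation axis,
    #    because (R - I) p is always perpendicular to the rotation axis.
    C = Q - P
    C_centred = C - C.mean(axis=0)
    _, _, Vt = np.linalg.svd(C_centred, full_matrices=False)
    axis = Vt[-1]                                  # unit rotation axis (sign-ambiguous)

    # 2. Rotation angle from the in-plane angle between centred point pairs
    #    (the paper uses projected point normals and an angle histogram).
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    Pp = Pc - np.outer(Pc @ axis, axis)            # project onto the plane
    Qp = Qc - np.outer(Qc @ axis, axis)
    cross = np.cross(Pp, Qp) @ axis
    dot = np.einsum('ij,ij->i', Pp, Qp)
    theta = np.median(np.arctan2(cross, dot))      # robust angle estimate

    # 3. Rodrigues' formula for R, then t from the centroids.
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t
```

The axis sign ambiguity is harmless here because flipping the axis also flips the sign of the estimated angle, yielding the same rotation matrix.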


Subject(s)
Surgery, Computer-Assisted , Humans , Surgery, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Tomography, X-Ray Computed
4.
Mol Pharm ; 21(7): 3296-3309, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38861020

ABSTRACT

Cetuximab (Cet)-IRDye800CW, among other antibody-IRDye800CW conjugates, is a potentially effective tool for delineating tumor margins during fluorescence image-guided surgery (IGS). However, residual disease often leads to recurrence. Photodynamic therapy (PDT) following IGS is proposed as an approach to eliminate residual disease but suffers from a lack of molecular specificity for cancer cells. Antibody-targeted PDT offers a potential solution for this specificity problem. In this study, we show, for the first time, that Cet-IRDye800CW is capable of antibody-targeted PDT in vitro when the payload of dye molecules is increased from 2 (clinical version) to 11 per antibody. Cet-IRDye800CW (1:11) produces singlet oxygen, hydroxyl radicals, and peroxynitrite upon activation with 810 nm light. In vitro assays on FaDu head and neck cancer cells confirm that Cet-IRDye800CW (1:11) maintains cancer cell binding specificity and is capable of inducing up to ∼90% phototoxicity in FaDu cancer cells. The phototoxicity of Cet-IRDye800CW conjugates using 810 nm light follows a dye payload-dependent trend. Cet-IRDye800CW (1:11) is also found to be more phototoxic to FaDu cancer cells and less toxic in the dark than the approved chromophore indocyanine green, which can also act as a PDT agent. We propose that antibody-targeted PDT using high-payload Cet-IRDye800CW (1:11) could hold potential for eliminating residual disease postoperatively when using sustained illumination devices, such as fiber optic patches and implantable surgical bed balloon applicators. This approach could also potentially be applicable to a wide variety of resectable cancers that are amenable to IGS-PDT, using their respective approved full-length antibodies as a template for high-payload IRDye800CW conjugation.


Subject(s)
Cetuximab , Indoles , Photochemotherapy , Humans , Photochemotherapy/methods , Indoles/chemistry , Cetuximab/chemistry , Cetuximab/pharmacology , Cell Line, Tumor , Head and Neck Neoplasms/drug therapy , Photosensitizing Agents/chemistry , Benzenesulfonates
5.
Diagnostics (Basel) ; 14(12)2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38928666

ABSTRACT

The aim of this study was to assess the efficacy of hyperspectral imaging (HSI) as an intraoperative perfusion imaging modality during gender affirmation surgery (GAS). The hypothesis was that HSI could quantify perfusion of the clitoral complex and thereby predict either uneventful wound healing or the occurrence of necrosis. In this non-randomised prospective clinical study, we enrolled 30 patients who underwent GAS in the form of vaginoplasty with preparation of a clitoral complex between 2020 and 2024 and compared patient characteristics as well as HSI data with respect to clitoral necrosis. Individuals with uneventful wound healing of the clitoral complex were designated Group A. Patients with complete necrosis of the neo-clitoris were assigned to Group B. Patient characteristics were collected and a comparative analysis was subsequently carried out. No significant difference in patient characteristics was observed between the two groups. Necrosis occurred when both tissue oxygen saturation (StO2) and the near-infrared perfusion index (NIR PI) fell below 40%. For the simultaneous occurrence of StO2 and NIR PI values of 40% or less, a sensitivity of 92% and a specificity of 72% were calculated. Intraoperatively, the onset of necrosis in the clitoral complex can be reliably predicted with the assistance of HSI.
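Purely as an illustration of the reported decision rule (necrosis predicted when both StO2 and NIR PI are 40% or lower) and of how sensitivity and specificity follow from it, a minimal sketch is shown below; the parameter arrays are invented placeholders, not study data.

```python
# Hypothetical illustration of the combined threshold rule and the derived
# sensitivity/specificity; the values below are placeholders, not study data.
import numpy as np

sto2 = np.array([55, 38, 62, 35, 48])      # intraoperative StO2 (%), per patient
nir_pi = np.array([60, 30, 58, 39, 45])    # NIR perfusion index (%), per patient
necrosis = np.array([0, 1, 0, 1, 0])       # observed outcome (1 = clitoral necrosis)

predicted = (sto2 <= 40) & (nir_pi <= 40)  # rule: both parameters at or below 40%

tp = np.sum(predicted & (necrosis == 1))
fn = np.sum(~predicted & (necrosis == 1))
tn = np.sum(~predicted & (necrosis == 0))
fp = np.sum(predicted & (necrosis == 0))
print(f"sensitivity={tp / (tp + fn):.2f}, specificity={tn / (tn + fp):.2f}")
```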

6.
Adv Mater ; : e2405275, 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38897213

ABSTRACT

The development of minimally invasive surgery has greatly advanced precision tumor surgery but sometimes suffers from restricted visualization of the surgical field, especially during the removal of abdominal tumors. Three-dimensional inspection of tumors can be achieved by intravenously injecting tumor-selective fluorescent probes, yet most such probes cannot instantly distinguish tumors when applied by in situ spraying, a capability that would be highly convenient during surgery. In this study, we have designed an injectable and sprayable fluorescent nanoprobe, termed Poly-g-BAT, to realize rapid tumor imaging in freshly dissected human colorectal tumors and animal models. Mechanistically, the incorporation of a γ-glutamyl group facilitates the rapid internalization of Poly-g-BAT, and the internalized nanoprobes are subsequently activated by intracellular NAD(P)H:quinone oxidoreductase-1 to release near-infrared fluorophores. As a result, Poly-g-BAT achieves a superior tumor-to-normal ratio (TNR) of up to 12.3 and enables fast visualization (3 min after in situ spraying) of tumor boundaries in xenograft tumor models, ApcMin/+ mouse models, and fresh human tumor tissues. In addition, Poly-g-BAT is capable of identifying minimal premalignant lesions after intravenous injection.
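As a toy illustration of the tumor-to-normal ratio (TNR) metric quoted above, a minimal sketch follows; the region-of-interest intensities are invented placeholders, not study measurements.

```python
# Toy illustration of the tumor-to-normal ratio (TNR); the region-of-interest
# intensities below are invented placeholders, not study data.
import numpy as np

tumor_roi = np.array([1180.0, 1320.0, 1250.0])   # mean NIR fluorescence, tumor ROIs
normal_roi = np.array([95.0, 110.0, 102.0])      # mean NIR fluorescence, adjacent normal tissue

tnr = tumor_roi.mean() / normal_roi.mean()
print(f"TNR = {tnr:.1f}")                        # higher TNR = sharper tumor delineation
```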

7.
Article in English | MEDLINE | ID: mdl-38900308

ABSTRACT

To meet the growing demand for intraoperative molecular imaging, the development of compatible imaging agents plays a crucial role. Given the unique requirements of surgical applications compared with diagnostics and therapy, maximizing translational potential necessitates distinctive imaging agent designs. For effective surgical guidance, exogenous signatures are essential and are achievable through a diverse range of imaging labels such as (radio)isotopes, fluorescent dyes, or combinations thereof. To achieve optimal in vivo utility, the tracer must be designed as a balanced whole, ensuring that the imaging label acts in harmony with the affinity and specificity (and hence the pharmacokinetics) of the pharmacophore/targeting moiety. This review outlines common design strategies and the effects of refinements of the molecular imaging agent design on the agent's pharmacological profile, including the optimization of affinity, pharmacokinetics (including serum binding and target-mediated background), biological clearance route, achievable signal intensity, and the effect of dosing on each of these.

8.
Vet Med Sci ; 10(4): e1506, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38853600

ABSTRACT

A 7-year-old castrated male Golden Retriever weighing 36.8 kg presented to the Veterinary Teaching Hospital with vomiting, anorexia, and depression. After blood tests and radiographic, ultrasound, and computed tomography examinations, a 7.85 × 5.90 × 8.75 cm mass was identified in the caecum. To visualise the tumour margin and improve the accuracy of tumour resection, intraoperative short-wave infrared imaging using indocyanine green was performed during surgery. An indocyanine green solution was injected intravenously as a bolus of 5 mg/kg 24 h before surgery. Tumour resection was performed with a 0.5 cm margin from the fluorescence-marked tissues. Histopathological examination revealed a diagnosis of a gastrointestinal stromal tumour (GIST) and the absence of neoplastic cells at the surgical margin, indicating a successful surgery. To our knowledge, this is the first case of a GIST resection in a dog using intraoperative short-wave infrared imaging.


Subject(s)
Dog Diseases , Gastrointestinal Stromal Tumors , Indocyanine Green , Animals , Dogs , Male , Gastrointestinal Stromal Tumors/veterinary , Gastrointestinal Stromal Tumors/surgery , Gastrointestinal Stromal Tumors/diagnostic imaging , Gastrointestinal Stromal Tumors/diagnosis , Dog Diseases/surgery , Dog Diseases/diagnostic imaging , Optical Imaging/veterinary , Optical Imaging/methods
9.
Article in English | MEDLINE | ID: mdl-38839534

ABSTRACT

Surgical navigation, despite its potential benefits, faces challenges in widespread adoption in clinical practice. Possible reasons include the high cost, increased surgery time, attention shifts during surgery, and the mental task of mapping from the monitor to the patient. To address these challenges, a portable, all-in-one surgical navigation system using augmented reality (AR) was developed, and its feasibility and accuracy were investigated. The system achieves AR visualization by capturing a live video stream of the actual surgical field with a visible-light camera and merging it with preoperative virtual images. A skull model with reference spheres was used to evaluate accuracy. After registration, virtual models were overlaid on the real skull model. The discrepancies between the centres of the real spheres and the corresponding centres in the virtual model were measured to assess the AR visualization accuracy. This AR surgical navigation system demonstrated precise AR visualization, with an overall overlap error of 0.53 ± 0.21 mm. By seamlessly integrating the preoperative virtual plan with the intraoperative field of view in a single view, this novel AR navigation system could provide a feasible solution for using AR visualization to guide the surgeon in performing the operation as planned.
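A minimal sketch of the accuracy metric described above, the mean and standard deviation of the Euclidean distances between matched real and virtual sphere centres, is given below; the coordinate arrays are hypothetical placeholders, not the study's measurements.

```python
# Hypothetical sketch of the AR overlay-error metric: Euclidean distance between
# matched real and virtual sphere centres, reported as mean ± SD in millimetres.
import numpy as np

def overlay_error(real_centres, virtual_centres):
    """real_centres, virtual_centres: (N, 3) arrays in mm, rows matched."""
    d = np.linalg.norm(real_centres - virtual_centres, axis=1)
    return d.mean(), d.std(ddof=1)

# Placeholder coordinates for three reference spheres.
real = np.array([[10.2, 34.1, 50.0], [22.8, 30.5, 47.9], [35.0, 28.7, 52.3]])
virtual = np.array([[10.6, 34.4, 50.3], [23.2, 30.1, 48.4], [34.5, 29.0, 52.0]])
mean_err, sd_err = overlay_error(real, virtual)
print(f"overlap error: {mean_err:.2f} ± {sd_err:.2f} mm")
```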

10.
Front Artif Intell ; 7: 1406806, 2024.
Article in English | MEDLINE | ID: mdl-38873177

ABSTRACT

Background: Bladder cancer, specifically transitional cell carcinoma (TCC) polyps, presents a significant healthcare challenge worldwide. Accurate segmentation of TCC polyps in cystoscopy images is crucial for early diagnosis and urgent treatment. Deep learning models have shown promise in addressing this challenge. Methods: We evaluated deep learning architectures, including Unetplusplus_vgg19, Unet_vgg11, and FPN_resnet34, trained on a dataset of annotated low-quality cystoscopy images. Results: The models showed promise, with Unetplusplus_vgg19 and FPN_resnet34 exhibiting precision of 55.40% and 57.41%, respectively, suitable for clinical application without modifying existing treatment workflows. Conclusion: Deep learning models demonstrate potential for TCC polyp segmentation, even when trained on lower-quality images, suggesting their viability in improving timely bladder cancer diagnosis without affecting current clinical processes.
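The model names follow the encoder/decoder naming style of the segmentation_models_pytorch library; the sketch below shows how such architectures are typically instantiated. This is an assumption about the tooling, not a confirmed detail of the study, and the data loading, loss, and training loop are omitted.

```python
# Sketch of instantiating the three architectures named in the abstract,
# assuming the segmentation_models_pytorch convention (an assumption, not a
# confirmed implementation detail of the study).
import segmentation_models_pytorch as smp

common = dict(encoder_weights="imagenet", in_channels=3, classes=1)

models = {
    "Unetplusplus_vgg19": smp.UnetPlusPlus(encoder_name="vgg19", **common),
    "Unet_vgg11":         smp.Unet(encoder_name="vgg11", **common),
    "FPN_resnet34":       smp.FPN(encoder_name="resnet34", **common),
}

# Each model maps a batch of RGB cystoscopy frames to a single-channel
# polyp-probability map (apply a sigmoid and threshold for a binary mask).
```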

11.
Article in English | MEDLINE | ID: mdl-38922721

ABSTRACT

OBJECTIVE: Segmentation, the partitioning of patient imaging into multiple labeled segments, has several potential clinical benefits, but when performed manually it is tedious and resource intensive. Automated deep learning (DL)-based segmentation methods can streamline the process. The objective of this study was to evaluate a label-efficient DL pipeline that requires only a small number of annotated scans for semantic segmentation of sinonasal structures in CT scans. STUDY DESIGN: Retrospective cohort study. SETTING: Academic institution. METHODS: Forty CT scans were used in this study, including 16 scans in which the nasal septum (NS), inferior turbinate (IT), maxillary sinus (MS), and optic nerve (ON) were manually annotated using open-source software. A label-efficient DL framework was used to train jointly on the few manually labeled scans and the remaining unlabeled scans. Quantitative analysis was then performed to determine the number of annotated scans needed to achieve submillimeter average surface distances (ASDs). RESULTS: Our findings reveal that merely four labeled scans are necessary to achieve median submillimeter ASDs for the large sinonasal structures (NS, 0.96 mm; IT, 0.74 mm; MS, 0.43 mm), whereas eight scans are required for the smaller ON (0.80 mm). CONCLUSION: We have evaluated a label-efficient pipeline for segmentation of sinonasal structures. Empirical results demonstrate that automated DL methods can achieve submillimeter accuracy using a small number of labeled CT scans. Our pipeline has the potential to improve pre-operative planning workflows, robotic and image-guided navigation systems, computer-assisted diagnosis, and the construction of statistical shape models to quantify population variations. LEVEL OF EVIDENCE: N/A.
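A minimal sketch of the average surface distance (ASD) metric used above to judge submillimeter accuracy is shown below, computed symmetrically between a predicted and a reference binary mask with voxel spacing passed in so the result is in millimetres. This is a generic implementation of the metric, not the study's evaluation code.

```python
# Generic symmetric average surface distance (ASD) between two binary
# segmentations, in millimetres; not the study's evaluation code.
import numpy as np
from scipy import ndimage

def surface_voxels(mask):
    # Boundary voxels: the mask minus its binary erosion.
    mask = np.asarray(mask, dtype=bool)
    return mask & ~ndimage.binary_erosion(mask)

def average_surface_distance(pred, ref, spacing):
    """pred, ref: 3D binary arrays; spacing: voxel size (z, y, x) in mm."""
    surf_pred, surf_ref = surface_voxels(pred), surface_voxels(ref)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_ref = ndimage.distance_transform_edt(~surf_ref, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~surf_pred, sampling=spacing)
    d_pred_to_ref = dist_to_ref[surf_pred]
    d_ref_to_pred = dist_to_pred[surf_ref]
    return np.concatenate([d_pred_to_ref, d_ref_to_pred]).mean()
```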

12.
Front Surg ; 11: 1386722, 2024.
Article in English | MEDLINE | ID: mdl-38933651

ABSTRACT

Introduction: Infrared thermography (IT) is a non-invasive, real-time imaging technique with potential applications in different areas of neurosurgery. Despite technological advances in the field, intraoperative IT (IIT) has been an underestimated tool, with scarce reports on its usefulness during intracranial tumor resection. We aimed to evaluate the usefulness of high-resolution IIT with static and dynamic thermographic maps for transdural lesion localization and diagnosis, for assessing the extent of resection, and for detecting perioperative acute ischemia. Methods: In a prospective study, 15 patients with intracranial tumors (six gliomas, four meningiomas, and five brain metastases) were examined with a high-resolution thermographic camera after craniotomy, after dural opening, and at the end of tumor resection. Results: Tumors were localized transdurally with 93.3% sensitivity and 100% specificity (p < 0.00001), as were cortical arteries and veins. Gliomas were consistently hypothermic, while metastases and meningiomas exhibited highly variable thermographic maps on static (p = 0.055) and dynamic (p = 0.015) imaging. Residual tumors showed non-specific static but characteristic dynamic thermographic maps. Ischemic injuries were significantly hypothermic (p < 0.001). Conclusions: High-resolution IIT is a non-invasive alternative intraoperative imaging method for lesion localization, diagnosis, assessment of the extent of tumor resection, and identification of acute ischemic changes with static and dynamic thermographic maps.

13.
Comput Assist Surg (Abingdon) ; 29(1): 2355897, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38794834

ABSTRACT

Advancements in mixed reality (MR) have led to innovative approaches in image-guided surgery (IGS). In this paper, we provide a comprehensive analysis of the current state of MR in image-guided procedures across various surgical domains. Using the Data Visualization View (DVV) Taxonomy, we analyze the progress made since a 2013 literature review on MR IGS systems. In addition to examining the surgical domains in which MR systems are currently used, we explore trends in the types of MR hardware used, the types of data visualized, the visualizations of virtual elements, and the interaction methods in use. Our analysis also covers the metrics used to evaluate these systems in the operating room (OR), both qualitative and quantitative assessments, and clinical studies that have demonstrated the potential of MR technologies to enhance surgical workflows and outcomes. We also address current challenges and future directions that would further establish the use of MR in IGS.


Subject(s)
Augmented Reality , Operating Rooms , Surgery, Computer-Assisted , Humans , Surgery, Computer-Assisted/methods
14.
Am J Otolaryngol ; 45(5): 104360, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38754261

ABSTRACT

INTRODUCTION: Robot-assisted cochlear implant surgery (RACIS), as defined by the HEARO® procedure, performs minimally invasive cochlear implant (CI) surgery by directly drilling a keyhole trajectory towards the inner ear. Hitherto, entirely robotic automation including electrode insertion has not been described. The feasibility of using a newly developed, dedicated motorised device for automated electrode insertion in the first clinical case of entirely robotic cochlear implant surgery was investigated. AIM: To report the first experience of entirely robotic cochlear implant surgery. INTERVENTION: RACIS with a straight flexible lateral wall electrode. PRIMARY OUTCOME MEASUREMENTS: Electrode cochlear insertion depth. SECONDARY OUTCOME MEASUREMENTS: The audiological outcome in terms of mean hearing thresholds. CONCLUSION: Here, we report on a cochlear implant robot that performs the most complex surgical steps to place a cochlear implant array successfully in the inner ear and yields audiological results similar to those of conventional surgery. Robots can execute tasks beyond human dexterity and will probably pave the way to standardizing residual hearing preservation and broadening the indications for electric-acoustic stimulation in the same ear with hybrid implants.

15.
Article in English | MEDLINE | ID: mdl-38745863

ABSTRACT

Augmented reality (AR) has seen increased interest and attention for its application in surgical procedures. AR-guided surgical systems can overlay segmented anatomy from pre-operative imaging onto the user's environment to delineate hard-to-see structures and subsurface lesions intraoperatively. While previous works have utilized pre-operative imaging such as computed tomography or magnetic resonance images, registration methods still lack the ability to accurately register deformable anatomical structures without fiducial markers across modalities and dimensionalities. This is especially true of minimally invasive abdominal surgical techniques, which often employ a monocular laparoscope with its inherent limitations. Surgical scene reconstruction is a critical component of the accurate registration needed for AR-guided surgery and of other downstream AR applications such as remote assistance or surgical simulation. In this work, we utilize a state-of-the-art (SOTA) deep-learning-based visual simultaneous localization and mapping (vSLAM) algorithm to generate a dense 3D reconstruction with camera pose estimates and depth maps from video obtained with a monocular laparoscope. The proposed method can robustly reconstruct surgical scenes from real-time data and provide camera pose estimates without stereo or additional sensors, which increases usability and reduces intrusiveness. We also demonstrate a framework to evaluate current vSLAM algorithms on non-Lambertian, low-texture surfaces and explore using their outputs for downstream tasks. We expect these evaluation methods can be utilized for the continual refinement of newer algorithms for AR-guided surgery.
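A small sketch of the generic downstream step such a pipeline enables, back-projecting a monocular depth map into a world-frame point cloud using the camera intrinsics and a pose estimated by the vSLAM system, is given below. The intrinsics, pose, and depth values are placeholders, and this is not the vSLAM algorithm itself.

```python
# Generic back-projection of a depth map into a world-frame point cloud, given
# camera intrinsics K and a camera-to-world pose (R, t) estimated by vSLAM.
# All numeric values are placeholders; this is not the vSLAM algorithm itself.
import numpy as np

def depth_to_world_points(depth, K, R, t):
    """depth: (H, W) metric depth map; K: (3, 3) intrinsics; R, t: camera-to-world pose."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)   # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                                   # camera-frame rays (z = 1)
    pts_cam = rays * depth.reshape(-1, 1)                             # scale rays by depth
    return pts_cam @ R.T + t                                          # transform to world frame

# Example with placeholder intrinsics and an identity pose.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 0.05)            # 5 cm everywhere (placeholder)
cloud = depth_to_world_points(depth, K, np.eye(3), np.zeros(3))
print(cloud.shape)                           # (307200, 3)
```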

16.
Article in English | MEDLINE | ID: mdl-38745746

ABSTRACT

Hyperspectral imaging (HSI) is an emerging imaging modality in medical applications, especially for intraoperative image guidance. A surgical microscope improves surgeons' visualization of fine details during surgery. The combination of HSI and a surgical microscope can provide a powerful tool for surgical guidance. However, acquiring high-resolution hyperspectral images entails long integration times and large image files, which can be a burden for intraoperative applications. Super-resolution reconstruction allows acquisition of low-resolution hyperspectral images and generates high-resolution HSI from them. In this work, we developed a hyperspectral surgical microscope and employed our unsupervised super-resolution neural network, which generated high-resolution hyperspectral images with fine textures and the spectral characteristics of tissues. The proposed method can reduce acquisition time and save the storage space taken up by hyperspectral images without compromising image quality, which will facilitate the adoption of hyperspectral imaging technology in intraoperative image guidance.

17.
Expert Rev Med Devices ; 21(5): 349-358, 2024 May.
Article in English | MEDLINE | ID: mdl-38722051

ABSTRACT

INTRODUCTION: Surgery and biomedical imaging account for a large share of the medical-device market. The ever-mounting demand for precision surgery has driven the integration of these two fields into image-guided surgery. A key question herein is how imaging modalities can guide the surgical decision-making process. Through performance-based design, chemists, engineers, and doctors need to build a bridge between imaging technologies and surgical challenges. AREAS COVERED: This perspective article highlights the complementary nature of the technological design of an image-guidance modality and the type of procedure performed. The specific roles of the involved professionals, imaging technologies, and surgical indications are addressed. EXPERT OPINION: Molecular image-guided surgery has the potential to advance pre-, intra-, and post-operative tissue characterization. To achieve this, surgeons need access to well-designed, indication-specific chemical agents and detection modalities. Some technologies stimulate exploration ('go'), while others stimulate caution ('stop'). Failing to adequately address indication-specific needs raises the risk of incorrect tool employment and sub-optimal surgical performance. Therefore, besides the availability of new technologies, market growth is highly dependent on the practical nature of these tools and their impact on real-life clinical care. While urology currently takes the lead in the widespread implementation of image-guidance technologies, the topic is generic and its popularity is spreading rapidly within surgical oncology.


Subject(s)
Surgery, Computer-Assisted , Humans , Surgery, Computer-Assisted/instrumentation , Surgery, Computer-Assisted/methods , Diagnostic Imaging/methods , Diagnostic Imaging/instrumentation , Precision Medicine/methods , Precision Medicine/instrumentation , Equipment and Supplies
18.
J Robot Surg ; 18(1): 212, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38753180

ABSTRACT

Endometriosis is a benign inflammatory onco-mimetic disease affecting 10-15% of women worldwide. When it is refractory to medical treatment, surgery may be required. Laparoscopy is usually the preferred approach, but robotic surgery has gained popularity in the last 15 years. This study aims to evaluate the safety and efficacy of robotic-assisted laparoscopic surgery (RAS) versus conventional laparoscopic surgery (LPS) in the treatment of endometriosis. The study adheres to PRISMA guidelines and is registered with PROSPERO. Studies reporting perioperative data comparing RAS and LPS in patients with endometriosis, identified by querying PubMed, Google Scholar, and ClinicalTrials.gov, were included in the analysis. The Quality Assessment of Diagnostic Accuracy Studies 2 tool (QUADAS-2) was used to assess the quality of the selected articles. Fourteen studies comprising 2709 patients with stage I-IV endometriosis were included in the meta-analysis. There were no significant differences between RAS and LPS in terms of intraoperative and postoperative complications, conversion rate, and estimated blood loss. However, patients in the RAS group had a longer operative time (p < 0.0001) and a longer hospital stay (p = 0.020) than those in the laparoscopic group. Robotic surgery is not inferior to laparoscopy in patients with endometriosis in terms of surgical outcomes; however, RAS requires longer operative times and longer hospital stays. The benefits of robotic surgery should be sought in the potential for easier integration of robotic platforms with new technologies. Prospective studies comparing laparoscopy with the new robotic systems are needed to strengthen the scientific evidence.


Subject(s)
Endometriosis , Laparoscopy , Operative Time , Robotic Surgical Procedures , Endometriosis/surgery , Humans , Female , Robotic Surgical Procedures/methods , Laparoscopy/methods , Treatment Outcome , Postoperative Complications/etiology , Postoperative Complications/epidemiology , Length of Stay , Blood Loss, Surgical/statistics & numerical data
19.
Surg Innov ; : 15533506241256827, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38785116

ABSTRACT

BACKGROUND: In the digital age, patients are increasingly turning to the Internet to seek medical information to aid their decision-making before undergoing medical treatments. Fluorescence imaging is an emerging technological tool that holds promise in enhancing intra-operative decision-making during surgical procedures. This study aims to evaluate the quality of patient information available online regarding fluorescence imaging in surgery and to assess whether it adequately supports informed decision-making. METHOD: The term "patient information on fluorescence imaging in surgery" was searched on Google. Websites that fulfilled the inclusion criteria were assessed using two scoring instruments: DISCERN was used to evaluate the reliability of consumer health information, and QUEST was used to assess authorship, tone, conflict of interest, and complementarity. RESULTS: Of the 50 websites identified in the initial search, 10 fulfilled the inclusion criteria. Only two of these websites had been updated in the last two years. The definition of fluorescence imaging was stated on only 50% of the websites. Although all websites mentioned the benefits of fluorescence imaging, none mentioned potential risks. Assessment with DISCERN showed that 30% of the websites were rated low and 70% were rated moderate. With QUEST, the websites demonstrated an average score of 62.5%. CONCLUSION: This study highlights the importance of providing patients with accurate and balanced information about the medical technologies and procedures they may undergo. Fluorescence imaging in surgery is a promising technology that can potentially improve surgical outcomes. However, patients need to be well informed about its benefits and limitations in order to make informed decisions about their healthcare.

20.
Angew Chem Int Ed Engl ; : e202406651, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38781352

ABSTRACT

Organic phosphorescent materials are excellent candidates for tumor imaging. However, a systematic comparison of the effects of the intensity, lifetime, and wavelength of phosphorescent emissions on bioimaging performance has not yet been undertaken. In addition, there have been few reports on organic phosphorescent materials that specifically distinguish tumors from normal tissues. This study addresses these gaps and reveals that longer lifetimes effectively increase the signal intensity, whereas longer wavelengths enhance the penetration depth. Conversely, a strong emission intensity with a short lifetime does not necessarily yield robust imaging signals. Building on these findings, an organic phosphorescent material with a lifetime of 0.94 s was designed for tumor imaging. Remarkably, the phosphorescent signals of various organic nanoparticles are nearly extinguished in blood-rich organs because of the quenching effect of iron ions. Moreover, for the first time, we demonstrate that iron ions universally quench the phosphorescence of organic room-temperature phosphorescent materials, an inherent property of such substances. Owing to this property, both normal liver and hepatitis tissues exhibit negligible phosphorescent signals, whereas liver tumors display intense phosphorescence. Therefore, phosphorescent materials, unlike chemiluminescent or fluorescent materials, can exploit this unique inherent property to selectively distinguish liver tumor tissue from normal tissue without additional modifications or treatments.
