1.
Neurospine ; 21(1): 57-67, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38317546

ABSTRACT

OBJECTIVE: Virtual and augmented reality have enjoyed increased attention in spine surgery. Preoperative planning, pedicle screw placement, and surgical training are among the most studied use cases. Identifying osseous structures is a key aspect of navigating a 3-dimensional virtual reconstruction. To replace the otherwise time-consuming process of labeling vertebrae on each slice individually, we propose a fully automated segmentation pipeline for computed tomography (CT) that can form the basis for further virtual or augmented reality applications and radiomic analysis. METHODS: Based on a large public dataset of annotated vertebral CT scans, we first trained a YOLOv8m (You-Only-Look-Once algorithm, version 8, size medium) to detect each vertebra individually. On the cropped images, a 2D U-Net was then developed and externally validated on 2 different public datasets. RESULTS: Two hundred fourteen CT scans (cervical, thoracic, or lumbar spine) were used for model training, and 40 scans were used for external validation. Vertebra recognition achieved a mAP50 (mean average precision with Jaccard threshold of 0.5) of over 0.84, and the segmentation algorithm attained a mean Dice score of 0.75 ± 0.14 at internal validation, and 0.77 ± 0.12 and 0.82 ± 0.14 at the 2 external validations, respectively. CONCLUSION: We propose a 2-stage approach consisting of single-vertebra labeling by an object detection algorithm followed by semantic segmentation. In our externally validated pilot study, we demonstrate robust performance for our object detection network in identifying individual vertebrae, as well as for our segmentation model in precisely delineating the bony structures.
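The two metrics reported above are standard and easy to reproduce. The following minimal sketch (pure Python, toy 4-pixel masks that are purely illustrative, not the authors' code or data) computes the Dice coefficient used to score the segmentation stage and converts it to the Jaccard index (IoU) that underlies the mAP50 threshold:

```python
def dice(pred, gt):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for flat binary masks."""
    inter = sum(p and g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    return 2.0 * inter / total if total else 1.0

def dice_to_iou(d):
    """Jaccard index (IoU) from Dice: IoU = D / (2 - D)."""
    return d / (2.0 - d)

# Toy masks: 2 predicted and 2 true foreground pixels, 1 overlapping.
pred = [1, 1, 0, 0]
gt = [1, 0, 1, 0]
print(dice(pred, gt))                # 0.5
print(dice_to_iou(dice(pred, gt)))   # 0.333...
```

A detection at IoU ≥ 0.5 (Dice ≥ 0.667) counts as a true positive when computing mAP50.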

2.
Neurospine ; 21(1): 68-75, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38317547

ABSTRACT

OBJECTIVE: Computed tomography (CT) imaging is a cornerstone in the assessment of patients with spinal trauma and in the planning of spinal interventions. However, CT studies are associated with logistical problems, acquisition costs, and radiation exposure. In this proof-of-concept study, the feasibility of generating synthetic spinal CT images from biplanar radiographs was explored. This could expand the potential applications of x-ray machines pre-, post-, and even intraoperatively. METHODS: A cohort of 209 patients who underwent spinal CT imaging from the VerSe2020 dataset was used to train the algorithm. The model was subsequently evaluated using an internal and an external validation set containing 55 images from the VerSe2020 dataset and a subset of 56 images from the CTSpine1K dataset, respectively. Digitally reconstructed radiographs served as input for training and evaluation of the 2-dimensional (2D)-to-3-dimensional (3D) generative adversarial model. Model performance was assessed using peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and cosine similarity (CS). RESULTS: At external validation, the developed model achieved a PSNR of 21.139 ± 1.018 dB (mean ± standard deviation). The SSIM and CS amounted to 0.947 ± 0.010 and 0.671 ± 0.691, respectively. CONCLUSION: Generating an artificial 3D output from 2D imaging is challenging, especially for spinal imaging, where x-rays frequently deliver insufficient information. Although the synthetic CT scans derived from our model do not perfectly match their ground truth CT, our proof-of-concept study warrants further exploration of the potential of this technology.
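Of the three metrics above, PSNR and cosine similarity have simple closed forms; the sketch below (pure Python on flattened toy intensity arrays, illustrative only and unrelated to the study's implementation) shows how each is computed from a predicted and a ground-truth volume:

```python
import math

def psnr(pred, gt, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(gt)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

def cosine_similarity(a, b):
    """Cosine of the angle between two flattened volumes."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy 2-voxel example: MSE = 2, so PSNR = 10 * log10(255^2 / 2) ≈ 45.1 dB.
print(psnr([1, 3], [1, 1]))
# Proportional volumes have cosine similarity 1; orthogonal ones, 0.
print(cosine_similarity([1, 2], [2, 4]))
```

SSIM additionally compares local luminance, contrast, and structure over sliding windows, so it is usually taken from an image-processing library rather than re-implemented.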

3.
Acta Neurochir (Wien) ; 165(9): 2445-2460, 2023 09.
Article in English | MEDLINE | ID: mdl-37555999

ABSTRACT

BACKGROUND: Although there is an increasing body of evidence showing gender differences in various medical domains as well as in the presentation and biology of pituitary adenoma (PA), gender differences regarding the outcome of patients who underwent transsphenoidal resection of PA are poorly understood. The aim of this study was to identify gender differences in PA surgery. METHODS: The PubMed/MEDLINE database was searched up to April 2023 to identify eligible articles. Quality appraisal and extraction were performed in duplicate. RESULTS: A total of 40 studies including 4,989 patients were included in this systematic review and meta-analysis. Our analysis showed an odds ratio of postoperative biochemical remission in males vs. females of 0.83 (95% CI 0.59-1.15, P = 0.26), an odds ratio of gross total resection in male vs. female patients of 0.68 (95% CI 0.34-1.39, P = 0.30), an odds ratio of postoperative diabetes insipidus in male vs. female patients of 0.40 (95% CI 0.26-0.64, P < 0.0001), and a mean difference in preoperative prolactin level in male vs. female patients of 11.62 (95% CI -119.04 to 142.27, P = 0.86). CONCLUSIONS: There was a significantly higher rate of postoperative diabetes insipidus in female patients after endoscopic or microscopic transsphenoidal PA surgery. Although there were data in isolated studies suggesting an influence of gender on postoperative biochemical remission, rate of gross total resection, and preoperative prolactin levels, these findings could not be confirmed in this meta-analysis and demonstrated no statistically significant effect. Further research is needed, and future studies concerning PA surgery should report their data by gender or sex hormones and ideally further assess their impact on PA surgery.
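The effect measures above follow the standard construction of an odds ratio with a Wald (Woolf) 95% confidence interval from a 2x2 table. The sketch below uses hypothetical counts purely for illustration; they are not the counts from this meta-analysis:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = male events, b = male non-events,
    c = female events, d = female non-events.
    CI is computed on the log scale: exp(ln(OR) +/- z * SE),
    with SE = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical table: 10/30 male events vs. 5/45 female events.
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 4.00
```

An interval that excludes 1 (as for postoperative diabetes insipidus above) indicates a statistically significant difference between the groups; pooling such ratios across studies additionally requires a fixed- or random-effects weighting scheme.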


Subject(s)
Adenoma , Pituitary Neoplasms , Humans , Male , Female , Treatment Outcome , Prolactin , Retrospective Studies , Pituitary Neoplasms/surgery , Adenoma/surgery , Hormones , Postoperative Complications/epidemiology