1.
Med Phys ; 51(4): 2665-2677, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37888789

ABSTRACT

BACKGROUND: Accurate segmentation of the clinical target volume (CTV) corresponding to the prostate with or without proximal seminal vesicles is required on transrectal ultrasound (TRUS) images during prostate brachytherapy procedures. Implanted needles cause artifacts that may make this task difficult and time-consuming. Thus, previous studies have focused on the simpler problem of segmentation in the absence of needles at the cost of reduced clinical utility. PURPOSE: To use a convolutional neural network (CNN) algorithm for segmentation of the prostatic CTV in TRUS images post-needle insertion obtained from prostate brachytherapy procedures to better meet the demands of the clinical procedure. METHODS: A dataset consisting of 144 3-dimensional (3D) TRUS images with implanted metal brachytherapy needles and associated manual CTV segmentations was used for training a 2-dimensional (2D) U-Net CNN using a Dice Similarity Coefficient (DSC) loss function. These were split by patient, with 119 used for training and 25 reserved for testing. The 3D TRUS training images were resliced at radial (around the axis normal to the coronal plane) and oblique angles through the center of the 3D image, as well as axial, coronal, and sagittal planes to obtain 3689 2D TRUS images and masks for training. The network generated boundary predictions on 300 2D TRUS images obtained from reslicing each of the 25 3D TRUS images used for testing into 12 radial slices (15° apart), which were then reconstructed into 3D surfaces. Performance metrics included DSC, recall, precision, unsigned and signed volume percentage differences (VPD/sVPD), mean surface distance (MSD), and Hausdorff distance (HD). In addition, we studied whether providing algorithm-predicted boundaries to the physicians and allowing modifications increased the agreement between physicians. 
This was performed by providing a subset of 3D TRUS images of five patients to five physicians, who segmented the CTV using clinical software and repeated this at least 1 week apart. The five physicians were then given the algorithm's boundary predictions and allowed to modify them, and the resulting inter- and intra-physician variability was evaluated. RESULTS: Median DSC, recall, precision, VPD, sVPD, MSD, and HD of the 3D-reconstructed algorithm segmentations were 87.2 [84.1, 88.8]%, 89.0 [86.3, 92.4]%, 86.6 [78.5, 90.8]%, 10.3 [4.5, 18.4]%, 2.0 [-4.5, 18.4]%, 1.6 [1.2, 2.0] mm, and 6.0 [5.3, 8.0] mm, respectively. Segmentation time for a set of 12 2D radial images was 2.46 [2.44, 2.48] s. With and without U-Net starting points, the intra-physician median DSCs were 97.0 [96.3, 97.8]% and 94.4 [92.5, 95.4]% (p < 0.0001), respectively, while the inter-physician median DSCs were 94.8 [93.3, 96.8]% and 90.2 [88.7, 92.1]%, respectively (p < 0.0001). The median segmentation times for physicians, with and without U-Net-generated CTV boundaries, were 257.5 [211.8, 300.0] s and 288.0 [232.0, 333.5] s, respectively (p = 0.1034). CONCLUSIONS: Our algorithm performed at a level similar to physicians in a fraction of the time. The use of algorithm-generated boundaries as a starting point, with modifications allowed, reduced physician variability, although it did not significantly reduce segmentation time compared to fully manual segmentation.
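The DSC loss used to train the U-Net can be written in a few lines. The following is a minimal, framework-free sketch on flat binary masks, not the authors' implementation; in practice the same formula is applied to the network's soft predictions inside a deep-learning framework:

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks, given as
    flat sequences of 0/1 values of equal length. eps guards against
    division by zero when both masks are empty."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * intersection + eps) / (total + eps)


def dice_loss(pred, target):
    """Training loss: 1 - DSC, minimized when the masks agree exactly."""
    return 1.0 - dice_coefficient(pred, target)
```

Identical masks give a DSC of 1 (loss 0), disjoint masks give a DSC near 0 (loss near 1), which is why the reported DSC percentages directly track segmentation overlap.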


Subjects
Brachytherapy, Deep Learning, Prostatic Neoplasms, Male, Humans, Prostate/diagnostic imaging, Brachytherapy/methods, Ultrasonography, Algorithms, Image Processing, Computer-Assisted/methods, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/radiotherapy
2.
Int J Comput Assist Radiol Surg ; 18(7): 1225-1233, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37222930

ABSTRACT

PURPOSE: Existing field generators (FGs) for magnetic tracking cause severe artifacts in X-ray images. While FGs with radio-lucent components significantly reduce these imaging artifacts, traces of coils and electronic components may still be visible to trained professionals. In the context of X-ray-guided interventions using magnetic tracking, we introduce a learning-based approach to further remove traces of field-generator components from X-ray images to improve visualization and image guidance. METHODS: An adversarial decomposition network was trained to separate the residual FG components (including fiducial points introduced for pose estimation) from the X-ray images. The main novelty of our approach lies in the proposed data synthesis method, which combines existing 2D patient chest X-ray and FG X-ray images to generate 20,000 synthetic images, along with ground truth (images without the FG), to effectively train the network. RESULTS: For 30 real images of a torso phantom, our enhanced X-ray images after decomposition achieved an average local PSNR of 35.04 and local SSIM of 0.97, whereas the unenhanced X-ray images averaged a local PSNR of 31.16 and local SSIM of 0.96. CONCLUSION: In this study, we proposed an X-ray image decomposition method that enhances X-ray images for magnetic navigation by removing FG-induced artifacts using a generative adversarial network. Experiments on both synthetic and real phantom data demonstrated the efficacy of our method.
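The data-synthesis idea can be illustrated with a simple compositing rule. Assuming radiographs are normalized transmission images in [0, 1], overlaying the FG onto a patient image amounts to pixel-wise multiplication (attenuations add along each ray in Beer-Lambert terms, so transmissions multiply), and the patient image alone serves as the clean ground truth. This is a hedged sketch of the principle, not the authors' exact synthesis pipeline:

```python
def composite_fg(patient, fg):
    """Overlay a field-generator radiograph onto a patient radiograph.
    Both are 2D lists of normalized transmission values in [0, 1],
    where 1.0 means no attenuation. Transmissions multiply pixel-wise,
    mirroring additive attenuation along each X-ray path."""
    return [[p * f for p, f in zip(prow, frow)]
            for prow, frow in zip(patient, fg)]


def make_training_pair(patient, fg):
    """One synthetic training example: (corrupted input, clean target)."""
    return composite_fg(patient, fg), patient
```

Generating many such pairs from real patient and FG radiographs gives the network supervised examples of the decomposition it must learn.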


Subjects
Artifacts, Image Processing, Computer-Assisted, Humans, Image Processing, Computer-Assisted/methods, X-Rays, Radiography, Phantoms, Imaging
3.
Int J Comput Assist Radiol Surg ; 18(7): 1159-1166, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37162735

ABSTRACT

PURPOSE: US-guided percutaneous ablation is a promising curative treatment for focal liver tumors. To address cases with invisible or poorly visible tumors, registration of 3D US with CT or MRI is a critical step. By taking advantage of deep learning techniques to efficiently detect representative features in both modalities, we aim to develop a 3D US-CT/MRI registration approach for liver tumor ablations. METHODS: Facilitated by our nnUNet-based 3D US vessel segmentation approach, we propose a coarse-to-fine 3D US-CT/MRI image registration pipeline based on the liver vessel surface and centerlines. Phantom, healthy volunteer, and patient studies were performed to demonstrate the effectiveness of the proposed registration approach. RESULTS: Our nnUNet-based vessel segmentation model achieved a Dice score of 0.69. In the healthy volunteer study, 11 of 12 3D US-MRI image pairs were successfully registered, with an overall centerline distance of 4.03±2.68 mm. Two patient cases achieved target registration errors (TRE) of 4.16 mm and 5.22 mm. CONCLUSION: We proposed a coarse-to-fine 3D US-CT/MRI registration pipeline based on nnUNet vessel segmentation models. Experiments on phantoms, healthy volunteers, and patients demonstrated the effectiveness of our registration workflow. Our code and example data are publicly available in this repository.
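The coarse stage of a centerline-based registration, and the centerline-distance metric reported above, can be sketched as follows. This is a translation-only toy illustration (the actual pipeline recovers a full rigid transform and refines it against the vessel surface); point sets are lists of (x, y, z) tuples:

```python
import math


def centroid(points):
    """Centroid of a list of 3D points given as (x, y, z) tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))


def coarse_align(moving, fixed):
    """Translate the moving centerline so its centroid matches the
    fixed centerline's centroid (a coarse, translation-only alignment)."""
    cm, cf = centroid(moving), centroid(fixed)
    shift = tuple(cf[i] - cm[i] for i in range(3))
    return [tuple(p[i] + shift[i] for i in range(3)) for p in moving]


def mean_centerline_distance(a, b):
    """Average distance from each point of centerline a to its nearest
    point on centerline b, the kind of metric reported above."""
    return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)
```

After coarse alignment, a fine stage (e.g. iterative closest point on vessel surfaces) would minimize this distance further.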


Subjects
Liver Neoplasms, Tomography, X-Ray Computed, Humans, Tomography, X-Ray Computed/methods, Magnetic Resonance Imaging/methods, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/surgery, Liver Neoplasms/pathology, Imaging, Three-Dimensional/methods, Image Processing, Computer-Assisted/methods
4.
IEEE Trans Med Imaging ; 41(11): 3344-3356, 2022 11.
Article in English | MEDLINE | ID: mdl-35724283

ABSTRACT

Complete coverage of the tumor by the thermal ablation zone, with a safety margin (5 or 10 mm), is required to achieve complete tumor eradication in liver tumor ablation procedures. However, 2D ultrasound (US) imaging, which shows only one or a few planes, has limitations in evaluating tumor coverage, particularly for cases with multiple inserted applicators or irregular tumor shapes. In this paper, we evaluate intra-procedural tumor coverage using 3D US imaging and investigate whether it can provide clinically needed information. Using data from 14 cases, we employed surface- and volume-based evaluation metrics to provide information on any uncovered tumor region. For cases with incomplete tumor coverage or uneven ablation margin distribution, we also propose a novel margin-uniformity-based approach that provides quantitative applicator adjustment information for optimizing tumor coverage. Both the surface- and volume-based metrics showed that 5 of 14 cases had incomplete tumor coverage according to the estimated ablation zone. After applying our proposed applicator adjustment approach, simulation showed that 92.9% (13 of 14) of cases achieved 100% tumor coverage, and the remaining case could benefit from increased ablation time or power. Our proposed method can evaluate intra-procedural tumor coverage and intuitively provide applicator adjustment information to the physician. Our 3D US-based method is compatible with the constraints of conventional US-guided ablation procedures and can be easily integrated into the clinical workflow.
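The volume-based coverage metric can be made concrete with voxel sets. The following is a minimal sketch, not the authors' implementation: voxels are integer (i, j, k) coordinate tuples, and the margin check crudely dilates each tumor voxel by the required margin in voxel units:

```python
def coverage_fraction(tumor, ablation):
    """Fraction of tumor voxels lying inside the estimated ablation zone.
    Both arguments are sets of (i, j, k) voxel coordinates; 1.0 means
    complete volumetric coverage."""
    if not tumor:
        return 1.0
    return len(tumor & ablation) / len(tumor)


def covered_with_margin(tumor, ablation, margin):
    """True if every tumor voxel, expanded by `margin` voxels along each
    axis (a simple cubic dilation), still lies inside the ablation zone,
    i.e., the tumor is covered with at least that safety margin."""
    for (i, j, k) in tumor:
        for di in range(-margin, margin + 1):
            for dj in range(-margin, margin + 1):
                for dk in range(-margin, margin + 1):
                    if (i + di, j + dj, k + dk) not in ablation:
                        return False
    return True
```

A case can thus have 100% volumetric coverage yet still fail the margin test, which is exactly the distinction the surface-based metric and the margin-uniformity analysis capture.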


Subjects
Catheter Ablation, Liver Neoplasms, Humans, Ultrasonography, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/surgery, Imaging, Three-Dimensional/methods, Radionuclide Imaging, Catheter Ablation/methods
5.
Med Phys ; 49(6): 3944-3962, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35319105

ABSTRACT

BACKGROUND: Mammographic screening has reduced mortality in women through the early detection of breast cancer. However, sensitivity for breast cancer detection is significantly reduced in women with dense breasts, and breast density is itself an independent risk factor. Ultrasound (US) has proven effective in detecting small, early-stage, and invasive cancers in women with dense breasts. PURPOSE: To develop an alternative, versatile, and cost-effective spatially tracked three-dimensional (3D) US system for whole-breast imaging. This paper describes the design, development, and validation of the spatially tracked 3DUS system, including its components for spatial tracking, multi-image registration and fusion; its feasibility for whole-breast 3DUS imaging and multi-planar visualization in tissue-mimicking phantoms; and a proof-of-concept healthy volunteer study. METHODS: The spatially tracked 3DUS system contains (a) a six-axis manipulator and counterbalanced stabilizer; (b) an in-house quick-release 3DUS scanner, adaptable to any commercially available US system and removable to allow handheld 3DUS acquisition and two-dimensional US imaging; and (c) custom software for 3D tracking, 3DUS reconstruction, visualization, and spatially based multi-image registration and fusion of 3DUS images for whole-breast imaging. Spatial tracking of the 3D position and orientation of the system and its joints (J1-6) was evaluated in a clinically accessible workspace for bedside point-of-care (POC) imaging. Multi-image registration and fusion of acquired 3DUS images were assessed with a quadrants-based protocol in tissue-mimicking phantoms, and the target registration error (TRE) was quantified. Whole-breast 3DUS imaging and multi-planar visualization were evaluated with a tissue-mimicking breast phantom. Feasibility of spatially tracked whole-breast 3DUS imaging was assessed in a proof-of-concept healthy male and female volunteer study.
RESULTS: Mean tracking errors were 0.87 ± 0.52, 0.70 ± 0.46, 0.53 ± 0.48, 0.34 ± 0.32, 0.43 ± 0.28, and 0.78 ± 0.54 mm for joints J1-6, respectively. Lookup table (LUT) corrections minimized the error in joints J1, J2, and J5. Compound motions exercising all joints simultaneously resulted in a mean tracking error of 1.08 ± 0.88 mm (N = 20) within the overall workspace for bedside 3DUS imaging. Multi-image registration and fusion of two acquired 3DUS images resulted in a mean TRE of 1.28 ± 0.10 mm. Whole-breast 3DUS imaging and multi-planar visualization in axial, sagittal, and coronal views were demonstrated with the tissue-mimicking breast phantom. The feasibility of the whole-breast 3DUS approach was demonstrated in healthy male and female volunteers. In the male volunteer, the high-resolution whole-breast 3DUS acquisition protocol was optimized without the added complexities of curvature and tissue deformation. With small post-acquisition corrections for motion, whole-breast 3DUS imaging was performed on the healthy female volunteer, showing relevant anatomical structures and details. CONCLUSIONS: Our spatially tracked 3DUS system shows potential as an alternative, accurate, and feasible whole-breast imaging approach with the capability for bedside POC imaging. Future work will focus on reducing misregistration errors due to motion and tissue deformation, developing a robust spatially tracked whole-breast 3DUS acquisition protocol, and then exploring its clinical utility for screening high-risk women with dense breasts.
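An LUT correction of the kind applied to joints J1, J2, and J5 can be sketched as a piecewise-linear calibration table mapping raw encoder readings to bench-measured true positions. The table values below are made up for illustration; this is a generic sketch, not the authors' calibration data or code:

```python
from bisect import bisect_left


def lut_correct(raw, lut):
    """Correct a raw joint reading using a calibration lookup table.
    `lut` is a list of (raw_reading, true_value) pairs sorted by
    raw_reading; readings between entries are linearly interpolated,
    and readings outside the table are clamped to the end entries."""
    xs = [x for x, _ in lut]
    ys = [y for _, y in lut]
    if raw <= xs[0]:
        return ys[0]
    if raw >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, raw)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (raw - x0) * (y1 - y0) / (x1 - x0)
```

Applying such a table per joint compensates for systematic encoder error, which is consistent with the reduced per-joint tracking errors reported above.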


Subjects
Breast Neoplasms, Breast Density, Breast Neoplasms/diagnostic imaging, Early Detection of Cancer, Female, Humans, Imaging, Three-Dimensional/methods, Male, Mammography, Phantoms, Imaging, Point-of-Care Systems
6.
Adv Exp Med Biol ; 1093: 207-224, 2018.
Article in English | MEDLINE | ID: mdl-30306484

ABSTRACT

The human-machine interface (HMI) is an essential part of image-guided orthopedic navigation systems. The HMI provides the primary platform for merging surgically relevant pre- and intraoperative images from different modalities with 3D models, including anatomical structures and implants, to support surgical planning and navigation. With the various input-output techniques of the HMI, surgeons can intuitively manipulate anatomical models generated from medical images and/or implant models for surgical planning. Furthermore, the HMI recreates sight, sound, and touch feedback for the guidance of surgical operations, which helps surgeons sense more relevant information, e.g., anatomical structures and surrounding tissue, the mechanical axis of limbs, and even the mechanical properties of tissue. Thus, with the help of an interactive HMI, precision operations such as cutting, drilling, and implantation can be performed more easily and safely. Classic HMIs are based on 2D displays and standard computer input devices. In contrast, modern virtual reality (VR) and augmented reality (AR) techniques allow more information to be presented for surgical navigation, and various such approaches have been applied to image-guided orthopedic therapy. To realize rapid image-based modeling and to create effective interaction and feedback, intelligent algorithms have been developed: algorithms that achieve fast image-to-image and image-to-patient registration, and algorithms that compensate for the visual offset in AR displays. To accomplish more effective human-computer interaction, various input methods and force-sensing/force-reflecting methods have been developed. This chapter reviews human-machine interface techniques for image-guided orthopedic navigation, analyzes several examples of clinical applications, and discusses the trend toward intelligent HMI in orthopedic navigation.


Subjects
Orthopedic Procedures, Surgery, Computer-Assisted, User-Computer Interface, Algorithms, Humans, Imaging, Three-Dimensional, Models, Anatomic