Results 1 - 20 of 1,768
1.
Sci Rep ; 14(1): 15238, 2024 07 02.
Article in English | MEDLINE | ID: mdl-38956282

ABSTRACT

The vector forces at the human-mattress interface are not only crucial for understanding the distribution of vertical and shear forces exerted on the human body during sleep but also serve as a significant input for biomechanical models of sleeping positions, whose accuracy determines the credibility of predicted musculoskeletal system loads. In this study, we introduce a novel method for calculating the interface vector forces. By recording indentations after supine and lateral positioning using a vacuum mattress and a 3D scanner, we use image registration techniques to align the body pressure distribution with the mattress deformation scans, thereby calculating the vector force for each unit area (36.25 mm × 36.25 mm). The method was validated with five participants from two perspectives, revealing that (1) the mean sum of the vertical force components is 98.67% ± 7.21% of body weight, exhibiting good consistency, and the mean ratio of the horizontal force component to body weight is 2.18% ± 1.77%; and (2) the muscle activity predicted by the sleep position model with the vector forces as input agrees with the measured muscle activity (%MVC), with correlation coefficients over 0.7. The proposed method contributes to the understanding of the vector force distribution and the analysis of musculoskeletal loads during sleep, providing valuable insights for mattress design and evaluation.
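The per-cell force decomposition described above can be sketched as follows. This is a minimal illustration, not the authors' code; it assumes a calibrated pressure map in pascals and unit surface normals derived from the mattress deformation scan:

```python
import numpy as np

CELL_AREA_M2 = 0.03625 * 0.03625  # one 36.25 mm x 36.25 mm unit area, in m^2

def cell_force_vectors(pressure, normals):
    """Convert a pressure map (Pa, shape (H, W)) and per-cell unit surface
    normals (shape (H, W, 3)) into per-cell force vectors, then split each
    into a vertical (z) and horizontal (shear, xy) component."""
    force = pressure[..., None] * CELL_AREA_M2 * normals      # (H, W, 3), newtons
    vertical = force[..., 2]                                  # z component
    horizontal = np.linalg.norm(force[..., :2], axis=-1)      # xy magnitude
    return force, vertical, horizontal

def weight_ratio(vertical, body_mass_kg, g=9.81):
    """Ratio of the summed vertical force to body weight (the paper's
    consistency check: close to 1.0 for a plausible measurement)."""
    return vertical.sum() / (body_mass_kg * g)
```

On a flat surface (all normals pointing straight up), the summed vertical force should reproduce the body weight exactly, which is the sanity check the validation in the abstract formalizes.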


Subjects
Beds; Sleep; Humans; Sleep/physiology; Male; Biomechanical Phenomena; Adult; Female; Posture/physiology; Young Adult; Imaging, Three-Dimensional/methods
2.
Acad Radiol ; 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38955592

ABSTRACT

RATIONALE AND OBJECTIVE: Stroke-associated pneumonia (SAP) often appears as a complication following intracerebral hemorrhage (ICH), leading to poor prognosis and increased mortality rates. Previous studies have typically developed prediction models based on clinical data alone, without considering that ICH patients often undergo CT scans immediately upon admission. As a result, these models are subjective and lack real-time applicability, with low accuracy that does not meet clinical needs. Therefore, there is an urgent need for a quick and reliable model to timely predict SAP. METHODS: In this retrospective study, we developed an image-based model (DeepSAP) using brain CT scans from 244 ICH patients to classify the presence and severity of SAP. First, DeepSAP employs MRI-template-based image registration technology to eliminate structural differences between samples, achieving statistical quantification and spatial standardization of cerebral hemorrhage. Subsequently, the processed images and filtered clinical data were simultaneously input into a deep-learning neural network for training and analysis. The model was tested on a test set to evaluate diagnostic performance, including accuracy, specificity, and sensitivity. RESULTS: Brain CT scans from 244 ICH patients (mean age, 60.24; 66 female) were divided into a training set (n = 170) and a test set (n = 74). The cohort included 143 SAP patients, accounting for 58.6% of the total, with 66 cases classified as moderate or above, representing 27% of the total. Experimental results showed an AUC of 0.93, an accuracy of 0.84, a sensitivity of 0.79, and a precision of 0.95 for classifying the presence of SAP. In comparison, the model relying solely on clinical data showed an AUC of only 0.76, while the radiomics method had an AUC of 0.74. Additionally, DeepSAP achieved an optimal AUC of 0.84 for the SAP grading task. 
CONCLUSION: DeepSAP's accuracy in predicting SAP stems from its spatial normalization and statistical quantification of the ICH region. DeepSAP is expected to be an effective tool for predicting and grading SAP in clinic.

3.
Med Biol Eng Comput ; 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38969811

ABSTRACT

Retinal image registration is of utmost importance due to its wide applications in medical practice. In this context, we propose ConKeD, a novel deep learning approach to learning descriptors for retinal image registration. In contrast to current registration methods, our approach employs a novel multi-positive multi-negative contrastive learning strategy that enables the utilization of additional information from the available training samples. This makes it possible to learn high-quality descriptors from limited training data. To train and evaluate ConKeD, we combine these descriptors with domain-specific keypoints, particularly blood vessel bifurcations and crossovers, that are detected using a deep neural network. Our experimental results demonstrate the benefits of the novel multi-positive multi-negative strategy, as it outperforms the widely used triplet loss technique (single-positive and single-negative) as well as the single-positive multi-negative alternative. Additionally, the combination of ConKeD with the domain-specific keypoints produces results comparable to the state-of-the-art methods for retinal image registration, while offering important advantages such as avoiding pre-processing, utilizing fewer training samples, and requiring fewer detected keypoints, among others. Therefore, ConKeD shows promising potential for facilitating the development and application of deep learning-based methods for retinal image registration.
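The multi-positive multi-negative idea can be illustrated with a small sketch of a generalized InfoNCE-style loss, where each positive competes against the full negative set. This is an assumed general form for illustration, not ConKeD's exact loss:

```python
import numpy as np

def mpmn_contrastive_loss(anchor, positives, negatives, tau=0.1):
    """Multi-positive multi-negative contrastive loss for one anchor
    descriptor. anchor: (D,); positives: (P, D); negatives: (N, D).
    All descriptors are assumed L2-normalised; tau is the temperature."""
    sim_p = positives @ anchor / tau          # (P,) anchor-positive similarities
    sim_n = negatives @ anchor / tau          # (N,) anchor-negative similarities
    neg_term = np.exp(sim_n).sum()
    # Each positive is scored against the whole negative set, and the
    # per-positive losses are averaged (this is the "multi-positive" part).
    losses = -np.log(np.exp(sim_p) / (np.exp(sim_p) + neg_term))
    return float(losses.mean())
```

With a positive identical to the anchor the loss is near zero, while a positive orthogonal to the anchor (i.e. a poor descriptor) yields a large loss, which is the gradient signal the training exploits.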

4.
Med Image Anal ; 97: 103249, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38963972

ABSTRACT

Image registration is an essential step in many medical image analysis tasks. Traditional methods for image registration are primarily optimization-driven, finding the optimal deformations that maximize the similarity between two images. Recent learning-based methods, trained to directly predict transformations between two images, run much faster but suffer from performance deficiencies due to domain shift. Here we present a new neural network-based image registration framework, called NIR (Neural Image Registration), which is based on optimization but utilizes deep neural networks to model deformations between image pairs. NIR represents the transformation between two images with a continuous function implemented via neural fields, receiving a 3D coordinate as input and outputting the corresponding deformation vector. NIR provides two ways of generating a deformation field: directly outputting a displacement vector field for general deformable registration, or outputting a velocity vector field and integrating it to derive the deformation field for diffeomorphic image registration. The optimal registration is discovered by updating the parameters of the neural field via stochastic mini-batch gradient descent. We describe several design choices that facilitate model optimization, including coordinate encoding, sinusoidal activation, coordinate sampling, and intensity sampling. NIR is evaluated on two 3D MR brain scan datasets, demonstrating highly competitive performance in terms of both registration accuracy and regularity. Compared to traditional optimization-based methods, our approach achieves better results in shorter computation times. In addition, our method exhibits superior performance on a cross-dataset registration task compared to the pre-trained learning-based methods.
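The neural-field representation described here is a coordinate network: points in, displacement vectors out, with sinusoidal activations. A toy forward pass is sketched below, with assumed SIREN-style initialization; this is an illustration of the representation, not NIR's implementation:

```python
import numpy as np

def make_siren(layer_sizes, omega0=30.0, seed=0):
    """Random weights for a sine-activated MLP (SIREN-style bounds assumed:
    1/fan_in for the first layer, sqrt(6/fan_in)/omega0 afterwards)."""
    rng = np.random.default_rng(seed)
    params = []
    for i, (fan_in, fan_out) in enumerate(zip(layer_sizes[:-1], layer_sizes[1:])):
        bound = 1.0 / fan_in if i == 0 else np.sqrt(6.0 / fan_in) / omega0
        W = rng.uniform(-bound, bound, (fan_in, fan_out))
        b = np.zeros(fan_out)
        params.append((W, b))
    return params

def neural_field(coords, params, omega0=30.0):
    """Map 3D coordinates (N, 3) to displacement vectors (N, 3):
    sine activations on hidden layers, a linear final layer."""
    x = coords
    for W, b in params[:-1]:
        x = np.sin(omega0 * (x @ W + b))
    W, b = params[-1]
    return x @ W + b
```

In the optimization-based framework, these weights (rather than a voxel-wise field) are the variables updated by mini-batch gradient descent on the image similarity.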

5.
Med Image Anal ; 97: 103257, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38981282

ABSTRACT

The alignment of tissue between histopathological whole-slide images (WSI) is crucial for research and clinical applications. Advances in computing, deep learning, and the availability of large WSI datasets have revolutionised WSI analysis. Consequently, the current state of the art in WSI registration is unclear. To address this, we conducted the ACROBAT challenge, based on the largest WSI registration dataset to date, comprising 4,212 WSIs from 1,152 breast cancer patients. The challenge objective was to align WSIs of tissue stained with routine diagnostic immunohistochemistry to their H&E-stained counterparts. We compare the performance of eight WSI registration algorithms, including an investigation of the impact of different WSI properties and clinical covariates. We find that conceptually distinct WSI registration methods can lead to highly accurate registration performance and identify covariates that impact performance across methods. These results provide a comparison of the performance of current WSI registration methods and guide researchers in selecting and developing methods.

6.
Phys Med Biol ; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38981595

ABSTRACT

Head and neck cancer patients experience systematic anatomical changes as well as random day-to-day anatomical changes during fractionated radiotherapy treatment. Modelling the expected systematic anatomical changes could aid in creating treatment plans that are more robust against such changes. A patient-specific model (SM) and a population-average model (AM) are presented, which are able to capture the systematic anatomical changes of some head and neck cancer patients over the course of radiotherapy treatment. Inter-patient correspondence aligned all patients to a model space. Intra-patient correspondence between each planning CT scan and on-treatment cone-beam CT scans was obtained using diffeomorphic deformable image registration. The stationary velocity fields were then used to develop B-spline-based SMs and AMs. The models were evaluated geometrically and dosimetrically. A leave-one-out method was used to compare the training and testing accuracy of the models. Both SMs and AMs were able to capture systematic changes. The average surface distance between the registration-propagated contours and the contours generated by the SM was less than 2 mm, showing that the SM is able to capture the anatomical changes a patient experiences during the course of radiotherapy. The testing accuracy was lower than the training accuracy of the SM, suggesting that the model overfits to the limited data available and therefore also captures some of the random day-to-day changes. For most patients, the AMs were a better estimate of the anatomical changes than assuming there were no changes, but the AMs could not capture the variability in the anatomical changes seen across all patients. No difference was seen in the training and testing accuracy of the AMs. These observations were highlighted in both the geometric and dosimetric evaluations and comparisons. The large inter-patient variability highlights the need for more complex and capable population models.
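Turning a stationary velocity field into a deformation, as in the diffeomorphic registrations above, is commonly implemented by scaling and squaring: halve the field repeatedly, then compose the small displacement with itself. A 1D sketch (illustrative only, with linear interpolation standing in for resampling) follows:

```python
import numpy as np

def integrate_svf_1d(velocity, x, steps=6):
    """Integrate a 1D stationary velocity field by scaling and squaring.
    velocity: callable v(x); x: a monotone 1D sample grid.
    Returns the displacement field of the flow at time 1 (phi(x) - x)."""
    disp = velocity(x) / (2 ** steps)            # scale: a very small displacement
    for _ in range(steps):
        # square: phi o phi, i.e. d(x) <- d(x) + d(x + d(x)),
        # with linear interpolation to evaluate d off the grid
        disp = disp + np.interp(x + disp, x, disp)
    return disp
```

For a constant velocity the result is exact, and for v(x) = a·x it converges to the analytic displacement x·(e^a - 1), which is a handy unit test for any implementation of this integrator.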

7.
J Biomed Opt ; 29(6): 066007, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38868496

ABSTRACT

Significance: The accurate correlation between optical measurements and pathology relies on precise image registration, which is often hindered by deformations in histology images. We investigate an automated multi-modal image registration method using deep learning to align breast specimen images with corresponding histology images. Aim: We aim to explore the effectiveness of an automated image registration technique based on deep learning principles for aligning breast specimen images with histology images acquired through different modalities, addressing challenges posed by intensity variations and structural differences. Approach: Unsupervised and supervised learning approaches, employing the VoxelMorph model, were examined using a dataset featuring manually registered images as ground truth. Results: Evaluation metrics, including Dice scores and mutual information, demonstrate that the unsupervised model significantly outperforms the supervised (and manual) approaches, achieving superior image alignment. The findings highlight the efficacy of automated registration in enhancing the validation of optical technologies by reducing human errors associated with manual registration processes. Conclusions: This automated registration technique offers promising potential to enhance the validation of optical technologies by minimizing human-induced errors and inconsistencies associated with manual image registration processes, thereby improving the accuracy of correlating optical measurements with pathology labels.
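The two evaluation metrics named here have compact reference implementations. The sketch below assumes binary segmentation masks for the Dice score and a joint-histogram estimate for mutual information (details such as bin count are assumptions, not taken from the study):

```python
import numpy as np

def dice_score(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def mutual_information(x, y, bins=32):
    """Histogram-based mutual information between two images, in nats."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal over y
    py = pxy.sum(axis=0, keepdims=True)        # marginal over x
    nz = pxy > 0                               # avoid log(0) terms
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Dice is 1 for identical masks and 0 for disjoint ones; mutual information is maximal when one image determines the other, which is why both are natural registration-quality measures.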


Subjects
Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Deep Learning; Female; Breast/diagnostic imaging; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Algorithms; Multimodal Imaging/methods
8.
Phys Imaging Radiat Oncol ; 30: 100588, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38883145

ABSTRACT

Background and Purpose: The application of different deformable dose accumulation (DDA) solutions makes institutional comparisons after online-adaptive magnetic resonance-guided radiotherapy (OA-MRgRT) challenging. The aim of this multi-institutional study was to analyze the accuracy and agreement of DDA implementations in OA-MRgRT. Material and Methods: One gold-standard (GS) case, deformed with a biomechanical model, and five clinical cases consisting of prostate (2x), cervix, liver, and lymph node cancer, treated with OA-MRgRT, were analyzed. Six centers conducted DDA using institutional implementations. Deformable image registration (DIR) and DDA results were compared using the contour metrics Dice similarity coefficient (DSC), surface-DSC, and Hausdorff distance (HD95%), and accumulated dose-volume histograms (DVHs) analyzed via the intraclass correlation coefficient (ICC) and clinical dosimetric criteria (CDC). Results: For the GS, median DDA errors ranged from 0.0 to 2.8 Gy across contours and implementations. DIR of the clinical cases resulted in DSC > 0.8 for up to 81.3% of contours and a variability of surface-DSC values depending on the implementation. The maximum HD95% of 73.3 mm was found for the duodenum in the liver case. Although DVH ICC > 0.90 was found after DDA for all but two contours, relevant absolute CDC differences were observed in the clinical cases: Prostate I/II showed maximum differences in bladder V28Gy (10.2%/7.6%), while for the cervix, liver, and lymph node cases the highest differences were found for rectum D2cm3 (2.8 Gy), duodenum Dmax (7.1 Gy), and rectum D0.5cm3 (4.6 Gy). Conclusion: Overall, high agreement was found between the different DIR and DDA implementations. Case- and algorithm-dependent differences were observed, leading to potentially clinically relevant differences. Larger studies are needed to define future DDA guidelines.
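The HD95% contour metric used above can be computed from sampled surface points of two contours. A brute-force sketch, adequate for small point sets (a k-d tree would be preferable at scale), follows:

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (surface samples of two contours), shapes (N, D) and (M, D).
    Computes the full pairwise distance matrix, so intended for small sets."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)   # each point in A to its nearest point in B
    b_to_a = d.min(axis=0)   # each point in B to its nearest point in A
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))
```

Taking the 95th percentile instead of the maximum makes the metric robust to a few outlier surface points, which is why HD95% is preferred over the plain Hausdorff distance in contour comparisons like these.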

9.
Front Oncol ; 14: 1287479, 2024.
Article in English | MEDLINE | ID: mdl-38884083

ABSTRACT

Purpose: To identify significant relationships between quantitative cytometric tissue features and quantitative MR (qMRI) intratumorally in preclinical undifferentiated pleomorphic sarcomas (UPS). Materials and methods: In a prospective study of genetically engineered mouse models of UPS, we registered imaging libraries consisting of matched multi-contrast in vivo MRI, three-dimensional (3D) multi-contrast high-resolution ex vivo MR histology (MRH), and two-dimensional (2D) tissue slides. From digitized histology we generated quantitative cytometric feature maps from whole-slide automated nuclear segmentation. We automatically segmented intratumoral regions of distinct qMRI values and measured corresponding cytometric features. Linear regression analysis was performed to compare intratumoral qMRI and tissue cytometric features, and results were corrected for multiple comparisons. Linear correlations between qMRI and cytometric features with p values of <0.05 after correction for multiple comparisons were considered significant. Results: Three features correlated with ex vivo apparent diffusion coefficient (ADC), and no features correlated with in vivo ADC. Six features demonstrated significant linear relationships with ex vivo T2*, and fifteen features correlated significantly with in vivo T2*. In both cases, nuclear Haralick texture features were the most prevalent type of feature correlated with T2*. A small group of nuclear topology features also correlated with one or both T2* contrasts, and positive trends were seen between T2* and nuclear size metrics. Conclusion: Registered multi-parametric imaging datasets can identify quantitative tissue features which contribute to UPS MR signal. T2* may provide quantitative information about nuclear morphology and pleomorphism, adding histological insights to radiological interpretation of UPS.
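The per-feature correlation screen with multiple-comparison control described above can be sketched as follows. Benjamini-Hochberg is assumed here as the correction procedure, since the abstract does not name the specific method used:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1D arrays."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR correction: return a boolean mask of the
    hypotheses that remain significant at false-discovery rate alpha."""
    p = np.asarray(pvals, float)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m   # rank-scaled thresholds
    below = p[order] <= thresh
    keep = np.zeros(m, bool)
    if below.any():
        k = below.nonzero()[0].max() + 1       # largest rank passing the test
        keep[order[:k]] = True
    return keep
```

With many cytometric features tested against each qMRI contrast, this kind of correction is what separates the handful of reported significant correlations from chance findings.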

10.
Med Image Anal ; 97: 103242, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38901099

ABSTRACT

OBJECTIVE: The development of myopia is usually accompanied by changes in the retinal vessels, optic disc, optic cup, fovea, and other retinal structures, as well as in the length of the ocular axis, and accurate registration of retinal images is essential for the extraction and analysis of these retinal structural changes. However, the registration of retinal images with myopia development faces a series of challenges due to the curved surface of the retina and the changes in fundus curvature caused by ocular axis elongation. Therefore, our goal is to improve the registration accuracy of retinal images with myopia development. METHOD: In this study, we propose a 3D spatial model for pairs of retinal images with myopia development. In this model, we introduce a novel myopia development model that simulates the changes in the length of the ocular axis and fundus curvature due to the development of myopia. We also consider the distortion model of the fundus camera during the imaging process. Based on the 3D spatial model, we further implement a registration framework, which utilizes corresponding points in the pair of retinal images to achieve registration via 3D pose estimation. RESULTS: The proposed method is quantitatively evaluated on a publicly available dataset without myopia development and on our Fundus Image Myopia Development (FIMD) dataset. The proposed method is shown to perform more accurate and stable registration than state-of-the-art methods, especially for retinal images with myopia development. SIGNIFICANCE: To the best of our knowledge, this is the first retinal image registration method for the study of myopia development. This method significantly improves the registration accuracy of retinal images with myopia development. The FIMD dataset we constructed has been made publicly available to promote research in related fields.

11.
Front Surg ; 11: 1389244, 2024.
Article in English | MEDLINE | ID: mdl-38903864

ABSTRACT

Background: Surgical robots are gaining increasing popularity because of their capability to improve the precision of pedicle screw placement. However, current surgical robots rely on unimodal computed tomography (CT) images as baseline images, limiting their visualization to vertebral bone structures and excluding soft tissue structures such as intervertebral discs and nerves. This inherent limitation significantly restricts the applicability of surgical robots. To address this issue and further enhance the safety and accuracy of robot-assisted pedicle screw placement, this study will develop a software system for surgical robots based on multimodal image fusion. Such a system can extend the application range of surgical robots to operations such as surgical channel establishment and nerve decompression. Methods: Initially, imaging data of the patients included in the study are collected. Professional workstations are employed to establish, train, validate, and optimize algorithms for vertebral bone segmentation in CT and magnetic resonance (MR) images, intervertebral disc segmentation in MR images, nerve segmentation in MR images, and registration and fusion of CT and MR images. Subsequently, a spine application model containing independent modules for vertebrae, intervertebral discs, and nerves is constructed, and a software system for surgical robots based on multimodal image fusion is designed. Finally, the software system is clinically validated. Discussion: We will develop a software system based on multimodal image fusion for surgical robots, which can be applied not only to robot-assisted screw placement but also to surgical access establishment, nerve decompression, and other operations. The development of this software system is important for several reasons. First, it can improve the accuracy of pedicle screw placement, percutaneous vertebroplasty, percutaneous kyphoplasty, and other surgeries. Second, it can reduce the number of fluoroscopies, shorten the operation time, and reduce surgical complications. In addition, it would help expand the application range of surgical robots by providing key imaging data that enable surgical channel establishment, nerve decompression, and other operations.

12.
Comput Biol Med ; 178: 108673, 2024 May 29.
Article in English | MEDLINE | ID: mdl-38905891

ABSTRACT

Deformable image registration is a fundamental yet vital task for preoperative planning, intraoperative information fusion, disease diagnosis, and follow-up. It solves for the non-rigid deformation field that aligns an image pair. Recent approaches such as VoxelMorph and TransMorph compute features from a simple concatenation of the moving and fixed images; however, this often leads to weak alignment. Moreover, convolutional neural network (CNN) and hybrid CNN-Transformer backbones are constrained by limited receptive field sizes and cannot capture long-range relations, while fully Transformer-based approaches are computationally expensive. In this paper, we propose a novel multi-axis cross gating network (MACG-Net) for deformable medical image registration that addresses these limitations. MACG-Net uses a dual-stream multi-axis feature fusion module to capture both long-range and local context relationships from the moving and fixed images. Cross gate blocks are integrated with the dual-stream backbone to account both for independent feature extraction in the moving and fixed images and for the relationship between features from the image pair. We benchmark our method on several datasets, including 3D atlas-based brain MRI, inter-patient brain MRI, and 2D cardiac MRI. The results demonstrate that the proposed method achieves state-of-the-art performance. The source code has been released at https://github.com/Valeyards/MACG.

13.
Brachytherapy ; 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38845268

ABSTRACT

PURPOSE: To investigate geometric and dosimetric inter-observer variability in needle reconstruction for temporary prostate brachytherapy, and to assess the potential of registrations between transrectal ultrasound (TRUS) and cone-beam computed tomography (CBCT) to support implant reconstruction. METHODS AND MATERIALS: The needles implanted in 28 patients were reconstructed on TRUS by three physicists. The corresponding geometric deviations and associated dosimetric variations to the prostate and organs at risk (urethra, bladder, rectum) were analyzed. To account for the observed inter-observer variability, various approaches (template-based, probe-based, marker-based) for registration of CBCT to TRUS were investigated with regard to needle transfer accuracy in a phantom study. Three patient cases were examined to assess registration accuracy in vivo. RESULTS: Geometric inter-observer deviations >1 mm and >3 mm were found for 34.9% and 3.5% of all needles, respectively. Prostate dose coverage (changes of up to 7.2%) and urethra dose (partly exceeding the given dose constraints) were most affected by the associated dosimetric changes. Marker-based and probe-based registrations achieved high mean needle transfer accuracies of 0.73 mm and 0.12 mm, respectively, in the phantom study. In the patient cases, the marker-based approach was the superior technique for CBCT-TRUS fusion. CONCLUSION: Inter-observer variability in needle reconstruction can substantially affect dosimetry for individual patients. Marker-based CBCT-TRUS registrations in particular can help ensure accurate reconstructions for improved treatment planning.
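Marker-based registration of matched fiducials reduces to a least-squares rigid alignment of two point sets. A standard Kabsch/Procrustes sketch is given below; this illustrates the general technique, not the clinical software's algorithm:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    matched marker points src -> dst (both (N, D), rows are points),
    via the Kabsch/Procrustes algorithm."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)      # centroids
    H = (src - sc).T @ (dst - dc)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t
```

Given at least three non-collinear markers visible in both CBCT and TRUS, this recovers the fusion transform exactly in the noise-free case, and in the least-squares sense otherwise.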

14.
Comput Biol Med ; 178: 108785, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38925089

ABSTRACT

Variational autoencoders (VAEs) are an efficient variational inference technique coupled with a generative network. Owing to the uncertainty estimates provided by variational inference, VAEs have been applied to medical image registration. However, a critical problem in VAEs is that a simple prior cannot provide suitable regularization, which leads to a mismatch between the variational posterior and the prior. An optimal prior can close the gap between the true posterior and the variational posterior. In this paper, we propose a multi-stage VAE to learn the optimal prior, which is the aggregated posterior. A lightweight VAE is used to generate the aggregated posterior as a whole; this is an effective way to estimate the distribution of the high-dimensional aggregated posterior that commonly arises in VAE-based medical image registration. A factorized telescoping classifier is trained to estimate the density ratio between a simple given prior and the aggregated posterior, aiming to calculate the KL divergence between the variational and aggregated posteriors more accurately. We analyze the KL divergence and find that the finer the factorization, the smaller the KL divergence; however, too fine a partition is not conducive to registration accuracy. Moreover, the diagonal hypothesis on the variational posterior's covariance ignores the relationships between latent variables in image registration. To address this issue, we learn a covariance matrix with low-rank information to enable correlations between the dimensions of the variational posterior. The covariance matrix is further used as a measure to reduce the uncertainty of the deformation fields. Experimental results on four public medical image datasets demonstrate that our proposed method outperforms other methods in negative log-likelihood (NLL) and achieves better registration accuracy.
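The KL terms discussed can be made concrete for Gaussians. The sketch below shows the closed form of KL(N(mu, cov) || N(0, I)) together with a low-rank-plus-diagonal covariance of the kind the abstract motivates; this illustrates the quantities involved, not the paper's estimator:

```python
import numpy as np

def kl_gauss_vs_std_normal(mu, cov):
    """KL( N(mu, cov) || N(0, I) ) in closed form for a full covariance:
    0.5 * (tr(cov) + mu^T mu - d - log det(cov))."""
    d = mu.size
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (np.trace(cov) + mu @ mu - d - logdet)

def lowrank_cov(diag, factors):
    """Covariance with low-rank correlation structure, D + F F^T, mirroring
    the idea of modelling correlations between latent dimensions instead of
    assuming a purely diagonal variational posterior."""
    return np.diag(diag) + factors @ factors.T
```

The KL vanishes exactly when the posterior matches the standard-normal prior and grows as soon as off-diagonal (low-rank) structure is introduced, which is the mismatch a learned aggregated-posterior prior is meant to absorb.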

15.
Sci Rep ; 14(1): 14470, 2024 06 24.
Article in English | MEDLINE | ID: mdl-38914766

ABSTRACT

This study employed the commercial software Velocity to perform deformable registration and dose calculation on deformed CT images, aiming to assess the accuracy of dose delivery during radiotherapy for lung cancer. A total of 20 patients with lung cancer were enrolled in this study. An adaptive CT (ACT) was generated by deforming the planning CT (pCT) to the CBCT of the initial radiotherapy fraction, followed by contour propagation and dose recalculation. There was no significant difference between the volumes of the GTV and CTV calculated from the ACT and pCT. However, significant differences in the Dice similarity coefficient (DSC) and coverage ratio (CR) between GTV and CTV were observed, with lower values for GTV volumes below 15 cc. The mean differences in the dose corresponding to 95% of the GTV, GTV-P, CTV, and CTV-P between ACT and pCT were -0.32%, 4.52%, 2.17%, and 4.71%, respectively. For the dose corresponding to 99%, the discrepancies were -0.18%, 8.35%, 1.92%, and 24.96%, respectively. These dose differences primarily appeared at the edges of the target areas. Notably, a significant increase in the dose corresponding to 1 cc of the spinal cord was observed in the ACT compared with the pCT. There was no statistically significant difference in the mean dose to the lungs and heart. In general, for lung cancer patients, anatomical motion may result in both the CTV and GTV moving outside the original irradiation region. The dose difference within the original target area was small, but the difference in the planning target area was considerable.


Subjects
Lung Neoplasms; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted; Software; Tomography, X-Ray Computed; Humans; Lung Neoplasms/radiotherapy; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Radiotherapy Planning, Computer-Assisted/methods; Male; Female; Aged; Middle Aged; Tomography, X-Ray Computed/methods; Cone-Beam Computed Tomography/methods
16.
Cancers (Basel) ; 16(12)2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38927887

ABSTRACT

Sublobar resection has emerged as a standard treatment option for early-stage peripheral non-small cell lung cancer. Achieving an adequate resection margin is crucial to prevent local tumor recurrence. However, gross measurement of the resection margin may lack accuracy due to the elasticity of lung tissue and interobserver variability. Therefore, this study aimed to develop an objective measurement method, a CT-based 3D reconstruction algorithm, to quantify the resection margin following sublobar resection in lung cancer patients through pre- and post-operative CT image comparison. An automated subvascular matching technique was first developed to ensure accuracy and reproducibility in the matching process. Following the extraction of matched feature points, the next key step calculates the displacement field within the image, which is particularly important for mapping discontinuous deformation fields around the surgical resection area. A transformation based on thin-plate splines is used for medical image registration. Upon completing the final step of image registration, the distance at the resection margin was measured. After developing the CT-based 3D reconstruction algorithm, we included 12 cases for resection margin distance measurement, comprising 4 right middle lobectomies, 6 segmentectomies, and 2 wedge resections. The outcomes obtained with our method revealed that the target registration error for all cases was less than 2.5 mm. Our method demonstrated the feasibility of measuring the resection margin following sublobar resection in lung cancer patients through pre- and post-operative CT image comparison. Further validation with a multicenter large cohort and analysis of clinical outcome correlations are necessary in future studies.
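A thin-plate spline fit to matched feature points can be sketched as a standard 2D TPS solve: radial-basis kernel U(r) = r² log r plus an affine part, with the weights obtained from one linear system. This is the generic formulation, not the study's exact implementation:

```python
import numpy as np

def tps_fit(src, dst, reg=1e-8):
    """Fit a 2D thin-plate spline mapping control points src -> dst
    (both (n, 2)). Solves [[K + reg*I, P], [P^T, 0]] [w; a] = [dst; 0].
    Returns the stacked coefficients and the control points."""
    n = src.shape[0]
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.where(d2 > 0, d2 * np.log(d2 + 1e-300), 0.0) / 2.0  # U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), src])                      # affine basis
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K + reg * np.eye(n)
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    return np.linalg.solve(L, rhs), src

def tps_apply(params, pts):
    """Evaluate the fitted spline at new points (N, 2)."""
    coef, src = params
    n = src.shape[0]
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.where(d2 > 0, d2 * np.log(d2 + 1e-300), 0.0) / 2.0
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return K @ coef[:n] + P @ coef[n:]
```

The spline interpolates the control points exactly (up to the tiny regularizer) while extending smoothly between them, which is what allows locally discontinuity-prone regions around the resection site to be handled with a dense set of matched subvascular points.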

17.
J Imaging Inform Med ; 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38874699

ABSTRACT

Retinal diseases stand as a primary cause of childhood blindness. Analyzing the progression of these diseases requires close attention to lesion morphology and spatial information. Standard image registration methods fail to accurately reconstruct pediatric fundus images containing significant distortion and blurring. To address this challenge, we proposed a robust deep learning-based image registration method (RDLR). The method consists of two modules: a registration module (RM) and a panoramic view module (PVM). The RM effectively integrates global and local feature information and learns prior information related to the orientation of images. The PVM is capable of reconstructing spatial information in panoramic images. Furthermore, as the registration model was trained on over 280,000 pediatric fundus images, we introduced an automatic registration annotation generation process coupled with a quality control module to ensure the reliability of the training data. We compared the performance of RDLR to that of other methods, including a conventional registration pipeline (CRP), VoxelMorph (VM), a generalizable image matcher (GIM), and self-supervised techniques (SS). RDLR achieved significantly higher registration accuracy (average Dice score of 0.948) than the other methods (ranging from 0.491 to 0.802). The resulting panoramic retinal maps reconstructed by RDLR also demonstrated substantially higher fidelity (average Dice score of 0.960) compared to the other methods (ranging from 0.720 to 0.783). Overall, the proposed method addresses key challenges in pediatric retinal imaging, providing an effective solution to enhance disease diagnosis. Our source code is available at https://github.com/wuwusky/RobustDeepLeraningRegistration.

18.
J Med Phys; 49(1): 64-72, 2024.
Article in English | MEDLINE | ID: mdl-38828076

RESUMO

Purpose: Image registration is a crucial component of the adaptive radiotherapy workflow. This study investigates the accuracy of the deformable image registration (DIR) and contour propagation features of SmartAdapt, an application in the Eclipse treatment planning system (TPS) version 16.1. Materials and Methods: Registration accuracy was validated using the Task Group No. 132 (TG-132) virtual phantom, with contour evaluation and landmark analysis based on the quantitative criteria recommended in the American Association of Physicists in Medicine TG-132 report. The target registration error, Dice similarity coefficient (DSC), and center-of-mass displacement were used as quantitative validation metrics. The performance of the contour propagation feature was evaluated using clinical datasets (head and neck, pelvis, and chest) and an additional four-dimensional computed tomography (CT) dataset from TG-132. The primary planning and second CT images were appropriately registered and deformed. The DSC was used to quantify the volume overlap between the deformed contours and contours drawn by a radiation oncologist (RO). For a qualitative assessment, the clinical value of the DIR-generated structures was reviewed and scored by an experienced RO. Results: The registration accuracy fell within the specified tolerances. SmartAdapt produced reasonably propagated contours for the chest and head-and-neck regions, with DSC values of 0.80 for organs at risk. Misregistration was frequently observed in the pelvic region, a low-contrast area. However, 78% of structures required no or only minor modification, demonstrating good agreement between the contour comparison and the qualitative analysis. Conclusions: SmartAdapt is adequately efficient for image registration and contour propagation for adaptive purposes in various anatomical sites. However, caution is warranted regarding its performance in regions with low contrast and small volumes.
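Of the TG-132 metrics above, the target registration error is simply the distance between corresponding anatomical landmarks after registration, averaged over the landmark set. A minimal sketch; the landmark coordinates are hypothetical:

```python
import math

def target_registration_error(moved_pts, reference_pts):
    """Mean Euclidean distance (mm) between corresponding landmarks
    after registration (a common TG-132 quantitative metric)."""
    return sum(math.dist(p, q)
               for p, q in zip(moved_pts, reference_pts)) / len(moved_pts)

# Hypothetical landmark positions after DIR vs. ground truth (mm).
moved = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
ref   = [(1.0, 0.0, 0.0), (10.0, 2.0, 0.0)]
print(target_registration_error(moved, ref))  # → 1.5
```

The TG-132 report recommends comparing this value against the image voxel size; registrations whose TRE exceeds roughly the in-plane resolution warrant review.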

19.
Spectrochim Acta A Mol Biomol Spectrosc; 320: 124558, 2024 Nov 05.
Article in English | MEDLINE | ID: mdl-38870695

RESUMO

Nowadays, multispectral transmission imaging is a focus for detecting breast cancer in its early stages. Frame accumulation is a promising technique to enhance the grayscale level of multispectral transmission images. However, during image acquisition, human respiration or camera jitter displaces frames in the sequence, which reduces the accuracy and image quality of the frame-accumulated image. In this article, we propose a new method, "repeated pair image registration and accumulation", to resolve this issue. In this method, the first pair of images in the sequence is registered and accumulated, followed by the next pair; the two accumulated frames are then registered and accumulated again. This process is repeated until all frames in the sequence have been processed and the final image is obtained. The method was tested on sequences of breast frames taken at 600 nm, 620 nm, 670 nm, and 760 nm wavelengths of light, and various mathematical assessments confirmed the enhancement of quality, accuracy, and grayscale. Furthermore, the processing time of the proposed method is very low because a gradient descent optimization algorithm is used for image registration. This optimization algorithm is faster than other methods, as verified by registering a single image at each wavelength with three different methods. This work lays the foundation for early detection of breast cancer using multispectral transmission imaging.
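The repeated pairwise scheme described above can be sketched as a fold over the frame sequence: register and sum adjacent pairs, then repeat on the accumulated frames until one image remains. The following 1-D toy version uses an exhaustive SSD shift search as a stand-in for the paper's gradient-descent registration; the helper names and the 1-D simplification are illustrative assumptions:

```python
def estimate_shift(ref, mov, max_shift=3):
    """Estimate the integer displacement of `mov` relative to `ref`
    by minimizing the sum of squared differences over the overlap."""
    best, best_err = 0, float("inf")
    n = len(ref)
    for s in range(-max_shift, max_shift + 1):
        err = sum((ref[i] - mov[i - s]) ** 2
                  for i in range(max(0, s), min(n, n + s)))
        if err < best_err:
            best, best_err = s, err
    return best

def register(ref, mov):
    """Shift `mov` onto `ref`, padding with clamped edge samples."""
    s, n = estimate_shift(ref, mov), len(mov)
    return [mov[min(max(i - s, 0), n - 1)] for i in range(n)]

def accumulate(a, b):
    return [x + y for x, y in zip(a, b)]

def repeated_pair_accumulate(frames):
    """Register and accumulate adjacent pairs, then repeat on the
    accumulated frames until a single image remains."""
    level = frames
    while len(level) > 1:
        nxt = [accumulate(level[i], register(level[i], level[i + 1]))
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # carry an odd frame to the next level
            nxt.append(level[-1])
        level = nxt
    return level[0]

ref = [0, 0, 1, 2, 1, 0, 0, 0]
mov = [0, 0, 0, 1, 2, 1, 0, 0]   # ref shifted right by one sample
print(repeated_pair_accumulate([ref, mov]))  # → [0, 0, 2, 4, 2, 0, 0, 0]
```

Because each level halves the number of frames, a sequence of N frames needs only about log2(N) rounds of registration, which is consistent with the low processing time the paper reports.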


Subjects
Algorithms, Breast Neoplasms, Humans, Female, Breast Neoplasms/diagnostic imaging, Breast/diagnostic imaging, Image Processing, Computer-Assisted/methods, Image Enhancement/methods
20.
EJNMMI Rep; 8(1): 17, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38872028

RESUMO

OBJECTIVES: 3D visualization of the segmented contacts of directional deep brain stimulation (DBS) electrodes is desirable, since knowledge of the position of every segmented contact could shorten the time needed for electrode programming. CT cannot yield images fit for that purpose, whereas highly resolved flat-detector computed tomography (FDCT) can accurately image the inner structure of the electrode. This study aims to demonstrate the applicability of fusing highly resolved FDCT with CT to produce highly resolved images that preserve anatomical context, for subsequent fusion to preoperative MRI and, in future studies, for displaying segmented contacts within their anatomical context. MATERIAL AND METHODS: Retrospectively collected datasets from 15 patients who underwent bilateral directional DBS electrode implantation were used. After image analysis, semi-automated 3D registration of CT and highly resolved FDCT, followed by image fusion, was performed. Registration accuracy was assessed by computing the target registration error. RESULTS: Our work demonstrated the feasibility of highly resolved FDCT for visualizing segmented electrode contacts in 3D. Semi-automated image registration to CT was successfully implemented in all cases. Qualitative evaluation by two experts revealed good alignment of intracranial osseous structures. Additionally, the mean target registration error over all patients, averaged across two raters, was 4.16 mm. CONCLUSION: Our work demonstrated the applicability of fusing highly resolved FDCT with CT in a potential workflow for subsequent fusion to MRI, to place the electrodes in an anatomical context.
