Results 1 - 20 of 113
1.
Front Neurosci ; 18: 1411797, 2024.
Article in English | MEDLINE | ID: mdl-38988766

ABSTRACT

Neuroimaging-based prediction of neurocognitive measures is valuable for studying how the brain's structure relates to cognitive function. However, the accuracy of prediction using popular linear regression models is relatively low. We propose a novel deep regression method, namely TractoSCR, that allows full supervision for contrastive learning in regression tasks using diffusion MRI tractography. TractoSCR performs supervised contrastive learning by using the absolute difference between continuous regression labels (i.e., neurocognitive scores) to determine positive and negative pairs. We apply TractoSCR to analyze a large-scale dataset including multi-site harmonized diffusion MRI and neurocognitive data from 8,735 participants in the Adolescent Brain Cognitive Development (ABCD) Study. We extract white matter microstructural measures using a fine parcellation of white matter tractography into fiber clusters. Using these measures, we predict three scores related to domains of higher-order cognition (general cognitive ability, executive function, and learning/memory). To identify important fiber clusters for prediction of these neurocognitive scores, we propose a permutation feature importance method for high-dimensional data. We find that TractoSCR obtains significantly higher accuracy of neurocognitive score prediction compared to other state-of-the-art methods. We find that the most predictive fiber clusters are predominantly located within the superficial white matter and projection tracts, particularly the superficial frontal white matter and striato-frontal connections. Overall, our results demonstrate the utility of contrastive representation learning methods for regression, and in particular for improving neuroimaging-based prediction of higher-order cognitive abilities. Our code will be available at: https://github.com/SlicerDMRI/TractoSCR.
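The pair-labeling step that makes contrastive learning possible for regression can be sketched in a few lines. The threshold rule below is a simplified illustration of the idea described in the abstract (determining positive and negative pairs from absolute differences between continuous labels); the exact rule and threshold used by TractoSCR are assumptions here.

```python
import numpy as np

def pair_labels(scores, threshold):
    """Label each sample pair positive (1) when the continuous
    regression targets differ by less than `threshold`, else
    negative (0). Self-pairs on the diagonal are marked -1."""
    scores = np.asarray(scores, dtype=float)
    diff = np.abs(scores[:, None] - scores[None, :])  # pairwise |y_i - y_j|
    labels = (diff < threshold).astype(int)
    np.fill_diagonal(labels, -1)                      # ignore self-pairs
    return labels

# Toy neurocognitive scores for four subjects
labels = pair_labels([100.0, 101.0, 120.0, 99.5], threshold=5.0)
print(labels)
```

Once pairs are labeled this way, a standard supervised contrastive loss can be applied exactly as in classification.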

2.
Sci Data ; 11(1): 494, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38744868

ABSTRACT

The standard of care for brain tumors is maximal safe surgical resection. Neuronavigation augments the surgeon's ability to achieve this but loses validity as surgery progresses due to brain shift. Moreover, gliomas are often indistinguishable from surrounding healthy brain tissue. Intraoperative magnetic resonance imaging (iMRI) and ultrasound (iUS) help visualize the tumor and brain shift. iUS is faster and easier to incorporate into surgical workflows but offers a lower contrast between tumorous and healthy tissues than iMRI. With the success of data-hungry Artificial Intelligence algorithms in medical image analysis, the benefits of sharing well-curated data cannot be overstated. To this end, we provide the largest publicly available MRI and iUS database of surgically treated brain tumors, including gliomas (n = 92), metastases (n = 11), and others (n = 11). This collection contains 369 preoperative MRI series, 320 3D iUS series, 301 iMRI series, and 356 segmentations collected from 114 consecutive patients at a single institution. This database is expected to help brain shift and image analysis research and neurosurgical training in interpreting iUS and iMRI.


Subjects
Brain Neoplasms; Databases, Factual; Magnetic Resonance Imaging; Multimodal Imaging; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/surgery; Brain/diagnostic imaging; Brain/surgery; Glioma/diagnostic imaging; Glioma/surgery; Ultrasonography; Neuronavigation/methods
3.
medRxiv ; 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-37745329

ABSTRACT

The standard of care for brain tumors is maximal safe surgical resection. Neuronavigation augments the surgeon's ability to achieve this but loses validity as surgery progresses due to brain shift. Moreover, gliomas are often indistinguishable from surrounding healthy brain tissue. Intraoperative magnetic resonance imaging (iMRI) and ultrasound (iUS) help visualize the tumor and brain shift. iUS is faster and easier to incorporate into surgical workflows but offers a lower contrast between tumorous and healthy tissues than iMRI. With the success of data-hungry Artificial Intelligence algorithms in medical image analysis, the benefits of sharing well-curated data cannot be overstated. To this end, we provide the largest publicly available MRI and iUS database of surgically treated brain tumors, including gliomas (n=92), metastases (n=11), and others (n=11). This collection contains 369 preoperative MRI series, 320 3D iUS series, 301 iMRI series, and 356 segmentations collected from 114 consecutive patients at a single institution. This database is expected to help brain shift and image analysis research and neurosurgical training in interpreting iUS and iMRI.

4.
IEEE J Biomed Health Inform ; 27(9): 4352-4361, 2023 09.
Article in English | MEDLINE | ID: mdl-37276107

ABSTRACT

Lung ultrasound (LUS) is an important imaging modality used by emergency physicians to assess pulmonary congestion at the patient bedside. B-line artifacts in LUS videos are key findings associated with pulmonary congestion. Not only can the interpretation of LUS be challenging for novice operators, but visual quantification of B-lines remains subject to observer variability. In this work, we investigate the strengths and weaknesses of multiple deep learning approaches for automated B-line detection and localization in LUS videos. We curate and publish BEDLUS, a new ultrasound dataset comprising 1,419 videos from 113 patients with a total of 15,755 expert-annotated B-lines. Based on this dataset, we present a benchmark of established deep learning methods applied to the task of B-line detection. To pave the way for interpretable quantification of B-lines, we propose a novel "single-point" approach to B-line localization using only the point of origin. Our results show that (a) the area under the receiver operating characteristic curve ranges from 0.864 to 0.955 for the benchmarked detection methods, (b) within this range, the best performance is achieved by models that leverage multiple successive frames as input, and (c) the proposed single-point approach for B-line localization reaches an F1-score of 0.65, performing on par with the inter-observer agreement. The dataset and developed methods can facilitate further biomedical research on automated interpretation of lung ultrasound with the potential to expand the clinical utility.


Subjects
Deep Learning; Pulmonary Edema; Humans; Lung/diagnostic imaging; Ultrasonography/methods; Pulmonary Edema/diagnosis; Thorax
5.
Med Image Comput Comput Assist Interv ; 2023: 448-458, 2023 Oct 13.
Article in English | MEDLINE | ID: mdl-38655383

ABSTRACT

We introduce MHVAE, a deep hierarchical variational autoencoder (VAE) that synthesizes missing images from various modalities. Extending multi-modal VAEs with a hierarchical latent structure, we introduce a probabilistic formulation for fusing multi-modal images in a common latent representation while having the flexibility to handle incomplete image sets as input. Moreover, adversarial learning is employed to generate sharper images. Extensive experiments are performed on the challenging problem of joint intra-operative ultrasound (iUS) and Magnetic Resonance (MR) synthesis. Our model outperformed multi-modal VAEs, conditional GANs, and the current state-of-the-art unified method (ResViT) for synthesizing missing images, demonstrating the advantage of using a hierarchical latent representation and a principled probabilistic fusion operation. Our code is publicly available.
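Multi-modal VAEs commonly fuse per-modality Gaussian posteriors with a product of experts, which also explains why incomplete image sets are easy to handle: absent modalities are simply left out of the product. The numpy sketch below illustrates that general fusion step; MHVAE's exact hierarchical formulation is not reproduced here.

```python
import numpy as np

def poe_fuse(mus, logvars):
    """Product-of-experts fusion of Gaussian posteriors N(mu_i, var_i):
    precisions add, and the fused mean is the precision-weighted average
    of the per-modality means. Missing modalities are omitted from the
    input lists, so incomplete image sets need no special casing."""
    precisions = [np.exp(-lv) for lv in logvars]
    prec = sum(precisions)
    mu = sum(p * m for p, m in zip(precisions, mus)) / prec
    return mu, -np.log(prec)  # fused mean and log-variance

# Two equally confident modalities with means 0 and 2 fuse to mean 1
mu, logvar = poe_fuse([np.array([0.0]), np.array([2.0])],
                      [np.array([0.0]), np.array([0.0])])
print(mu, logvar)
```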

6.
Med Image Comput Comput Assist Interv ; 14228: 227-237, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38371724

ABSTRACT

We present a novel method for intraoperative patient-to-image registration by learning Expected Appearances. Our method uses preoperative imaging to synthesize patient-specific expected views through a surgical microscope for a predicted range of transformations. Our method estimates the camera pose by minimizing the dissimilarity between the intraoperative 2D view through the optical microscope and the synthesized expected texture. In contrast to conventional methods, our approach transfers the processing tasks to the preoperative stage, thereby reducing the impact of the low-resolution, distorted, and noisy intraoperative images that often degrade registration accuracy. We applied our method in the context of neuronavigation during brain surgery. We evaluated our approach on synthetic data and on retrospective data from 6 clinical cases. Our method outperformed state-of-the-art methods and achieved accuracies that met current clinical standards.

7.
Biomed Image Regist (2022) ; 13386: 103-115, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36383500

ABSTRACT

In recent years, learning-based image registration methods have gradually moved away from direct supervision with target warps to instead use self-supervision, with excellent results in several registration benchmarks. These approaches utilize a loss function that penalizes the intensity differences between the fixed and moving images, along with a suitable regularizer on the deformation. However, since images typically have large untextured regions, merely maximizing similarity between the two images is not sufficient to recover the true deformation. This problem is exacerbated by texture in other regions, which introduces severe non-convexity into the landscape of the training objective and ultimately leads to overfitting. In this paper, we argue that the relative failure of supervised registration approaches can in part be blamed on the use of regular U-Nets, which are jointly tasked with feature extraction, feature matching and deformation estimation. Here, we introduce a simple but crucial modification to the U-Net that disentangles feature extraction and matching from deformation prediction, allowing the U-Net to warp the features, across levels, as the deformation field is evolved. With this modification, direct supervision using target warps begins to outperform self-supervision approaches that require segmentations, presenting new directions for registration when images do not have segmentations. We hope that our findings in this preliminary workshop paper will re-ignite research interest in supervised image registration techniques. Our code is publicly available from http://github.com/balbasty/superwarp.
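The core operation the modified U-Net needs is warping feature maps by the evolving deformation field. A minimal backward-sampling warp in numpy, using nearest-neighbour interpolation for brevity (real implementations use differentiable bilinear sampling), looks like this; it is an illustration of the operation, not the SuperWarp code.

```python
import numpy as np

def warp_features(feat, disp):
    """Warp a 2D feature map by a dense displacement field via
    backward sampling: output[y, x] = feat[y + dy, x + dx], with
    nearest-neighbour rounding and border clamping.
    feat: (H, W); disp: (H, W, 2) in pixels (row, col)."""
    H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(np.round(ys + disp[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + disp[..., 1]).astype(int), 0, W - 1)
    return feat[src_y, src_x]

feat = np.arange(9, dtype=float).reshape(3, 3)
disp = np.zeros((3, 3, 2))
disp[..., 1] = 1.0  # sample one pixel to the right
print(warp_features(feat, disp))
```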

8.
Bioinformatics ; 38(7): 2015-2021, 2022 03 28.
Article in English | MEDLINE | ID: mdl-35040929

ABSTRACT

MOTIVATION: Mass spectrometry imaging (MSI) provides rich biochemical information in a label-free manner and therefore holds promise to substantially impact current practice in disease diagnosis. However, the complex nature of MSI data poses computational challenges in its analysis. The complexity of the data arises from its large size, high dimensionality and spectral nonlinearity. Preprocessing, including peak picking, has been used to reduce raw data complexity; however, peak picking is sensitive to parameter selection that, perhaps prematurely, shapes the downstream analysis for tissue classification and ensuing biological interpretation. RESULTS: We propose a deep learning model, massNet, that provides the desired qualities of scalability, nonlinearity and speed in MSI data analysis. This deep learning model was used, without prior preprocessing and peak picking, to classify MSI data from a mouse brain harboring a patient-derived tumor. The massNet architecture enabled automatic learning of predictive features, and automated methods were incorporated to identify peaks with potential for tumor delineation. The model's performance was assessed using cross-validation, and the results demonstrate higher accuracy and a substantial gain in speed compared to the established classical machine learning method, support vector machine. AVAILABILITY AND IMPLEMENTATION: https://github.com/wabdelmoula/massNet. The data underlying this article are available in the NIH Common Fund's National Metabolomics Data Repository (NMDR) Metabolomics Workbench under project ID PR001292 at http://dx.doi.org/10.21228/M8Q70T. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subjects
Deep Learning; Neoplasms; Animals; Mice; Mass Spectrometry/methods; Metabolomics/methods; Machine Learning; Neoplasms/diagnostic imaging
9.
IEEE Trans Med Imaging ; 41(6): 1454-1467, 2022 06.
Article in English | MEDLINE | ID: mdl-34968177

ABSTRACT

In this paper, we present a deep learning method, DDMReg, for accurate registration between diffusion MRI (dMRI) datasets. In dMRI registration, the goal is to spatially align brain anatomical structures while ensuring that local fiber orientations remain consistent with the underlying white matter fiber tract anatomy. DDMReg is a novel method that uses joint whole-brain and tract-specific information for dMRI registration. Based on the successful VoxelMorph framework for image registration, we propose a novel registration architecture that leverages not only whole brain information but also tract-specific fiber orientation information. DDMReg is an unsupervised method for deformable registration between pairs of dMRI datasets: it does not require nonlinearly pre-registered training data or the corresponding deformation fields as ground truth. We perform comparisons with four state-of-the-art registration methods on multiple independently acquired datasets from different populations (including teenagers, young and elderly adults) and different imaging protocols and scanners. We evaluate the registration performance by assessing the ability to align anatomically corresponding brain structures and ensure fiber spatial agreement between different subjects after registration. Experimental results show that DDMReg obtains significantly improved registration performance compared to the state-of-the-art methods. Importantly, we demonstrate successful generalization of DDMReg to dMRI data from different populations with varying ages and acquired using different acquisition protocols and different scanners.


Subjects
Deep Learning; White Matter; Adolescent; Adult; Aged; Brain/anatomy & histology; Brain/diagnostic imaging; Diffusion Magnetic Resonance Imaging/methods; Humans; Image Processing, Computer-Assisted/methods
10.
Article in English | MEDLINE | ID: mdl-37250854

ABSTRACT

In order to tackle the difficulty associated with the ill-posed nature of the image registration problem, regularization is often used to constrain the solution space. For most learning-based registration approaches, the regularization usually has a fixed weight and only constrains the spatial transformation. Such convention has two limitations: (i) Besides the laborious grid search for the optimal fixed weight, the regularization strength of a specific image pair should be associated with the content of the images, thus the "one value fits all" training scheme is not ideal; (ii) Only spatially regularizing the transformation may neglect some informative clues related to the ill-posedness. In this study, we propose a mean-teacher based registration framework, which incorporates an additional temporal consistency regularization term by encouraging the teacher model's prediction to be consistent with that of the student model. More importantly, instead of searching for a fixed weight, the teacher enables automatically adjusting the weights of the spatial regularization and the temporal consistency regularization by taking advantage of the transformation uncertainty and appearance uncertainty. Extensive experiments on the challenging abdominal CT-MRI registration show that our training strategy can promisingly advance the original learning-based method in terms of efficient hyperparameter tuning and a better tradeoff between accuracy and smoothness.
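The teacher in a mean-teacher framework is conventionally maintained as an exponential moving average of the student's weights, so the consistency target is a temporally smoothed model rather than a second trained network. A minimal sketch of that update (the standard mean-teacher recipe, not this paper's specific weighting scheme):

```python
def ema_update(teacher, student, decay=0.99):
    """Mean-teacher update: after each optimizer step, the teacher's
    weights track an exponential moving average of the student's,
    and the teacher's predictions serve as a consistency target."""
    return {k: decay * teacher[k] + (1.0 - decay) * student[k]
            for k in teacher}

teacher = {"w": 1.0, "b": -2.0}
student = {"w": 0.0, "b": 0.0}
teacher = ema_update(teacher, student, decay=0.9)
print(teacher)  # {'w': 0.9, 'b': -1.8}
```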

11.
Nat Commun ; 12(1): 5544, 2021 09 20.
Article in English | MEDLINE | ID: mdl-34545087

ABSTRACT

Mass spectrometry imaging (MSI) is an emerging technology that holds potential for improving biomarker discovery, metabolomics research, pharmaceutical applications and clinical diagnosis. Despite many solutions being developed, the large data size and high dimensional nature of MSI, especially 3D datasets, still pose computational and memory complexities that hinder accurate identification of biologically relevant molecular patterns. Moreover, the subjectivity in the selection of parameters for conventional pre-processing approaches can lead to bias. Therefore, we assess if a probabilistic generative model based on a fully connected variational autoencoder can be used for unsupervised analysis and peak learning of MSI data to uncover hidden structures. The resulting msiPL method learns and visualizes the underlying non-linear spectral manifold, revealing biologically relevant clusters of tissue anatomy in a mouse kidney and tumor heterogeneity in human prostatectomy tissue, colorectal carcinoma, and a glioblastoma mouse model, with identification of underlying m/z peaks. The method is applied for the analysis of MSI datasets ranging from 3.3 to 78.9 GB, without prior pre-processing and peak picking, and acquired using different mass spectrometers at different centers.
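The building block of any VAE-based method like msiPL is the reparameterization trick, which keeps the latent sampling step differentiable with respect to the encoder outputs. A minimal numpy sketch of that step (generic VAE machinery, not msiPL's specific architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

def reparameterize(mu, logvar):
    """VAE reparameterization: sample z = mu + sigma * eps with
    eps ~ N(0, I), so gradients can flow through mu and logvar
    while the stochasticity is isolated in eps."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# With a very small variance, samples collapse onto the mean
z = reparameterize(np.zeros(5), np.full(5, -20.0))
print(z)
```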


Subjects
Imaging, Three-Dimensional; Neural Networks, Computer; Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization; Algorithms; Animals; Connective Tissue/diagnostic imaging; Connective Tissue/pathology; Deep Learning; Disease Models, Animal; Humans; Kidney/diagnostic imaging; Metabolomics; Mice; Neoplasms/diagnostic imaging; Neoplasms/metabolism; Nonlinear Dynamics; Reproducibility of Results; alpha-Defensins/metabolism
12.
Med Image Anal ; 69: 101939, 2021 04.
Article in English | MEDLINE | ID: mdl-33388458

ABSTRACT

In this work, we propose a theoretical framework based on maximum profile likelihood for pairwise and groupwise registration. By an asymptotic analysis, we demonstrate that maximum profile likelihood registration minimizes an upper bound on the joint entropy of the distribution that generates the joint image data. Further, we derive the congealing method for groupwise registration by optimizing the profile likelihood in closed form, and using coordinate ascent, or iterative model refinement. We also describe a method for feature based registration in the same framework and demonstrate it on groupwise tractographic registration. In the second part of the article, we propose an approach to deep metric registration that implements maximum likelihood registration using deep discriminative classifiers. We show further that this approach can be used for maximum profile likelihood registration to discharge the need for well-registered training data, using iterative model refinement. We demonstrate that the method succeeds on a challenging registration problem where the standard mutual information approach does not perform well.


Subjects
Deep Learning; Algorithms; Entropy; Humans; Image Interpretation, Computer-Assisted; Imaging, Three-Dimensional
13.
Proc IEEE Int Symp Biomed Imaging ; 2021: 443-447, 2021 Apr.
Article in English | MEDLINE | ID: mdl-36225596

ABSTRACT

Prostate cancer is the second most prevalent cancer in men worldwide. Deep neural networks have been successfully applied for prostate cancer diagnosis in magnetic resonance images (MRI). Pathology results from biopsy procedures are often used as ground truth to train such systems. There are several sources of noise in creating ground truth from biopsy data including sampling and registration errors. We propose: 1) A fully convolutional neural network (FCN) to produce cancer probability maps across the whole prostate gland in MRI; 2) A Gaussian weighted loss function to train the FCN with sparse biopsy locations; 3) A probabilistic framework to model biopsy location uncertainty and adjust cancer probability given the deep model predictions. We assess the proposed method on 325 biopsy locations from 203 patients. We observe that the proposed loss improves the area under the receiver operating characteristic curve and the biopsy location adjustment improves the sensitivity of the models.
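A Gaussian-weighted loss for sparse biopsy labels amounts to down-weighting pixels by their distance from the biopsy site, reflecting growing uncertainty about the true label away from the sampled core. The weight map below is an illustrative sketch of that idea; the sigma value and exact weighting used in the paper are assumptions.

```python
import numpy as np

def gaussian_weights(shape, center, sigma):
    """Spatial weight map for a sparse label: 1.0 at the biopsy
    location, decaying as a Gaussian with distance, so far-away
    pixels contribute little to the training loss."""
    ys, xs = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]),
                         indexing="ij")
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

w = gaussian_weights((5, 5), center=(2, 2), sigma=1.0)
print(w[2, 2], w[0, 0])  # 1.0 at the biopsy site, near zero far away
```

In training, this map would multiply a per-pixel cross-entropy before summation.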

14.
Article in English | MEDLINE | ID: mdl-36282980

ABSTRACT

We propose and demonstrate a representation learning approach by maximizing the mutual information between local features of images and text. The goal of this approach is to learn useful image representations by taking advantage of the rich information contained in the free text that describes the findings in the image. Our method trains image and text encoders by encouraging the resulting representations to exhibit high local mutual information. We make use of recent advances in mutual information estimation with neural network discriminators. We argue that the sum of local mutual information is typically a lower bound on the global mutual information. Our experimental results in the downstream image classification tasks demonstrate the advantages of using local features for image-text representation learning. Our code is available at: https://github.com/RayRuizhiLiao/mutual_info_img_txt.

15.
IEEE Trans Med Imaging ; 39(12): 3868-3878, 2020 12.
Article in English | MEDLINE | ID: mdl-32746129

ABSTRACT

Fully convolutional neural networks (FCNs), and in particular U-Nets, have achieved state-of-the-art results in semantic segmentation for numerous medical imaging applications. Moreover, batch normalization and Dice loss have been used successfully to stabilize and accelerate training. However, these networks are poorly calibrated i.e. they tend to produce overconfident predictions for both correct and erroneous classifications, making them unreliable and hard to interpret. In this paper, we study predictive uncertainty estimation in FCNs for medical image segmentation. We make the following contributions: 1) We systematically compare cross-entropy loss with Dice loss in terms of segmentation quality and uncertainty estimation of FCNs; 2) We propose model ensembling for confidence calibration of the FCNs trained with batch normalization and Dice loss; 3) We assess the ability of calibrated FCNs to predict segmentation quality of structures and detect out-of-distribution test examples. We conduct extensive experiments across three medical image segmentation applications of the brain, the heart, and the prostate to evaluate our contributions. The results of this study offer considerable insight into the predictive uncertainty estimation and out-of-distribution detection in medical image segmentation and provide practical recipes for confidence calibration. Moreover, we consistently demonstrate that model ensembling improves confidence calibration.
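The calibration benefit of ensembling comes from averaging the member models' softmax probabilities, which tempers individually overconfident predictions. A minimal sketch of that averaging step (generic deep-ensemble prediction, not the paper's full pipeline):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_per_model):
    """Average the softmax probabilities of independently trained
    models. Averaging probabilities (not logits) is what moderates
    overconfident members that disagree with each other."""
    probs = [softmax(l) for l in logits_per_model]
    return np.mean(probs, axis=0)

# Two overconfident models that disagree on the class:
p = ensemble_predict([np.array([[8.0, 0.0]]), np.array([[0.0, 8.0]])])
print(p)  # close to [0.5, 0.5]: the ensemble is appropriately uncertain
```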


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Brain/diagnostic imaging; Calibration; Humans; Male; Uncertainty
16.
Int J Comput Assist Radiol Surg ; 15(7): 1215-1223, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32372384

ABSTRACT

PURPOSE: The detection of clinically significant prostate cancer (PCa) is shown to greatly benefit from MRI-ultrasound fusion biopsy, which involves overlaying pre-biopsy MRI volumes (or targets) with real-time ultrasound images. In previous literature, machine learning models trained on either MRI or ultrasound data have been proposed to improve biopsy guidance and PCa detection. However, quantitative fusion of information from MRI and ultrasound has not been explored in depth in a large study. This paper investigates information fusion approaches between MRI and ultrasound to improve targeting of PCa foci in biopsies. METHODS: We build models of fully convolutional networks (FCN) using data from a newly proposed ultrasound modality, temporal enhanced ultrasound (TeUS), and apparent diffusion coefficient (ADC) from 107 patients with 145 biopsy cores. The architecture of our models is based on U-Net and U-Net with attention gates. Models are built using joint training through intermediate and late fusion of the data. We also build models with data from each modality, separately, to use as baseline. The performance is evaluated based on the area under the curve (AUC) for predicting clinically significant PCa. RESULTS: Using our proposed deep learning framework and intermediate fusion, integration of TeUS and ADC outperforms the individual modalities for cancer detection. We achieve an AUC of 0.76 for detection of all PCa foci, and 0.89 for PCa with larger foci. Results indicate that a shared representation between multiple modalities outperforms the average unimodal predictions. CONCLUSION: We demonstrate the significant potential of multimodality integration of information from MRI and TeUS to improve PCa detection, which is essential for accurate targeting of cancer foci during biopsy. By using FCNs as the architecture of choice, we are able to predict the presence of clinically significant PCa in entire imaging planes immediately, without the need for region-based analysis. This reduces the overall computational time and enables future intra-operative deployment of this technology.
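Intermediate fusion typically means merging the two modalities' encoder feature maps inside the network, most simply by channel-wise concatenation before the shared decoder layers. The sketch below illustrates that join point; the channel counts and exact fusion operator in the paper are assumptions here.

```python
import numpy as np

def intermediate_fusion(feat_teus, feat_adc):
    """Intermediate fusion: concatenate encoder feature maps from
    the two modalities along the channel axis; subsequent shared
    layers then learn a joint representation."""
    return np.concatenate([feat_teus, feat_adc], axis=0)  # (C1+C2, H, W)

# Illustrative 16-channel feature maps from two modality encoders
fused = intermediate_fusion(np.zeros((16, 8, 8)), np.ones((16, 8, 8)))
print(fused.shape)  # (32, 8, 8)
```

Late fusion, by contrast, would average or combine the two models' final predictions instead of their features.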


Subjects
Magnetic Resonance Imaging/methods; Prostatic Neoplasms/diagnostic imaging; Ultrasonography/methods; Humans; Image-Guided Biopsy/methods; Male; Models, Theoretical; Prostatic Neoplasms/pathology
18.
Adv Neural Inf Process Syst ; 33: 8895-8906, 2020 Dec.
Article in English | MEDLINE | ID: mdl-36415583

ABSTRACT

Ensembling is now recognized as an effective approach for increasing the predictive performance and calibration of deep networks. We introduce a new approach, Parameter Ensembling by Perturbation (PEP), that constructs an ensemble of parameter values as random perturbations of the optimal parameter set from training by a Gaussian with a single variance parameter. The variance is chosen to maximize the log-likelihood of the ensemble average (L) on the validation data set. Empirically, and perhaps surprisingly, L has a well-defined maximum as the variance grows from zero (which corresponds to the baseline model). Conveniently, the calibration level of predictions also tends to improve until the peak of L is reached. In most experiments, PEP provides a small improvement in performance, and, in some cases, a substantial improvement in empirical calibration. We show that this "PEP effect" (the gain in log-likelihood) is related to the mean curvature of the likelihood function and the empirical Fisher information. Experiments on ImageNet pre-trained networks including ResNet, DenseNet, and Inception showed improved calibration and likelihood. We further observed a mild improvement in classification accuracy on these networks. Experiments on classification benchmarks such as MNIST and CIFAR-10 showed improved calibration and likelihood, as well as the relationship between the PEP effect and overfitting; this demonstrates that PEP can be used to probe the level of overfitting that occurred during training. In general, no special training procedure or network architecture is needed, and in the case of pre-trained networks, no additional training is needed.
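The mechanics of PEP as the abstract describes them are simple: draw ensemble members by perturbing the trained parameters with a single shared Gaussian standard deviation, then average predictions. The sketch below shows the member-generation step on a flat parameter vector; selecting sigma by maximizing the validation log-likelihood is left out for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def pep_ensemble(theta, sigma, n_members):
    """Parameter Ensembling by Perturbation: each ensemble member is
    the trained parameter vector plus isotropic Gaussian noise with
    one shared standard deviation sigma. In PEP, sigma is chosen to
    maximize the validation log-likelihood of the ensemble average."""
    return [theta + rng.normal(0.0, sigma, size=theta.shape)
            for _ in range(n_members)]

theta = np.zeros(4)                      # stand-in for trained weights
members = pep_ensemble(theta, sigma=0.1, n_members=8)
avg = np.mean(members, axis=0)           # parameters of the averaged member
print(avg.shape)
```

Note sigma = 0 recovers the baseline model exactly, which is why L is evaluated as the variance grows from zero.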

19.
Med Image Comput Comput Assist Interv ; 12264: 735-744, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33778818

ABSTRACT

Intra-operative brain shift is a well-known phenomenon that describes non-rigid deformation of brain tissues due to gravity and loss of cerebrospinal fluid among other phenomena. This has a negative influence on surgical outcome that is often based on pre-operative planning where the brain shift is not considered. We present a novel brain-shift aware Augmented Reality method to align pre-operative 3D data onto the deformed brain surface viewed through a surgical microscope. We formulate our non-rigid registration as a Shape-from-Template problem. A pre-operative 3D wire-like deformable model is registered onto a single 2D image of the cortical vessels, which is automatically segmented. This 3D/2D registration drives the underlying brain structures, such as tumors, and compensates for the brain shift in sub-cortical regions. We evaluated our approach on simulated and real data composed of 6 patients. It achieved good quantitative and qualitative results making it suitable for neurosurgical guidance.

20.
Article in English | MEDLINE | ID: mdl-33840881

ABSTRACT

Brain shift is a non-rigid deformation of brain tissue that is affected by loss of cerebrospinal fluid, tissue manipulation and gravity among other phenomena. This deformation can negatively influence the outcome of a surgical procedure since surgical planning based on the pre-operative image becomes less valid. We present a novel method to compensate for brain shift that maps preoperative image data to the deformed brain during intra-operative neurosurgical procedures and thus increases the likelihood of achieving a gross total resection while decreasing the risk to healthy tissue surrounding the tumor. Through a 3D/2D non-rigid registration process, a 3D articulated model derived from pre-operative imaging is aligned onto 2D images of the vessels viewed intra-operatively through the surgical microscope. The articulated 3D vessels constrain a volumetric biomechanical model of the brain to propagate cortical vessel deformation to the parenchyma and in turn to the tumor. The 3D/2D non-rigid registration is performed using an energy minimization approach that satisfies both projective and physical constraints. Our method is evaluated on real and synthetic data of the human brain, showing both quantitative and qualitative results and exhibiting its particular suitability for real-time surgical guidance.
