Results 1 - 20 of 104
1.
Med Phys ; 2024 May 07.
Article in English | MEDLINE | ID: mdl-38713916

ABSTRACT

BACKGROUND: Disease or injury may cause a change in the biomechanical properties of the lungs, which can alter lung function. Image registration can be used to measure lung ventilation and quantify volume change, which can be a useful diagnostic aid. However, lung registration is a challenging problem because of the variation in deformation along the lungs, sliding motion of the lungs along the ribs, and change in density. PURPOSE: Landmark correspondences have been used to make deformable image registration robust to large displacements. METHODS: To tackle the challenging task of intra-patient lung computed tomography (CT) registration, we extend the landmark correspondence prediction model deep convolutional neural network-Match by introducing a soft mask loss term to encourage landmark correspondences in specific regions and avoid the use of a mask during inference. To produce realistic deformations to train the landmark correspondence model, we use data-driven synthetic transformations. We study the influence of these learned landmark correspondences on lung CT registration by integrating them into intensity-based registration as a distance-based penalty. RESULTS: Our results on the public thoracic CT dataset COPDGene show that using learned landmark correspondences as a soft constraint can reduce the median registration error from approximately 5.46 to 4.08 mm compared to standard intensity-based registration, in the absence of lung masks. CONCLUSIONS: We show that using landmark correspondences results in minor improvements in local alignment, while significantly improving global alignment.
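
The soft-constraint idea in this abstract can be illustrated with a minimal sketch: an intensity term (here negative normalized cross-correlation, one common choice) plus a distance-based landmark penalty weighted by a hypothetical `alpha`. This is not the authors' implementation, only an illustration of the cost structure.

```python
import numpy as np

def landmark_penalty(warped_pts, target_pts):
    """Mean Euclidean distance (e.g. in mm) between warped landmarks
    and their corresponding target landmarks."""
    d = np.asarray(warped_pts, float) - np.asarray(target_pts, float)
    return float(np.linalg.norm(d, axis=1).mean())

def registration_cost(fixed_img, warped_img, warped_pts, target_pts, alpha=0.01):
    """Negative normalized cross-correlation between the images plus the
    landmark penalty weighted by alpha: a soft constraint, since landmark
    mismatch raises the cost but is not enforced exactly."""
    f = fixed_img.ravel() - fixed_img.mean()
    m = warped_img.ravel() - warped_img.mean()
    ncc = float(f @ m / (np.linalg.norm(f) * np.linalg.norm(m) + 1e-12))
    return -ncc + alpha * landmark_penalty(warped_pts, target_pts)
```

Intensity-based packages such as elastix expose a comparable corresponding-points penalty that can be combined with the similarity metric in the same way.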

2.
Comput Methods Programs Biomed ; 248: 108115, 2024 May.
Article in English | MEDLINE | ID: mdl-38503072

ABSTRACT

BACKGROUND AND OBJECTIVE: As large sets of annotated MRI data are needed for training and validating deep learning based medical image analysis algorithms, the lack of sufficient annotated data is a critical problem. A possible solution is the generation of artificial data by means of physics-based simulations. Existing brain simulation data is limited in terms of anatomical models, tissue classes, fixed tissue characteristics, MR sequences and overall realism. METHODS: We propose a realistic simulation framework by incorporating patient-specific phantoms and Bloch equations-based analytical solutions for fast and accurate MRI simulations. A large number of labels are derived from open-source high-resolution T1w MRI data using a fully automated brain classification tool. The brain labels are taken as ground truth (GT) on which MR images are simulated using our framework. Moreover, we demonstrate that the T1w MR images generated from our framework along with GT annotations can be utilized directly to train a 3D brain segmentation network. To further evaluate our model on a larger set of real multi-source MRI data without GT, we compared our model to existing brain segmentation tools, FSL-FAST and SynthSeg. RESULTS: Our framework generates 3D brain MRI for variable anatomy, sequence, contrast, SNR and resolution. The brain segmentation network for WM/GM/CSF trained only on T1w simulated data shows promising results on real MRI data from the MRBrainS18 challenge dataset, with Dice scores of 0.818/0.832/0.828. On OASIS data, our model performs close to FSL, both qualitatively and quantitatively, with Dice scores of 0.901/0.939/0.937. CONCLUSIONS: Our proposed simulation framework is the initial step towards achieving truly physics-based MRI image generation, providing flexibility to generate large sets of variable MRI data for desired anatomy, sequence, contrast, SNR, and resolution.
Furthermore, the generated images can effectively train 3D brain segmentation networks, mitigating the reliance on real 3D annotated data.
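
As a flavor of what a Bloch-equation-based analytical solution looks like, the sketch below uses the textbook steady-state spoiled gradient-echo signal equation to assign per-tissue intensities to a label map. The tissue values and sequence settings are illustrative assumptions, not the framework's actual parameters.

```python
import numpy as np

def spgr_signal(m0, t1, t2s, tr, te, flip_deg):
    """Analytical steady-state spoiled gradient-echo (T1w) signal from the
    Bloch equations; times in ms, flip angle in degrees."""
    a = np.deg2rad(flip_deg)
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(a) * (1 - e1) / (1 - np.cos(a) * e1) * np.exp(-te / t2s)

# Illustrative (assumed) tissue properties at 3 T: label -> (T1, T2*) in ms
tissues = {1: (850, 50), 2: (1300, 60), 3: (4000, 100)}  # WM, GM, CSF
labels = np.array([[1, 2], [3, 1]])  # tiny stand-in for a brain label map
image = np.zeros(labels.shape)
for lab, (t1, t2s) in tissues.items():
    image[labels == lab] = spgr_signal(1.0, t1, t2s, tr=20, te=5, flip_deg=18)
```

With these settings, short-T1 white matter comes out brighter than CSF, as expected for a T1-weighted contrast.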


Subjects
Deep Learning, Humans, Brain/diagnostic imaging, Brain/anatomy & histology, Magnetic Resonance Imaging/methods, Algorithms, Neuroimaging/methods, Image Processing, Computer-Assisted/methods
3.
Med Phys ; 51(4): 2367-2377, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38408022

ABSTRACT

BACKGROUND: Deep learning-based unsupervised image registration has recently been proposed, promising fast registration. However, it has yet to be adopted in the online adaptive magnetic resonance imaging-guided radiotherapy (MRgRT) workflow. PURPOSE: In this paper, we design an unsupervised joint rigid and deformable registration framework for contour propagation in MRgRT of prostate cancer. METHODS: Three-dimensional pelvic T2-weighted MRIs of 143 prostate cancer patients undergoing radiotherapy were collected and divided into 110, 13, and 20 patients for training, validation, and testing. We designed a framework using convolutional neural networks (CNNs) for rigid and deformable registration. We selected the deformable registration network architecture among U-Net, MS-D Net, and LapIRN and optimized the training strategy (end-to-end vs. sequential). The framework was compared against an iterative baseline registration. We evaluated registration accuracy (the Dice and Hausdorff distance of the prostate and bladder contours), structural similarity index, and folding percentage to compare the methods. We also evaluated the framework's robustness to rigid and elastic deformations and bias field perturbations. RESULTS: The end-to-end trained framework comprising LapIRN for the deformable component achieved the best median (interquartile range) prostate and bladder Dice of 0.89 (0.85-0.91) and 0.86 (0.80-0.91), respectively. This accuracy was comparable to the iterative baseline registration: prostate and bladder Dice of 0.91 (0.88-0.93) and 0.86 (0.80-0.92). The best models complete rigid and deformable registration in 0.002 (0.0005) and 0.74 (0.43) s (Nvidia Tesla V100-PCIe 32 GB GPU), respectively. We found that the models are robust to translations up to 52 mm, rotations up to 15°, elastic deformations up to 40 mm, and bias fields.
CONCLUSIONS: Our proposed unsupervised, deep learning-based registration framework can perform rigid and deformable registration in less than a second with contour propagation accuracy comparable with iterative registration.
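
The evaluation metrics named here are straightforward to compute; a minimal sketch of the Dice overlap and the folding percentage (fraction of voxels with a non-positive Jacobian determinant) might look like:

```python
import numpy as np

def dice(a, b):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def folding_percentage(jac_det):
    """Percentage of voxels with a non-positive Jacobian determinant,
    i.e. locations where the deformation field folds onto itself."""
    return 100.0 * (np.asarray(jac_det) <= 0).mean()
```

A folding percentage of zero indicates a diffeomorphic (invertible) deformation on the voxel grid.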


Subjects
Deep Learning, Prostatic Neoplasms, Male, Humans, Prostate/diagnostic imaging, Prostate/pathology, Pelvis, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/radiotherapy, Prostatic Neoplasms/pathology, Radiotherapy Planning, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Image Processing, Computer-Assisted/methods, Algorithms
4.
Comput Med Imaging Graph ; 112: 102332, 2024 03.
Article in English | MEDLINE | ID: mdl-38245925

ABSTRACT

Accurate brain tumor segmentation is critical for diagnosis and treatment planning, whereby multi-modal magnetic resonance imaging (MRI) is typically used for analysis. However, obtaining all required sequences and expertly labeled data for training is challenging and can result in decreased quality of segmentation models developed through automated algorithms. In this work, we examine the possibility of employing a conditional generative adversarial network (GAN) approach for synthesizing multi-modal images to train deep learning-based neural networks aimed at high-grade glioma (HGG) segmentation. The proposed GAN is conditioned on auxiliary brain tissue and tumor segmentation masks, allowing us to attain better accuracy and control of tissue appearance during synthesis. To reduce the domain shift between synthetic and real MR images, we additionally adapt the low-frequency Fourier space components of synthetic data, reflecting the style of the image, to those of real data. We demonstrate the impact of Fourier domain adaptation (FDA) on the training of 3D segmentation networks and attain significant improvements in both the segmentation performance and prediction confidence. Similar outcomes are seen when such data is used as a training augmentation alongside the available real images. In fact, experiments on the BraTS2020 dataset reveal that models trained solely with synthetic data exhibit an improvement of up to 4% in Dice score when using FDA, while training with both real and FDA-processed synthetic data through augmentation results in an improvement of up to 5% in Dice compared to using real data alone. This study highlights the importance of considering image frequency in generative approaches for medical image synthesis and offers a promising approach to address data scarcity in medical imaging segmentation.
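
The Fourier domain adaptation step described above (in the spirit of FDA by Yang and Soatto) can be sketched as swapping the low-frequency amplitude spectrum of a synthetic image for that of a real one while keeping the synthetic phase; the window size `beta` is an assumed hyperparameter.

```python
import numpy as np

def fourier_domain_adaptation(src, ref, beta=0.05):
    """Transfer the low-frequency Fourier amplitude (image 'style') of a
    reference image onto a source image, keeping the source phase.
    beta sets the half-width of the swapped low-frequency window."""
    fs, fr = np.fft.fft2(src), np.fft.fft2(ref)
    pha_s = np.angle(fs)
    # Centre the spectra so low frequencies sit in the middle
    amp_s = np.fft.fftshift(np.abs(fs))
    amp_r = np.fft.fftshift(np.abs(fr))
    h, w = src.shape
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    amp_s[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_r[ch - b:ch + b + 1, cw - b:cw + b + 1]
    out = np.fft.ifft2(np.fft.ifftshift(amp_s) * np.exp(1j * pha_s))
    return np.real(out)
```

Because only low frequencies are exchanged, the anatomical content (carried largely by phase and high frequencies) is preserved while the global intensity style shifts toward the reference.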


Subjects
Brain Neoplasms, Glioma, Humans, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Brain Neoplasms/diagnostic imaging, Algorithms, Magnetic Resonance Imaging/methods
5.
IEEE J Biomed Health Inform ; 28(3): 1161-1172, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37878422

ABSTRACT

We introduce LYSTO, the Lymphocyte Assessment Hackathon, which was held in conjunction with the MICCAI 2019 Conference in Shenzhen (China). The competition required participants to automatically assess the number of lymphocytes, in particular T-cells, in images of colon, breast, and prostate cancer stained with CD3 and CD8 immunohistochemistry. Unlike other challenges set up in medical image analysis, LYSTO participants were given only a few hours to address this problem. In this paper, we describe the goal and multi-phase organization of the hackathon, the proposed methods, and the on-site results. Additionally, we present post-competition results, where we show how the presented methods perform on an independent set of lung cancer slides, which was not part of the initial competition, as well as a comparison on lymphocyte assessment between the presented methods and a panel of pathologists. We show that some of the participants were capable of achieving pathologist-level performance at lymphocyte assessment. After the hackathon, LYSTO was left as a lightweight plug-and-play benchmark dataset on the grand-challenge website, together with an automatic evaluation platform.


Assuntos
Benchmarking , Neoplasias da Próstata , Masculino , Humanos , Linfócitos , Mama , China
6.
Photoacoustics ; 33: 100544, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37671317

ABSTRACT

Spectral photoacoustic imaging (sPAI) is an emerging modality that allows real-time, non-invasive, and radiation-free assessment of tissue, benefiting from its optical contrast. sPAI is ideal for morphology assessment in arterial plaques, where plaque composition provides relevant information on plaque progression and its vulnerability. However, since sPAI is affected by spectral coloring, general spectroscopy unmixing techniques cannot provide reliable identification of such complicated sample composition. In this study, we employ a convolutional neural network (CNN) for the classification of plaque composition using sPAI. For this study, nine carotid endarterectomy plaques were imaged and then annotated and validated using multiple histological stains. Our results show that a CNN can effectively differentiate constituent regions within plaques without requiring fluence or spectra correction, with the potential to eventually support vulnerability assessment in plaques.

7.
NMR Biomed ; 36(12): e5019, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37622473

ABSTRACT

At ultrahigh field strengths, images of the body are hampered by B1-field inhomogeneities. These present themselves as inhomogeneous signal intensity and contrast, which is regarded as a "bias field" on the ideal image. Current bias field correction methods, such as the N4 algorithm, assume a low-frequency bias field, which is not sufficiently valid for T2w images at 7 T. In this work we propose a deep learning-based bias field correction method to address this issue for T2w prostate images at 7 T. By combining simulated B1-field distributions of a multi-transmit setup at 7 T with T2w prostate images at 1.5 T, we generated artificial 7 T images for which the homogeneous counterpart was available. Using these paired data, we trained a neural network to correct the bias field. We predicted either a homogeneous image (t-Image neural network) or the bias field (t-Biasf neural network). In addition, we experimented with the single-channel images of the receive array and the corresponding sum of magnitudes of this array as the input image. Testing was carried out on four datasets: the test split of the synthetic training dataset, volunteer and patient images at 7 T, and patient images at 3 T. For the test split, the performance was evaluated using the structural similarity index measure, Wasserstein distance, and root mean squared error. For all other test data, the features Homogeneity and Energy derived from the gray level co-occurrence matrix (GLCM) were used to quantify the improvement. For each test dataset, the proposed method was compared with the current gold standard: the N4 algorithm. Additionally, a questionnaire was filled out by two clinical experts to assess the homogeneity and contrast preservation of the 7 T datasets. All four proposed neural networks were able to substantially reduce the B1-field-induced inhomogeneities in T2w 7 T prostate images.
By visual inspection, the images clearly look more homogeneous, which is confirmed by the increase in Homogeneity and Energy in the GLCM and by the questionnaire scores from two clinical experts. Occasionally, changes in contrast within the prostate were observed, although much less for the t-Biasf network than for the t-Image network. Further, results on the 3 T dataset demonstrate that the proposed learning-based approach is on par with the N4 algorithm. The results demonstrate that the trained networks were capable of reducing the B1-field-induced inhomogeneities for prostate imaging at 7 T. The quantitative evaluation showed that all proposed learning-based correction techniques outperformed the N4 algorithm. Of the investigated methods, the single-channel t-Biasf neural network proves most reliable for bias field correction.
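
The GLCM-derived Homogeneity and Energy features used for evaluation can be sketched as follows; this minimal version quantizes intensities in [0, 1) and counts horizontally adjacent pixel pairs only, whereas practical implementations typically average over several offsets and directions.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Homogeneity and Energy from a gray-level co-occurrence matrix
    computed for horizontally adjacent pixels, symmetric and normalized."""
    q = np.minimum((np.asarray(img, float) * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
        glcm[b, a] += 1  # symmetric counts
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    homogeneity = float((p / (1.0 + np.abs(i - j))).sum())
    energy = float(np.sqrt((p ** 2).sum()))
    return homogeneity, energy
```

Both features reach 1 for a perfectly uniform image and decrease as neighbouring gray levels diverge, which is why they rise when a bias field is removed.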


Subjects
Deep Learning, Prostate, Male, Humans, Prostate/diagnostic imaging, Neural Networks, Computer, Algorithms, Image Processing, Computer-Assisted/methods
8.
Comput Biol Med ; 161: 106973, 2023 07.
Article in English | MEDLINE | ID: mdl-37209615

ABSTRACT

Cardiac magnetic resonance (CMR) image segmentation is an integral step in the analysis of cardiac function and diagnosis of heart related diseases. While recent deep learning-based approaches in automatic segmentation have shown great promise to alleviate the need for manual segmentation, most of these are not applicable to realistic clinical scenarios. This is largely due to training on mainly homogeneous datasets, without variation in acquisition, which typically occurs in multi-vendor and multi-site settings, as well as pathological data. Such approaches frequently exhibit a degradation in prediction performance, particularly on outlier cases commonly associated with difficult pathologies, artifacts and extensive changes in tissue shape and appearance. In this work, we present a model aimed at segmenting all three cardiac structures in a multi-center, multi-disease and multi-view scenario. We propose a pipeline, addressing different challenges with segmentation of such heterogeneous data, consisting of heart region detection, augmentation through image synthesis and a late-fusion segmentation approach. Extensive experiments and analysis demonstrate the ability of the proposed approach to tackle the presence of outlier cases during both training and testing, allowing for better adaptation to unseen and difficult examples. Overall, we show that the effective reduction of segmentation failures on outlier cases has a positive impact on not only the average segmentation performance, but also on the estimation of clinical parameters, leading to a better consistency in derived metrics.


Subjects
Algorithms, Heart Diseases, Humans, Magnetic Resonance Imaging/methods, Heart/diagnostic imaging, Radiography, Image Processing, Computer-Assisted/methods
9.
Eur J Cancer ; 185: 167-177, 2023 05.
Article in English | MEDLINE | ID: mdl-36996627

ABSTRACT

INTRODUCTION: Predicting checkpoint inhibitor treatment outcomes in melanoma is a relevant task, due to the unpredictable and potentially fatal toxicity and high costs for society. However, accurate biomarkers for treatment outcomes are lacking. Radiomics is a technique to quantitatively capture tumour characteristics on readily available computed tomography (CT) imaging. The purpose of this study was to investigate the added value of radiomics for predicting clinical benefit from checkpoint inhibitors in melanoma in a large, multicenter cohort. METHODS: Patients who received first-line anti-PD1±anti-CTLA4 treatment for advanced cutaneous melanoma were retrospectively identified from nine participating hospitals. For every patient, up to five representative lesions were segmented on baseline CT, and radiomics features were extracted. A machine learning pipeline was trained on the radiomics features to predict clinical benefit, defined as stable disease for more than 6 months or response per RECIST 1.1 criteria. This approach was evaluated using leave-one-centre-out cross-validation and compared to a model based on previously discovered clinical predictors. Lastly, a combination model was built on the radiomics and clinical models. RESULTS: A total of 620 patients were included, of which 59.2% experienced clinical benefit. The radiomics model achieved an area under the receiver operator characteristic curve (AUROC) of 0.607 [95% CI, 0.562-0.652], lower than that of the clinical model (AUROC=0.646 [95% CI, 0.600-0.692]). The combination model yielded no improvement over the clinical model in terms of discrimination (AUROC=0.636 [95% CI, 0.592-0.680]) or calibration. The output of the radiomics model was significantly correlated with three out of five input variables of the clinical model (p < 0.001). DISCUSSION: The radiomics model achieved a moderate predictive value of clinical benefit, which was statistically significant.
However, a radiomics approach was unable to add value to a simpler clinical model, most likely due to the overlap in predictive information learned by both models. Future research should focus on the application of deep learning, spectral CT-derived radiomics, and a multimodal approach for accurately predicting benefit to checkpoint inhibitor treatment in advanced melanoma.
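
The leave-one-centre-out evaluation mentioned in the methods can be sketched as follows: each fold holds out all patients from one hospital, approximating external validation per site. The helper below is a hypothetical index generator, not the study's pipeline.

```python
import numpy as np

def leave_one_centre_out_splits(centres):
    """Yield (centre, train_idx, test_idx) triples where each fold holds
    out all patients from one centre, so every model is tested on a site
    it never saw during training."""
    centres = np.asarray(centres)
    for c in np.unique(centres):
        test = np.where(centres == c)[0]
        train = np.where(centres != c)[0]
        yield c, train, test
```

scikit-learn's `LeaveOneGroupOut` implements the same splitting scheme when patients are grouped by centre.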


Assuntos
Melanoma , Neoplasias Cutâneas , Humanos , Melanoma/diagnóstico por imagem , Melanoma/tratamento farmacológico , Neoplasias Cutâneas/diagnóstico por imagem , Neoplasias Cutâneas/tratamento farmacológico , Estudos Retrospectivos , Resultado do Tratamento , Tomografia Computadorizada por Raios X
10.
Med Image Anal ; 84: 102688, 2023 02.
Article in English | MEDLINE | ID: mdl-36493702

ABSTRACT

Deep learning-based segmentation methods provide an effective and automated way for assessing the structure and function of the heart in cardiac magnetic resonance (CMR) images. However, despite their state-of-the-art performance on images acquired from the same source (same scanner or scanner vendor) as the images used during training, their performance degrades significantly on images coming from different domains. A straightforward approach to tackle this issue consists of acquiring large quantities of multi-site and multi-vendor data, which is practically infeasible. Generative adversarial networks (GANs) for image synthesis present a promising solution for tackling data limitations in medical imaging and addressing the generalization capability of segmentation models. In this work, we explore the usability of synthesized short-axis CMR images generated using a segmentation-informed conditional GAN to improve the robustness of heart cavity segmentation models in a variety of different settings. The GAN is trained on paired real images and corresponding segmentation maps belonging to both the heart and the surrounding tissue, reinforcing the synthesis of semantically consistent and realistic images. First, we evaluate the segmentation performance of a model trained solely with synthetic data and show that it only slightly underperforms compared to the baseline trained with real data. By further combining real with synthetic data during training, we observe a substantial improvement in segmentation performance (up to 4% and 40% in terms of Dice score and Hausdorff distance) across multiple datasets collected from various sites and scanners. This is additionally demonstrated across state-of-the-art 2D and 3D segmentation networks, and the obtained results indicate the potential of the proposed method in tackling the presence of domain shift in medical data.
Finally, we thoroughly analyze the quality of synthetic data and its ability to replace real MR images during training, and provide insight into important aspects of utilizing synthetic images for segmentation.


Subjects
Deep Learning, Humans, Magnetic Resonance Imaging, Heart/diagnostic imaging, Tomography, X-Ray Computed, Image Processing, Computer-Assisted/methods
11.
IEEE Trans Med Imaging ; 42(3): 726-738, 2023 03.
Article in English | MEDLINE | ID: mdl-36260571

ABSTRACT

One of the limiting factors for the development and adoption of novel deep-learning (DL) based medical image analysis methods is the scarcity of labeled medical images. Medical image simulation and synthesis can provide solutions by generating ample training data with corresponding ground truth labels. Despite recent advances, generated images demonstrate limited realism and diversity. In this work, we develop a flexible framework for simulating cardiac magnetic resonance (MR) images with variable anatomical and imaging characteristics for the purpose of creating a diversified virtual population. We advance previous works on both cardiac MR image simulation and anatomical modeling to increase the realism in terms of both image appearance and underlying anatomy. To diversify the generated images, we define parameters: 1) to alter the anatomy, 2) to assign MR tissue properties to various tissue types, and 3) to manipulate the image contrast via acquisition parameters. The proposed framework is optimized to generate a substantial number of cardiac MR images with ground truth labels suitable for downstream supervised tasks. A database of virtual subjects is simulated and its usefulness for aiding a DL segmentation method is evaluated. Our experiments show that a model trained entirely on simulated images can perform comparably to a model trained on real images for heart cavity segmentation in mid-ventricular slices. Moreover, such data can be used in addition to classical augmentation for boosting the performance when training data is limited, particularly by increasing the contrast and anatomical variation, leading to better regularization and generalization. The database is publicly available at https://osf.io/bkzhm/ and the simulation code will be available at https://github.com/sinaamirrajab/CMRI.


Subjects
Heart, Magnetic Resonance Imaging, Humans, Heart/diagnostic imaging, Computer Simulation
12.
MAGMA ; 36(1): 79-93, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35904612

ABSTRACT

OBJECTIVES: Diffusion-weighted MRI can assist preoperative planning by reconstructing the trajectory of eloquent fiber pathways, such as the corticospinal tract (CST). However, accurate reconstruction of the full extent of the CST remains challenging with existing tractography methods. We suggest a novel tractography algorithm exploiting unused fiber orientations to produce more complete and reliable results. METHODS: Our novel approach, referred to as multi-level fiber tractography (MLFT), reconstructs fiber pathways by progressively considering previously unused fiber orientations at multiple levels of tract propagation. Anatomical priors are used to minimize the number of false-positive pathways. The MLFT method was evaluated on synthetic data and in vivo data by reconstructing the CST while compared to conventional tractography approaches. RESULTS: The radial extent of MLFT reconstructions is comparable to that of probabilistic reconstruction: [Formula: see text] for the left and [Formula: see text] for the right hemisphere according to Wilcoxon test, while achieving significantly higher topography preservation compared to probabilistic tractography: [Formula: see text]. DISCUSSION: MLFT provides a novel way to reconstruct fiber pathways by adding the capability of including branching pathways in fiber tractography. Thanks to its robustness, feasible reconstruction extent and topography preservation, our approach may assist in clinical practice as well as in virtual dissection studies.


Subjects
Diffusion Tensor Imaging, Image Processing, Computer-Assisted, Diffusion Tensor Imaging/methods, Image Processing, Computer-Assisted/methods, Diffusion Magnetic Resonance Imaging/methods, Algorithms, Pyramidal Tracts/diagnostic imaging
13.
Comput Med Imaging Graph ; 101: 102123, 2022 10.
Article in English | MEDLINE | ID: mdl-36174308

ABSTRACT

Synthesis of a large set of high-quality medical images with variability in anatomical representation and image appearance has the potential to provide solutions for tackling the scarcity of properly annotated data in medical image analysis research. In this paper, we propose a novel framework consisting of image segmentation and synthesis based on mask-conditional GANs for generating high-fidelity and diverse Cardiac Magnetic Resonance (CMR) images. The framework consists of two modules: i) a segmentation module trained using a physics-based simulated database of CMR images to provide multi-tissue labels on real CMR images, and ii) a synthesis module trained using pairs of real CMR images and corresponding multi-tissue labels, to translate input segmentation masks to realistic-looking cardiac images. The anatomy of synthesized images is based on labels, whereas the appearance is learned from the training images. We investigate the effects of the number of tissue labels, quantity of training data, and multi-vendor data on the quality of the synthesized images. Furthermore, we evaluate the effectiveness and usability of the synthetic data for a downstream task of training a deep-learning model for cardiac cavity segmentation in the scenarios of data replacement and augmentation. The results of the replacement study indicate that segmentation models trained with only synthetic data can achieve comparable performance to the baseline model trained with real data, indicating that the synthetic data captures the essential characteristics of its real counterpart. Furthermore, we demonstrate that augmenting real with synthetic data during training can significantly improve both the Dice score (maximum increase of 4%) and Hausdorff Distance (maximum reduction of 40%) for cavity segmentation, suggesting a good potential to aid in tackling medical data scarcity.


Subjects
Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Databases, Factual, Heart/diagnostic imaging, Image Processing, Computer-Assisted/methods
14.
Sci Rep ; 12(1): 15102, 2022 09 06.
Article in English | MEDLINE | ID: mdl-36068311

ABSTRACT

Breast cancer tumor grade is strongly associated with patient survival. In current clinical practice, pathologists assign tumor grade after visual analysis of tissue specimens. However, different studies show significant inter-observer variation in breast cancer grading. Computer-based breast cancer grading methods have been proposed but only work on specifically selected tissue areas and/or require labor-intensive annotations to be applied to new datasets. In this study, we trained and evaluated a deep learning-based breast cancer grading model that works on whole-slide histopathology images. The model was developed using whole-slide images from 706 young (< 40 years) invasive breast cancer patients with corresponding tumor grade (low/intermediate vs. high) and its constituents: nuclear grade, tubule formation, and mitotic rate. The performance of the model was evaluated using Cohen's kappa on an independent test set of 686 patients, using annotations by expert pathologists as ground truth. The predicted low/intermediate (n = 327) and high (n = 359) grade groups were used to perform survival analysis. The deep learning system distinguished low/intermediate versus high tumor grade with a Cohen's kappa of 0.59 (80% accuracy) compared to expert pathologists. In subsequent survival analysis, the two groups predicted by the system were found to have a significantly different overall survival (OS) and disease/recurrence-free survival (DRFS/RFS) (p < 0.05). Univariate Cox hazard regression analysis showed statistically significant hazard ratios (p < 0.05). After adjusting for clinicopathologic features and stratifying for molecular subtype, the hazard ratios showed a trend but lost statistical significance for all endpoints. In conclusion, we developed a deep learning-based model for automated grading of breast cancer on whole-slide images. The model distinguishes between low/intermediate and high grade tumors and finds a trend in the survival of the two predicted groups.
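
Cohen's kappa, the agreement statistic used here, corrects observed agreement for chance agreement; a minimal sketch for two label sequences:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two label sequences:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e the agreement expected by chance from the marginal rates."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    labels = np.unique(np.concatenate([a, b]))
    p_o = (a == b).mean()
    p_e = sum((a == l).mean() * (b == l).mean() for l in labels)
    return float((p_o - p_e) / (1 - p_e))
```

Kappa is 1 for perfect agreement, 0 for chance-level agreement, and negative when raters agree less than chance (the sketch assumes p_e < 1, i.e. at least two distinct labels occur).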


Subjects
Breast Neoplasms, Deep Learning, Breast Neoplasms/pathology, Female, Humans, Neoplasm Grading, Observer Variation, Pathologists, Survival Analysis
15.
Eur J Cancer ; 175: 60-76, 2022 11.
Article in English | MEDLINE | ID: mdl-36096039

ABSTRACT

BACKGROUND: Checkpoint inhibition has radically improved the perspective for patients with metastatic cancer, but predicting who will not respond with high certainty remains difficult. Imaging-derived biomarkers may be able to provide additional insights into the heterogeneity in tumour response between patients. In this systematic review, we aimed to summarise and qualitatively assess the current evidence on imaging biomarkers that predict response and survival in patients treated with checkpoint inhibitors in all cancer types. METHODS: PubMed and Embase were searched from database inception to 29th November 2021. Articles eligible for inclusion described baseline imaging predictive factors, radiomics and/or imaging machine learning models for predicting response and survival in patients with any kind of malignancy treated with checkpoint inhibitors. Risk of bias was assessed using the QUIPS and PROBAST tools and data were extracted. RESULTS: In total, 119 studies including 15,580 patients were selected. Of these, 73 investigated simple imaging factors and 45 investigated radiomic features or deep learning models. Predictors of worse survival were (i) higher tumour burden, (ii) presence of liver metastases, (iii) less subcutaneous adipose tissue, (iv) less dense muscle and (v) presence of symptomatic brain metastases. Hazard rate ratios did not exceed 2.00 for any predictor in the larger and higher quality studies. The added value of baseline fluorodeoxyglucose positron emission tomography parameters in predicting response to treatment was limited. Pilot studies of radioactive drug tracer imaging showed promising results. Reports on radiomics were almost unanimously positive, but numerous methodological concerns exist. CONCLUSIONS: There is well-supported evidence for several imaging biomarkers that can be used in clinical decision making.
Further research, however, is needed into biomarkers that can more accurately identify which patients will not benefit from checkpoint inhibition. Radiomics and radioactive drug labelling appear to be promising approaches for this purpose.


Subjects
Brain Neoplasms, Positron-Emission Tomography, Humans, Radiopharmaceuticals
16.
Biomed Opt Express ; 13(5): 2683-2694, 2022 May 01.
Article in English | MEDLINE | ID: mdl-35774322

ABSTRACT

Correct Descemet Membrane Endothelial Keratoplasty (DMEK) graft orientation is imperative for success of DMEK surgery, but intraoperative evaluation can be challenging. We present a method for automatic evaluation of the graft orientation in intraoperative optical coherence tomography (iOCT), exploiting the natural rolling behavior of the graft. The method encompasses a deep learning model for graft segmentation, post-processing to obtain a smooth line representation, and curvature calculations to determine graft orientation. For an independent test set of 100 iOCT-frames, the automatic method correctly identified graft orientation in 78 frames and obtained an area under the receiver operating characteristic curve (AUC) of 0.84. When we replaced the automatic segmentation with the manual masks, the AUC increased to 0.92, corresponding to an accuracy of 86%. In comparison, two corneal specialists correctly identified graft orientation in 90% and 91% of the iOCT-frames.
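The curvature step described above can be sketched generically: given the smoothed line representation of the graft as sampled (x, y) coordinates, the sign of the discrete curvature indicates which way the graft rolls. The function names and the mapping from curvature sign to orientation below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def discrete_curvature(x, y):
    """Signed curvature of a plane curve sampled at (x, y), via
    finite differences: k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def graft_orientation(x, y):
    """Classify orientation from the dominant curvature sign.
    Which sign corresponds to 'correct' depends on the image
    convention; the mapping here is an assumption for illustration."""
    k = discrete_curvature(x, y)
    return "correct" if np.mean(np.sign(k)) < 0 else "upside-down"
```

For a downward-bowing curve (negative curvature throughout) the classifier returns "correct"; for an upward-bowing one it returns "upside-down".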

17.
Neuroimage ; 257: 119327, 2022 08 15.
Article in English | MEDLINE | ID: mdl-35636227

ABSTRACT

Limitations in the accuracy of brain pathways reconstructed by diffusion MRI (dMRI) tractography have received considerable attention. While the technical advances spearheaded by the Human Connectome Project (HCP) led to significant improvements in dMRI data quality, it remains unclear how these data should be analyzed to maximize tractography accuracy. Over a period of two years, we have engaged the dMRI community in the IronTract Challenge, which aims to answer this question by leveraging a unique dataset. Macaque brains that have received both tracer injections and ex vivo dMRI at high spatial and angular resolution allow a comprehensive, quantitative assessment of tractography accuracy on state-of-the-art dMRI acquisition schemes. We find that, when analysis methods are carefully optimized, the HCP scheme can achieve accuracy similar to that of a more time-consuming, Cartesian-grid scheme. Importantly, we show that simple pre- and post-processing strategies can improve the accuracy and robustness of many tractography methods. Finally, we find that fiber configurations that go beyond crossing (e.g., fanning, branching) are the most challenging for tractography. The IronTract Challenge remains open and we hope that it can serve as a valuable validation tool for both users and developers of dMRI analysis methods.


Subjects
Connectome, White Matter, Brain/diagnostic imaging, Connectome/methods, Diffusion, Diffusion Magnetic Resonance Imaging/methods, Diffusion Tensor Imaging/methods, Humans, Image Processing, Computer-Assisted/methods
18.
Article in English | MEDLINE | ID: mdl-35452387

ABSTRACT

Lightweight segmentation models are becoming more popular for fast diagnosis on small and low cost medical imaging devices. This study focuses on the segmentation of the left ventricle (LV) in cardiac ultrasound (US) images. A new lightweight model, LV network (LVNet), is proposed for segmentation; it requires fewer parameters while improving segmentation performance in terms of Dice score (DS). The proposed model is compared with state-of-the-art methods, such as UNet, MiniNetV2, and fully convolutional dense dilated network (FCdDN). The proposed model comes with a post-processing pipeline that further enhances the segmentation results. In general, the training is done directly using the segmentation mask as the output and the US image as the input of the model. A new strategy for segmentation is also introduced in addition to the direct training method used. Compared with the UNet model, LVNet improves DS by up to 5% for segmentation with papillary (WP) muscles and by 18.5% when the papillary muscles are excluded. The proposed model requires only 5% of the memory required by a UNet model. LVNet achieves a better trade-off between the number of parameters and its segmentation performance as compared with other conventional models. The developed codes are available at https://github.com/navchetanawasthi/Left_Ventricle_Segmentation.
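The Dice score (DS) used to compare these models is a standard overlap metric, DS = 2|A∩B| / (|A| + |B|). A minimal sketch for binary masks (not taken from the LVNet codebase):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice score for binary segmentation masks:
    DS = 2*|pred AND target| / (|pred| + |target|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Two masks of 3 foreground pixels each with 2 pixels overlapping give DS = 4/6 ≈ 0.667; identical masks give DS ≈ 1.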


Subjects
Heart Ventricles, Image Processing, Computer-Assisted, Echocardiography, Heart Ventricles/diagnostic imaging, Image Processing, Computer-Assisted/methods, Muscles, Ultrasonography
19.
Phys Med Biol ; 67(2)2022 01 19.
Article in English | MEDLINE | ID: mdl-34891142

ABSTRACT

Breathing motion can displace internal organs by up to several cm; as such, it is a primary factor limiting image quality in medical imaging. Motion can also complicate matters when trying to fuse images from different modalities, acquired at different locations and/or on different days. Currently available devices for monitoring breathing motion often do so indirectly, by detecting changes in the outline of the torso rather than the internal motion itself, and these devices are often fixed to floors, ceilings or walls, and thus cannot accompany patients from one location to another. We have developed small ultrasound-based sensors, referred to as 'organ configuration motion' (OCM) sensors, that attach to the skin and provide rich motion-sensitive information. In the present work we tested the ability of OCM sensors to enable respiratory gating during in vivo PET imaging. A motion phantom involving an FDG solution was assembled, and two cancer patients scheduled for a clinical PET/CT exam were recruited for this study. OCM signals were used to help reconstruct phantom and in vivo data into time series of motion-resolved images. As expected, the motion-resolved images captured the underlying motion. In Patient #1, a single large lesion proved to be mostly stationary through the breathing cycle. However, in Patient #2, several small lesions were mobile during breathing, and our proposed new approach captured their breathing-related displacements. In summary, a relatively inexpensive hardware solution was developed here for respiration monitoring. Because the proposed sensors attach to the skin, as opposed to walls or ceilings, they can accompany patients from one procedure to the next, potentially allowing data gathered in different places and at different times to be combined and compared in ways that account for breathing motion.
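The gating idea, grouping acquired data by the state of a breathing signal before reconstructing each group separately, can be illustrated with a generic amplitude-binning sketch. `amplitude_gate` is a hypothetical helper for illustration, not the authors' OCM processing pipeline:

```python
import numpy as np

def amplitude_gate(signal, n_bins=4):
    """Assign each time point of a 1-D respiratory signal to one of
    n_bins amplitude bins with (approximately) equal occupancy, so that
    data acquired within a bin share a similar breathing state and can
    be reconstructed together into one motion-resolved image."""
    # Quantile edges give roughly equal counts per bin
    edges = np.quantile(signal, np.linspace(0.0, 1.0, n_bins + 1))
    # Interior edges only: digitize then yields bin indices 0..n_bins-1
    return np.clip(np.digitize(signal, edges[1:-1]), 0, n_bins - 1)
```

For a sinusoidal surrogate signal, the four bins each receive roughly a quarter of the samples, with the extreme bins corresponding to end-inspiration and end-expiration.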


Subjects
Multimodal Imaging, Positron Emission Tomography Computed Tomography, Humans, Motion (Physics), Phantoms, Imaging, Positron-Emission Tomography/methods
20.
Healthcare (Basel) ; 11(1)2022 Dec 30.
Article in English | MEDLINE | ID: mdl-36611583

ABSTRACT

Ultrasound (US) imaging is a medical imaging modality that uses the reflection of sound in the range of 2-18 MHz to image internal body structures. In US, the frequency bandwidth (BW) is directly associated with image resolution. BW is a property of the transducer, and more bandwidth comes at a higher cost. Thus, methods that can transform strongly bandlimited ultrasound data into broadband data are essential. In this work, we propose a deep learning (DL) technique to improve the image quality for a given bandwidth by learning features provided by broadband data of the same field of view. To this end, the performance of several DL architectures and conventional state-of-the-art techniques for image quality improvement and artifact removal has been compared on in vitro US datasets. Two training losses have been utilized on three different architectures: a super resolution convolutional neural network (SRCNN), U-Net, and a residual encoder decoder network (REDNet) architecture. The models have been trained to transform low-bandwidth image reconstructions into high-bandwidth image reconstructions, to reduce artifacts, and to make the reconstructions visually more attractive. Experiments were performed for 20%, 40%, and 60% fractional bandwidth on the original images and showed improvements as high as 45.5% in RMSE and 3.85 dB in PSNR in datasets with a 20% bandwidth limitation.
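The reported metrics are standard image-quality measures; a minimal sketch of RMSE and PSNR for images scaled to a known peak value (an assumption for illustration, not the paper's evaluation code):

```python
import numpy as np

def rmse(ref, img):
    """Root-mean-square error between a reference and a test image."""
    diff = ref.astype(float) - img.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB: 20*log10(peak / RMSE)."""
    return 20.0 * np.log10(peak / rmse(ref, img))
```

A 45.5% RMSE improvement then corresponds to (rmse_before - rmse_after) / rmse_before = 0.455, and a PSNR gain of 3.85 dB to the difference of the two `psnr` values.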
