Results 1 - 7 of 7
1.
Med Image Anal; 97: 103269, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39024973

ABSTRACT

Lesion volume is an important predictor of prognosis in breast cancer. However, it is currently impossible to compute lesion volumes accurately from digital mammography data, the most popular and readily available imaging modality for breast cancer. We take a step towards more accurate lesion volume measurement on digital mammograms by developing a model that estimates lesion volumes on processed mammograms. Processed mammograms are the images routinely used by radiologists in clinical practice as well as in breast cancer screening, and are readily available in medical centers. They are obtained from raw mammograms, the X-ray data coming directly from the scanner, by applying vendor-specific non-linear transformations. At the core of our volume estimation method is a physics-based algorithm for measuring lesion volumes on raw mammograms. We extend this algorithm to processed mammograms via a deep learning image-to-image translation model that produces synthetic raw mammograms from processed mammograms in a multi-vendor setting. We assess the reliability and validity of our method using a dataset of 1778 mammograms with an annotated mass. First, we investigate the correlation between lesion volumes computed from the mediolateral oblique and craniocaudal views, obtaining a Pearson correlation of 0.93 [95% confidence interval (CI) 0.92 - 0.93]. Second, we compare lesion volumes from true and synthetic raw data, obtaining a Pearson correlation of 0.998 [95% CI 0.998 - 0.998]. Finally, for a subset of 100 mammograms with a malignant mass and a concurrent MRI examination available, we analyze the agreement between lesion volume on mammography and MRI, obtaining an intraclass correlation coefficient of 0.81 [95% CI 0.73 - 0.87] for consistency and 0.78 [95% CI 0.66 - 0.86] for absolute agreement. In conclusion, we developed an algorithm to measure mammographic lesion volume that reached excellent reliability and good validity when using MRI as ground truth. The algorithm may play a role in lesion characterization and breast cancer prognostication on mammograms.
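The reliability figures above rest on Pearson correlations with confidence intervals between paired volume estimates. The sketch below is purely illustrative (not the authors' code): it computes a Pearson correlation between volumes from two views with a percentile-bootstrap 95% CI; all variable names and values are made up.

```python
# Illustrative only (not the authors' code): Pearson correlation between
# paired lesion-volume estimates from two views, with a percentile-bootstrap
# 95% confidence interval. All variable names and values are made up.
import numpy as np

def pearson_with_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Pearson r for paired measurements plus a percentile bootstrap CI."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))   # resample pairs
    boot_r = np.array([np.corrcoef(x[i], y[i])[0, 1] for i in idx])
    lo, hi = np.percentile(boot_r, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return r, (lo, hi)

# Toy lesion volumes (cm^3) from the mediolateral oblique and craniocaudal views:
vol_mlo = np.array([1.2, 3.4, 0.8, 2.1, 5.0, 1.7, 2.9, 4.2])
vol_cc  = np.array([1.1, 3.6, 0.9, 2.0, 4.7, 1.9, 3.1, 4.0])
print(pearson_with_ci(vol_mlo, vol_cc))
```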


Subject(s)
Algorithms, Breast Neoplasms, Mammography, Mammography/methods, Humans, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology, Female, Reproducibility of Results, Radiographic Image Interpretation, Computer-Assisted/methods, Tumor Burden, Deep Learning
2.
Med Phys; 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38843540

ABSTRACT

BACKGROUND: Computer algorithms that simulate lower-dose computed tomography (CT) images from clinical-dose images are widely available. However, most operate in the projection domain and assume access to the reconstruction method. Access to commercial reconstruction methods is often not available in medical research, making image-domain noise simulation methods useful. However, the introduction of non-linear reconstruction methods, such as iterative and deep learning-based reconstruction, makes noise insertion in the image domain intractable, as the noise textures cannot be determined analytically. PURPOSE: To develop a deep learning-based image-domain method to generate low-dose CT images from clinical-dose CT (CDCT) images for non-linear reconstruction methods. METHODS: We propose a fully image domain-based method, utilizing a series of three convolutional neural networks (CNNs), which, respectively, denoise CDCT images, predict the standard deviation map of the low-dose image, and generate the noise power spectra (NPS) of local patches throughout the low-dose image. All three models have U-Net-based architectures and are partly or fully three-dimensional. As a use case for this study, and with no loss of generality, we use paired low-dose and clinical-dose brain CT scans. A dataset of 326 paired scans was retrospectively obtained. All images were acquired with a wide-area detector clinical system and reconstructed using its standard clinical iterative algorithm. Each pair was registered using rigid registration to correct for motion between acquisitions. The data was randomly partitioned into training (251 samples), validation (25 samples), and test (50 samples) sets. The performance of each of the three CNNs was validated separately. For the denoising CNN, the local decrease in standard deviation and the bias were determined. For the standard deviation map CNN, the real and estimated standard deviations were compared locally. Finally, for the NPS CNN, the NPS of the synthetic and real low-dose noise were compared inside and outside the skull. Two proof-of-concept denoising studies were performed to determine whether the performance of a CNN- or gradient-based denoising filter differed between the synthetic and the real low-dose data. RESULTS: The denoising network decreased the noise in the cerebrospinal fluid by a median factor of 1.71 and introduced a median bias of +0.7 HU. The network for standard deviation map estimation had a median error of +0.1 HU. The noise power spectrum estimation network was able to capture the anisotropic and shift-variant nature of the noise structure, showing good agreement between the synthetic and real low-dose noise and their corresponding power spectra. The two proof-of-concept denoising studies showed only a minimal difference in the standard deviation improvement ratio between the synthetic and real low-dose CT images, with median differences of 0.0 and +0.05 for the CNN- and gradient-based filters, respectively. CONCLUSION: The proposed method demonstrated good performance in generating synthetic low-dose brain CT scans without access to the projection data or the reconstruction method. It can generate multiple low-dose image realizations from one clinical-dose image, making it useful for validation, optimization, and repeatability studies of image-processing algorithms.
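As a hedged illustration of the kind of image-domain noise-insertion step this pipeline implies (not the paper's implementation), the sketch below adds correlated noise to a denoised image: white noise is shaped in the frequency domain by the square root of an NPS and then scaled locally by a standard-deviation map. In the paper these three inputs would come from the CNNs; here they are toy arrays.

```python
# Hedged sketch of an image-domain noise-insertion step (not the paper's code).
# White noise is shaped by the square root of a noise power spectrum (NPS) and
# scaled by a standard-deviation map; both are toy placeholders here.
import numpy as np

def synthesize_low_dose(denoised, std_map, nps, seed=0):
    """denoised, std_map: 2-D arrays of equal shape; nps: non-negative 2-D NPS."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(denoised.shape)
    shaped = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(nps)).real  # correlated noise
    shaped /= shaped.std()                     # unit variance before local scaling
    return denoised + std_map * shaped         # add locally scaled noise

denoised = np.zeros((64, 64))                  # toy denoised image (0 HU everywhere)
std_map = np.full(denoised.shape, 10.0)        # target noise std of 10 HU
nps = np.ones(denoised.shape)                  # flat (white) NPS for the toy case
print(synthesize_low_dose(denoised, std_map, nps).std())   # close to 10
```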

3.
Med Phys; 51(3): 2081-2095, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37656009

ABSTRACT

BACKGROUND: Simulated computed tomography (CT) images provide knowledge of the underlying ground truth and allow easy variation of imaging conditions, making them ideal for testing and optimizing new applications or algorithms. However, simulating all processes that affect CT images can result in simulations that are demanding in terms of processing time and computer memory, so it is of interest to determine how much the simulation can be simplified while still achieving realistic results. PURPOSE: To develop a scanner-specific CT simulation that uses physics-based simulations for the position-dependent effects and shift-invariant image-corruption methods for the detector effects, and to investigate the impact on image realism of introducing simplifications in the simulation process that lead to faster and less memory-demanding simulations. METHODS: To make the simulator realistic and scanner-specific, the spatial resolution, noise characteristics, and exposure-to-detector-output relationship of a clinical CT system were determined. The simulator includes a finite focal spot size, raytracing of the digital phantom, gantry rotation during projection acquisition, and finite detector element size. Previously published spectral models were used to model the spectrum for the given tube voltage. The integrated energy at each detector element was calculated using the Beer-Lambert law. The resulting angular projections were subsequently corrupted by the detector modulation transfer function (MTF) and by the addition of noise according to the noise power spectrum (NPS) and signal mean-variance relationship, which were measured for different scanner settings. The simulated sinograms were reconstructed on the clinical CT system and compared to real CT images in terms of CT numbers, noise magnitude using the standard deviation, noise frequency content using the NPS, and spatial resolution using the MTF throughout the field of view (FOV). The CT numbers were validated using a multi-energy CT phantom, the noise magnitude and frequency were validated with a water phantom, and the spatial resolution was validated with a tungsten wire. These metrics were compared at multiple scanner settings and locations in the FOV. Once validated, the simulation was simplified by reducing the level of subsampling of the focal spot area, the gantry rotation, and the detector pixel size, and the changes in the MTF were analyzed. RESULTS: The average relative errors for spatial resolution within and across image slices, noise magnitude, and noise frequency content within and across slices were 3.4%, 3.3%, 4.9%, 3.9%, and 6.2%, respectively. The average absolute difference in CT numbers was 10.2 HU and the maximum was 22.5 HU. The simplification experiments showed that all subsampling except the angular subsampling can be avoided, with a maximum error of 16.3% in the frequency at 10% MTF. CONCLUSION: The scanner-specific CT simulation allows for the generation of realistic CT images by combining physics-based simulations for the position-dependent effects with image-corruption methods for the shift-invariant ones. Together with the available ground truth of the digital phantom, this results in a useful tool for quantitative analysis of reconstruction or post-processing algorithms. Some simulation simplifications reduce time and computing requirements with minimal loss of realism.
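The Beer-Lambert step named above can be sketched for a single ray. The example below is a rough placeholder, not the paper's calibrated scanner model: the spectrum bins, relative photon counts, and water attenuation values are approximate assumptions for illustration only.

```python
# Minimal sketch of a Beer-Lambert detector-signal model for a single ray
# through a homogeneous water path. Spectrum and attenuation values are rough
# placeholders, not the paper's calibrated scanner models.
import numpy as np

energies = np.array([40.0, 60.0, 80.0, 100.0])   # keV bins (placeholder spectrum)
photons  = np.array([0.2, 0.4, 0.3, 0.1])        # relative photon counts per bin
mu_water = np.array([0.27, 0.21, 0.18, 0.17])    # approx. attenuation of water, 1/cm

def detected_energy(path_length_cm):
    """Energy integrated at one detector element for a given water path length."""
    transmitted = photons * np.exp(-mu_water * path_length_cm)   # Beer-Lambert law
    return float(np.sum(transmitted * energies))                 # energy-integrating detector

print(detected_energy(0.0), detected_energy(10.0))   # unattenuated vs. 10 cm of water
```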


Subject(s)
Algorithms, Tomography, X-Ray Computed, Tomography, X-Ray Computed/methods, Computer Simulation, Phantoms, Imaging
4.
Med Phys; 50(12): 7579-7593, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37846969

ABSTRACT

BACKGROUND: Cone beam computed tomography (CBCT) plays an important role in many medical fields. Unfortunately, the potential of this imaging modality is hampered by lower image quality compared to conventional CT, and producing accurate reconstructions remains challenging. Much recent research has been directed towards reconstruction methods relying on deep learning, which have shown great promise for various imaging modalities. However, practical application of deep learning to CBCT reconstruction is complicated by several issues, such as the exceedingly high memory cost of deep learning methods when working with fully 3D data. Additionally, deep learning methods proposed in the literature are often trained and evaluated only on data from a specific region of interest, raising concerns about a possible lack of generalization to other regions. PURPOSE: In this work, we aim to address these limitations and propose LIRE, a learned invertible primal-dual iterative scheme for CBCT reconstruction. METHODS: In LIRE, we employ a U-Net architecture in each primal block and a residual convolutional neural network (CNN) architecture in each dual block. Memory requirements of the network are substantially reduced, while preserving its expressive power, through a combination of invertible residual primal-dual blocks and patch-wise computations inside each of the blocks during both the forward and the backward pass. These techniques enable us to train on data with isotropic 2 mm voxel spacing, clinically relevant projection count, and detector panel resolution on current hardware with 24 GB video random access memory (VRAM). RESULTS: Two LIRE models, for the small and for the large field-of-view (FoV) setting, were trained and validated on a set of 260 + 22 thorax CT scans and tested using a set of 142 thorax CT scans plus an out-of-distribution dataset of 79 head and neck CT scans. For both settings, our method surpasses the classical methods and the deep learning baselines on both test sets. On the thorax CT set, our method achieves a peak signal-to-noise ratio (PSNR) of 33.84 ± 2.28 for the small FoV setting and 35.14 ± 2.69 for the large FoV setting; the U-Net baseline achieves a PSNR of 33.08 ± 1.75 and 34.29 ± 2.71, respectively. On the head and neck CT set, our method achieves a PSNR of 39.35 ± 1.75 for the small FoV setting and 41.21 ± 1.41 for the large FoV setting; the U-Net baseline achieves a PSNR of 33.08 ± 1.75 and 34.29 ± 2.71, respectively. Additionally, we demonstrate that LIRE can be fine-tuned to reconstruct high-resolution CBCT data with the same geometry but 1 mm voxel spacing and higher detector panel resolution, where it outperforms the U-Net baseline as well. CONCLUSIONS: Learned invertible primal-dual schemes with additional memory optimizations can be trained to reconstruct CBCT volumes directly from the projection data with clinically relevant geometry and resolution. Such methods can offer better reconstruction quality and generalization compared to classical deep learning baselines.
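To make the unrolled primal-dual structure above concrete, the following is a heavily simplified, non-learned sketch under stated assumptions: a toy matrix stands in for the CBCT projector, and plain data-consistency steps stand in for LIRE's U-Net primal blocks and residual-CNN dual blocks. It only illustrates the alternation between projection-space and image-space updates.

```python
# Structural sketch only: alternating dual (projection-space) and primal
# (image-space) updates around a toy linear "projector" A. In LIRE these
# updates are learned CNNs; here they are simple gradient-style corrections.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))        # toy forward projector (not CBCT geometry)
x_true = np.ones(10)                     # toy "volume"
y = A @ x_true                           # simulated projection data

x = np.zeros(10)                         # primal variable (image estimate)
for _ in range(200):                     # unrolled iterations
    h = A @ x - y                        # dual block: residual in projection space
    x = x - 0.02 * (A.T @ h)             # primal block: correction in image space
print(np.linalg.norm(A @ x - y))         # data-consistency residual shrinks toward 0
```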


Subject(s)
Cone-Beam Computed Tomography, Image Processing, Computer-Assisted, Image Processing, Computer-Assisted/methods, Cone-Beam Computed Tomography/methods, Tomography, X-Ray Computed, Neural Networks, Computer, Signal-To-Noise Ratio, Phantoms, Imaging
5.
Front Neurosci; 16: 919186, 2022.
Article in English | MEDLINE | ID: mdl-35873808

ABSTRACT

Deep-learning-based brain magnetic resonance imaging (MRI) reconstruction methods have the potential to accelerate the MRI acquisition process. Nevertheless, the scientific community lacks appropriate benchmarks to assess the reconstruction quality of high-resolution brain images and to evaluate how these algorithms behave in the presence of small, but expected, data distribution shifts. The multi-coil MRI (MC-MRI) reconstruction challenge provides a benchmark aimed at addressing these issues, using a large dataset of high-resolution, three-dimensional, T1-weighted MRI scans. The challenge has two primary goals: (1) to compare different MRI reconstruction models on this dataset and (2) to assess the generalizability of these models to data acquired with a different number of receiver coils. In this paper, we describe the challenge experimental design and summarize the results of a set of baseline and state-of-the-art brain MRI reconstruction models. We provide relevant comparative information on the current MRI reconstruction state of the art and highlight the challenges of obtaining generalizable models, which are required prior to broader clinical adoption. The MC-MRI benchmark data, evaluation code, and current challenge leaderboard are publicly available. They provide an objective performance assessment for future developments in the field of brain MRI reconstruction.
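As a point of reference for the multi-coil setting described above, the sketch below shows a conventional non-learned baseline (not one of the challenge models): a per-coil inverse FFT followed by a root-sum-of-squares combination across receiver coils. The array shape and values are purely illustrative.

```python
# Illustrative baseline for multi-coil MRI reconstruction (not a challenge
# entry): inverse FFT of each coil's k-space, then root-sum-of-squares.
import numpy as np

def rss_reconstruction(kspace):
    """kspace: complex array of shape (num_coils, ny, nx) -> magnitude image (ny, nx)."""
    coil_images = np.fft.ifft2(kspace, axes=(-2, -1))           # per-coil images
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))    # combine across coils

rng = np.random.default_rng(0)
kspace = rng.standard_normal((12, 64, 64)) + 1j * rng.standard_normal((12, 64, 64))
print(rss_reconstruction(kspace).shape)   # (64, 64)
```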

6.
Diagnostics (Basel); 12(7), 2022 Jul 11.
Article in English | MEDLINE | ID: mdl-35885594

ABSTRACT

Automatic breast and fibro-glandular tissue (FGT) segmentation in breast MRI allows for efficient and accurate calculation of breast density. The U-Net architecture, either 2D or 3D, has already been shown to be effective at addressing this segmentation problem in breast MRI. However, the lack of publicly available datasets for this task has forced several authors to rely on internal datasets composed of either acquisitions without fat suppression (WOFS) or with fat suppression (FS), limiting the generalization of the approach. To solve this problem, we propose a data-centric approach that makes efficient use of the available data. By collecting a dataset of T1-weighted breast MRI acquisitions acquired with the Dixon method, we train a network on both T1 WOFS and FS acquisitions while utilizing the same ground-truth segmentation. Using the "plug-and-play" framework nnUNet, we achieve, on our internal test set, a Dice Similarity Coefficient (DSC) of 0.96 and 0.91 for WOFS breast and FGT segmentation and 0.95 and 0.86 for FS breast and FGT segmentation, respectively. On an external, publicly available dataset, a panel of breast radiologists rated the quality of our automatic segmentation at an average of 3.73 on a four-point scale, with an average percentage agreement of 67.5%.
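The Dice Similarity Coefficient reported above measures voxel-wise overlap between a predicted and a reference mask. A minimal, illustrative implementation is sketched below; the mask names and toy inputs are hypothetical.

```python
# Sketch of the Dice Similarity Coefficient used to report segmentation
# overlap above; inputs are binary masks of equal shape (names illustrative).
import numpy as np

def dice(pred, target, eps=1e-8):
    """DSC = 2|A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=bool); a[:2] = True    # toy predicted mask (rows 0-1)
b = np.zeros((4, 4), dtype=bool); b[1:3] = True   # toy reference mask (rows 1-2)
print(dice(a, b))   # 0.5: 4 overlapping pixels out of 8 + 8
```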

7.
Med Image Anal; 71: 102061, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33910108

ABSTRACT

The two-dimensional nature of mammography makes estimation of the overall breast density challenging and estimation of the true patient-specific radiation dose impossible. Digital breast tomosynthesis (DBT), a pseudo-3D technique, is now commonly used in breast cancer screening and diagnostics. Still, the severely limited third-dimension information in DBT has not, until now, been used to estimate the true breast density or the patient-specific dose. This study proposes a deep learning-based reconstruction algorithm for DBT specifically optimized for these tasks. The algorithm, which we name DBToR, is based on unrolling a proximal primal-dual optimization method, in which the proximal operators are replaced with convolutional neural networks and prior knowledge is included in the model. This extends previous work on a deep learning-based reconstruction model by providing both the primal and the dual blocks with breast thickness information, which is available in DBT. Training and testing of the model were performed using virtual patient phantoms from two different sources. Reconstruction performance, as well as accuracy in the estimation of breast density and radiation dose, was evaluated, showing high accuracy (density <±3%; dose <±20%) without bias and significantly improving on the current state of the art. This work also lays the groundwork for developing a deep learning-based reconstruction algorithm for the task of image interpretation by radiologists.
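Once a reconstruction such as the one described above is available, volumetric breast density can be summarized as the glandular fraction of the segmented breast volume. The sketch below is a hedged illustration of that final summary step only; the threshold, mask, and toy volume are hypothetical, and this is not the DBToR pipeline.

```python
# Illustrative only: breast density as the glandular fraction of a segmented
# breast in a (pseudo-)3-D reconstruction. Threshold and mask are hypothetical.
import numpy as np

def breast_density(volume, breast_mask, glandular_threshold):
    """Fraction of breast voxels whose value exceeds a glandular threshold."""
    breast_voxels = volume[breast_mask]
    return float(np.mean(breast_voxels > glandular_threshold))

rng = np.random.default_rng(0)
vol = rng.uniform(0.0, 1.0, size=(8, 64, 64))     # toy reconstructed volume
mask = np.ones(vol.shape, dtype=bool)             # toy whole-volume breast mask
print(breast_density(vol, mask, glandular_threshold=0.7))   # ~0.3 for uniform noise
```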


Subject(s)
Breast Neoplasms, Deep Learning, Breast/diagnostic imaging, Breast Density, Breast Neoplasms/diagnostic imaging, Female, Humans, Mammography, Radiation Dose