Results 1 - 12 of 12
1.
Phys Med Biol ; 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38830366

ABSTRACT

OBJECTIVE: In quantitative dynamic positron emission tomography (PET), time series of images, reflecting the tissue response to the arterial tracer supply, are reconstructed. This response is described by kinetic parameters, which are commonly determined on the basis of the tracer concentration in tissue and the arterial input function. In clinical routine, the latter is estimated by arterial blood sampling and analysis, which is a challenging process; it is therefore often attempted to derive the input function directly from the reconstructed PET images. However, a mathematical analysis of whether measurements of the arterial whole-blood activity concentration, and of the concentration of free, non-metabolized tracer in the arterial plasma, are necessary for successful kinetic parameter identification has been lacking. Here we aim to address this problem mathematically. APPROACH: We analytically consider the identification problem in simultaneous pharmacokinetic modeling of multiple regions of interest of dynamic PET data using the irreversible two-tissue compartment model. In addition, the situation of noisy measurements is addressed using Tikhonov regularization, and numerical simulations with a regularization approach are carried out to illustrate the analytical results in a synthetic application example. MAIN RESULTS: We provide mathematical proofs showing that, under reasonable assumptions, all metabolic tissue parameters can be uniquely identified without requiring additional blood samples to measure the arterial input function. A connection to noisy measurement data is made via a consistency result, showing that exact reconstruction of the ground-truth tissue parameters is stably maintained in the vanishing-noise limit. Furthermore, our numerical experiments suggest that an approximate reconstruction of kinetic parameters according to our analytic results is also possible in practice for moderate noise levels. SIGNIFICANCE: The analytical result, which holds in the idealized, noiseless scenario, suggests that for irreversible tracers, fully quantitative dynamic PET imaging is in principle possible without costly arterial blood sampling and metabolite analysis.
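For orientation, the irreversible two-tissue compartment model referred to above can be written in its standard form (taken from the general kinetic-modeling literature, not from the article itself), with C_p the metabolite-corrected arterial plasma input, C_b the whole-blood activity, C_1 and C_2 the free and trapped tissue compartments, and v_B the fractional blood volume:

\[
\frac{\mathrm{d}C_1(t)}{\mathrm{d}t} = K_1\,C_p(t) - (k_2 + k_3)\,C_1(t), \qquad
\frac{\mathrm{d}C_2(t)}{\mathrm{d}t} = k_3\,C_1(t),
\]
\[
C_\mathrm{PET}(t) = (1 - v_B)\,\bigl(C_1(t) + C_2(t)\bigr) + v_B\,C_b(t).
\]

Kinetic parameter identification then amounts to estimating (K_1, k_2, k_3, v_B) for each region from the measured C_PET; the question addressed in the article is whether C_p and C_b must be measured separately for this to be possible.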

2.
IEEE Trans Biomed Eng ; 70(8): 2310-2317, 2023 08.
Article in English | MEDLINE | ID: mdl-37022425

ABSTRACT

OBJECTIVE: To exploit accelerometry data for automatic, reliable, and prompt detection of spontaneous circulation during cardiac arrest, which is vital for patient survival yet practically challenging. METHODS: We developed a machine learning algorithm to automatically predict the circulatory state during cardiopulmonary resuscitation from 4-second snippets of accelerometry and electrocardiogram (ECG) data recorded during pauses of chest compressions in real-world defibrillator records. The algorithm was trained on 422 cases from the German Resuscitation Registry, for which ground-truth labels were created by manual annotation by physicians. It uses a kernelized Support Vector Machine classifier based on 49 features, which partially reflect the correlation between the accelerometry and electrocardiogram data. RESULTS: Evaluated over 50 different test-training data splits, the proposed algorithm achieves a balanced accuracy of 81.2%, a sensitivity of 80.6%, and a specificity of 81.8%, whereas using only the ECG yields a balanced accuracy of 76.5%, a sensitivity of 80.2%, and a specificity of 72.8%. CONCLUSION: This first method to employ accelerometry for the pulse/no-pulse decision yields a significant increase in performance compared to using the ECG signal alone. SIGNIFICANCE: This shows that accelerometry provides relevant information for pulse/no-pulse decisions. In application, such an algorithm may be used to simplify retrospective annotation for quality management and, moreover, to support clinicians in assessing the circulatory state during cardiac arrest treatment.
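As an illustration of the classification approach described above, the following is a minimal sketch of a kernelized SVM pipeline built with scikit-learn; the feature extraction shown here (a few amplitude statistics and an accelerometry-ECG correlation) is a hypothetical stand-in for the 49 features used in the paper, and all function and variable names are illustrative only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def extract_features(acc, ecg):
    """Toy feature vector for one 4 s snippet (hypothetical stand-in
    for the 49 features used in the paper)."""
    return np.array([
        acc.std(), np.abs(acc).max(),             # accelerometry amplitude statistics
        ecg.std(), np.abs(np.diff(ecg)).mean(),   # ECG variability
        np.corrcoef(acc, ecg)[0, 1],              # accelerometry-ECG correlation
    ])

def train_classifier(snippets, labels):
    """snippets: list of (accelerometry, ecg) arrays; labels: 1 = pulse, 0 = no pulse."""
    X = np.vstack([extract_features(a, e) for a, e in snippets])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
    scores = cross_val_score(clf, X, labels, cv=5, scoring="balanced_accuracy")
    print("balanced cross-validation accuracy:", scores.mean())
    return clf.fit(X, labels)
```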


Subject(s)
Cardiopulmonary Resuscitation , Out-of-Hospital Cardiac Arrest , Humans , Out-of-Hospital Cardiac Arrest/diagnosis , Out-of-Hospital Cardiac Arrest/therapy , Retrospective Studies , Cardiopulmonary Resuscitation/methods , Heart Rate , Electrocardiography/methods
3.
Phys Med Biol ; 67(15)2022 07 27.
Article in English | MEDLINE | ID: mdl-35594853

ABSTRACT

Objective. Complete time-of-flight (TOF) sinograms of state-of-the-art TOF PET scanners have a large memory footprint. Currently, they contain ∼4·10⁹ data bins, which amount to ∼17 GB in 32 bit floating point precision. Moreover, their size will continue to increase with advances in the achievable detector TOF resolution and increases in the axial field of view. Using iterative algorithms to reconstruct such enormous TOF sinograms becomes increasingly challenging due to the memory requirements and the computation time needed to evaluate the forward model for every data bin. This is especially true for more advanced optimization algorithms such as the stochastic primal-dual hybrid gradient (SPDHG) algorithm, which allows the use of non-smooth priors for regularization using subsets with guaranteed convergence. SPDHG requires the storage of additional sinograms in memory, which severely limits its application to data sets from state-of-the-art TOF PET systems on conventional computing hardware. Approach. Motivated by the generally sparse nature of the TOF sinograms, we propose and analyze a new listmode (LM) extension of the SPDHG algorithm for image reconstruction of sparse data following a Poisson distribution. The new algorithm is evaluated on realistic 2D and 3D simulations and a real data set acquired on a state-of-the-art TOF PET/CT system. The performance of the newly proposed LM-SPDHG algorithm is compared against the conventional sinogram SPDHG and the listmode EM-TV algorithm. Main results. We show that the speed of convergence of the proposed LM-SPDHG is equivalent to that of the original SPDHG operating on binned data (TOF sinograms). However, we find that for a TOF PET system with 400 ps TOF resolution and 25 cm axial FOV, the proposed LM-SPDHG reduces the required memory from approximately 56 GB to 0.7 GB for a short dynamic frame with 10⁷ prompt coincidences and to 12.4 GB for a long static acquisition with 5·10⁸ prompt coincidences. Significance. In contrast to SPDHG, the reduced memory requirement of LM-SPDHG enables a pure GPU implementation on state-of-the-art GPUs (avoiding memory transfers between host and GPU), which will substantially accelerate reconstruction times. This in turn will allow the application of LM-SPDHG in routine clinical practice, where short reconstruction times are crucial.
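For context, the following sketch shows the structure of one SPDHG iteration for Poisson data with the Kullback-Leibler fidelity, following the published SPDHG literature rather than the exact listmode implementation of the paper; the subset operators, step sizes, and prior prox are placeholders, and in the listmode variant the dual variables live on the recorded events rather than on sinogram bins.

```python
import numpy as np

def prox_kl_dual(y_tilde, sigma, d):
    """Prox of sigma * f* for the Poisson/KL fidelity f(u) = sum(u - d*log(u));
    standard closed form used in PDHG/SPDHG-type PET reconstruction."""
    return 0.5 * (y_tilde + 1.0 - np.sqrt((y_tilde - 1.0) ** 2 + 4.0 * sigma * d))

def spdhg_poisson(A, At, data, prox_prior, x0, n_iter=200):
    """Schematic SPDHG for  min_x  sum_i KL(A_i x, d_i) + prior(x).

    A, At      : lists of callables (subset forward / adjoint projections)
    data       : list of subset measurement arrays
    prox_prior : callable prox_prior(v, tau), e.g. nonnegativity plus a TV prox
    Step sizes below are crude placeholders; practical codes derive them
    from estimates of the subset operator norms.
    """
    n = len(A)
    p = 1.0 / n                                   # uniform subset probabilities
    tau, sigma = 1.0 / n, [1.0] * n
    x = x0.copy()
    y = [np.zeros_like(d) for d in data]          # dual variables per subset
    z = sum(At[i](y[i]) for i in range(n))        # aggregated back-projected duals
    zbar = z.copy()
    for _ in range(n_iter):
        x = prox_prior(x - tau * zbar, tau)       # primal (image) update
        i = np.random.randint(n)                  # sample one subset
        y_new = prox_kl_dual(y[i] + sigma[i] * A[i](x), sigma[i], data[i])
        dz = At[i](y_new - y[i])                  # back-project the dual increment
        y[i] = y_new
        z = z + dz
        zbar = z + dz / p                         # extrapolated dual aggregate
    return x
```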


Subject(s)
Image Processing, Computer-Assisted , Positron-Emission Tomography , Algorithms , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Positron Emission Tomography Computed Tomography , Positron-Emission Tomography/methods , Tomography, X-Ray Computed
4.
Data Brief ; 41: 107973, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35242950

ABSTRACT

This publication presents in detail five exemplary cases and the algorithm used in the article (Orlob et al. 2022). Defibrillator records for the five exemplary cases were obtained from the German Resuscitation Registry. They consist of accelerometry, electrocardiogram, and capnography time series as well as defibrillation times, energies, and impedances, where recorded. For these cases, experienced physicians annotated in consensus the time points of cardiac arrest and of return of spontaneous circulation or termination of the resuscitation attempt, as well as the beginning and end of every single chest compression period, as described in Orlob et al. (2022). Furthermore, an algorithm was developed that reliably detects chest compression periods automatically, without the time-consuming process of manual annotation. This algorithm allows for use in automatic resuscitation quality assessment, machine learning approaches, and the handling of large amounts of data (Orlob et al. 2022).

5.
Resuscitation ; 172: 162-169, 2022 03.
Article in English | MEDLINE | ID: mdl-34995686

ABSTRACT

AIM: To introduce and evaluate a new, open-source algorithm that detects chest compression periods automatically from the rhythmic, high-amplitude signals of an accelerometer, without processing single chest compression events, and to subsequently calculate the chest compression fraction (CCF). METHODS: A consecutive sample of defibrillator records from the German Resuscitation Registry was obtained and manually annotated in consensus as ground truth. Chest compression periods were determined by different automatic approaches, including the new algorithm, and the diagnostic performance of these approaches was assessed. Further, using the different approaches in conjunction with different granularities of manual annotation, several CCF versions were calculated and compared by intraclass correlation coefficient (ICC). RESULTS: 131 defibrillator recordings with a total duration of 5755 minutes were analysed. The new algorithm had a sensitivity of 99.39% (95% CI 99.38, 99.41) and a specificity of 99.17% (95% CI 99.15, 99.18) for detecting chest compressions at any given time point. The ICC compared to ground truth was 0.998 for the new algorithm and 0.999 for manual annotation, while the ICC of the proposed algorithm compared to the proprietary software was 0.978. The time required for manual annotation to calculate CCF was reduced by 70.48 (22.55, [94.35, 14.45])%. CONCLUSION: The proposed algorithm reliably detects chest compressions in defibrillator recordings. It can markedly reduce the workload for manual annotation, which may facilitate uniform reporting of the measured quality of cardiopulmonary resuscitation. The algorithm is made freely available and may be used in big data analyses and machine learning approaches.
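As a small illustration of the quantity being reported, the chest compression fraction can be computed from detected compression periods as in the following sketch (a hypothetical helper, not the published implementation):

```python
def chest_compression_fraction(compression_periods, episode_start, episode_end):
    """CCF = time spent compressing / total resuscitation time.

    compression_periods: list of (start, end) times in seconds.
    """
    total = episode_end - episode_start
    compressing = sum(min(end, episode_end) - max(start, episode_start)
                      for start, end in compression_periods
                      if end > episode_start and start < episode_end)
    return compressing / total

# e.g. 90 s of detected compressions within a 120 s episode -> CCF = 0.75
print(chest_compression_fraction([(0, 50), (60, 100)], 0, 120))
```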


Subject(s)
Cardiopulmonary Resuscitation , Out-of-Hospital Cardiac Arrest , Cardiopulmonary Resuscitation/methods , Defibrillators , Heart Massage/methods , Humans , Out-of-Hospital Cardiac Arrest/therapy , Thorax
6.
Nanoscale ; 11(12): 5617-5632, 2019 Mar 21.
Article in English | MEDLINE | ID: mdl-30864603

ABSTRACT

In multi-modal electron tomography, tilt series of several signals, such as X-ray spectra, electron energy-loss spectra, annular dark-field, or bright-field data, are acquired at the same time in a transmission electron microscope and subsequently reconstructed in three dimensions. However, the acquired data are often incomplete and suffer from noise, and each signal is generally reconstructed independently of all other signals, without taking advantage of the correlation between the different datasets. This severely limits both the resolution and the validity of the reconstructed images. In this paper, we show how image quality in multi-modal electron tomography can be greatly improved by employing variational modeling and multi-channel regularization techniques. To this end, we employ a coupled Total Generalized Variation (TGV) regularization that exploits the correlation between different channels. In contrast to other regularization methods, coupled TGV regularization can reconstruct both hard transitions and gradual changes inside each sample, and links different channels at the level of first- and higher-order derivatives. This favors similar interface positions for all reconstructions, thereby improving the image quality for all data and, in particular, for 3D elemental maps. We demonstrate the joint multi-channel TGV reconstruction on tomographic energy-dispersive X-ray spectroscopy (EDXS) and high-angle annular dark-field (HAADF) data, but the reconstruction method is generally applicable to all types of signals used in electron tomography, as well as to all other types of projection-based tomographies.
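For readers unfamiliar with the regularizer, the second-order TGV functional used here can be stated in its common form from the TGV literature (a simplified statement, not necessarily the exact weighting of the paper): for an image u,

\[
\mathrm{TGV}_\alpha^2(u) = \min_{w}\; \alpha_1 \int_\Omega \lVert \nabla u - w \rVert \,\mathrm{d}x + \alpha_0 \int_\Omega \lVert \mathcal{E}(w) \rVert \,\mathrm{d}x,
\]

where \mathcal{E} denotes the symmetrized derivative. In the coupled multi-channel version, the pointwise norms are taken jointly over all channels (e.g. as Frobenius norms), which is what encourages aligned interfaces across the different reconstructed signals.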

7.
Magn Reson Med ; 81(2): 881-892, 2019 02.
Article in English | MEDLINE | ID: mdl-30444294

ABSTRACT

PURPOSE: Highly accelerated B1+ mapping based on the Bloch-Siegert shift, allowing 3D acquisitions even within the brief period of a single breath-hold. THEORY AND METHODS: The B1+-dependent Bloch-Siegert phase shift is measured within a highly subsampled 3D volume and reconstructed using a two-step variational approach that exploits the different spatial distributions of the morphology and the B1+ field. By an appropriate variable substitution, the basic non-convex optimization problem is transformed into the sequential solution of two convex optimization problems, with a total generalized variation (TGV) regularization for the morphology part and a smoothness constraint for the B1+ field. The method is evaluated on 3D in vivo data with retrospective and prospective subsampling. The reconstructed B1+ maps are compared to a zero-padded low-resolution reconstruction and a fully sampled reference. RESULTS: The reconstructed B1+ field maps are in close agreement with the reference for all measurements, with a mean error below 1% and a maximum error of about 4% for acceleration factors up to 100. The smallest error for the different sampling patterns was achieved by densely sampling a region in the k-space center, with acquisition times of around 10-12 s for 3D acquisitions. CONCLUSIONS: The proposed variational approach enables highly accelerated 3D acquisitions of Bloch-Siegert data and thus full liver coverage in a single breath-hold.
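As background, the standard Bloch-Siegert relation (from the general literature, not specific to this work) states that the off-resonant Bloch-Siegert pulse imparts a phase proportional to the square of the transmit field,

\[
\varphi_\mathrm{BS} = K_\mathrm{BS}\,\lvert B_1^+ \rvert^2, \qquad \lvert B_1^+ \rvert = \sqrt{\varphi_\mathrm{BS}/K_\mathrm{BS}},
\]

where K_BS is a known constant determined by the pulse shape, duration, and off-resonance frequency. The measured phase image therefore encodes the B1+ map, and the two-step variational reconstruction separates the smooth field component from the morphology.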


Subject(s)
Breath Holding , Imaging, Three-Dimensional , Liver/diagnostic imaging , Adult , Algorithms , Healthy Volunteers , Humans , Image Processing, Computer-Assisted/methods , Linear Models , Male , Movement , Prospective Studies , Reproducibility of Results , Retrospective Studies
8.
IEEE Trans Med Imaging ; 37(2): 590-603, 2018 02.
Article in English | MEDLINE | ID: mdl-29408787

ABSTRACT

In this article, we evaluate Parallel Level Sets (PLS) and Bowsher's method as segmentation-free anatomical priors for regularized brain positron emission tomography (PET) reconstruction. We derive the proximity operators for two PLS priors and use the EM-TV algorithm in combination with the first-order primal-dual algorithm by Chambolle and Pock to solve the non-smooth optimization problem for PET reconstruction with PLS regularization. In addition, we compare the performance of the two PLS versions against the symmetric and asymmetric Bowsher priors with quadratic and relative-difference penalty functions. To this end, we first evaluate reconstructions of 30 noise realizations of simulated PET data, derived from a real hybrid positron emission tomography/magnetic resonance imaging (PET/MR) acquisition, in terms of regional bias and noise. Second, we evaluate reconstructions of a real brain PET/MR data set acquired on a GE Signa time-of-flight PET/MR in a similar way. The reconstructions of simulated and real 3D PET/MR data show that all priors were superior to post-smoothed maximum likelihood expectation maximization with ordered subsets (OSEM) in terms of bias-noise characteristics in different regions of interest where the PET uptake follows anatomical boundaries. Our implementation of the asymmetric Bowsher prior showed slightly superior performance compared with the two versions of PLS and the symmetric Bowsher prior. At very high regularization weights, all investigated anatomical priors suffer from the transfer of non-shared gradients.
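The Bowsher neighbor-selection idea mentioned above can be illustrated by the following 1D toy sketch (illustrative only; the paper works with 3D neighborhoods and also considers asymmetric and relative-difference variants):

```python
import numpy as np

def bowsher_quadratic_penalty(x, anat, n_keep=2, radius=3):
    """Quadratic penalty in which, for every voxel, only the n_keep neighbours
    most similar in the anatomical image contribute (1D toy version)."""
    penalty = 0.0
    for j in range(len(x)):
        nbrs = [k for k in range(max(0, j - radius), min(len(x), j + radius + 1))
                if k != j]
        # keep the neighbours whose anatomical intensity is closest to voxel j
        nbrs = sorted(nbrs, key=lambda k: abs(anat[k] - anat[j]))[:n_keep]
        penalty += sum((x[j] - x[k]) ** 2 for k in nbrs)
    return penalty
```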


Subject(s)
Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Algorithms , Humans , Magnetic Resonance Imaging , Phantoms, Imaging
9.
Inverse Probl ; 34(8)2018 Aug.
Article in English | MEDLINE | ID: mdl-30686851

ABSTRACT

We consider a class of regularization methods for inverse problems in which a coupled regularization is employed for the simultaneous reconstruction of data from multiple sources. Applications of such a setting can be found in multi-spectral or multi-modality inverse problems, but also in inverse problems with dynamic data. We consider this setting in a rather general framework and derive stability and convergence results, including convergence rates. In particular, we show how parameter choice strategies adapted to the interplay of the different data channels make it possible to improve upon the convergence rates that would be obtained by treating all channels equally. Motivated by concrete applications, our results are obtained under rather general assumptions that allow the Kullback-Leibler divergence to be included as data discrepancy term. To simplify their application to concrete settings, we further elaborate several practically relevant special cases in detail. To complement the analytical results, we also provide an algorithmic framework and source code that allow a class of jointly regularized inverse problems with any number of data discrepancies to be solved. As concrete applications, we show numerical results for multi-contrast MR and joint MR-PET reconstruction.
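In hedged, simplified form, the setting studied here can be summarized by the multi-data Tikhonov problem

\[
\min_{u}\; \sum_{i=1}^{N} \lambda_i\, D_i\bigl(A_i u,\, f_i^{\delta_i}\bigr) + \mathcal{R}(u),
\]

where the D_i are possibly different discrepancies (including the Kullback-Leibler divergence), the A_i map the jointly regularized unknown u to the individual data channels f_i^{\delta_i}, and the parameters \lambda_i are chosen in dependence on the per-channel noise levels \delta_i; the convergence-rate results concern how such adapted choices behave as the noise vanishes.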

10.
IEEE Trans Image Process ; 27(1): 490-499, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28991741

ABSTRACT

The DjVu file format and its image compression techniques are widely used for the archival of digital documents. Its key ingredients are the separation of the document into foreground and background layers together with a binary switching mask, followed by a lossy, transform-based compression of the layers and a dictionary-based compression of the mask. The lossy compression of the layers is based on a wavelet decomposition and bit truncation, which leads, in particular at higher compression rates, to severe compression artifacts in the standard decompression of the layers. The aim of this paper is to break ground for the variational decompression of DjVu files. To this end, we provide an in-depth analysis and discussion of the compression standard, with a particular focus on modeling data constraints for decompression. This allows DjVu decompression to be carried out as a regularized inversion of the compression procedure. As a particular example, we evaluate the performance of such a framework using total variation and total generalized variation regularization. Furthermore, we provide routines for obtaining the necessary data constraints from a compressed DjVu file and for the forward and adjoint transformation operators involved in DjVu compression.
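Schematically, and hedged as a simplified statement of the framework, such a variational decompression can be posed as a constrained problem

\[
\min_{u}\; \mathcal{R}(u) \quad \text{subject to} \quad W u \in C,
\]

where \mathcal{R} is, e.g., TV or TGV, W is the wavelet-type transform used by the encoder, and C collects, coefficient by coefficient, the values consistent with the bit-truncated data stored in the file; the standard decoder output can be viewed as one particular choice within these constraints.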

11.
IEEE Trans Med Imaging ; 36(1): 1-16, 2017 01.
Article in English | MEDLINE | ID: mdl-28055827

ABSTRACT

While current state-of-the-art MR-PET scanners enable simultaneous MR and PET measurements, the acquired data sets are still usually reconstructed separately. We propose a new multi-modality reconstruction framework using second-order Total Generalized Variation (TGV) as a dedicated multi-channel regularization functional that jointly reconstructs images from both modalities. In this way, information about the underlying anatomy is shared during the image reconstruction process while unique differences are preserved. Results from numerical simulations and in vivo experiments using a range of accelerated MR acquisitions and different MR image contrasts demonstrate improved PET image quality, resolution, and quantitative accuracy.
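In schematic, hedged form (operators and data fidelities are placeholders), the joint reconstruction solves a single coupled problem of the type

\[
\min_{u_\mathrm{MR},\,u_\mathrm{PET}}\; D_\mathrm{MR}\bigl(F u_\mathrm{MR}, d_\mathrm{MR}\bigr) + D_\mathrm{PET}\bigl(P u_\mathrm{PET}, d_\mathrm{PET}\bigr) + \mathrm{TGV}^2_{\alpha}\bigl(u_\mathrm{MR}, u_\mathrm{PET}\bigr),
\]

where F and P denote the MR and PET forward models and the multi-channel TGV term is evaluated jointly over both images, so that edge information is shared while contrast differences between the modalities remain admissible.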


Subject(s)
Magnetic Resonance Imaging , Positron-Emission Tomography , Algorithms , Image Processing, Computer-Assisted , Tomography, X-Ray Computed
12.
Magn Reson Med ; 78(1): 142-155, 2017 07.
Article in English | MEDLINE | ID: mdl-27476450

ABSTRACT

PURPOSE: To accelerate dynamic MR applications using infimal convolution of total generalized variation functionals (ICTGV) as spatio-temporal regularization for image reconstruction. THEORY AND METHODS: ICTGV comprises a new image prior tailored to dynamic data that achieves regularization via optimal local balancing between spatial and temporal regularity. Here it is applied for the first time to the reconstruction of dynamic MRI data. CINE and perfusion scans were investigated to study the influence of time-dependent morphology and temporal contrast changes. ICTGV-regularized reconstruction from subsampled MR data is formulated as a convex optimization problem, and global solutions are obtained by employing a duality-based non-smooth optimization algorithm. RESULTS: The reconstruction error remains low for acceleration factors up to 16 for both CINE and dynamic contrast-enhanced MRI data. The GPU implementation of the algorithm suits clinical demands by reducing the reconstruction time for one dataset to less than 4 minutes. CONCLUSION: ICTGV-based dynamic magnetic resonance imaging reconstruction allows for vast undersampling and therefore enables very high spatial and temporal resolution, large spatial coverage, and reduced scan time. With the proposed distinction between model and regularization parameters, it offers a new and robust method for flexible decomposition into components with different degrees of temporal regularity. Magn Reson Med 78:142-155, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
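The ICTGV prior can be summarized, in the notation of the ICTGV literature and hedged as a simplified statement, as an infimal convolution of two differently weighted spatio-temporal TGV functionals:

\[
\mathrm{ICTGV}(u) = \min_{u = u_1 + u_2}\; \mathrm{TGV}^2_{\alpha,\beta_1}(u_1) + \mathrm{TGV}^2_{\alpha,\beta_2}(u_2),
\]

where \beta_1 and \beta_2 weight spatial against temporal derivatives differently in the two components, so that the minimization automatically splits the dynamic image into a part with high temporal regularity and a part that is allowed to change quickly in time.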


Subject(s)
Algorithms , Data Interpretation, Statistical , Heart Ventricles/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging, Cine/methods , Oscillometry/methods , Analysis of Variance , Humans , Reproducibility of Results , Sensitivity and Specificity , Spatio-Temporal Analysis