Results 1 - 20 of 507
1.
Comput Biol Med ; 182: 109141, 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39293337

ABSTRACT

BACKGROUND: In electrocardiographic imaging (ECGI), selecting an optimal regularization parameter (λ) is crucial for obtaining accurate inverse electrograms. The effects of signal and geometry uncertainties on the inverse problem regularization have not been thoroughly quantified, and there is no established methodology to identify when λ is sub-optimal due to these uncertainties. This study introduces a novel approach to λ selection using Tikhonov regularization and L-curve optimization, specifically addressing the impact of electrical noise in body surface potential map (BSPM) signals and geometrical inaccuracies in the cardiac mesh. METHODS: Nineteen atrial simulations (5 with regular rhythms and 14 with atrial fibrillation), ensuring variability in substrate complexity and activation patterns, were used for computing the ECGI with added white Gaussian noise from 40 dB to -3 dB. Cardiac mesh displacements (1-3 cm) were applied to simulate the uncertainty of atrial positioning and study its impact on the L-curve shape. The regularization parameter, the maximum curvature, and the most horizontal angle of the L-curve (β) were quantified. In addition, BSPM signals from real patients were used to validate our findings. RESULTS: The maximum curvature of the L-curve was found to be inversely related to the signal-to-noise ratio and to atrial positioning errors. In contrast, the β angle is directly related to electrical noise and remains unaffected by geometrical errors. Our proposed adjustment of λ, based on the β angle, provides a more reliable ECGI solution than traditional corner-based methods. Our findings have been validated with simulations and real patient data, demonstrating practical applicability. CONCLUSION: Adjusting λ based on the amount of noise in the data (or on the β angle) yields better ECGI solutions than a λ chosen purely at the corner of the L-curve. The relevant information in ECGI activation maps is preserved even in the presence of uncertainties when the regularization parameter is correctly selected. The proposed criteria for regularization parameter selection have the potential to enhance the accuracy and reliability of ECGI solutions.
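A minimal sketch of the corner-based λ selection that this abstract takes as its baseline, assuming a generic linear forward model y = Ax + noise (the toy operator, sizes, and all names are illustrative, not the authors' code):

```python
import numpy as np

def tikhonov_solve(A, y, lam):
    """Solve min_x ||A x - y||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ y)

def l_curve(A, y, lambdas):
    """Log residual norm and log solution norm for each lambda in the sweep."""
    rho, eta = [], []
    for lam in lambdas:
        x = tikhonov_solve(A, y, lam)
        rho.append(np.log(np.linalg.norm(A @ x - y)))
        eta.append(np.log(np.linalg.norm(x)))
    return np.array(rho), np.array(eta)

def corner_lambda(lambdas, rho, eta):
    """Corner criterion: lambda at the point of largest curvature magnitude."""
    drho, deta = np.gradient(rho), np.gradient(eta)
    d2rho, d2eta = np.gradient(drho), np.gradient(deta)
    kappa = (drho * d2eta - deta * d2rho) / (drho**2 + deta**2) ** 1.5
    return lambdas[np.argmax(np.abs(kappa))]

# Toy usage: a mildly ill-conditioned operator with noisy data.
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 60)) @ np.diag(np.logspace(0, -4, 60))
y = A @ rng.normal(size=60) + 0.01 * rng.normal(size=80)
lambdas = np.logspace(-6, 1, 200)
rho, eta = l_curve(A, y, lambdas)
print(f"corner lambda ~ {corner_lambda(lambdas, rho, eta):.3e}")
```

The abstract's proposed β-angle adjustment would replace the final selection step; the sketch implements only the traditional maximum-curvature corner criterion it is compared against.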

2.
Adv Sci (Weinh) ; : e2406793, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39246254

ABSTRACT

Across diverse domains of science and technology, electromagnetic (EM) inversion problems benefit from the ability to account for multimodal prior information to regularize their inherent ill-posedness. Indeed, besides priors that are formulated mathematically or learned from quantitative data, valuable prior information may be available in the form of text or images. Beyond handling semantic multimodality, it is also important to minimize the cost of adapting to a new physical measurement operator and to limit the requirements for costly labeled data. Here, these challenges are tackled with a frugal and multimodal semantic-EM inversion technique. The key ingredient is a multimodal generator of reconstruction results that can be pretrained, being agnostic to the physical measurement operator. The generator is fed by a multimodal foundation model encoding the multimodal semantic prior and a physical adapter encoding the measured data. For a new physical setting, only the lightweight physical adapter is retrained. The authors' architecture also enables a flexible iterative step-by-step solution to the inverse problem where each step can be semantically controlled. The feasibility and benefits of this methodology are demonstrated for three EM inverse problems: a canonical two-dimensional inverse-scattering problem in numerical simulation, as well as three-dimensional and four-dimensional compressive microwave meta-imaging experiments.

3.
Neuroimage ; 299: 120802, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39173694

ABSTRACT

Electroencephalography (EEG) or Magnetoencephalography (MEG) source imaging aims to estimate the underlying activated brain sources that explain the observed EEG/MEG recordings. Solving the inverse problem of EEG/MEG Source Imaging (ESI) is challenging due to its ill-posed nature. To achieve a unique solution, it is essential to apply sophisticated regularization constraints to restrict the solution space. Traditionally, the design of regularization terms is based on assumptions about the spatiotemporal structure of the underlying source dynamics. In this paper, we propose a novel paradigm for ESI via an Explainable Deep Learning framework, termed XDL-ESI, which connects the iterative optimization algorithm with a deep learning architecture by unfolding the iterative updates into neural network modules. The proposed framework has the advantages of (1) establishing a data-driven approach to model the source solution structure instead of using hand-crafted regularization terms; (2) improving the robustness of source solutions by introducing a topological loss that leverages geometric spatial information, applying varying penalties to distinct localization errors; (3) improving reconstruction efficiency and interpretability, as it inherits the advantages of both iterative optimization algorithms (interpretability) and deep learning approaches (function approximation). The proposed XDL-ESI framework provides an efficient, accurate, and interpretable paradigm to solve the ESI inverse problem, with satisfactory performance on both simulated data and real clinical data. Specifically, the approach is further validated using simultaneous EEG and intracranial EEG (iEEG).
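A hedged sketch of the general unrolling idea the abstract describes, in which each network layer mirrors one proximal-gradient (ISTA-style) iteration with a learned step size and threshold; the actual XDL-ESI modules, topological loss, and training loop are not reproduced, and all names are illustrative:

```python
import torch
import torch.nn as nn

class UnrolledESILayer(nn.Module):
    """One unrolled iteration of min_s ||y - L s||^2 + sparsity regularizer."""
    def __init__(self, lead_field: torch.Tensor):
        super().__init__()
        self.L = lead_field                              # fixed forward (lead-field) matrix
        self.step = nn.Parameter(torch.tensor(1e-2))     # learned gradient step size
        self.thresh = nn.Parameter(torch.tensor(1e-3))   # learned soft-threshold

    def forward(self, s, y):
        grad = self.L.T @ (self.L @ s - y)               # gradient of the data-fit term
        s = s - self.step * grad                         # gradient descent step
        return torch.sign(s) * torch.clamp(s.abs() - self.thresh, min=0.0)  # prox step

class UnrolledESINet(nn.Module):
    """Stack of unrolled iterations; depth plays the role of iteration count."""
    def __init__(self, lead_field: torch.Tensor, n_layers: int = 10):
        super().__init__()
        self.layers = nn.ModuleList(UnrolledESILayer(lead_field) for _ in range(n_layers))

    def forward(self, y):
        s = torch.zeros(self.layers[0].L.shape[1])       # start from zero source estimate
        for layer in self.layers:
            s = layer(s, y)
        return s

L = torch.randn(32, 500)       # toy lead field: 32 sensors, 500 cortical sources
net = UnrolledESINet(L)
s_hat = net(torch.randn(32))   # one forward pass; supervised training omitted
```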


Subject(s)
Deep Learning , Electroencephalography , Magnetoencephalography , Humans , Electroencephalography/methods , Magnetoencephalography/methods , Magnetoencephalography/standards , Brain/physiology , Brain/diagnostic imaging , Electrocorticography/methods , Electrocorticography/standards , Algorithms
4.
J Biomech Eng ; 146(12)2024 Dec 01.
Article in English | MEDLINE | ID: mdl-39196594

ABSTRACT

This study proposes a numerical approach for simulating bone remodeling in lumbar interbody fusion (LIF). It employs a topology optimization method to drive the remodeling process and uses a pixel function to describe the structural topology and bone density distribution. Unlike traditional approaches based on strain energy density or compliance, this study adopts von Mises stress to guide the remodeling of LIF. A novel pixel interpolation scheme associated with stress criteria is applied to the physical properties of the bone, directly addressing the stress shielding effect caused by the implanted cage, which significantly influences the bone remodeling outcome in LIF. Additionally, a boundary inverse approach is utilized to reconstruct a simplified analysis model. To reduce computational cost while maintaining high structural resolution and accuracy, the scaled boundary finite element method (SBFEM) is introduced. The proposed numerical approach successfully generates results that closely resemble the bone structures observed after human lumbar interbody fusion.


Subject(s)
Bone Remodeling , Finite Element Analysis , Lumbar Vertebrae , Spinal Fusion , Lumbar Vertebrae/surgery , Humans , Stress, Mechanical , Biomechanical Phenomena
5.
Neural Netw ; 179: 106515, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39032393

ABSTRACT

Accurate image reconstruction is crucial for photoacoustic (PA) computed tomography (PACT). Recently, deep learning has been used to reconstruct PA images with a supervised scheme, which requires high-quality images as ground-truth labels. However, practical implementations encounter inevitable trade-offs between cost and performance due to the expensive nature of employing additional channels for accessing more measurements. Here, we propose a masked cross-domain self-supervised (CDSS) reconstruction strategy to overcome the lack of ground-truth labels from limited PA measurements. We implement the self-supervised reconstruction in a model-based form. Simultaneously, we take advantage of self-supervision to enforce the consistency of measurements and images across three partitions of the measured PA data, achieved by randomly masking different channels. Our findings indicate that dynamically masking a substantial proportion of channels, such as 80%, yields meaningful self-supervisors in both the image and signal domains. Consequently, this approach reduces the multiplicity of pseudo solutions and enables efficient image reconstruction using fewer PA measurements, ultimately minimizing reconstruction error. Experimental results on an in-vivo PACT dataset of mice demonstrate the potential of our self-supervised framework. Moreover, our method exhibits impressive performance, achieving a structural similarity index (SSIM) of 0.87 in an extremely sparse case using only 13 channels, outperforming the supervised scheme with 16 channels (0.77 SSIM). Adding to its advantages, our method can be deployed on different trainable models in an end-to-end manner, further enhancing its versatility and applicability.
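A minimal sketch of the channel-masking idea under assumed simplifications: boolean masks drop roughly 80% of transducer channels to form measurement partitions, and a pairwise consistency term compares the reconstructions obtained from each partition (the paper's model-based network and signal-domain losses are not reproduced; all names are illustrative):

```python
import numpy as np

def random_channel_masks(n_channels: int, n_partitions: int = 3,
                         mask_ratio: float = 0.8, rng=None):
    """Return boolean keep-masks, one per partition, each keeping ~20% of channels."""
    rng = rng or np.random.default_rng()
    return [rng.random(n_channels) >= mask_ratio for _ in range(n_partitions)]

def consistency_loss(recons):
    """Mean pairwise L2 discrepancy between reconstructions from each partition."""
    loss, n = 0.0, 0
    for i in range(len(recons)):
        for j in range(i + 1, len(recons)):
            loss += np.mean((recons[i] - recons[j]) ** 2)
            n += 1
    return loss / n

masks = random_channel_masks(n_channels=128)
# recons = [reconstruct(pa_data[:, m]) for m in masks]  # reconstruction model omitted
# loss = consistency_loss(recons)                        # self-supervision signal
```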


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Photoacoustic Techniques , Tomography, X-Ray Computed , Photoacoustic Techniques/methods , Animals , Image Processing, Computer-Assisted/methods , Mice , Tomography, X-Ray Computed/methods , Supervised Machine Learning , Neural Networks, Computer , Algorithms
6.
Neurophysiol Clin ; 54(5): 103005, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39029213

ABSTRACT

In patients with refractory epilepsy, the clinical interpretation of stereoelectroencephalographic (SEEG) signals is crucial to delineate the epileptogenic network that should be targeted by surgery. We propose a pipeline of patient-specific computational modeling of interictal epileptic activity to improve the definition of regions of interest. Comparison between the computationally defined regions of interest and the resected region confirmed the efficiency of the pipeline. This result suggests that computational modeling can be used to reconstruct signals and aid clinical interpretation.


Subject(s)
Brain , Electroencephalography , Humans , Electroencephalography/methods , Brain/physiopathology , Epilepsy/physiopathology , Computer Simulation , Male , Female , Adult , Drug Resistant Epilepsy/physiopathology
7.
Philos Trans A Math Phys Eng Sci ; 382(2277): 20230295, 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39005012

ABSTRACT

This study examines a class of time-dependent constitutive equations used to describe viscoelastic materials under creep in solid mechanics. In nonlinear elasticity, the strain response to the applied stress is expressed via an implicit graph allowing multi-valued functions. For coercive and maximal monotone graphs, the existence of a solution to the quasi-static viscoelastic problem is proven by applying the Browder-Minty fixed point theorem. Moreover, for quasi-linear viscoelastic problems, the solution is constructed as a semi-analytic formula. The inverse viscoelastic problem is represented by identification of a design variable from non-smooth measurements. A non-empty set of optimal variables is obtained based on the compactness argument by applying Tikhonov regularization in the space of bounded measures and deformations. Furthermore, an illustrative example is given for the inverse problem of isotropic kernel identification. This article is part of the theme issue 'Non-smooth variational problems with applications in mechanics'.

8.
Sensors (Basel) ; 24(14)2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39065856

ABSTRACT

Contactless inductive flow tomography (CIFT) is a flow measurement technique allowing for visualization of the global flow in electrically conducting fluids. The method is based on the principle of induction by motion: very weak induced magnetic fields arise from the fluid motion under the influence of a primary excitation magnetic field and can be measured precisely outside of the fluid volume. The structure of the causative flow field can be reconstructed from the induced magnetic field values by solving the corresponding linear inverse problem using appropriate regularization methods. The concurrent use of more than one excitation magnetic field is necessary to fully reconstruct three-dimensional liquid metal flows. In our laboratory demonstrator experiment, we apply two mutually perpendicular excitation magnetic fields to a mechanically driven flow of the liquid metal alloy GaInSn. In the first approach, the excitation fields are multiplexed, which requires keeping the temporal resolution of the measurement as high as possible; consecutive application by multiplexing enables determining the flow structure in the liquid with a temporal resolution down to 3 s with the existing equipment. In a second approach, we concurrently apply two sinusoidal excitation fields with different frequencies. The signals are disentangled on the basis of the lock-in principle, enabling a successful reconstruction of the liquid metal flow.
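A hedged sketch of the lock-in disentangling step under an assumed two-tone signal model: each excitation frequency's contribution is recovered from the summed sensor signal by mixing with its reference and low-pass averaging (the frequencies, window length, and noise level are illustrative, not the experiment's values):

```python
import numpy as np

def lock_in(signal, t, f_ref, tau=0.5):
    """Demodulate `signal` at reference frequency f_ref (simple boxcar low-pass)."""
    i = signal * np.cos(2 * np.pi * f_ref * t)   # in-phase mixing
    q = signal * np.sin(2 * np.pi * f_ref * t)   # quadrature mixing
    n = max(1, int(tau / (t[1] - t[0])))         # averaging window length
    kernel = np.ones(n) / n
    I = np.convolve(i, kernel, mode="valid")
    Q = np.convolve(q, kernel, mode="valid")
    return 2 * np.sqrt(I**2 + Q**2)              # recovered amplitude envelope

# Two overlapping excitation tones plus noise, separated by their references.
t = np.linspace(0, 10, 100_000)
sig = 1.0 * np.sin(2 * np.pi * 7.0 * t) + 0.3 * np.sin(2 * np.pi * 11.0 * t)
sig += 0.05 * np.random.default_rng(1).normal(size=t.size)
print(lock_in(sig, t, 7.0).mean(), lock_in(sig, t, 11.0).mean())  # ~1.0 and ~0.3
```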

9.
Sensors (Basel) ; 24(11)2024 May 30.
Article in English | MEDLINE | ID: mdl-38894316

ABSTRACT

We present a goniometer designed for capturing spectral and angular-resolved data from scattering and absorbing media. The experimental apparatus is complemented by a comprehensive Monte Carlo simulation, meticulously replicating the radiative transport processes within the instrument's optical components and simulating scattering and absorption across arbitrary volumes. Consequently, we were able to construct a precise digital replica, or "twin", of the experimental setup. This digital counterpart enabled us to tackle the inverse problem of deducing optical parameters such as absorption and scattering coefficients, along with the scattering anisotropy factor from measurements. We achieved this by fitting Monte Carlo simulations to our goniometric measurements using a Levenberg-Marquardt algorithm. Validation of our approach was performed using polystyrene particles, characterized by Mie scattering, supplemented by a theoretical analysis of algorithmic convergence. Ultimately, we demonstrate strong agreement between optical parameters derived using our novel methodology and those obtained via established measurement protocols.
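A minimal sketch of the fitting pattern described here, with a cheap analytic placeholder (a Henyey-Greenstein-like phase function) standing in for the Monte Carlo digital twin; the parameter names mu_a, mu_s, g and all numbers are assumptions, not the authors' model:

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, angles):
    """Placeholder for the Monte Carlo digital twin: Henyey-Greenstein-like
    angular scattering scaled by absorption and scattering factors."""
    mu_a, mu_s, g = params
    hg = (1 - g**2) / (1 + g**2 - 2 * g * np.cos(angles)) ** 1.5
    return mu_s * np.exp(-mu_a) * hg

def residuals(params, angles, measured):
    return forward_model(params, angles) - measured

angles = np.linspace(0.05, np.pi - 0.05, 90)
true = np.array([0.4, 2.0, 0.85])
measured = forward_model(true, angles) * \
    (1 + 0.02 * np.random.default_rng(2).normal(size=angles.size))
# Levenberg-Marquardt fit of the simulated signal to the measurement.
fit = least_squares(residuals, x0=[0.1, 1.0, 0.5], args=(angles, measured), method="lm")
print(fit.x)  # should land near [0.4, 2.0, 0.85]
```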

10.
Sci Rep ; 14(1): 14198, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38902434

ABSTRACT

Precisely estimating material parameters for cement-based materials is crucial for assessing the structural integrity of buildings. Both destructive (e.g., compression test) and non-destructive methods (e.g., ultrasound, computed tomography) are used to estimate Young's modulus. Since ultrasound estimates the dynamic Young's modulus, a formula is required to convert it to the static modulus; for this purpose, formulas from the literature are compared. The investigated specimens are cylindrical mortar specimens with four different sand-to-cement mass fractions of 20%, 35%, 50%, and 65%. The ultrasound signals are analyzed in two distinct ways: manual onset picking and full-waveform inversion. Full-waveform inversion involves comparing the measured signal with a simulated one and iteratively adjusting the ultrasound velocities in a numerical model until the measured signal closely matches the simulated one. Using computed tomography measurements, Young's moduli are semi-analytically determined based on the sand distribution in the cement images; the reconstructed volume is segmented into sand, cement, and pores. Young's moduli determined by compression tests were better reproduced by full-waveform inversion (best RMSE = 0.34 GPa) than by manual onset picking (best RMSE = 0.87 GPa). Moreover, material parameters from full-waveform inversion showed less deviation than those from manual picking: the maximal standard deviation of a Young's modulus determined with FWI was 0.36 GPa, while that determined with manual picking was 1.11 GPa. Young's moduli from computed tomography scans match those from compression tests the closest, with an RMSE of 0.13 GPa.

11.
Biomed J Sci Tech Res ; 55(2): 46779-46884, 2024.
Article in English | MEDLINE | ID: mdl-38883320

ABSTRACT

There are fewer than 10 projection views in extreme few-view tomography. The state-of-the-art methods to reconstruct images with few-view data are compressed sensing based. Compressed sensing relies on a sparsification transformation and total variation (TV) norm minimization. However, for the extreme few-view tomography, the compressed sensing methods are not powerful enough. This paper seeks additional information as extra constraints so that extreme few-view tomography becomes possible. In transmission tomography, we roughly know the linear attenuation coefficients of the objects to be imaged. We can use these values as extra constraints. Computer simulations show that these extra constraints are helpful and improve the reconstruction quality.
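A hedged sketch of how roughly known attenuation coefficients could serve as extra constraints, under an assumed formulation: a gradient step on the data-fit term is followed by a soft projection pulling each pixel toward its nearest known attenuation value (not the paper's algorithm; all names are illustrative):

```python
import numpy as np

def project_to_known_values(x, known_values, strength=0.5):
    """Soft projection: move each pixel part-way toward its closest known value."""
    known = np.asarray(known_values)
    nearest = known[np.argmin(np.abs(x[..., None] - known), axis=-1)]
    return (1 - strength) * x + strength * nearest

def few_view_reconstruct(A, y, known_values, n_iter=200, step=1e-3):
    """A: sparse-view system matrix (rays x pixels); y: measured projections."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - y)       # gradient step on ||A x - y||^2
        x = project_to_known_values(x, known_values)
        x = np.clip(x, 0.0, None)              # attenuation is non-negative
    return x
```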

12.
Sci Total Environ ; 946: 174374, 2024 Oct 10.
Article in English | MEDLINE | ID: mdl-38945246

ABSTRACT

Groundwater pollution source recognition (GPSR) is a prerequisite for subsequent pollution remediation and risk assessment work. The actual observed data are the most important known condition in GPSR, but in real cases the observed data can be contaminated with noise, which may directly affect the recognition results. Therefore, denoising is important. However, in different practical situations, the noise attribute (e.g., noise level) and the observed data attribute (e.g., observation frequency) may differ, so it is necessary to study the applicability of denoising. Current studies have two deficiencies. First, the performance of previous denoising methods deteriorates in complex nonlinear and non-stationary situations. Second, previous attempts to analyze the applicability of denoising in GPSR have not been comprehensive enough, because they consider only the influence of the noise attribute while overlooking the observed data attribute. To resolve these issues, this study adopted variational mode decomposition (VMD) to denoise the noisy observed data in GPSR for the first time, and further explored the influence of different factors on the denoising effect. The tests were conducted under 12 different scenarios. We then expanded the study to include not only the noise attribute (noise level) but also the observed data attribute (observation frequency), thus providing a more comprehensive analysis of the applicability of denoising in GPSR. Additionally, we used a new heuristic optimization algorithm, the collective decision optimization algorithm, to improve recognition accuracy. Four representative scenarios were adopted to test these ideas. The results showed that VMD performed well under various scenarios, and the denoising effect diminished as the noise level increased and the observation frequency decreased. Denoising was more effective for GPSR with high noise levels and multiple observation frequencies. The collective decision optimization algorithm had good inversion accuracy and strong robustness.
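A minimal sketch of VMD-based denoising of a noisy observation series, assuming the third-party vmdpy package and its VMD(f, alpha, tau, K, DC, init, tol) interface; the mode count, penalty, and keep-two-modes rule are illustrative choices, not the study's settings:

```python
import numpy as np
from vmdpy import VMD  # pip install vmdpy (assumed dependency)

t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
noisy = clean + 0.3 * np.random.default_rng(3).normal(size=t.size)

K = 5  # number of modes to extract
u, u_hat, omega = VMD(noisy, alpha=2000, tau=0.0, K=K, DC=0, init=1, tol=1e-7)

# Keep the modes with the lowest center frequencies; drop the noise-dominated rest.
order = np.argsort(np.atleast_2d(omega)[-1])   # final center frequency per mode
denoised = u[order[:2]].sum(axis=0)
print(np.mean((denoised - clean[: denoised.shape[-1]]) ** 2))  # residual error
```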

13.
Photoacoustics ; 38: 100609, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38745884

ABSTRACT

Quantitative photoacoustic tomography (qPAT) holds great potential in estimating chromophore concentrations, whereas the involved optical inverse problem, aiming to recover absorption coefficient distributions from photoacoustic images, remains challenging. To address this problem, we propose an extractor-attention-predictor network architecture (EAPNet), which employs a contracting-expanding structure to capture contextual information alongside a multilayer perceptron to enhance nonlinear modeling capability. A spatial attention module is introduced to facilitate the utilization of important information. We also use a balanced loss function to prevent network parameter updates from being biased towards specific regions. Our method obtains satisfactory quantitative metrics in simulated and real-world validations. Moreover, it demonstrates superior robustness to target properties and yields reliable results for targets with small size, deep location, or relatively low absorption intensity, indicating its broader applicability. The EAPNet, compared to the conventional UNet, exhibits improved efficiency, which significantly enhances performance while maintaining similar network size and computational complexity.

14.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732809

ABSTRACT

Magnetic induction tomography (MIT) image reconstruction from data acquired with a single, small inductive sensor has unique requirements not found in other imaging modalities. During the course of scanning over a target, the measured inductive loss decreases rapidly with distance from the target boundary. Since inductive loss exists even at infinite separation due to losses internal to the sensor, all other measurements made in the vicinity of the target require subtraction of the infinite-separation loss. This is accomplished naturally by treating the infinite-separation loss as an unknown. Furthermore, since contributions to inductive loss decline with greater depth into a conductive target, regularization penalties must be decreased with depth. A pair of squared L2 penalty norms is combined to form a two-term Sobolev norm, including a zero-order penalty that penalizes solution departures from a default solution and a first-order penalty that promotes smoothness. While constraining the solution to be non-negative and bounded from above, the algorithm is used to perform image reconstruction on scan data obtained over a 4.3 cm thick phantom consisting of bone-like features embedded in agarose gel, with the latter having a nominal conductivity of 1.4 S/m.
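A worked statement of the objective this abstract describes, under assumed notation (x: conductivity image, x_0: default solution, A: forward map, y: loss measurements, D: first-order difference operator, d: depth); the depth-dependent weights reflect the stated need to decrease penalties with depth:

```latex
% Assumed notation, not taken from the paper.
\min_{0 \le x \le x_{\max}}
  \;\|Ax - y\|_2^2
  \;+\; \lambda_0(d)\,\|x - x_0\|_2^2
  \;+\; \lambda_1(d)\,\|Dx\|_2^2,
\qquad \lambda_0(d),\,\lambda_1(d)\ \text{decreasing with depth } d.
```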

15.
Materials (Basel) ; 17(9)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38730894

ABSTRACT

In the realm of high-tech materials and energy applications, accurately measuring the transient heat flow at media boundaries and the internal thermal conductivity of materials in harsh heat exchange environments poses a significant challenge when using conventional direct measurement methods. Consequently, the study of photothermal parameter reconstruction in translucent media, which relies on indirect measurement techniques, has crucial practical value. Current research on reconstructing photothermal properties within participating media typically focuses on single-objective or time-invariant properties. There is a pressing need to develop effective methods for the simultaneous reconstruction of time-varying thermal flow fields and internal thermal conductivity at the boundaries of participating media. This paper introduces a computational model based on the numerical simulation theory of internal heat transfer systems in participating media, stochastic particle swarm optimization algorithms, and Kalman filter technology. The model aims to enable the simultaneous reconstruction of various thermal parameters within the target medium. Our results demonstrate that under varying levels of measurement noise, the inversion results for different target parameters exhibit slight oscillations around the true values, leading to a reduction in reconstruction accuracy. However, overall, the model demonstrates robustness and accuracy in ideal conditions, validating its effectiveness.

16.
Materials (Basel) ; 17(3)2024 Jan 28.
Article in English | MEDLINE | ID: mdl-38591434

ABSTRACT

Measuring the size distribution and temperature of high-temperature dispersed particles, particularly in-flame soot, holds paramount importance across various industries. Laser-induced incandescence (LII) stands out as a potent non-contact diagnostic technology for in-flame soot, although its effectiveness is hindered by uncertainties associated with pre-determined thermal properties. To tackle this challenge, our study proposes a multi-parameter inversion strategy: simultaneous inversion of the particle size distribution, thermal accommodation coefficient, and initial temperature of in-flame soot aggregates using time-resolved LII signals. Analyzing the responses of different heat transfer sub-models to temperature rise demonstrates the necessity of incorporating sublimation and thermionic emission for accurately reproducing LII signals of high-temperature dispersed particles. Consequently, we selected a particular LII model for the multi-parameter inversion strategy. Our research reveals that LII-based particle sizing is sensitive to biases in the initial particle temperature (equivalent to the flame temperature), underscoring the need for the proposed multi-parameter inversion strategy. Numerical results obtained at two typical flame temperatures, 1100 K and 1700 K, illustrate that selecting an appropriate laser fluence enables the simultaneous inversion of particle size distribution, thermal accommodation coefficient, and initial particle temperature of soot aggregates with high accuracy and confidence using the LII technique.

17.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(2): 262-271, 2024 Apr 25.
Article in Chinese | MEDLINE | ID: mdl-38686406

ABSTRACT

Accurate reconstruction of the tissue elasticity modulus distribution has always been an important challenge in ultrasound elastography. Considering that existing deep learning-based supervised reconstruction methods train only on simulated displacement data with random noise, which cannot fully capture the complexity and diversity of in-vivo ultrasound data, this study introduces displacement data obtained by tracking in-vivo ultrasound radio-frequency signals (i.e., real displacement data) during training, employing a semi-supervised approach to enhance the prediction accuracy of the model. Experimental results indicate that in phantom experiments, the semi-supervised model augmented with real displacement data provides more accurate predictions, with mean absolute errors and mean relative errors both around 3%, while the corresponding errors for the fully supervised model are around 5%. When processing real displacement data, the prediction-error area of the semi-supervised model was smaller than that of the fully supervised model. These findings confirm the effectiveness and practicality of the proposed approach, providing new insights for the application of deep learning methods to the reconstruction of elasticity distributions from in-vivo ultrasound data.


Subject(s)
Elastic Modulus , Elasticity Imaging Techniques , Image Processing, Computer-Assisted , Neural Networks, Computer , Phantoms, Imaging , Elasticity Imaging Techniques/methods , Image Processing, Computer-Assisted/methods , Humans , Algorithms , Deep Learning
18.
Genet Epidemiol ; 48(6): 270-288, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38644517

ABSTRACT

Genome-wide association studies (GWAS) typically use linear or logistic regression models to identify associations between phenotypes (traits) and genotypes (genetic variants) of interest. However, the use of regression with the additive assumption has potential limitations. First, the normality assumption on residuals rarely holds in practice, and deviation from normality increases the Type-I error rate. Second, building a model based on such an assumption ignores genetic structures such as dominant, recessive, and protective-risk cases; ignoring these structures may result in spurious conclusions about the association between a variant and a trait. We propose an assumption-free model built upon data-consistent inversion (DCI), a recently developed measure-theoretic framework for uncertainty quantification. The proposed DCI-derived model builds a nonparametric distribution on model inputs that propagates to the distribution of observed data, without requiring the normality assumption on residuals in the regression model. This characteristic enables the DCI-derived model to cover all genetic variants without relying on the additivity of the classic-GWAS model. Simulations and a replication GWAS with data from COPDGene demonstrate the ability of this model to control the Type-I error rate at least as well as the classic-GWAS (additive linear model) approach while having similar or greater power to discover variants under different genetic modes of transmission.


Subject(s)
Genome-Wide Association Study , Models, Genetic , Genome-Wide Association Study/methods , Genome-Wide Association Study/statistics & numerical data , Humans , Computer Simulation , Polymorphism, Single Nucleotide , Phenotype , Models, Statistical , Genotype , Pulmonary Disease, Chronic Obstructive/genetics , Genetic Variation
19.
Physiol Meas ; 45(4)2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38624240

ABSTRACT

Objective. Electrical impedance tomography (EIT) is a noninvasive imaging method whereby electrical measurements on the periphery of a heterogeneous conductor are inverted to map its internal conductivity. The EIT method proposed here aims to improve computational speed and noise tolerance by introducing sensitivity volume as a figure-of-merit for comparing EIT measurement protocols. Approach. Each measurement is shown to correspond to a sensitivity vector in model space, such that the set of measurements, in turn, corresponds to a set of vectors that subtend a sensitivity volume in model space. A maximal sensitivity volume identifies the measurement protocol with the greatest sensitivity and greatest mutual orthogonality. A distinguishability criterion is generalized to quantify the increased noise tolerance of high-sensitivity measurements. Main result. The sensitivity volume method allows the model space dimension to be minimized to match that of the data space, and the data importance to be increased within an expanded space of measurements defined by an increased number of contacts. Significance. The reduction in model space dimension is shown to increase computational efficiency, accelerating tomographic inversion by several orders of magnitude, while the enhanced sensitivity tolerates noise levels up to several orders of magnitude higher than standard methods.
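A hedged sketch of the sensitivity-volume figure-of-merit under an assumed formalization: stacking each measurement's sensitivity vector as a row of S, the volume the rows subtend is the square root of the Gram determinant, so nearly collinear (mutually redundant) measurements score lower:

```python
import numpy as np

def sensitivity_volume(S):
    """S: (n_measurements, n_model_params) matrix of sensitivity vectors.
    Returns the n-dimensional volume spanned by the rows (Gram determinant)."""
    gram = S @ S.T
    return np.sqrt(max(np.linalg.det(gram), 0.0))

rng = np.random.default_rng(4)
proto_a = rng.normal(size=(8, 64))                           # well-spread sensitivities
proto_b = proto_a.copy()
proto_b[1:] = proto_a[0] + 0.01 * rng.normal(size=(7, 64))   # nearly collinear rows
print(sensitivity_volume(proto_a) > sensitivity_volume(proto_b))  # True
```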


Subject(s)
Algorithms , Tomography, X-Ray Computed , Electric Impedance , Tomography/methods , Electric Conductivity
20.
J Contam Hydrol ; 262: 104323, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38430692

ABSTRACT

While dozens of studies have attempted to estimate the Monod kinetic parameters of microbial reductive dechlorination, published values in the literature vary by 2-6 orders of magnitude. This lack of consensus can be attributed in part to limitations of both experimental design and parameter estimation techniques. To address these issues, Hamiltonian Monte Carlo was used to produce more than one million sets of realistic simulated microcosm data under a variety of experimental conditions. These data were then employed in model fitting experiments using a number of parameter estimation algorithms for determining Monod kinetic parameters. Analysis of data from conventional triplicate microcosms yielded parameter estimates characterized by high collinearity, resulting in poor estimation accuracy and precision. Additionally, confidence intervals computed by commonly used classical regression analysis techniques contained true parameter values much less frequently than their nominal confidence levels. Use of an alternative experimental design, requiring the same number of analyses as conventional experiments but comprised of microcosms with varying initial chlorinated ethene concentrations, is shown to result in order-of-magnitude decreases in parameter uncertainty. A Metropolis algorithm which can be run on a typical personal computer is demonstrated to return more reliable parameter interval estimates.
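A minimal sketch of a random-walk Metropolis sampler for Monod kinetic parameters under assumed conditions (rate = mu_max * S / (K_s + S), known Gaussian noise, flat priors on positive values); this is not the study's Hamiltonian Monte Carlo data generator or its specific algorithm:

```python
import numpy as np

def log_likelihood(theta, S, rates, sigma=0.05):
    """Gaussian log-likelihood of observed rates under a Monod rate model."""
    mu_max, Ks = theta
    if mu_max <= 0 or Ks <= 0:
        return -np.inf                        # enforce positivity (flat prior on >0)
    pred = mu_max * S / (Ks + S)
    return -0.5 * np.sum((rates - pred) ** 2) / sigma**2

def metropolis(S, rates, n_steps=20_000, prop_scale=0.05, seed=5):
    rng = np.random.default_rng(seed)
    theta = np.array([1.0, 1.0])              # starting guess for (mu_max, K_s)
    logp = log_likelihood(theta, S, rates)
    samples = []
    for _ in range(n_steps):
        cand = theta + prop_scale * rng.normal(size=2)   # random-walk proposal
        logp_cand = log_likelihood(cand, S, rates)
        if np.log(rng.random()) < logp_cand - logp:      # accept/reject step
            theta, logp = cand, logp_cand
        samples.append(theta.copy())
    return np.array(samples)

S = np.linspace(0.1, 5.0, 30)                 # substrate concentrations
rates = 0.8 * S / (0.5 + S) + 0.05 * np.random.default_rng(6).normal(size=S.size)
chain = metropolis(S, rates)
print(chain[5000:].mean(axis=0))              # posterior means near [0.8, 0.5]
```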


Subject(s)
Algorithms , Kinetics , Monte Carlo Method , Uncertainty