Results 1 - 20 of 542
1.
J Contam Hydrol ; 266: 104418, 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39217676

ABSTRACT

Scarcity of stream salinity data poses a challenge to understanding salinity dynamics and their implications for water supply management in water-scarce, salt-prone regions around the world. This paper introduces a framework for generating continuous daily stream salinity estimates using instance-based transfer learning (TL) and assessing the reliability of the synthetic salinity data through uncertainty quantification via prediction intervals (PIs). The framework was developed using two temporally distinct specific conductance (SC) datasets from the Upper Red River Basin (URRB) located in southwestern Oklahoma and the Texas Panhandle, United States. The instance-based TL approach was implemented by calibrating Feedforward Neural Networks (FFNNs) on a source SC dataset of around 1200 instantaneous grab samples collected by the United States Geological Survey (USGS) from 1959 to 1993. The trained FFNNs were subsequently tested on a target dataset (1998-present) of 220 instantaneous grab samples collected by the Oklahoma Water Resources Board (OWRB). The framework's generalizability was assessed in the data-rich Bird Creek watershed in Oklahoma by manipulating continuous SC data to simulate data-scarce conditions for training the models and using the complete Bird Creek dataset for model evaluation. The Lower Upper Bound Estimation (LUBE) method was used with FFNNs to estimate PIs for uncertainty quantification. Autoregressive SC prediction methods via FFNN were found to be reliable, with Nash-Sutcliffe Efficiency (NSE) values of 0.65 and 0.45 on in-sample and out-of-sample test data, respectively. The same modeling scenario resulted in an NSE of 0.54 for the Bird Creek data using a similar missing data ratio, whereas a higher ratio of observed data increased the accuracy (NSE = 0.84). The relatively narrow estimated PIs for the North Fork Red River in the URRB indicated satisfactory stream salinity predictions, showing an average width equivalent to 25% of the observed range at a confidence level of 70%.
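The NSE values above follow the standard Nash-Sutcliffe definition (one minus the ratio of residual to total variance about the observed mean). A minimal sketch, with illustrative conductance values that are not taken from the paper's data:

```python
import numpy as np

def nash_sutcliffe_efficiency(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model
    predicts no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    residual_ss = np.sum((observed - simulated) ** 2)
    total_ss = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual_ss / total_ss

# Illustrative daily specific-conductance values (uS/cm), not real data
obs = np.array([820.0, 910.0, 1005.0, 980.0, 1100.0])
sim = np.array([800.0, 950.0, 990.0, 1010.0, 1080.0])
print(f"NSE = {nash_sutcliffe_efficiency(obs, sim):.2f}")
```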

2.
Int J Numer Method Biomed Eng ; : e3864, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39250194

ABSTRACT

Heat transfer in the human eyeball, a complex organ, is significantly influenced by various pathophysiological and external parameters. Particularly, heat transfer critically affects fluid behavior within the eye and ocular drug delivery processes. Overcoming the challenges of experimental analysis, this study introduces a comprehensive three-dimensional mathematical and computational model to simulate the heat transfer in a realistic geometry. Our work includes an extensive sensitivity analysis to address uncertainties and delineate the impact of different variables on heat distribution in ocular tissues. To manage the model's complexity, we employed a very fast model reduction technique with certified sharp error bounds, ensuring computational efficiency without compromising accuracy. Our results demonstrate remarkable consistency with experimental observations and align closely with existing numerical findings in the literature. Crucially, our findings underscore the significant role of blood flow and environmental conditions, particularly in the eye's internal tissues. Clinically, this model offers a promising tool for examining the temperature-related effects of various therapeutic interventions on the eye. Such insights are invaluable for optimizing treatment strategies in ophthalmology.

3.
Int J Numer Method Biomed Eng ; : e3867, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39239830

ABSTRACT

The Windkessel (WK) model is a simplified mathematical model used to represent the systemic arterial circulation. While the WK model is useful for studying blood flow dynamics, it suffers from inaccuracies or uncertainties that should be considered when using it to make physiological predictions. This paper aims to develop an efficient and easy-to-implement uncertainty quantification method based on a local gradient-based formulation to quantify the uncertainty of the pressure waveform resulting from aleatory uncertainties of the WK parameters and flow waveform. The proposed methodology, tested against Monte Carlo simulations, demonstrates good agreement in estimating blood pressure uncertainties due to uncertain Windkessel parameters, but less agreement when considering uncertain blood-flow waveforms. To illustrate our methodology's applicability, we assessed the aortic pressure uncertainty generated by Windkessel parameter sets from an available in silico database representing healthy adults. The results from the proposed formulation align qualitatively with those in the database and with in vivo data. Furthermore, we investigated how changes in the uncertainty of the Windkessel parameters affect the uncertainty of systolic, diastolic, and pulse pressures. We found that peripheral resistance uncertainty produces the most significant change in the systolic and diastolic blood pressure uncertainties. On the other hand, compliance uncertainty considerably modifies the pulse pressure standard deviation. The presented expansion-based method is a tool for efficiently propagating the Windkessel parameters' uncertainty to the pressure waveform. The Windkessel model's clinical use depends on the reliability of the pressure in the presence of input uncertainties, which can be efficiently investigated with the proposed methodology. For instance, in wearable technology that uses sensor data and the Windkessel model to estimate systolic and diastolic blood pressures, it is important to check the confidence level in these calculations to ensure that the pressures accurately reflect the patient's cardiovascular condition.
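The Monte Carlo reference against which the authors test their expansion could look roughly like the sketch below, which propagates aleatory parameter uncertainty through a two-element Windkessel model. The model order, inflow waveform, parameter values, and uncertainty magnitudes are all assumptions for illustration, not the paper's setup:

```python
import numpy as np

def wk2_pressure(q, t, R, C, p0=80.0):
    """Forward-Euler integration of the two-element Windkessel ODE
    dP/dt = Q(t)/C - P/(R*C) on a uniform time grid."""
    dt = t[1] - t[0]
    p = np.empty_like(t)
    p[0] = p0
    for i in range(len(t) - 1):
        p[i + 1] = p[i] + dt * (q[i] / C - p[i] / (R * C))
    return p

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.8, 400)                        # one cardiac cycle [s]
q = np.maximum(0.0, 400.0 * np.sin(np.pi * t / 0.3))  # toy inflow [mL/s]
q[t > 0.3] = 0.0                                      # inflow only in systole

# Sample uncertain resistance and compliance (assumed means and spreads)
samples = [wk2_pressure(q, t, R=rng.normal(1.0, 0.05), C=rng.normal(1.2, 0.06))
           for _ in range(2000)]
p_std = np.std(samples, axis=0)
print(f"max pressure standard deviation over the cycle: {p_std.max():.2f} mmHg")
```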

4.
Sensors (Basel) ; 24(15)2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39123913

ABSTRACT

Validation is a critical aspect of product development for meeting design goals and mitigating risk in the face of considerable cost and time commitments. In this research article, uncertainty quantification (UQ) for efficiency testing of an Electric Drive Unit (EDU) is demonstrated, considering confidence in simulations with respect to the validation campaign. The methodology used for UQ is consistent with the framework described in the Guide to the Expression of Uncertainty in Measurement (GUM). An analytical evaluation of the measurement chain involved in EDU efficiency testing was performed, and elemental uncertainties were derived and later propagated to the derived quantity of efficiency. Associating uncertainties with the measurements highlighted erroneous sensor readings in the measurement chain. These results were used for the assessment of requirement coverage and the validation of test results.
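For a purely multiplicative measurement model, GUM first-order propagation combines relative standard uncertainties in quadrature. A minimal sketch assuming a hypothetical efficiency model eta = (T_out * w_out) / (U_in * I_in) with made-up sensor uncertainties (the paper's actual measurement chain is not reproduced here):

```python
import numpy as np

# Hypothetical EDU efficiency model: eta = (T_out * w_out) / (U_in * I_in).
# For a product/quotient model, first-order GUM propagation combines the
# relative standard uncertainties of the inputs in quadrature.
T_out, w_out = 250.0, 300.0   # output torque [Nm] and speed [rad/s] (assumed)
U_in, I_in = 400.0, 200.0     # DC-link voltage [V] and current [A] (assumed)
u_rel = np.array([2e-3, 5e-4, 1e-3, 1.5e-3])  # assumed relative sensor u's

eta = (T_out * w_out) / (U_in * I_in)
u_eta = eta * np.sqrt(np.sum(u_rel ** 2))
print(f"eta = {eta:.4f} +/- {u_eta:.4f} (k=1; multiply by k=2 for ~95%)")
```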

5.
Sensors (Basel) ; 24(15)2024 Jul 27.
Article in English | MEDLINE | ID: mdl-39123931

ABSTRACT

This paper presents a novel adaptation of the conventional approximate Bayesian computation sequential Monte Carlo (ABC-SMC) sampling algorithm for parameter estimation in the presence of uncertainties, coined combinatorial ABC-SMC. Inference of this type is used in situations where no closed form of the associated likelihood function exists, and it is replaced by a simulating model capable of producing artificial data. In the literature, conventional ABC-SMC is utilised to perform inference on continuous parameters. The novel scheme presented here has been developed to perform inference on parameters that are high-dimensional and binary, rather than continuous. By altering the form of the proposal distribution from which to sample candidates in subsequent iterations (referred to as waves), high-dimensional binary variables may be targeted and inferred by the scheme. The efficacy of the proposed scheme is demonstrated through application to vibration data obtained in a structural dynamics experiment on a fibre-optic sensor simulated as a finite plate with uncertain boundary conditions at its edges. Results indicate that the method provides sound inference on the plate boundary conditions, which is validated through subsequent application of the method to multiple vibration datasets. Comparisons between appropriate forms of the metric function used in the scheme are also developed to highlight the effect of this element on the scheme's convergence.
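Setting aside the paper's finite-element plate simulator and the SMC waves, the core ABC idea over binary parameters can be sketched as a plain rejection sampler (wave 0). The forward model, summary metric, and tolerance below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta):
    """Hypothetical forward model: each active bit adds a fixed effect,
    plus noise (a stand-in for the paper's finite-element plate)."""
    effect = np.array([1.0, -0.5, 2.0, 0.7])  # per-bit contribution (assumed)
    return theta @ effect + rng.normal(0.0, 0.1, size=200)

true_theta = np.array([1, 0, 1, 0])  # e.g. clamped/free boundary flags
data = simulate(true_theta)

def distance(x, y):
    return abs(x.mean() - y.mean())  # a simple summary-statistic metric

# Wave 0: uniform proposals over binary vectors; later waves would flip
# bits of accepted particles with small probability (the combinatorial step).
accepted = []
while len(accepted) < 500:
    theta = rng.integers(0, 2, size=4)
    if distance(simulate(theta), data) < 0.05:
        accepted.append(theta)

print("posterior bit probabilities:", np.round(np.mean(accepted, axis=0), 2))
```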

6.
Philos Trans A Math Phys Eng Sci ; 382(2279): 20230364, 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39129401

ABSTRACT

Locally resonant metamaterials (LRMs) have recently emerged in the search for lightweight noise and vibration solutions. These materials have the ability to create stop bands, which arise from the sub-wavelength addition of identical resonators to a host structure and result in strong vibration attenuation. However, their manufacturing inevitably introduces variability, such that the system as-manufactured often deviates significantly from the original as-designed. This can reduce attenuation performance, but may also broaden the attenuation band. This work focuses on the impact of variability within tolerance ranges in resonator properties on the vibration attenuation in metamaterial beams. Following a qualitative pre-study, two non-intrusive uncertainty propagation approaches are applied to find the upper and lower bounds of three performance metrics, by evaluating deterministic metamaterial models with uncertain parameters defined as interval variables. A global search approach is used and compared with a machine learning (ML)-based uncertainty propagation approach which significantly reduces the required number of simulations. Variability in resonator stiffnesses and masses is found to have the highest impact. Variability in the resonator positions only has a comparable impact for less deep sub-wavelength designs. The broadening potential of varying resonator properties is exploited in broadband optimization and the robustness of the optimized metamaterial is assessed. This article is part of the theme issue 'Current developments in elastic and acoustic metamaterials science (Part 2)'.

7.
J Sleep Res ; : e14300, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39112022

ABSTRACT

Wearable electroencephalography devices emerge as a cost-effective and ergonomic alternative to gold-standard polysomnography, paving the way for better health monitoring and sleep disorder screening. Machine learning makes it possible to automate sleep stage classification, but trust and reliability issues have hampered its adoption in clinical applications. Estimating uncertainty is a crucial factor in enhancing reliability by identifying regions of heightened and diminished confidence. In this study, we used an uncertainty-centred machine learning pipeline, U-PASS, to automate sleep staging in a challenging real-world dataset of single-channel electroencephalography and accelerometry collected with a wearable device from an elderly population. We were able to effectively limit the uncertainty of our machine learning model and to reliably inform clinical experts which predictions were uncertain, improving the model's reliability. This increased the five-stage sleep-scoring accuracy of a state-of-the-art machine learning model from 63.9% to 71.2% on our dataset. Remarkably, the machine learning approach outperformed the human expert in interpreting these wearable data. Manual review by sleep specialists, without specific training for sleep staging on wearable electroencephalography, proved ineffective. The clinical utility of this automated remote monitoring system was also demonstrated, establishing a strong correlation between the predicted sleep parameters and the reference polysomnography parameters, and reproducing known correlations with the apnea-hypopnea index. In essence, this work presents a promising avenue to revolutionize remote patient care through the power of machine learning, by means of an automated data-processing pipeline enhanced with uncertainty estimation.
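The deferral logic described above can be sketched as selective prediction: score each epoch's softmax output by entropy and hand the most uncertain fraction to a human reviewer. The data below are synthetic and the 20% deferral rate is an assumption; U-PASS itself is considerably more involved:

```python
import numpy as np

rng = np.random.default_rng(2)

def predictive_entropy(probs):
    """Shannon entropy of each row of class probabilities."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

# Synthetic stand-in for a 5-stage sleep stager's softmax outputs
n = 1000
probs = rng.dirichlet(alpha=np.full(5, 0.5), size=n)
labels = np.array([rng.choice(5, p=p) for p in probs])  # labels tied to probs
preds = probs.argmax(axis=1)

# Defer the 20% most uncertain epochs to an expert; keep the rest
h = predictive_entropy(probs)
keep = h <= np.quantile(h, 0.8)
print(f"accuracy on all epochs:      {(preds == labels).mean():.3f}")
print(f"accuracy on retained epochs: {(preds[keep] == labels[keep]).mean():.3f}")
```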

8.
J Math Imaging Vis ; 66(4): 697-717, 2024.
Article in English | MEDLINE | ID: mdl-39156696

ABSTRACT

We consider the problem of blob detection for uncertain images, such as images that have to be inferred from noisy measurements. Extending recent work motivated by astronomical applications, we propose an approach that represents the uncertainty in the position and size of a blob by a region in a three-dimensional scale space. Motivated by classic tube methods such as the taut-string algorithm, these regions are obtained from level sets of the minimizer of a total variation functional within a high-dimensional tube. The resulting non-smooth optimization problem is challenging to solve, and we compare various numerical approaches for its solution and relate them to the literature on constrained total variation denoising. Finally, the proposed methodology is illustrated on numerical experiments for deconvolution and models related to astrophysics, where it is demonstrated that the approach represents the uncertainty in the detected blobs in a precise and physically interpretable way.

9.
Mol Pharm ; 21(9): 4356-4371, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39132855

ABSTRACT

We present a novel computational approach for predicting human pharmacokinetics (PK) that addresses the challenges of early-stage drug design. Our study introduces and describes a large-scale dataset of 11 clinical PK end points, encompassing over 2700 unique chemical structures, to train machine learning models. To that end, multiple advanced training strategies are compared, including the integration of in vitro data and a novel self-supervised pretraining task. In addition to the predictions, our final model provides meaningful epistemic uncertainties for every data point. This allows us to successfully identify regions of exceptional predictive performance, with an absolute average fold error (AAFE/geometric mean fold error) of less than 2.5 across multiple end points. Together, these advancements represent a significant leap toward actionable PK predictions, which can be utilized early on in the drug design process to expedite development and reduce reliance on nonclinical studies.
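AAFE (the geometric mean fold error) is a standard PK metric: ten raised to the mean absolute log10 ratio of predicted to observed values. A minimal sketch with made-up clearance values:

```python
import numpy as np

def aafe(predicted, observed):
    """Absolute average fold error: 10 ** mean(|log10(pred / obs)|).
    1.0 is perfect; values below 2 are commonly deemed acceptable in PK."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 10.0 ** np.mean(np.abs(np.log10(predicted / observed)))

pred = np.array([12.0, 0.8, 150.0, 3.1])  # hypothetical predicted values
obs = np.array([10.0, 1.0, 100.0, 3.0])   # matching observed values
print(f"AAFE = {aafe(pred, obs):.2f}")
```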


Subject(s)
Drug Design , Machine Learning , Humans , Pharmacokinetics , Pharmaceutical Preparations/chemistry
10.
Entropy (Basel) ; 26(8)2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39202104

ABSTRACT

Deep learning approaches have been gaining importance in several applications. However, the widespread use of these methods in safety-critical domains, such as Autonomous Driving, is still dependent on their reliability and trustworthiness. The goal of this paper is to provide a review of deep learning-based uncertainty methods and their applications to support perception tasks for Autonomous Driving. We detail significant Uncertainty Quantification and calibration methods, and their contributions and limitations, as well as important metrics and concepts. We present an overview of the state of the art of out-of-distribution detection and active learning, where uncertainty estimates are commonly applied. We show how these methods have been applied in the automotive context, providing a comprehensive analysis of reliable AI for Autonomous Driving. Finally, challenges and opportunities for future work are discussed for each topic.

11.
ArXiv ; 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39184536

ABSTRACT

When predicting physical phenomena through simulation, quantification of the total uncertainty due to multiple sources is as crucial as making sure the underlying numerical model is accurate. Possible sources include irreducible aleatoric uncertainty due to noise in the data, epistemic uncertainty induced by insufficient data or inadequate parameterization, and model-form uncertainty related to the use of misspecified model equations. In addition, recently proposed approaches provide flexible ways to combine information from data with full or partial satisfaction of equations that typically encode physical principles. Physics-based regularization interacts in nontrivial ways with aleatoric, epistemic and model-form uncertainty and their combination, and a better understanding of this interaction is needed to improve the predictive performance of physics-informed digital twins that operate under real conditions. To better understand this interaction, with a specific focus on biological and physiological models, this study investigates the decomposition of total uncertainty in the estimation of states and parameters of a differential system simulated with MC X-TFC, a new physics-informed approach for uncertainty quantification based on random projections and Monte-Carlo sampling. After an introductory comparison between approaches for physics-informed estimation, MC X-TFC is applied to a six-compartment stiff ODE system, the CVSim-6 model, developed in the context of human physiology. The system is first analyzed by progressively removing data while estimating an increasing number of parameters, and subsequently by investigating total uncertainty under model-form misspecification of non-linear resistance in the pulmonary compartment. In particular, we focus on the interaction between the formulation of the discrepancy term and quantification of model-form uncertainty, and show how additional physics can help in the estimation process. The method demonstrates robustness and efficiency in estimating unknown states and parameters, even with limited, sparse, and noisy data. It also offers great flexibility in integrating data with physics for improved estimation, even in cases of model misspecification.

12.
Comput Biol Med ; 180: 108990, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39126788

ABSTRACT

Segmentation in medical images is inherently ambiguous. It is crucial to capture the uncertainty in lesion segmentations to assist cancer diagnosis and further interventions. Recent works have made great progress in generating multiple plausible segmentation results as diversified references to account for the uncertainty in lesion segmentations. However, the efficiency of existing models is limited, and the uncertainty information lying in multi-annotated datasets remains to be fully utilized. In this study, we propose a series of methods to jointly address these limitations and leverage the abundant information in multi-annotated datasets: (1) a Customized T-time Inner Sampling Network to promote modeling flexibility and efficiently generate samples matching the ground-truth distribution of a number of annotators; (2) an Uncertainty Degree, defined for quantitatively measuring the uncertainty of each sample and the imbalance of the whole multi-annotated dataset from a brand-new perspective; (3) an Uncertainty-aware Data Augmentation Strategy to help probabilistic models adaptively fit samples with different ranges of uncertainty. We have evaluated each of them on both the publicly available lung nodule dataset and our in-house Liver Tumor dataset. Results show that our proposed methods achieve the overall best performance on both accuracy and efficiency, demonstrating great potential in lesion segmentation and more downstream tasks in real clinical scenarios.


Subject(s)
Lung Neoplasms , Humans , Uncertainty , Lung Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods , Algorithms , Databases, Factual
13.
Sci Rep ; 14(1): 16166, 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39003341

ABSTRACT

Machine learning is increasingly applied to Earth Observation (EO) data to obtain datasets that contribute towards international accords. However, these datasets contain inherent uncertainty that needs to be quantified reliably to avoid negative consequences. In response to the increased need to report uncertainty, we bring attention to the promise of conformal prediction within the domain of EO. Unlike previous uncertainty quantification methods, conformal prediction offers statistically valid prediction regions while concurrently supporting any machine learning model and data distribution. To support the need for conformal prediction, we reviewed EO datasets and found that only 22.5% of the datasets incorporated a degree of uncertainty information, with unreliable methods prevalent. Current open implementations require moving large amounts of EO data to the algorithms. We introduced Google Earth Engine native modules that bring conformal prediction to the data and compute, facilitating the integration of uncertainty quantification into existing traditional and deep learning modelling workflows. To demonstrate the versatility and scalability of these tools we apply them to valued EO applications spanning local to global extents, regression, and classification tasks. Subsequently, we discuss the opportunities arising from the use of conformal prediction in EO. We anticipate that accessible and easy-to-use tools, such as those provided here, will drive wider adoption of rigorous uncertainty quantification in EO, thereby enhancing the reliability of downstream uses such as operational monitoring and decision-making.
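Split conformal prediction, the core of what such modules implement, needs only a held-out calibration set and a finite-sample quantile of the residuals. A self-contained sketch on synthetic data (the random forest and toy target are stand-ins, not the paper's Earth Engine modules):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic regression data standing in for an EO variable
X = rng.uniform(0, 1, size=(2000, 5))
y = 10 * X[:, 0] + np.sin(6 * X[:, 1]) + rng.normal(0, 0.5, size=2000)

X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Calibrate: a finite-sample quantile of absolute residuals yields intervals
# with >= 90% marginal coverage, for any model and data distribution
alpha = 0.1
scores = np.abs(y_cal - model.predict(X_cal))
n = len(scores)
qhat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

x_new = rng.uniform(0, 1, size=(1, 5))
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - qhat:.2f}, {pred + qhat:.2f}]")
```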

14.
Patterns (N Y) ; 5(6): 100991, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-39005492

ABSTRACT

Deep-learning-based classification models are increasingly used for predicting molecular properties in drug development. However, traditional classification models using the Softmax function often give overconfident mispredictions for out-of-distribution samples, highlighting a critical lack of accurate uncertainty estimation. Such limitations can result in substantial costs and should be avoided during drug development. Inspired by advances in evidential deep learning and Posterior Network, we replaced the Softmax function with a normalizing flow to enhance the uncertainty estimation ability of the model in molecular property classification. The proposed strategy was evaluated across diverse scenarios, including simulated experiments based on a synthetic dataset, ADMET predictions, and ligand-based virtual screening. The results demonstrate that compared with the vanilla model, the proposed strategy effectively alleviates the problem of giving overconfident but incorrect predictions. Our findings support the promising application of evidential deep learning in drug development and offer a valuable framework for further research.

15.
Phys Med Biol ; 69(15)2024 Jul 19.
Article in English | MEDLINE | ID: mdl-38981594

ABSTRACT

Objective. Deep learning models that aid in medical image assessment tasks must be both accurate and reliable to be deployed within clinical settings. While deep learning models have been shown to be highly accurate across a variety of tasks, measures that indicate the reliability of these models are less established. Increasingly, uncertainty quantification (UQ) methods are being introduced to inform users on the reliability of model outputs. However, most existing methods cannot be added to previously validated models because they are not post hoc, and they change a model's output. In this work, we overcome these limitations by introducing a novel post hoc UQ method, termed Local Gradients UQ, and demonstrate its utility for deep learning-based metastatic disease delineation. Approach. This method leverages a trained model's localized gradient space to assess sensitivities to trained model parameters. We compared the Local Gradients UQ method to non-gradient measures defined using model probability outputs. The performance of each uncertainty measure was assessed in four clinically relevant experiments: (1) response to artificially degraded image quality, (2) comparison between matched high- and low-quality clinical images, (3) false positive (FP) filtering, and (4) correspondence with physician-rated disease likelihood. Main results. (1) Response to artificially degraded image quality was enhanced by the Local Gradients UQ method, where the median percent difference between matching lesions in non-degraded and most degraded images was consistently higher for the Local Gradients uncertainty measure than for the non-gradient uncertainty measures (e.g. 62.35% vs. 2.16% for additive Gaussian noise). (2) The Local Gradients UQ measure responded better to high- and low-quality clinical images (p < 0.05 vs. p > 0.1 for both non-gradient uncertainty measures). (3) FP filtering performance was enhanced by the Local Gradients UQ method when compared to the non-gradient methods, increasing the area under the receiver operating characteristic curve (ROC AUC) by 20.1% and decreasing the false positive rate by 26%. (4) The Local Gradients UQ method also showed more favorable correspondence with physician-rated likelihood for malignant lesions, increasing the ROC AUC for correspondence with physician-rated disease likelihood by 16.2%. Significance. In summary, this work introduces and validates a novel gradient-based UQ method for deep learning-based medical image assessments to enhance user trust when using deployed clinical models.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Uncertainty , Humans , Image Processing, Computer-Assisted/methods
16.
BMC Bioinformatics ; 25(1): 240, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39014339

ABSTRACT

BACKGROUND: Identification of human leukocyte antigen (HLA) types from DNA-sequenced human samples is important in organ transplantation and cancer immunotherapy, and it remains a challenging task given the sequence homology and extreme polymorphism of HLA genes. RESULTS: We present Orthanq, a novel statistical model and corresponding application for transparent and uncertainty-aware quantification of haplotypes. We utilize our approach to perform HLA typing while, for the first time, reporting the uncertainty of predictions and transparently observing mutations beyond reported HLA types. Using 99 gold-standard samples from the 1000 Genomes, Illumina Platinum Genomes and Genome In a Bottle projects, we show that Orthanq can provide overall superior accuracy and shorter runtimes than state-of-the-art HLA typers. CONCLUSIONS: Orthanq is the first approach that can directly utilize existing pangenome alignments and type all HLA loci. Moreover, it can be generalized for uses beyond HLA typing, e.g. virus lineage quantification. Orthanq is available at https://orthanq.github.io.


Subject(s)
HLA Antigens , Haplotypes , Histocompatibility Testing , Humans , Haplotypes/genetics , HLA Antigens/genetics , Histocompatibility Testing/methods , Software , Uncertainty , Sequence Analysis, DNA/methods , Models, Statistical , Algorithms
17.
Sci Rep ; 14(1): 15237, 2024 07 02.
Article in English | MEDLINE | ID: mdl-38956095

ABSTRACT

Pharmacodynamic (PD) models are mathematical models of cellular reaction networks that include drug mechanisms of action. These models are useful for studying predictive therapeutic outcomes of novel drug therapies in silico. However, PD models are known to possess significant uncertainty with respect to constituent parameter data, leading to uncertainty in the model predictions. Furthermore, experimental data to calibrate these models is often limited or unavailable for novel pathways. In this study, we present a Bayesian optimal experimental design approach for improving PD model prediction accuracy. We then apply our method using simulated experimental data to account for uncertainty in hypothetical laboratory measurements. This leads to a probabilistic prediction of drug performance and a quantitative measure of which prospective laboratory experiment will optimally reduce prediction uncertainty in the PD model. The methods proposed here provide a way forward for uncertainty quantification and guided experimental design for models of novel biological pathways.
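The core preposterior calculation behind Bayesian optimal experimental design can be sketched on a one-parameter toy model: for each candidate measurement, simulate data from the prior and average the resulting posterior variance, then pick the measurement that minimizes it. The exponential-decay model, noise level, and candidate times below are hypothetical, not the paper's PD network:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy PD model y(t) = exp(-k * t) with one uncertain parameter k on a grid
k_grid = np.linspace(0.1, 2.0, 200)
prior = np.full_like(k_grid, 1.0 / len(k_grid))
sigma = 0.05                          # assumed measurement noise
candidate_times = [0.5, 1.0, 2.0, 4.0]

def expected_posterior_var(t, n_outer=300):
    """Average posterior variance of k over data simulated from the prior."""
    variances = []
    for _ in range(n_outer):
        k_true = rng.choice(k_grid, p=prior)            # draw from the prior
        y = np.exp(-k_true * t) + rng.normal(0.0, sigma)  # simulate the datum
        like = np.exp(-0.5 * ((y - np.exp(-k_grid * t)) / sigma) ** 2)
        post = like * prior
        post /= post.sum()
        mean = np.sum(post * k_grid)
        variances.append(np.sum(post * (k_grid - mean) ** 2))
    return np.mean(variances)

best = min(candidate_times, key=expected_posterior_var)
print(f"most informative measurement time: t = {best}")
```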


Subject(s)
Bayes Theorem , Uncertainty , Models, Biological , Computer Simulation , Humans , Signal Transduction
18.
J Imaging Inform Med ; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38980624

ABSTRACT

Reliable and trustworthy artificial intelligence (AI), particularly in high-stake medical diagnoses, necessitates effective uncertainty quantification (UQ). Existing UQ methods using model ensembles often introduce invalid variability or computational complexity, rendering them impractical and ineffective in clinical workflow. We propose a UQ approach based on deep neuroevolution (DNE), a data-efficient optimization strategy. Our goal is to replicate trends observed in expert-based UQ. We focused on language lateralization maps from resting-state functional MRI (rs-fMRI). Fifty rs-fMRI maps were divided into training/testing (30:20) sets, representing two labels: "left-dominant" and "co-dominant." DNE facilitated acquiring an ensemble of 100 models with high training and testing set accuracy. Model uncertainty was derived from distribution entropies over the 100 model predictions. Expert reviewers provided user-based uncertainties for comparison. Model (epistemic) and user-based (aleatoric) uncertainties were consistent in the independently and identically distributed (IID) testing set, mainly indicating low uncertainty. In a mostly out-of-distribution (OOD) holdout set, both model and user-based entropies correlated but displayed a bimodal distribution, with one peak representing low and another high uncertainty. We also found a statistically significant positive correlation between epistemic and aleatoric uncertainties. DNE-based UQ effectively mirrored user-based uncertainties, particularly highlighting increased uncertainty in OOD images. We conclude that DNE-based UQ correlates with expert assessments, making it reliable for our use case and potentially for other radiology applications.
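The entropy over ensemble votes described above is simple to reproduce; the vote counts below are invented to show the contrast between an IID near-consensus case and an OOD split case:

```python
import numpy as np

def vote_entropy(votes):
    """Entropy (bits) of an ensemble's label distribution for one map:
    0 means all models agree; 1 means an even split over the two labels."""
    p = np.bincount(votes, minlength=2) / len(votes)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Invented votes from a 100-model ensemble: 0 = left-dominant, 1 = co-dominant
iid_case = np.array([0] * 97 + [1] * 3)   # IID test map: near-consensus
ood_case = np.array([0] * 55 + [1] * 45)  # OOD map: strong disagreement
print(f"IID entropy: {vote_entropy(iid_case):.3f} bits")
print(f"OOD entropy: {vote_entropy(ood_case):.3f} bits")
```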

19.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000999

ABSTRACT

This study utilizes artificial neural networks (ANNs) to estimate prediction intervals (PIs) for seismic performance assessment of buildings subjected to long-term ground motion. To address uncertainty quantification in structural health monitoring (SHM), quality-driven lower upper bound estimation (QD-LUBE) was adopted for global probabilistic assessment of damage at local and global levels, unlike traditional methods. A distribution-free machine learning model was adopted for enhanced reliability in quantifying uncertainty and ensuring robustness in post-earthquake probabilistic assessments and early warning systems. The distribution-free machine learning model is capable of quantifying uncertainty with high accuracy compared to previous methods such as the bootstrap method. This research demonstrates the efficacy of the QD-LUBE method in complex seismic risk assessment scenarios, thereby contributing a significant enhancement to building resilience and disaster management strategies. The study also validates the findings through fragility curve analysis, offering comprehensive insights into structural damage assessment and mitigation strategies.
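LUBE-style intervals are conventionally judged by two competing metrics, coverage (PICP) and normalized width (NMPIW), which the quality-driven loss trades off. A minimal sketch with invented interval estimates:

```python
import numpy as np

def picp(y, lower, upper):
    """Prediction interval coverage probability: fraction of observations
    that fall inside their estimated interval."""
    return np.mean((y >= lower) & (y <= upper))

def nmpiw(lower, upper, y_range):
    """Mean prediction interval width, normalized by the observed range."""
    return np.mean(upper - lower) / y_range

# Invented structural-response values with interval estimates
y = np.array([0.41, 0.70, 0.38, 0.72, 0.60])
lo = np.array([0.30, 0.45, 0.30, 0.55, 0.48])
hi = np.array([0.52, 0.68, 0.50, 0.85, 0.74])
print(f"PICP = {picp(y, lo, hi):.2f}, NMPIW = {nmpiw(lo, hi, y.max() - y.min()):.2f}")
```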

20.
Arthroplast Today ; 27: 101396, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39071822

ABSTRACT

Hip and knee arthroplasty are high-volume procedures undergoing rapid growth. The large volume of procedures generates a vast amount of data available for next-generation analytics. Techniques in the field of artificial intelligence (AI) can assist in large-scale pattern recognition and lead to clinical insights. AI methodologies have become more prevalent in orthopaedic research. This review will first describe an overview of AI in the medical field, followed by a description of the 3 arthroplasty research areas in which AI is commonly used (risk modeling, automated radiographic measurements, arthroplasty registry construction). Finally, we will discuss the next frontier of AI research focusing on model deployment and uncertainty quantification.
