Results 1 - 20 of 220
1.
Bioinformatics ; 40(Supplement_1): i140-i150, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38940126

ABSTRACT

MOTIVATION: Metastasis formation is a hallmark of cancer lethality. Yet, metastases are generally unobservable during their early stages of dissemination and spread to distant organs. Genomic datasets of matched primary tumors and metastases may offer insights into the underpinnings and the dynamics of metastasis formation. RESULTS: We present metMHN, a cancer progression model designed to deduce the joint progression of primary tumors and metastases using cross-sectional cancer genomics data. The model elucidates the statistical dependencies among genomic events, the formation of metastasis, and the clinical emergence of both primary tumors and their metastatic counterparts. metMHN enables the chronological reconstruction of mutational sequences and facilitates estimation of the timing of metastatic seeding. In a study of nearly 5000 lung adenocarcinomas, metMHN pinpointed TP53 and EGFR as mediators of metastasis formation. Furthermore, the study revealed that post-seeding adaptation is predominantly influenced by frequent copy number alterations. AVAILABILITY AND IMPLEMENTATION: All datasets and code are available on GitHub at https://github.com/cbg-ethz/metMHN.


Subject(s)
Genomics , Neoplasm Metastasis , Humans , Genomics/methods , Neoplasm Metastasis/genetics , Lung Neoplasms/genetics , Lung Neoplasms/pathology , Disease Progression , Neoplasms/genetics , Neoplasms/pathology , Adenocarcinoma of Lung/genetics , Adenocarcinoma of Lung/pathology , Mutation , Tumor Suppressor Protein p53/genetics , Tumor Suppressor Protein p53/metabolism , Cross-Sectional Studies , ErbB Receptors/genetics
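For readers unfamiliar with Mutual Hazard Network-style progression models, the sketch below illustrates the core mechanism: the rate of acquiring a genomic event is a baseline rate multiplied by the effects of events already present, and the competing rates give the probability of which event occurs next. The three-event matrix and the helper function are hypothetical illustrations, not the metMHN implementation (which is available at the GitHub link above).

```python
import numpy as np

# Hypothetical log-rate matrix for 3 events (e.g., TP53, EGFR, a CNA).
# Diagonal entries are baseline log-rates; off-diagonal theta[i, j] is the
# (log-scale) effect of an already-present event j on the rate of event i.
theta = np.array([
    [-1.0,  0.0,  0.5],
    [ 0.8, -2.0,  0.0],
    [ 1.2,  0.3, -0.5],
])

def event_rates(theta, present):
    """Rate of each not-yet-acquired event given the set of present events."""
    n = theta.shape[0]
    rates = {}
    for i in range(n):
        if i in present:
            continue
        log_rate = theta[i, i] + sum(theta[i, j] for j in present)
        rates[i] = np.exp(log_rate)
    return rates

# Probability that each remaining event is acquired next (competing exponential clocks).
present = {0}                      # event 0 already occurred
rates = event_rates(theta, present)
total = sum(rates.values())
for i, r in rates.items():
    print(f"event {i}: rate {r:.3f}, P(next) = {r / total:.3f}")
```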
2.
Sci Rep ; 14(1): 10434, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38714763

ABSTRACT

This paper presents the construction of intelligent systems for selecting the optimum concentration of geopolymer matrix components based on ranking optimality criteria. A peculiarity of the methodology is the replacement of discrete time intervals with a sequence of states; Markov chains represent a synthetic property that accumulates heterogeneous factors. The computational basis for the calculations was the digitization of experimental data on the strength properties of fly ashes collected from thermal power plants in the Czech Republic and used as additives in geopolymers. A database and a conceptual model of priority ranking have been developed that are suitable for determining the structure of relations among the main factors. Computational results are presented by studying geopolymer matrix structure formation kinetics under changing component concentrations in real time. Multicriteria optimization results for fly ash as an additive in metakaolin-based geopolymer composites show that the optimal composition of the geopolymer matrix within the selected variation range is 100 g metakaolin, 90 g potassium activator, 8 g silica fume, 2 g basalt fibers, and 50 g fly ash by weight. This composition gives the best mechanical, thermal, and technological properties.

3.
Brain Sci ; 14(5)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38790421

ABSTRACT

Information theory explains how systems encode and transmit information. This article examines the neuronal system, which processes information via neurons that react to stimuli and transmit electrical signals. Specifically, we focus on transfer entropy to measure the flow of information between sequences and explore its use in determining effective neuronal connectivity. We analyze the causal relationships between two discrete-time series, X := {X_t : t ∈ ℤ} and Y := {Y_t : t ∈ ℤ}, which take values in binary alphabets. When the bivariate process (X, Y) is a jointly stationary ergodic variable-length Markov chain with memory no larger than k, we demonstrate that the null hypothesis of the test (no causal influence) requires a zero transfer entropy rate. The plug-in estimator for this quantity is identified with the log-likelihood ratio test statistic. Since under the null hypothesis this estimator follows an asymptotic chi-squared distribution, it facilitates the calculation of p-values when applied to empirical data. The efficacy of the hypothesis test is illustrated with data simulated from a neuronal network model characterized by stochastic neurons with variable-length memory. The test results identify biologically relevant information, validating the underlying theory and highlighting the applicability of the method for understanding effective connectivity between neurons.
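A minimal sketch of the plug-in machinery described above, restricted for simplicity to a fixed memory k (rather than the variable-length Markov chains of the article): the plug-in transfer entropy rate is computed from empirical context counts, scaled into a likelihood-ratio (G) statistic, and compared against a chi-squared reference. The function name and the degrees-of-freedom bookkeeping are illustrative assumptions.

```python
import numpy as np
from collections import Counter
from scipy.stats import chi2

def transfer_entropy_test(x, y, k=1):
    """Plug-in transfer entropy rate X -> Y with memory k (binary series),
    plus a likelihood-ratio (G) test of the null 'no causal influence'."""
    n = len(y)
    counts = Counter()
    for t in range(k, n):
        counts[(tuple(y[t - k:t]), tuple(x[t - k:t]), y[t])] += 1
    N = sum(counts.values())

    # Marginal counts needed for the two conditional probabilities.
    c_yp_xp, c_yp_y, c_yp = Counter(), Counter(), Counter()
    for (yp, xp, yt), c in counts.items():
        c_yp_xp[(yp, xp)] += c
        c_yp_y[(yp, yt)] += c
        c_yp[yp] += c

    te = 0.0  # plug-in transfer entropy rate, in nats
    for (yp, xp, yt), c in counts.items():
        p_full = c / c_yp_xp[(yp, xp)]            # P(Y_t | Y_past, X_past)
        p_marg = c_yp_y[(yp, yt)] / c_yp[yp]      # P(Y_t | Y_past)
        te += (c / N) * np.log(p_full / p_marg)

    g_stat = 2.0 * N * te
    # Degrees of freedom for conditional independence of Y_t and X_past given Y_past,
    # assuming all binary contexts of length k are observed: 2^k strata, each (2^k - 1)*(2 - 1).
    df = (2 ** k) * (2 ** k - 1)
    return te, g_stat, chi2.sf(g_stat, df)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = (np.roll(x, 1) ^ (rng.random(5000) < 0.1)).astype(int)   # Y copies X with 10% flips
print(transfer_entropy_test(x, y, k=1))                       # large G, tiny p-value expected
```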

4.
J Am Med Inform Assoc ; 31(5): 1093-1101, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38472144

ABSTRACT

OBJECTIVE: To introduce 2 R-packages that facilitate conducting health economics research on OMOP-based data networks, aiming to standardize and improve the reproducibility, transparency, and transferability of health economic models. MATERIALS AND METHODS: We developed the software tools and demonstrated their utility by replicating a UK-based heart failure data analysis across 5 different international databases from Estonia, Spain, Serbia, and the United States. RESULTS: We examined treatment trajectories of 47 163 patients. The overall incremental cost-effectiveness ratio (ICER) for telemonitoring relative to standard of care was 57 472 €/QALY. Country-specific ICERs were 60 312 €/QALY in Estonia, 58 096 €/QALY in Spain, 40 372 €/QALY in Serbia, and 90 893 €/QALY in the US, which surpassed the established willingness-to-pay thresholds. DISCUSSION: Cost-effectiveness analysis currently lacks standard tools, is performed in an ad hoc manner, and relies heavily on published information that may not be specific to local circumstances. Published results often exhibit a narrow focus, centered on a single site, and provide only partial decision criteria, limiting their generalizability and comprehensive utility. CONCLUSION: We created 2 R-packages to pioneer cost-effectiveness analysis in OMOP CDM data networks. The first manages state definitions and database interaction, while the second focuses on Markov model learning and profile synthesis. We demonstrated their utility in a multisite heart failure study comparing telemonitoring and standard care, finding telemonitoring not cost-effective.


Subject(s)
Cost-Effectiveness Analysis , Heart Failure , Humans , United States , Cost-Benefit Analysis , Reproducibility of Results , Models, Economic , Heart Failure/therapy , Markov Chains
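The country-specific figures above are incremental cost-effectiveness ratios, i.e., incremental cost divided by incremental QALYs. The snippet below is only a generic illustration of that calculation with hypothetical per-patient totals; it is not the API of the two R packages described in the article.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio, in currency units per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-patient discounted totals for telemonitoring vs. standard of care.
cost_tm, qaly_tm = 14_500.0, 1.95      # euros, QALYs
cost_soc, qaly_soc = 9_800.0, 1.87
value = icer(cost_tm, qaly_tm, cost_soc, qaly_soc)
wtp = 50_000.0                          # illustrative willingness-to-pay threshold (EUR/QALY)
verdict = "cost-effective" if value <= wtp else "not cost-effective"
print(f"ICER = {value:,.0f} EUR/QALY -> {verdict} at a {wtp:,.0f} EUR/QALY threshold")
```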
5.
Biophys Rev ; 16(1): 29-56, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38495441

ABSTRACT

Single-cell analysis is currently one of the highest-resolution techniques available for studying biology. The large, complex datasets it generates have spurred numerous developments in computational biology, in particular the use of advanced statistics and machine learning. This review attempts to explain the deeper theoretical concepts that underpin current state-of-the-art analysis methods. Single-cell analysis is covered from the cell, through the instruments, to current and upcoming models. The aim of this review is to spread concepts that are not yet in common use, especially from topology and generative processes, and to show how new statistical models can be developed to capture more of the biology. This raises epistemological questions regarding our ontology and models, and some pointers are given to how natural language processing (NLP) may help overcome our cognitive limitations in understanding single-cell data.

6.
Proc Natl Acad Sci U S A ; 121(3): e2318989121, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38215186

ABSTRACT

The continuous-time Markov chain (CTMC) is the mathematical workhorse of evolutionary biology. Learning CTMC model parameters using modern, gradient-based methods requires the derivative of the matrix exponential evaluated at the CTMC's infinitesimal generator (rate) matrix. Motivated by the derivative's extreme computational complexity as a function of state space cardinality, recent work demonstrates the surprising effectiveness of a naive, first-order approximation for a host of problems in computational biology. In response to this empirical success, we obtain rigorous deterministic and probabilistic bounds for the error accrued by the naive approximation and establish a "blessing of dimensionality" result that is universal for a large class of rate matrices with random entries. Finally, we apply the first-order approximation within surrogate-trajectory Hamiltonian Monte Carlo for the analysis of the early spread of Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) across 44 geographic regions that comprise a state space of unprecedented dimensionality for unstructured (flexible) CTMC models within evolutionary biology.


Subject(s)
COVID-19 , SARS-CoV-2 , Humans , Algorithms , COVID-19/epidemiology , Markov Chains
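The quantity at issue above is the directional (Fréchet) derivative of the matrix exponential at the rate matrix. The sketch below compares the exact derivative, available in SciPy, with a simple first-order surrogate, here taken to be expm(Q) @ E, which is exact when Q and the perturbation direction E commute; whether this is precisely the approximation analyzed in the paper is an assumption on my part, and the random generator matrix is purely illustrative.

```python
import numpy as np
from scipy.linalg import expm_frechet

def random_rate_matrix(n, rng):
    """Random CTMC generator: nonnegative off-diagonal entries, rows summing to zero."""
    Q = rng.random((n, n))
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

rng = np.random.default_rng(1)
Q = random_rate_matrix(6, rng)
E = np.zeros((6, 6))
E[0, 1], E[0, 0] = 1.0, -1.0      # perturb one off-diagonal rate, keep rows summing to zero

# Exact directional (Frechet) derivative of the matrix exponential at Q in direction E.
P, dP_exact = expm_frechet(Q, E)

# Naive surrogate: expm(Q) @ E, which coincides with the exact derivative if Q and E commute.
dP_naive = P @ E

rel_err = np.linalg.norm(dP_exact - dP_naive) / np.linalg.norm(dP_exact)
print(f"relative error of the naive surrogate: {rel_err:.3f}")
```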
7.
Spine J ; 24(1): 21-31, 2024 01.
Article in English | MEDLINE | ID: mdl-37302415

ABSTRACT

BACKGROUND CONTEXT: Degenerative cervical myelopathy (DCM) is a form of acquired spinal cord compression and contributes to reduced quality of life secondary to neurological dysfunction and pain. There remains uncertainty regarding optimal management for individuals with mild myelopathy. Specifically, owing to the lack of long-term natural history studies in this population, we do not know whether these individuals should be treated with initial surgery or observation. PURPOSE: We sought to perform a cost-utility analysis examining early surgery for mild degenerative cervical myelopathy from the healthcare payer perspective. STUDY DESIGN/SETTING: We utilized data from the prospective observational cohorts included in the Cervical Spondylotic Myelopathy AO Spine International and North America studies to determine health-related quality of life estimates and clinical myelopathy outcomes. PATIENT SAMPLE: We included all patients who underwent surgery for DCM and were enrolled in the Cervical Spondylotic Myelopathy AO Spine International and North America studies between December 2005 and January 2011. OUTCOME MEASURES: Clinical assessments were obtained using the Modified Japanese Orthopedic Association scale, and health-related quality of life was measured using the Short Form-6D utility score at baseline (preoperative) and at 6, 12, and 24 months postsurgery. Cost measures, inflated to January 2015 values, were obtained using pooled estimates from the hospital payer perspective for surgical patients. METHODS: We employed a Markov state transition model with Monte Carlo microsimulation over a lifetime horizon to obtain the incremental cost-utility ratio associated with early surgery for mild myelopathy. Parameter uncertainty was assessed deterministically using one-way and two-way sensitivity analyses and probabilistically using parameter estimate distributions with microsimulation (10,000 trials). Costs and utilities were discounted at 3% per annum. RESULTS: Initial surgery for mild degenerative cervical myelopathy was associated with an incremental lifetime gain of 1.26 quality-adjusted life years (QALYs) compared to observation. The associated cost incurred by the healthcare payer over a lifetime horizon was $12,894.56, resulting in a lifetime incremental cost-utility ratio of $10,250.71/QALY. Using a willingness-to-pay threshold in keeping with the World Health Organization definition of "very cost-effective" ($54,000 CDN), the probabilistic sensitivity analysis demonstrated that 100% of cases were cost-effective. CONCLUSIONS: Surgery, compared to initial observation, for mild degenerative cervical myelopathy was cost-effective from the Canadian healthcare payer perspective and was associated with lifetime gains in health-related quality of life.


Subject(s)
Spinal Cord Compression , Spinal Cord Diseases , Humans , Canada , Cervical Vertebrae/surgery , Cost-Benefit Analysis , Quality of Life , Spinal Cord Compression/etiology , Spinal Cord Compression/surgery , Spinal Cord Diseases/surgery , Prospective Studies
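The Markov state-transition structure described above can be illustrated with a small cohort-level sketch: one transition matrix per strategy, per-cycle costs and utilities, 3% annual discounting, and an incremental cost-utility ratio at the end. All transition probabilities, costs, and utilities below are hypothetical placeholders, not values from the AO Spine cohorts, and the cohort version omits the microsimulation and sensitivity analyses.

```python
import numpy as np

def cohort_model(P, costs, utilities, cycles=40, discount=0.03, start=0):
    """Discounted total cost and QALYs for a cohort Markov model.
    P: row-stochastic transition matrix; costs/utilities: per-cycle values per state."""
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t
        total_cost += d * dist @ costs
        total_qaly += d * dist @ utilities
        dist = dist @ P
    return total_cost, total_qaly

# Hypothetical 3-state model: stable myelopathy, deteriorated, dead.
P_surgery = np.array([[0.96, 0.02, 0.02],
                      [0.00, 0.96, 0.04],
                      [0.00, 0.00, 1.00]])
P_observe = np.array([[0.90, 0.08, 0.02],
                      [0.00, 0.96, 0.04],
                      [0.00, 0.00, 1.00]])
costs = np.array([500.0, 4000.0, 0.0])    # annual cost per state (illustrative CAD)
utils = np.array([0.80, 0.55, 0.0])       # annual utility per state

c_s, q_s = cohort_model(P_surgery, costs, utils)
c_s += 15_000.0                            # one-time surgical cost, also illustrative
c_o, q_o = cohort_model(P_observe, costs, utils)
print(f"ICUR = {(c_s - c_o) / (q_s - q_o):,.0f} CAD per QALY gained")
```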
8.
Mem Cognit ; 52(2): 430-443, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37792165

ABSTRACT

Through their selective rehearsal, Central Speakers can reshape collective memory in a group of listeners, both by increasing accessibility for mentioned items (shared practice effects) and by decreasing relative accessibility for related but unmentioned items (socially shared retrieval-induced forgetting, i.e., SSRIF). Subsequent networked communication in the group can further modify these mnemonic influences. Extant empirical work has tended to examine such downstream influences on a Central Speaker's mnemonic influence following a relatively limited number of interactions, often only two or three conversations. We develop a set of Markov chain simulations to model the long-term dynamics of such conversational remembering across a variety of group types, based on reported empirical data. These models indicate that some previously reported effects will stabilize in the long-term collective memory following repeated rounds of conversation. Notably, both shared practice effects and SSRIF persist into future steady states. However, other projected future states differ from those described so far in the empirical literature, specifically: the amplification of shared practice effects in communicating groups versus non-conversational groups of solo rememberers, the relatively transient impact of social (dis)identification with a Central Speaker, and the sensitivity of communicating networks to much smaller mnemonic biases from the Central Speaker than is the case for groups of individual rememberers. Together, these simulations contribute insights into the long-term temporal dynamics of collective memory by addressing questions that are difficult to tackle using extant laboratory methods, and provide concrete suggestions for future empirical work.


Subject(s)
Memory , Social Behavior , Humans , Markov Chains , Communication , Mental Recall
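The "future steady states" referred to above are, in Markov chain terms, stationary distributions of the simulated transition dynamics. The sketch below shows the generic calculation for a small hypothetical chain; the states and transition probabilities are illustrative only, not those fitted to the reported empirical data.

```python
import numpy as np

def stationary_distribution(P, tol=1e-12):
    """Long-run (steady-state) distribution of an irreducible, aperiodic Markov chain
    with row-stochastic transition matrix P, obtained by power iteration."""
    dist = np.full(P.shape[0], 1.0 / P.shape[0])
    while True:
        nxt = dist @ P
        if np.abs(nxt - dist).max() < tol:
            return nxt
        dist = nxt

# Hypothetical 3-state memory chain for an item: recalled-and-shared, recallable, forgotten.
P = np.array([[0.80, 0.15, 0.05],
              [0.30, 0.55, 0.15],
              [0.05, 0.15, 0.80]])
print(stationary_distribution(P))   # long-run share of items in each mnemonic state
```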
9.
J Behav Med ; 47(2): 308-319, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38017251

ABSTRACT

Family caregivers are at high risk of psychological distress and low sleep efficiency resulting from their caregiving responsibilities. Although psychological symptoms are associated with sleep efficiency, there is limited knowledge about the association of psychological distress with variations in sleep efficiency. We aimed to characterize the short- and long-term patterns of caregivers' sleep efficiency using Markov chain models and to compare these patterns between groups with high and low psychological symptoms (i.e., depression, anxiety, and caregiving stress). Based on 7-day actigraphy data from 33 caregivers, we categorized sleep efficiency into three states, < 75% (S1), 75-84% (S2), and ≥ 85% (S3), and developed Markov chain models. Caregivers were likely to maintain a consistent sleep efficiency state from one night to the next without promptly returning to a normal state. On average, it took 3.6-5.1 days to return to a night of normal sleep efficiency (S3) from the lower states, and the long-term probability of achieving normal sleep was 42%. We observed lower probabilities of transitioning to or remaining in the normal sleep efficiency state (S3) in the high depression and anxiety groups compared to the low symptom groups. The differences in the time required to return to a normal state were inconsistent across symptom levels. The long-term probability of achieving normal sleep efficiency was significantly lower for caregivers with high depression and anxiety compared to the low symptom groups. Caregivers' sleep efficiency appears to remain relatively consistent over time and does not show rapid recovery. Caregivers with higher levels of depression and anxiety may be more vulnerable to sustained suboptimal sleep efficiency.


Subject(s)
Caregivers , Sleep Wake Disorders , Humans , Caregivers/psychology , Stress, Psychological/psychology , Sleep , Sleep Wake Disorders/psychology , Anxiety/psychology , Depression
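The two summaries reported above, the long-run probability of a normal-efficiency night and the expected number of nights needed to return to S3, follow directly from a 3×3 nightly transition matrix. The matrix below is hypothetical, chosen only so that the outputs land near the reported figures (roughly 42% long-run normal sleep and a few nights to return); the study's estimated matrices are not reproduced here.

```python
import numpy as np

# Hypothetical nightly transition matrix over sleep-efficiency states S1 (<75%),
# S2 (75-84%), S3 (>=85%); rows are "tonight", columns "tomorrow night".
P = np.array([[0.55, 0.25, 0.20],
              [0.20, 0.50, 0.30],
              [0.15, 0.20, 0.65]])

# Long-run probability of each state (left eigenvector of P for eigenvalue 1).
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print("long-run P(S1, S2, S3):", np.round(pi, 3))

# Expected number of nights to first reach S3 from S1 or S2:
# solve h = 1 + Q h, where Q restricts P to the non-target states.
target = 2
idx = [i for i in range(3) if i != target]
Q = P[np.ix_(idx, idx)]
h = np.linalg.solve(np.eye(len(idx)) - Q, np.ones(len(idx)))
print("expected nights to reach S3 from S1, S2:", np.round(h, 2))
```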
10.
J Neurosci Methods ; 403: 110049, 2024 03.
Article in English | MEDLINE | ID: mdl-38151187

ABSTRACT

BACKGROUND: Dynamic spatial functional network connectivity (dsFNC) has shown advantages in detecting functional alterations impacted by mental disorders using magnitude-only fMRI data. However, complete fMRI data are complex-valued, with unique and useful phase information. METHODS: We propose dsFNC of spatial source phase (SSP) maps, derived from complex-valued fMRI data (named SSP-dsFNC), to capture the dynamics elicited by the phase. We compute mutual information for connectivity quantification, employ statistical analysis and Markov chains to assess dynamics, and ultimately classify schizophrenia patients (SZs) and healthy controls (HCs) based on connectivity variance and Markov chain state transitions across windows. RESULTS: SSP-dsFNC yielded greater dynamics and more significant HC-SZ differences, due to the use of complete brain information from complex-valued fMRI data. COMPARISON WITH EXISTING METHODS: Compared with magnitude-dsFNC, SSP-dsFNC detected additional and meaningful connections across windows (e.g., for the right frontal-parietal region) and achieved 14.6% higher accuracy for classifying HCs and SZs. CONCLUSIONS: This work provides new evidence about how SSP-dsFNC could be impacted by schizophrenia, and this information could be used to identify potential imaging biomarkers for the diagnosis of psychotic disorders.


Subject(s)
Schizophrenia , Humans , Schizophrenia/diagnostic imaging , Magnetic Resonance Imaging/methods , Brain Mapping/methods , Brain/diagnostic imaging , Markov Chains
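Connectivity in the approach above is quantified with mutual information between spatial-map time courses within sliding windows. The sketch below shows a basic histogram (plug-in) mutual-information estimate between two synthetic signals; it is a generic illustration, not the SSP-dsFNC pipeline, and the bin count and signals are arbitrary.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram (plug-in) estimate of mutual information, in nats, between two 1-D signals."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = 0.7 * x + 0.3 * rng.standard_normal(n)     # coupled component time course
z = rng.standard_normal(n)                     # independent one
print(f"MI(x, y) = {mutual_information(x, y):.3f} nats, "
      f"MI(x, z) = {mutual_information(x, z):.3f} nats")
```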
11.
J. bras. econ. saúde (Impr.) ; 15(3): 178-189, December 2023.
Article in English, Portuguese | LILACS, ECOS | ID: biblio-1553989

ABSTRACT



Objective: To evaluate the cost-utility of incorporating pharmacogenetic testing as an additional tool for guiding the selection of optimal drug treatments for individuals with depression. Methods: A decision-analytic model based on a Markov model was created for this analysis. The evaluation was conducted from the perspective of the Brazilian Supplementary Health System, with a time horizon of 10 years. The study included direct medical and technology costs, and a comparison with traditional empirical treatment for depression was performed. Transition probabilities were derived from an analysis of the available literature. Probabilistic and univariate sensitivity analyses were also carried out. Additionally, an evaluation was conducted from the societal perspective, including the costs of drug treatment borne by patients. Results: The application of pharmacogenetic testing as a guide for depression treatment demonstrated favorable outcomes, yielding savings of R$ 3,439.97 per patient and an increase of 0.39 QALY over the specified time frame. Thus, significant savings were evident, corresponding to -R$ 8,776.78 per QALY gained. The sensitivity analyses confirmed the model's robustness. In the societal-perspective scenario, the outcome was even more favorable, resulting in savings of R$ 9,381.49 per patient and a 0.39 increase in QALYs, equivalent to -R$ 23,936.05 per QALY gained. Conclusion: The study findings reveal that incorporating pharmacogenetic tests in depression treatment offers economic benefits, evidenced by an increase in QALYs and a decrease in direct medical costs compared to conventional empirical treatment. This aligns with the ongoing trend toward personalized mental health care, implying practical considerations for protocol reassessment and the possible integration of pharmacogenetic tests as a standard of care.


Subject(s)
Markov Chains , Cost-Benefit Analysis , Pharmacogenomic Testing , Cost-Effectiveness Analysis
12.
J Math Biol ; 88(1): 12, 2023 12 19.
Article in English | MEDLINE | ID: mdl-38112786

ABSTRACT

We derive asymptotic formulae, in the limit where the population size N tends to infinity, for the mean fixation times (conditional and unconditional) in a population with two types of individuals, A and B, governed by the Moran process. We consider only the case in which the fitness of the two types does not depend on the population frequencies. Our results start with the important cases in which the initial condition is a single individual of either type, but we also consider the initial condition of a fraction x of A individuals, where x is kept fixed and the total population size tends to infinity. In the cases covered by Antal and Scheuring (Bull Math Biol 68(8):1923-1944, 2006), i.e., conditional fixation times for a single individual of either type, our formulae turn out to be much more accurate than the ones they found. As noted, our results also cover situations not treated by them. An interesting and counterintuitive consequence of our results on mean conditional fixation times is the following. Suppose that a population consists initially of fitter individuals at fraction x and less fit individuals at fraction 1 - x. If the population size N is large enough, then on average the fixation of the less fit individuals is faster (provided it occurs) than fixation of the fitter individuals, even if x is close to 1, i.e., even if the fitter individuals are the majority.


Subject(s)
Gene Frequency , Humans , Population Density
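As a companion to the asymptotic formulae discussed above, conditional fixation times in the frequency-independent Moran process can also be estimated by direct simulation. The sketch below is such a Monte Carlo estimate for a single initial mutant; the population size, fitness, and replicate count are illustrative, and time is measured in elementary Moran steps.

```python
import numpy as np

def moran_conditional_fixation(N=100, r=1.05, start=1, reps=2000, rng=None):
    """Monte Carlo estimate of the mean time (in elementary Moran steps) to fixation of
    type A, conditional on fixation, starting from `start` A-individuals with constant
    relative fitness r in a population of size N."""
    rng = rng or np.random.default_rng(0)
    times = []
    for _ in range(reps):
        i, t = start, 0
        while 0 < i < N:
            fa = r * i / (r * i + (N - i))          # prob. the reproducing individual is type A
            if rng.random() < fa:
                i += (rng.random() < (N - i) / N)   # A offspring replaces a uniformly chosen B
            else:
                i -= (rng.random() < i / N)         # B offspring replaces a uniformly chosen A
            t += 1
        if i == N:
            times.append(t)
    return np.mean(times), len(times) / reps        # mean conditional time, fixation frequency

print(moran_conditional_fixation())
```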
13.
BMC Res Notes ; 16(1): 346, 2023 Nov 24.
Article in English | MEDLINE | ID: mdl-38001467

ABSTRACT

IMPORTANCE: The prevalence of obesity among United States adults increased from 30.5% in 1999 to 41.9% in 2020. However, despite the recognition of long-term weight gain as an important public health issue, there is a paucity of studies examining long-term weight gain and building models for long-term projection. METHODS: A retrospective, cross-sectional cohort study using the publicly available National Health and Nutrition Examination Survey (NHANES 2017-2020) was conducted in patients who completed the weight questionnaire and had accurate data for both weight at the time of the survey and weight ten years earlier. Multistate gradient boosting classifiers were used to generate covariate-dependent transition matrices, and Markov chains were utilized for multistate modeling. RESULTS: Of the 6146 patients who met the inclusion criteria, 3024 (49%) were male and 3122 (51%) were female. There were 2252 (37%) White patients, 1257 (20%) Hispanic patients, 1636 (27%) Black patients, and 739 (12%) Asian patients. The average BMI was 30.16 (SD = 7.15), the average weight was 83.67 kg (SD = 22.04), and the average weight change was an increase of 3.27 kg (SD = 14.97) in body weight (Fig. 1). A total of 2411 (39%) patients lost weight, and 3735 (61%) patients gained weight (Table 1). We observed that 87 (1%) patients were underweight (BMI < 18.5), 2058 (33%) were normal weight (18.5 ≤ BMI < 25), 1376 (22%) were overweight (25 ≤ BMI < 30), and 2625 (43%) were obese (BMI ≥ 30). From analysis of the transitions between underweight, normal weight, overweight, and obese, we observed that after 10 years, of the patients who were underweight, 65% stayed underweight, 32% became normal weight, 2% became overweight, and 2% became obese. Of the patients who were normal weight, 3% became underweight, 78% stayed normal weight, 17% became overweight, and 2% became obese. Of the patients who were overweight, 0% became underweight, 14% became normal weight, 71% stayed overweight, and 15% became obese. Of the patients who were obese, 0% became underweight, 1% became normal weight, 14% became overweight, and 84% stayed obese. CONCLUSIONS: United States adults are at risk of transitioning from normal weight to overweight or obese. Covariate-dependent Markov chains constructed with gradient boosting can effectively generate long-term predictions.


Subject(s)
Overweight , Thinness , Adult , Humans , Male , Female , United States , Overweight/epidemiology , Nutrition Surveys , Retrospective Studies , Thinness/epidemiology , Cross-Sectional Studies , Markov Chains , Body Mass Index , Obesity/epidemiology , Weight Gain
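The transition percentages reported above define a 10-year transition matrix, and long-term projections then follow from matrix powers. The sketch below uses the rounded figures from the abstract (renormalizing rows that do not sum to exactly 100%) and the reported baseline BMI-category counts; it assumes a time-homogeneous chain, whereas the article's gradient-boosted matrices are covariate-dependent.

```python
import numpy as np

states = ["underweight", "normal", "overweight", "obese"]

# 10-year transition percentages reported in the abstract; rows may not sum to
# exactly 100 because of rounding, so each row is renormalized.
P = np.array([[65, 32,  2,  2],
              [ 3, 78, 17,  2],
              [ 0, 14, 71, 15],
              [ 0,  1, 14, 84]], dtype=float)
P /= P.sum(axis=1, keepdims=True)

# Baseline category counts from the same cohort: 87, 2058, 1376, 2625 patients.
dist = np.array([87, 2058, 1376, 2625], dtype=float)
dist /= dist.sum()

for decades in (1, 2, 3):
    proj = dist @ np.linalg.matrix_power(P, decades)
    print(f"after {10 * decades} years:",
          {s: round(p, 3) for s, p in zip(states, proj)})
```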
14.
Community Dent Health ; 40(4): 233-241, 2023 Nov 30.
Article in English | MEDLINE | ID: mdl-37812584

ABSTRACT

OBJECTIVE: To develop a needs-based workforce planning model to explore the specialist workforce capacity and capability needed for the effective, efficient, and safe provision of services in the United Kingdom (UK), and to test the model using Dental Public Health (DPH). BASIC RESEARCH DESIGN: Data from a national workforce survey, a national audit, and specialty workshops in 2020 and 2021 set the parameters for a safe and effective DPH workforce. A working group, drawing on external expertise, developed a conceptual workforce model that informed the mathematical modelling; the latter took a Markovian approach, enabling the consideration of possible scenarios relating to workforce development and involving exploration of capacity within each career stage in DPH across a time horizon of 15 years. Workforce capacity requirements were calculated, informed by past principles. RESULTS: An estimated 100 whole-time-equivalent (WTE) specialists are currently required to provide a realistic basic capacity for DPH across the UK, given the range of organisations, population growth, and the complexity and diversity of specialty roles. In February 2022 the specialty had 53.55 WTE academic/service consultants, leaving a significant gap. The modelling evidence suggests a reduction in DPH specialist capacity towards a steady state in line with the current rate of training, recruitment, and retention. A scenario involving increased training numbers and drawing on other sources of public health-trained dentists, whilst retaining expertise within DPH, has the potential to build workforce capacity. CONCLUSIONS: Current capacity is below basic requirements and approaching a 'steady state'. Retention and innovative capacity building are required to secure and safeguard the provision of specialist DPH services to meet the needs of the UK health and care systems.


Subject(s)
Consultants , Public Health , Humans , United Kingdom , Workforce , Dentists
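A Markovian workforce model of the kind described above can be sketched as a stock-and-flow projection across career stages. In the toy projection below, only the 53.55 WTE consultant figure comes from the abstract; the stage structure, annual transition rates, and trainee intake are hypothetical placeholders, not the parameters used in the article.

```python
import numpy as np

# Hypothetical annual flows for a three-stage specialist workforce:
# trainee -> consultant -> left/retired. Rates and intake are illustrative only.
stages = ["trainee", "consultant", "left/retired"]
P = np.array([[0.78, 0.18, 0.04],    # trainees: stay, complete training, drop out
              [0.00, 0.95, 0.05],    # consultants: stay or retire/leave
              [0.00, 0.00, 1.00]])
intake = np.array([4.0, 0.0, 0.0])   # new WTE trainees recruited each year

wte = np.array([15.0, 53.55, 0.0])   # starting stock (consultant figure from the abstract)
for year in range(1, 16):
    wte = wte @ P + intake
    if year % 5 == 0:
        print(f"year {year}: consultants ~ {wte[1]:.1f} WTE (target: 100 WTE)")
```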
15.
Stat Med ; 42(28): 5189-5206, 2023 12 10.
Article in English | MEDLINE | ID: mdl-37705508

ABSTRACT

Intensive care occupancy is an important indicator of health care stress that has been used to guide policy decisions during the COVID-19 pandemic. Toward reliable decision-making as a pandemic progresses, estimating the rates at which patients are admitted to and discharged from hospitals and intensive care units (ICUs) is crucial. Since individual-level hospital data are rarely available to modelers in each geographic locality of interest, it is important to develop tools for inferring these rates from publicly available daily numbers of hospital and ICU beds occupied. We develop such an estimation approach based on an immigration-death process that models fluctuations of ICU occupancy. Our flexible framework allows for immigration and death rates to depend on covariates, such as hospital bed occupancy and daily SARS-CoV-2 test positivity rate, which may drive changes in hospital ICU operations. We demonstrate via simulation studies that the proposed method performs well on noisy time series data and apply our statistical framework to hospitalization data from the University of California, Irvine (UCI) Health and Orange County, California. By introducing a likelihood-based framework where immigration and death rates can vary with covariates, we find, through rigorous model selection, that hospitalization and positivity rates are crucial covariates for modeling ICU stay dynamics and validate our per-patient ICU stay estimates using anonymized patient-level UCI hospital data.


Subject(s)
Bed Occupancy , Critical Care , Intensive Care Units , Humans , COVID-19/epidemiology , Hospitalization , Likelihood Functions , Pandemics , SARS-CoV-2 , Time Factors , Stochastic Processes
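In the immigration-death formulation above, admissions enter at a covariate-dependent rate and each occupied bed empties at a per-patient rate. The sketch below simulates daily ICU occupancy under a log-linear admission rate driven by a synthetic positivity curve; the coefficients, mean length of stay, and the day-level discretization are illustrative assumptions, not the article's likelihood-based estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
days = 120
positivity = 0.05 + 0.10 * np.sin(np.linspace(0, 3 * np.pi, days)) ** 2   # synthetic covariate

# Immigration (admission) rate depends log-linearly on test positivity; each occupied
# bed is vacated (the "death" event of the process) at per-patient rate mu per day.
beta0, beta1 = np.log(3.0), 8.0          # illustrative coefficients
mu = 1.0 / 7.0                           # mean ICU stay of about 7 days

occupancy = np.zeros(days, dtype=int)
n = 20                                   # initial census
for t in range(days):
    lam = np.exp(beta0 + beta1 * positivity[t])       # expected admissions on day t
    admissions = rng.poisson(lam)
    discharges = rng.binomial(n, 1.0 - np.exp(-mu))   # daily departure probability per patient
    n = max(n + admissions - discharges, 0)
    occupancy[t] = n

print("mean simulated ICU occupancy:", occupancy.mean())
print("assumed mean length of stay (days):", 1.0 / mu)
```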
16.
Cell Syst ; 14(10): 822-843.e22, 2023 10 18.
Article in English | MEDLINE | ID: mdl-37751736

ABSTRACT

Recent experimental developments in genome-wide RNA quantification hold considerable promise for systems biology. However, rigorously probing the biology of living cells requires a unified mathematical framework that accounts for single-molecule biological stochasticity in the context of technical variation associated with genomics assays. We review models for a variety of RNA transcription processes, as well as the encapsulation and library construction steps of microfluidics-based single-cell RNA sequencing, and present a framework to integrate these phenomena by the manipulation of generating functions. Finally, we use simulated scenarios and biological data to illustrate the implications and applications of the approach.


Subject(s)
Models, Biological , Systems Biology , Stochastic Processes , RNA , Genomics
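A central manipulation in the framework described above is composing generating functions: the PGF of the biological copy-number distribution is composed with the PGF of the technical sampling step. The sketch below checks this numerically for one common special case, assuming negative binomial transcript counts and binomial capture with probability p, in which case the observed counts are again negative binomial with the same dispersion and a scaled mean.

```python
import numpy as np

rng = np.random.default_rng(7)

# Biological model: bursty transcription at steady state -> negative binomial counts
# with dispersion r and mean mu. Technical model: each molecule is captured
# independently with probability p (binomial thinning).
r, mu, p = 2, 40.0, 0.15
cells = 200_000

true_counts = rng.negative_binomial(r, r / (r + mu), size=cells)
observed = rng.binomial(true_counts, p)

# PGF composition G_obs(s) = G_bio(1 - p + p*s) predicts that observed counts are again
# negative binomial with the same dispersion r and mean p*mu.
print("observed mean:", observed.mean(), " predicted:", p * mu)
pred_var = p * mu + (p * mu) ** 2 / r
print("observed variance:", observed.var(), " predicted:", pred_var)
```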
17.
Bull Math Biol ; 85(10): 87, 2023 08 25.
Article in English | MEDLINE | ID: mdl-37624445

ABSTRACT

Stochastic reaction networks, which are usually modeled as continuous-time Markov chains on the nonnegative integer lattice ℤ_{≥0}^d and simulated via a version of the "Gillespie algorithm," have proven to be a useful tool for understanding processes, chemical and otherwise, in homogeneous environments. There are multiple avenues for generalizing away from the assumption that the environment is homogeneous, with the proper modeling choice dependent upon the context of the problem being considered. One such generalization was recently introduced by Duso and Zechner (Proc Natl Acad Sci 117(37):22674-22683, 2020), where the proposed model includes a varying number of interacting compartments, or cells, each of which contains an evolving copy of the stochastic reaction system. The novelty of the model is that these compartments also interact via the merging of two compartments (including their contents), the splitting of one compartment into two, and the appearance and destruction of compartments. In this paper we begin a systematic exploration of the mathematical properties of this model. We (i) obtain basic/foundational results pertaining to explosivity, transience, recurrence, and positive recurrence of the model, (ii) explore a number of examples demonstrating some possible non-intuitive behaviors of the model, and (iii) identify the limiting distribution of the model in a special case that generalizes three formulas from an example in Duso and Zechner (2020).


Subject(s)
Mathematical Concepts , Models, Biological , Algorithms , Apoptosis , Markov Chains
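The "Gillespie algorithm" mentioned above generates exact sample paths of a reaction network's continuous-time Markov chain. The sketch below does this for the simplest birth-death network inside a single compartment; the compartment-level merging, splitting, appearance, and destruction events that are the paper's focus are not modeled here, and the rate constants are arbitrary.

```python
import numpy as np

def gillespie_birth_death(k=10.0, gamma=1.0, x0=0, t_end=20.0, rng=None):
    """Exact stochastic simulation of the birth-death network 0 -> S (rate k),
    S -> 0 (rate gamma per molecule). Returns (event times, copy numbers)."""
    rng = rng or np.random.default_rng(0)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        rates = np.array([k, gamma * x])        # propensities of the two reactions
        total = rates.sum()
        t += rng.exponential(1.0 / total)       # waiting time to the next reaction
        x += 1 if rng.random() < rates[0] / total else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie_birth_death()
print("final copy number:", states[-1], "| stationary mean k/gamma =", 10.0 / 1.0)
```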
18.
Appl Netw Sci ; 8(1): 46, 2023.
Article in English | MEDLINE | ID: mdl-37502612

ABSTRACT

Motivation: Social media platforms centered around content creators (CCs) have grown rapidly over the past decade. Currently, millions of CCs make livable incomes through platforms such as YouTube, TikTok, and Instagram. As such, similarly to the job market, it is important to ensure that the success and income (usually related to follower counts) of CCs reflect the quality of their work. Since quality cannot be observed directly, two other factors govern the network-formation process: (a) the visibility of CCs (resulting from, e.g., recommender systems and moderation processes) and (b) the decision-making process of seekers (i.e., users focused on finding CCs). Prior virtual experiments and empirical work seem contradictory regarding fairness: while the former suggest that the expected number of followers of CCs reflects their quality, the latter says that quality does not perfectly predict success. Results: Our paper extends prior models in order to bridge this gap between theoretical and empirical work. We (a) define a parameterized recommendation process that allocates visibility based on popularity biases, (b) define two metrics of individual fairness (ex ante and ex post), and (c) define a metric for seeker satisfaction. Through an analytical approach, we show that our process is an absorbing Markov chain in which exploring only the most popular CCs leads to lower expected times to absorption but higher chances of unfairness for CCs. While increasing exploration helps, doing so only guarantees fair outcomes for the highest- (and lowest-) quality CCs. Simulations revealed that CCs and seekers prefer different algorithmic designs: CCs generally have higher chances of fairness with anti-popularity-biased recommendation processes, while seekers are more satisfied with popularity-biased recommendations. Altogether, our results suggest that while exploration of low-popularity CCs is needed to improve fairness, platforms might not have the incentive to implement it, and such interventions do not entirely prevent unfair outcomes.
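The expected time to absorption referred to above is obtained, for any absorbing Markov chain, from the fundamental matrix N = (I - Q)^{-1}, where Q is the transition matrix restricted to the transient states. The sketch below shows the generic calculation with a hypothetical seeker-search chain, not the parameterized recommendation process defined in the paper.

```python
import numpy as np

# Hypothetical seeker-search chain: the first two states are transient "browsing"
# contexts, the last state ("followed a CC") is absorbing. Rows sum to one.
P = np.array([[0.55, 0.25, 0.20],
              [0.30, 0.40, 0.30],
              [0.00, 0.00, 1.00]])

transient = [0, 1]
Q = P[np.ix_(transient, transient)]
N = np.linalg.inv(np.eye(len(transient)) - Q)   # fundamental matrix
expected_steps = N @ np.ones(len(transient))    # expected steps to absorption per start state

print("expected steps to absorption from each transient state:", np.round(expected_steps, 2))
```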

19.
Sensors (Basel) ; 23(13)2023 Jun 28.
Article in English | MEDLINE | ID: mdl-37447861

ABSTRACT

IoT and intelligent applications are currently being developed on a large scale. However, these new types of applications require stable wireless connectivity with sensors, based on several communication standards such as ZigBee, LoRa, nRF, Bluetooth, or cellular (LTE, 5G, etc.). The continuous expansion of these networks and services also brings the requirement of a stable level of service, which makes the task of maintenance operators more difficult. Therefore, in this research, an integrated solution for the management of preventive maintenance is proposed, employing software-defined sensing for hardware components, applications, and client satisfaction. A specific algorithm for monitoring service levels was developed, and an integrated instrument to assist the management of preventive maintenance was proposed, both based on prediction of the network's future states. A case study of smart city applications was also investigated to verify the expandability and flexibility of the approach. The purpose of this research is to improve the efficiency and response time of preventive maintenance, helping to rapidly restore the required service levels and thus increasing the resilience of complex systems.


Subject(s)
Algorithms , Software , Humans , Communication , Intelligence , Patient Satisfaction
20.
Int J Eat Disord ; 56(10): 1887-1897, 2023 10.
Article in English | MEDLINE | ID: mdl-37415559

ABSTRACT

OBJECTIVE: To determine the cost-effectiveness of a virtual version of the Body Project (vBP), a cognitive dissonance-based program, for preventing eating disorders (ED) among young women with a subjective sense of body dissatisfaction in the Swedish context. METHOD: A decision tree combined with a Markov model was developed to estimate the cost-effectiveness of the vBP in a clinical trial population of 149 young women (mean age 17 years) with body image concerns. The treatment effect was modeled using data from a trial investigating the effects of vBP compared to expressive writing (EW) and a do-nothing alternative. Population characteristics and intervention costs were sourced from the trial. Other parameters, including utilities, treatment costs for ED, and mortality, were sourced from the literature. The model predicted the costs and quality-adjusted life years (QALYs) related to the prevention of incident ED in the modeled population until they reached 25 years of age. The study used both a cost-utility and a return-on-investment (ROI) framework. RESULTS: In total, vBP yielded lower costs and greater QALYs than the alternatives. The ROI analysis indicated a return of US $152 for every US dollar invested in vBP over 8 years against the do-nothing alternative and US $105 against EW. DISCUSSION: vBP is likely to be cost-effective compared to both EW and a do-nothing alternative. The ROI from vBP is substantial and could be attractive information for decision makers considering implementation of this intervention for young women at risk of developing an ED. PUBLIC SIGNIFICANCE: This study estimates that the vBP is cost-effective for the prevention of eating disorders among young women in the Swedish setting, and thus is a good investment of public resources.


Subject(s)
Body Dissatisfaction , Feeding and Eating Disorders , Humans , Female , Adolescent , Cost-Benefit Analysis , Sweden/epidemiology , Feeding and Eating Disorders/prevention & control , Body Image/psychology , Quality-Adjusted Life Years