Results 1 - 20 of 113
1.
Res Synth Methods ; 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39136358

ABSTRACT

In sparse data meta-analyses (with few trials or zero events), conventional methods may distort results. Although better-performing one-stage methods have become available in recent years, their implementation remains limited in practice. This study examines the impact of using conventional methods compared to one-stage models by re-analysing meta-analyses from the Cochrane Database of Systematic Reviews in scenarios with zero event trials and few trials. For each scenario, we computed one-stage methods (generalised linear mixed model [GLMM], beta-binomial model [BBM], Bayesian binomial-normal hierarchical model using a weakly informative prior [BNHM-WIP]) and compared them with conventional methods (Peto odds ratio [PETO] and DerSimonian-Laird method [DL] for zero event trials; DL, Paule-Mandel [PM], and restricted maximum likelihood [REML] methods for few trials). While all methods showed similar treatment effect estimates, substantial variability in statistical precision emerged. Conventional methods generally resulted in smaller confidence intervals (CIs) than one-stage models in the zero event situation. In the few trials scenario, the CI lengths were widest for the BBM on average, and significance often changed compared to PM and REML, despite the relatively wide CIs of the latter. In agreement with simulations and guidelines for meta-analyses with zero event trials, our results suggest that one-stage models are preferable. The best model can either be selected based on the data situation or chosen as one that performs well across various situations. In the few trials situation, using the BBM, with PM or REML additionally for sensitivity analyses, appears reasonable when conservative results are desired. Overall, our results encourage careful method selection.
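To make the zero-event setting concrete, the Peto one-step method named above can be sketched in a few lines: it pools observed-minus-expected event counts with hypergeometric variances, so zero-event trials contribute without continuity corrections. This is a minimal illustration with made-up trial counts, not the code or data used in the study.

```python
import math

def peto_pooled_or(tables):
    """Pool 2x2 tables (a, n1, c, n2) with the Peto one-step method.

    a: events among n1 treated; c: events among n2 controls.
    Trials with zero events in one arm still contribute.
    """
    sum_oe, sum_v = 0.0, 0.0
    for a, n1, c, n2 in tables:
        n = n1 + n2
        m1 = a + c                                       # total events in the trial
        e = n1 * m1 / n                                  # expected events under H0
        v = n1 * n2 * m1 * (n - m1) / (n**2 * (n - 1))   # hypergeometric variance
        sum_oe += a - e
        sum_v += v
    log_or = sum_oe / sum_v
    se = 1 / math.sqrt(sum_v)
    ci = (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))
    return math.exp(log_or), ci

# Hypothetical trials, one with zero events in the treatment arm:
tables = [(1, 100, 4, 100), (0, 50, 2, 50), (2, 200, 5, 200)]
or_hat, (lo, hi) = peto_pooled_or(tables)
print(f"Peto OR = {or_hat:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

As the abstract notes, such conventional CIs tend to be narrower than those from one-stage models in sparse data, which is precisely why method choice matters.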

2.
Sensors (Basel) ; 24(15)2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39124055

ABSTRACT

Rare events are occurrences that take place with a significantly lower frequency than more common, regular events. These events can be categorized into distinct categories, from frequently rare to extremely rare, based on factors like the distribution of data and significant differences in rarity levels. In manufacturing domains, predicting such events is particularly important, as they lead to unplanned downtime, shortened equipment lifespans, and high energy consumption. Usually, the rarity of events is inversely correlated with the maturity of a manufacturing industry. Typically, the rarity of events causes the multivariate data generated within a manufacturing process to be highly imbalanced, which leads to bias in predictive models. This paper evaluates the role of data enrichment techniques combined with supervised machine learning techniques for rare event detection and prediction. We use time series data augmentation and sampling to address the data scarcity while maintaining its patterns, and imputation techniques to handle null values. Evaluating 15 learning models, we find that data enrichment improves the F1 measure by up to 48% in rare event detection and prediction. Our empirical and ablation experiments provide novel insights, and we also investigate model interpretability.
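The F1 measure reported above is the standard choice for imbalanced detection tasks because plain accuracy is dominated by the majority class. A minimal sketch with hypothetical confusion-matrix counts (not the paper's data) shows the difference:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical rare-event detector over 1,000 windows with 20 true events:
tp, fp, fn, tn = 12, 30, 8, 950
accuracy = (tp + tn) / (tp + fp + fn + tn)
f1 = f1_score(tp, fp, fn)
print(f"accuracy = {accuracy:.3f}")   # dominated by the 980 non-events
print(f"F1       = {f1:.3f}")         # reflects rare-event performance
```

A 48% relative F1 gain from data enrichment, as the authors report, is therefore a substantive improvement even when accuracy barely moves.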

3.
Longit Life Course Stud ; 15(3): 371-393, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38954423

ABSTRACT

The prevention paradox describes circumstances in which the majority of cases with a suicide attempt come from a population of low or moderate risk, and only a few from a 'high-risk' group. The assumption is that a low base rate in combination with multiple causes makes it impossible to identify a high-risk group containing all suicide attempts. The best way to study events such as first-time suicide attempts and their causes is to collect event history data. Administrative registers were used to identify a group at higher risk of suicidal behaviour within a population of six national birth cohorts (N = 300,000) born between 1980 and 1985 and followed from age 15 to 29 years. Estimation of risk parameters is based on the discrete-time logistic odds-ratio model. Lifetime prevalence was 4.5% for first-time suicide attempts. Family background and family child-rearing factors were predictive of later first-time suicide attempts. A young person's diagnosis with psychiatric or neurodevelopmental disorders (ADHD, anxiety, depression, PTSD), and being a victim of violence or sex offences, contributed to the explanatory model. Contrary to the prevention paradox, results suggest that it is possible to identify a discrete high-risk group (<12%) of the population from which two thirds of all first-time suicide attempts occur, but one third of observed suicide attempts derived from low- to moderate-risk groups. Findings confirm the need for a combined strategy of universal, targeted and indicated prevention approaches in policy development and in strategic and practice responses, and some promising prevention strategies are presented.


Subject(s)
Suicide, Attempted , Humans , Suicide, Attempted/statistics & numerical data , Suicide, Attempted/psychology , Male , Female , Adolescent , Adult , Longitudinal Studies , Risk Factors , Young Adult , Life Change Events , Prevalence , Mental Disorders/epidemiology
4.
Appetite ; 201: 107597, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-38972638

ABSTRACT

We investigated how promoting diverse, healthy food options affects long-term dietary choices. We hypothesized that encouraging exploration of nutritious plant-based foods would lead to lasting improvements in diet. Participants (N = 211) were randomly assigned to two groups for a 6-week intervention: the fixed menu group was given the same large menu every week, while the changing menu group received a new small menu each week. At the end of the intervention both groups were exposed to the same menu suggestions. Food diversity evaluation was based on weekly reports collected during the intervention. Self-reported adherence to Mediterranean diet components was assessed using the I-MEDAS screener. The proportion of plant-based foods in participants' diets was estimated using a 0-100% scale based on self-report. Both items were evaluated using online questionnaires given to participants at baseline, at the end of the intervention, as well as three and six months after the intervention concluded. Results (mean [SD]) demonstrated that participants in the fixed menu group explored a significantly wider array of items (26.33 [11.64]) than those in the changing menu group (19.79 [10.29]; t(202) = 4.25, p < 0.001, Cohen's d = 0.60). A repeated measures analysis of covariance (rmANCOVA) revealed that a short-term increase in I-MEDAS and PBD scores was noted in both groups; however, only participants with the fixed menu sustained this increase at follow-up (diff = 1.50, t(132) = 4.50, p < 0.001). Our findings suggest that manipulating the rate of exposure to food suggestions may affect overall dietary variety. It seems that early presentation of options may increase overall dietary variety and may even support longer-term habits. This study contributes to developing effective interventions and highlights the challenge of promoting exploratory behavior in nutrition.


Subject(s)
Diet, Mediterranean , Patient Compliance , Humans , Female , Male , Adult , Middle Aged , Surveys and Questionnaires , Cooking/methods , Diet, Healthy/psychology , Diet, Healthy/methods , Food Preferences/psychology , Young Adult , Feeding Behavior/psychology , Choice Behavior
5.
Annu Rev Phys Chem ; 75(1): 137-162, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38941527

ABSTRACT

Dynamical reweighting techniques aim to recover the correct molecular dynamics from a simulation at a modified potential energy surface. They are important for unbiasing enhanced sampling simulations of molecular rare events. Here, we review the theoretical frameworks of dynamical reweighting for modified potentials. Based on an overview of kinetic models with increasing level of detail, we discuss techniques to reweight two-state dynamics, multistate dynamics, and path integrals. We explore the natural link to transition path sampling and how the effect of nonequilibrium forces can be reweighted. We end by providing an outlook on how dynamical reweighting integrates with techniques for optimizing collective variables and with modern potential energy surfaces.

6.
Pharm Stat ; 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38628051

ABSTRACT

The meta-analysis of rare events presents unique methodological challenges owing to the small number of events. Bayesian methods are often used to combine rare events data to inform decision-making, as they can incorporate prior information and handle studies with zero events without the need for continuity corrections. However, the comparative performances of different Bayesian models in pooling rare events data are not well understood. We conducted a simulation to compare the statistical properties of four parameterizations based on the binomial-normal hierarchical model, using two different priors for the treatment effect: weakly informative prior (WIP) and non-informative prior (NIP), pooling randomized controlled trials with rare events using the odds ratio metric. We also considered the beta-binomial model proposed by Kuss and the random intercept and slope generalized linear mixed models. The simulation scenarios varied based on the treatment effect, sample size ratio between the treatment and control arms, and level of heterogeneity. Performance was evaluated using median bias, root mean square error, median width of 95% credible or confidence intervals, coverage, Type I error, and empirical power. Two reviews are used to illustrate these methods. The results demonstrate that the WIP outperforms the NIP within the same model structure. Among the compared models, the model that included the treatment effect parameter in the risk model for the control arm did not perform well. Our findings confirm that rare events meta-analysis faces the challenge of being underpowered, highlighting the importance of reporting the power of results in empirical studies.

7.
Proc Natl Acad Sci U S A ; 121(7): e2318731121, 2024 Feb 13.
Article in English | MEDLINE | ID: mdl-38315841

ABSTRACT

Capturing rare yet pivotal events poses a significant challenge for molecular simulations. Path sampling provides a unique approach to tackle this issue without altering the potential energy landscape or dynamics, enabling recovery of both thermodynamic and kinetic information. However, despite its exponential acceleration compared to standard molecular dynamics, generating numerous trajectories can still require a long time. By harnessing our recent algorithmic innovations-particularly subtrajectory moves with high acceptance, coupled with asynchronous replica exchange featuring infinite swaps-we establish a highly parallelizable and rapidly converging path sampling protocol, compatible with diverse high-performance computing architectures. We demonstrate our approach on the liquid-vapor phase transition in superheated water, the unfolding of the chignolin protein, and water dissociation. The latter, performed at the ab initio level, achieves comparable statistical accuracy within days, in contrast to a previous study requiring over a year.

8.
Proc Natl Acad Sci U S A ; 121(10): e2313542121, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38412121

ABSTRACT

Studying the pathways of ligand-receptor binding is essential to understand the mechanism of target recognition by small molecules. The binding free energy and kinetics of protein-ligand complexes can be computed using molecular dynamics (MD) simulations, often in quantitative agreement with experiments. However, only a qualitative picture of the ligand binding/unbinding paths can be obtained through a conventional analysis of the MD trajectories. Besides, the higher degree of manual effort involved in analyzing pathways limits its applicability in large-scale drug discovery. Here, we address this limitation by introducing an automated approach for analyzing molecular transition paths with a particular focus on protein-ligand dissociation. Our method is based on the dynamic time-warping algorithm, originally designed for speech recognition. We accurately classified molecular trajectories using a very generic descriptor set of contacts or distances. Our approach outperforms manual classification by distinguishing between parallel dissociation channels, within the pathways identified by visual inspection. Most notably, we could compute exit-path-specific ligand-dissociation kinetics. The unbinding timescale along the fastest path agrees with the experimental residence time, providing a physical interpretation to our entirely data-driven protocol. In combination with appropriate enhanced sampling algorithms, this technique can be used for the initial exploration of ligand-dissociation pathways as well as for calculating path-specific thermodynamic and kinetic properties.

9.
Stat Med ; 43(4): 706-730, 2024 02 20.
Article in English | MEDLINE | ID: mdl-38111986

ABSTRACT

Rare events are events which occur with low frequency. These often arise in clinical trials or cohort studies where the data are arranged in binary contingency tables. In this article, we investigate the estimation of effect heterogeneity for the risk-ratio parameter in meta-analysis of rare events studies through two likelihood-based nonparametric mixture approaches: an arm-based and a contrast-based model. Maximum likelihood estimation is achieved using the EM algorithm. Special attention is given to the choice of initial values. Inspired by the classification likelihood, a strategy is implemented which repeatedly uses random allocation of the studies to the mixture components as the choice of initial values. The likelihoods under the contrast-based and arm-based approaches are compared and differences are highlighted. We use simulations to assess the performance of these two methods. Under the design of sampling studies with nested treatment groups, the results show that the nonparametric mixture model based on the contrast-based approach is more appropriate in terms of model selection criteria such as AIC and BIC. Under the arm-based design, the arm-based model performs well, although in some cases it is outperformed by the contrast-based model. Comparisons of the estimators are provided in terms of bias and mean squared error. Also included in the comparison are the mixed Poisson regression model and the classical DerSimonian-Laird model (using the Mantel-Haenszel estimator for the common effect). In the simulations, the contrast-based method's estimate of effect heterogeneity behaves better than those of the compared methods, although differences become negligible for large within-study sample sizes. We illustrate the methodologies using several meta-analytic data sets in medicine.


Subject(s)
Meta-Analysis as Topic , Humans , Computer Simulation , Likelihood Functions , Odds Ratio , Sample Size
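The Mantel-Haenszel estimator mentioned in the abstract above has a simple closed form for the common risk ratio, and it stays defined when individual arms have zero events. A minimal sketch, with made-up tables rather than the article's data:

```python
def mantel_haenszel_rr(tables):
    """Mantel-Haenszel common risk ratio for 2x2 tables (a, n1, c, n2).

    a/n1: events/total in the treatment arm; c/n2: events/total in control.
    RR_MH = sum_i(a_i * n2_i / N_i) / sum_i(c_i * n1_i / N_i).
    """
    num = sum(a * n2 / (n1 + n2) for a, n1, c, n2 in tables)
    den = sum(c * n1 / (n1 + n2) for a, n1, c, n2 in tables)
    return num / den

# Hypothetical studies, one with zero events in the treatment arm:
tables = [(2, 120, 5, 115), (0, 60, 1, 58), (3, 200, 7, 210)]
rr = mantel_haenszel_rr(tables)
print(f"MH RR = {rr:.2f}")
```

The estimator only breaks down when every study has zeros in the same arm, which is the situation the modified version discussed in entry 20 below addresses.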
10.
Res Synth Methods ; 14(6): 853-873, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37607885

ABSTRACT

In meta-analyses of rare events, it can be challenging to obtain a reliable estimate of the pooled effect, in particular when the meta-analysis is based on a small number of studies. Recent simulation studies have shown that the beta-binomial model is a promising candidate in this situation, but have thus far only investigated its performance in a frequentist framework. In this study, we aim to make the beta-binomial model for meta-analysis of rare events amenable to Bayesian inference by proposing prior distributions for the effect parameter and investigating the models' robustness to different specifications of priors for the scale parameter. To evaluate the performance of Bayesian beta-binomial models with different priors, we conducted a simulation study with two different data generating models in which we varied the size of the pooled effect, the degree of heterogeneity, the baseline probability, and the sample size. Our results show that while some caution must be exercised when using the Bayesian beta-binomial in meta-analyses with extremely sparse data, the use of a weakly informative prior for the effect parameter is beneficial in terms of mean bias, mean squared error, and coverage. For the scale parameter, half-normal and exponential distributions are identified as candidate priors in meta-analysis of rare events using the Bayesian beta-binomial model.


Subject(s)
Models, Statistical , Bayes Theorem , Computer Simulation , Probability , Sample Size
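The beta-binomial likelihood at the core of the model above is the binomial marginalized over a Beta-distributed event probability, which is how the model absorbs between-study heterogeneity. It can be evaluated with the standard library alone via log-gamma functions; the parameters below are illustrative, not values from the study:

```python
from math import lgamma, exp

def log_beta(a, b):
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(k, n, a, b):
    """P(K = k | n, a, b): marginal of k ~ Binomial(n, p), p ~ Beta(a, b)."""
    log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_choose + log_beta(k + a, n - k + b) - log_beta(a, b))

# Under a rare-event setting (small a, larger b), observing zero events
# in a 50-patient arm is entirely plausible:
print(f"P(0 events) = {beta_binomial_pmf(0, 50, 0.5, 20):.3f}")
```

This is why the model handles zero-event studies without continuity corrections: zeros have positive likelihood directly.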
11.
Sensors (Basel) ; 23(12)2023 Jun 09.
Article in English | MEDLINE | ID: mdl-37420632

ABSTRACT

We report on the development of scintillating bolometers based on lithium molybdate crystals that contain molybdenum depleted in the double-β active isotope 100Mo (Li2100deplMoO4). We used two Li2100deplMoO4 cubic samples, each with 45 mm sides and a mass of 0.28 kg; these samples were produced following the purification and crystallization protocols developed for double-β search experiments with 100Mo-enriched Li2MoO4 crystals. Bolometric Ge detectors were utilized to register the scintillation photons emitted by the Li2100deplMoO4 crystal scintillators. The measurements were performed in the CROSS cryogenic set-up at the Canfranc Underground Laboratory (Spain). We observed that the Li2100deplMoO4 scintillating bolometers were characterized by an excellent spectrometric performance (∼3-6 keV FWHM at 0.24-2.6 MeV γs), a moderate scintillation signal (∼0.3-0.6 keV/MeV scintillation-to-heat energy ratio, depending on the light collection conditions), and high radiopurity (228Th and 226Ra activities below a few µBq/kg), which is comparable with the best reported results for low-temperature detectors based on Li2MoO4 with natural or 100Mo-enriched molybdenum content. The prospects of Li2100deplMoO4 bolometers for use in rare-event search experiments are briefly discussed.


Subject(s)
Molybdenum , Radium , Isotopes , Scintillation Counting/methods , Lithium , Ions
12.
Res Synth Methods ; 14(5): 689-706, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37309821

ABSTRACT

Rare events meta-analyses of randomized controlled trials (RCTs) are often underpowered because the outcomes are infrequent. Real-world evidence (RWE) from non-randomized studies may provide valuable complementary evidence about the effects of rare events, and there is growing interest in including such evidence in the decision-making process. Several methods for combining RCTs and RWE studies have been proposed, but the comparative performance of these methods is not well understood. We describe a simulation study that aims to evaluate an array of alternative Bayesian methods for including RWE in rare events meta-analysis of RCTs: the naïve data synthesis, the design-adjusted synthesis, the use of RWE as prior information, the three-level hierarchical models, and the bias-corrected meta-analysis model. The percentage bias, root-mean-square-error, mean 95% credible interval width, coverage probability, and power are used to measure performance. The various methods are illustrated using a systematic review to evaluate the risk of diabetic ketoacidosis among patients using sodium/glucose co-transporter 2 inhibitors as compared with active-comparators. Our simulations show that the bias-corrected meta-analysis model is comparable to or better than the other methods in terms of all evaluated performance measures and simulation scenarios. Our results also demonstrate that data solely from RCTs may not be sufficiently reliable for assessing the effects of rare events. In summary, the inclusion of RWE could increase the certainty and comprehensiveness of the body of evidence of rare events from RCTs, and the bias-corrected meta-analysis model may be preferable.


Subject(s)
Randomized Controlled Trials as Topic , Humans
13.
Front Mol Biosci ; 10: 1197154, 2023.
Article in English | MEDLINE | ID: mdl-37275961

ABSTRACT

Complex mechanisms regulate the cellular distribution of cholesterol, a critical component of eukaryote membranes involved in the regulation of membrane protein functions, both directly and through the physicochemical properties of membranes. StarD4, a member of the steroidogenic acute regulator-related lipid-transfer (StART) domain (StARD)-containing protein family, is a highly efficient sterol-specific transfer protein involved in cholesterol homeostasis. Its mechanism of cargo loading and release remains unknown despite recent insights into the key role of phosphatidylinositol phosphates in modulating its interactions with target membranes. We have used large-scale atomistic molecular dynamics (MD) simulations to study how the dynamics of cholesterol bound to the StarD4 protein can affect interaction with target membranes and cargo delivery. We identify the two major cholesterol (CHL) binding modes in the hydrophobic pocket of StarD4: one near S136 and S147 (the Ser-mode), and another closer to the putative release gate located near W171, R92, and Y117 (the Trp-mode). We show that conformational changes of StarD4 associated directly with the transition between these binding modes facilitate the opening of the gate. To understand the dynamics of this connection we apply a machine-learning algorithm for the detection of rare events in MD trajectories (RED), which reveals the structural motifs involved in the opening of a front gate and a back corridor in the StarD4 structure, occurring together with the spontaneous transition of CHL from the Ser-mode of binding to the Trp-mode. Further analysis of MD trajectory data with the information-theory-based NbIT method reveals the allosteric network connecting the CHL binding site to the functionally important structural components of the gate and corridor. Mutations of residues in the allosteric network are shown to affect the performance of the allosteric connection. These findings outline an allosteric mechanism which prepares the CHL-bound StarD4 to release and deliver the cargo when it is bound to the target membrane.

14.
Philos Trans A Math Phys Eng Sci ; 381(2250): 20220245, 2023 Jul 10.
Article in English | MEDLINE | ID: mdl-37211032

ABSTRACT

Discrete state Markov chains in discrete or continuous time are widely used to model phenomena in the social, physical and life sciences. In many cases, the model can feature a large state space, with extreme differences between the fastest and slowest transition timescales. Analysis of such ill-conditioned models is often intractable with finite precision linear algebra techniques. In this contribution, we propose a solution to this problem, namely partial graph transformation, to iteratively eliminate and renormalize states, producing a low-rank Markov chain from an ill-conditioned initial model. We show that the error induced by this procedure can be minimized by retaining both the renormalized nodes that represent metastable superbasins, and those through which reactive pathways concentrate, i.e. the dividing surface in the discrete state space. This procedure typically returns a much lower rank model, where trajectories can be efficiently generated with kinetic path sampling. We apply this approach to an ill-conditioned Markov chain for a model multi-community system, measuring the accuracy by direct comparison with trajectories and transition statistics. This article is part of a discussion meeting issue 'Supercomputing simulations of advanced materials'.
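The eliminate-and-renormalize step described above has a simple closed form for a discrete-time chain: removing state s redirects the probability that flowed through it, P'[i][j] = P[i][j] + P[i][s]·P[s][j]/(1 − P[s][s]). The sketch below shows one such elimination on a made-up 3-state matrix; it is a toy illustration of the renormalization rule, not the authors' partial graph transformation code:

```python
def eliminate_state(P, s):
    """Remove state s from row-stochastic matrix P (list of lists),
    redirecting its flux: P'[i][j] = P[i][j] + P[i][s]*P[s][j]/(1 - P[s][s])."""
    keep = [i for i in range(len(P)) if i != s]
    escape = 1.0 - P[s][s]  # probability of leaving s per visit
    return [[P[i][j] + P[i][s] * P[s][j] / escape for j in keep] for i in keep]

# Toy 3-state chain; eliminate the intermediate state 1.
P = [[0.8, 0.2, 0.0],
     [0.3, 0.4, 0.3],
     [0.0, 0.1, 0.9]]
Q = eliminate_state(P, 1)
for row in Q:
    print([round(x, 3) for x in row])
```

The reduced matrix stays row-stochastic, which is what allows the procedure to be applied iteratively until only the retained (metastable and dividing-surface) states remain.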

15.
Financ Innov ; 9(1): 73, 2023.
Article in English | MEDLINE | ID: mdl-37033296

ABSTRACT

In response to the unprecedented uncertain rare events of the last decade, we derive an optimal portfolio choice problem in a semi-closed form by integrating price diffusion ambiguity, volatility diffusion ambiguity, and jump ambiguity occurring in the traditional stock market and the cryptocurrency market into a single framework. We reach the following conclusions in both markets: first, price diffusion and jump ambiguity mainly determine detection-error probability; second, optimal choice is more significantly affected by price diffusion ambiguity than by jump ambiguity, and trivially affected by volatility diffusion ambiguity. In addition, investors tend to be more aggressive in a stable market than in a volatile one. Next, given a larger volatility jump size, investors tend to increase their portfolio during downward price jumps and decrease it during upward price jumps. Finally, the welfare loss caused by price diffusion ambiguity is more pronounced than that caused by jump ambiguity in an incomplete market. These findings enrich the extant literature on effects of ambiguity on the traditional stock market and the evolving cryptocurrency market. The results have implications for both investors and regulators.

16.
Proteomics ; 23(21-22): e2200290, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36852539

ABSTRACT

The evolution of omics and computational competency has accelerated discoveries of the underlying biological processes in an unprecedented way. High throughput methodologies, such as flow cytometry, can reveal deeper insights into cell processes, thereby allowing opportunities for scientific discoveries related to health and diseases. However, working with cytometry data often imposes complex computational challenges due to high-dimensionality, large size, and nonlinearity of the data structure. In addition, cytometry data frequently exhibit diverse patterns across biomarkers and suffer from substantial class imbalances which can further complicate the problem. The existing methods of cytometry data analysis either predict cell population or perform feature selection. Through this study, we propose a "wisdom of the crowd" approach to simultaneously predict rare cell populations and perform feature selection by integrating a pool of modern machine learning (ML) algorithms. Given that our approach integrates superior performing ML models across different normalization techniques based on entropy and rank, our method can detect diverse patterns existing across the model features. Furthermore, the method identifies a dynamic biomarker structure that divides the features into persistently selected, unselected, and fluctuating assemblies indicating the role of each biomarker in rare cell prediction, which can subsequently aid in studies of disease progression.


Subject(s)
Algorithms , Machine Learning , Biomarkers/analysis
17.
J Clin Med ; 12(4)2023 Feb 20.
Article in English | MEDLINE | ID: mdl-36836227

ABSTRACT

BACKGROUND: Many rare events meta-analyses of randomized controlled trials (RCTs) have low statistical power, and real-world evidence (RWE) is becoming widely recognized as a valuable source of evidence. The purpose of this study is to investigate methods for including RWE in a rare events meta-analysis of RCTs and the impact on the level of uncertainty around the estimates. METHODS: Four methods for the inclusion of RWE in evidence synthesis were investigated by applying them to two previously published rare events meta-analyses: the naïve data synthesis (NDS), the design-adjusted synthesis (DAS), the use of RWE as prior information (RPI), and the three-level hierarchical models (THMs). We gauged the effect of the inclusion of RWE by varying the degree of confidence placed in RWE. RESULTS: This study showed that the inclusion of RWE in a rare events meta-analysis of RCTs could increase the precision of the estimates, but this depended on the method of inclusion and the level of confidence placed in RWE. NDS cannot account for the bias of RWE, and its results may be misleading. DAS resulted in stable estimates for the two examples, regardless of whether we placed high- or low-level confidence in RWE. The results of the RPI approach were sensitive to the confidence level placed in RWE. The THM was effective in accommodating differences between study types, while yielding conservative results compared with the other methods. CONCLUSION: The inclusion of RWE in a rare events meta-analysis of RCTs could increase the level of certainty of the estimates and enhance the decision-making process. DAS might be appropriate for the inclusion of RWE in a rare events meta-analysis of RCTs, but further evaluation in different scenarios of empirical or simulation studies is still warranted.
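One simple reading of the "RWE as prior information" (RPI) idea, for a single event probability with a conjugate beta prior, is a power prior: the RWE likelihood is raised to a power α0 ∈ [0, 1] expressing the confidence placed in it. The counts, α0 grid, and Beta(1, 1) baseline below are all hypothetical, and this one-parameter sketch is far simpler than the Bayesian models compared in the study:

```python
def power_prior_posterior(x_rct, n_rct, x_rwe, n_rwe, alpha0, a=1.0, b=1.0):
    """Beta posterior for an event probability: Beta(a, b) baseline prior,
    RWE binomial likelihood down-weighted by alpha0, then RCT data at full weight.

    alpha0 = 0 ignores the RWE entirely; alpha0 = 1 pools it at face value.
    """
    a_post = a + alpha0 * x_rwe + x_rct
    b_post = b + alpha0 * (n_rwe - x_rwe) + (n_rct - x_rct)
    return a_post, b_post

def beta_mean(a, b):
    return a / (a + b)

# Hypothetical counts: 3/500 events in RCTs, 40/10000 in RWE.
means = []
for alpha0 in (0.0, 0.5, 1.0):
    a_p, b_p = power_prior_posterior(3, 500, 40, 10000, alpha0)
    means.append(beta_mean(a_p, b_p))
    print(f"alpha0={alpha0}: posterior mean = {beta_mean(a_p, b_p):.5f}")
```

Sweeping α0 like this makes visible the sensitivity to the confidence placed in RWE that the RESULTS section reports for the RPI approach.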

18.
Addiction ; 118(6): 1167-1176, 2023 06.
Article in English | MEDLINE | ID: mdl-36683137

ABSTRACT

BACKGROUND AND AIMS: Low outcome prevalence, often observed with opioid-related outcomes, poses an underappreciated challenge to accurate predictive modeling. Outcome class imbalance, where non-events (i.e. negative class observations) outnumber events (i.e. positive class observations) by a moderate to extreme degree, can distort measures of predictive accuracy in misleading ways, and make the overall predictive accuracy and the discriminatory ability of a predictive model appear spuriously high. We conducted a simulation study to measure the impact of outcome class imbalance on predictive performance of a simple SuperLearner ensemble model and suggest strategies for reducing that impact. DESIGN, SETTING, PARTICIPANTS: Using a Monte Carlo design with 250 repetitions, we trained and evaluated these models on four simulated data sets with 100 000 observations each: one with perfect balance between events and non-events, and three where non-events outnumbered events by an approximate factor of 10:1, 100:1, and 1000:1, respectively. MEASUREMENTS: We evaluated the performance of these models using a comprehensive suite of measures, including measures that are more appropriate for imbalanced data. FINDINGS: Increasing imbalance tended to spuriously improve overall accuracy (using a high threshold to classify events vs non-events, overall accuracy improved from 0.45 with perfect balance to 0.99 with the most severe outcome class imbalance), but diminished predictive performance was evident using other metrics (corresponding positive predictive value decreased from 0.99 to 0.14). CONCLUSION: Increasing reliance on algorithmic risk scores in consequential decision-making processes raises critical fairness and ethical concerns. This paper provides broad guidance for analytic strategies that clinical investigators can use to remedy the impacts of outcome class imbalance on risk prediction tools.


Subject(s)
Drug Overdose , Humans , Computer Simulation , Risk Factors , Analgesics, Opioid
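The divergence the abstract above describes, where overall accuracy rises with imbalance while positive predictive value collapses, can be reproduced with plain confusion-matrix arithmetic. The sketch below assumes a hypothetical classifier with fixed sensitivity 0.8 and specificity 0.95; these numbers are illustrative, not the SuperLearner results from the paper:

```python
def confusion_counts(n, prevalence, sensitivity, specificity):
    """Expected confusion-matrix counts for a classifier with fixed
    sensitivity/specificity applied to n observations."""
    pos = n * prevalence
    neg = n - pos
    tp, fn = pos * sensitivity, pos * (1 - sensitivity)
    tn, fp = neg * specificity, neg * (1 - specificity)
    return tp, fp, fn, tn

rows = []
for ratio in (1, 10, 100, 1000):   # non-events per event
    tp, fp, fn, tn = confusion_counts(100_000, 1 / (1 + ratio), 0.8, 0.95)
    accuracy = (tp + tn) / 100_000
    ppv = tp / (tp + fp)           # positive predictive value
    rows.append((ratio, accuracy, ppv))
    print(f"{ratio:>4}:1  accuracy={accuracy:.3f}  PPV={ppv:.3f}")
```

The classifier itself never changes; only the class mix does, which is why metrics suited to imbalance (PPV, recall, F-measures) are needed alongside accuracy.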
19.
Curr Protoc ; 3(1): e636, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36598346

ABSTRACT

Immunological memory is the basis of protection against most pathogens. Long-living memory T and B cells able to respond to specific stimuli, as well as persistent antibodies in plasma and in other body fluids, are crucial for determining the efficacy of vaccination and for protecting from a second infection by a previously encountered pathogen. Antigen-specific cells are represented at a very low frequency in the blood, and indeed, they can be considered "rare events" present in the memory T-cell pool. Therefore, such events should be analyzed with careful attention. In the last 20 years, different methods, mostly based upon flow cytometry, have been developed to identify such rare antigen-specific cells, and the COVID-19 pandemic has given a dramatic impetus to characterize the immune response against the virus. In this regard, we know that the identification, enumeration, and characterization of SARS-CoV-2-specific T and B cells following infection and/or vaccination require i) the use of specific peptides and adequate co-stimuli, ii) the use of appropriate inhibitors to avoid nonspecific activation, iii) the setting of appropriate timing for stimulation, and iv) the choice of adequate markers and reagents to identify antigen-specific cells. Optimization of these procedures allows not only determination of the magnitude of SARS-CoV-2-specific responses but also a comparison of the effects of different combinations of vaccines or determination of the response provided by so-called "hybrid immunity," resulting from a combination of natural immunity and vaccine-generated immunity. Here, we present two methods that are largely used to monitor the response magnitude and phenotype of SARS-CoV-2-specific T and B cells by polychromatic flow cytometry, along with some tips that can be useful for the quantification of these rare events. © 2023 Wiley Periodicals LLC. 
Basic Protocol 1: Identification of antigen-specific T cells.
Basic Protocol 2: Identification of antigen-specific B cells.


Subject(s)
COVID-19 , SARS-CoV-2 , Humans , COVID-19/prevention & control , Pandemics/prevention & control , B-Lymphocytes , Antibodies
20.
Int J Biostat ; 19(1): 21-38, 2023 05 01.
Article in English | MEDLINE | ID: mdl-36306466

ABSTRACT

Meta-analysis of binary outcome data often faces a situation where studies with a rare event are part of the set of studies to be considered. These studies have event counts so low that, in the extreme, no events occur in one or both groups to be compared. This raises the issue of how to validly estimate the summary risk or rate ratio across studies. A preferred choice is the Mantel-Haenszel estimator, which is still defined in the presence of zero studies unless all studies have zeros in one of the groups to be compared. For this situation, a modified Mantel-Haenszel estimator is suggested and shown by means of simulation work to perform well. Also, confidence interval estimation is discussed and evaluated in a simulation study. In a second part, heterogeneity of the relative risk across studies is investigated with a new chi-square type statistic based on a conditional binomial distribution, where the conditioning is on the event margin for each study. This is necessary as the conventional Q-statistic is undefined in the presence of zero studies. The null distribution of the proposed Q-statistic is obtained by means of a parametric bootstrap, as bootstrapping of the null distribution shows that a chi-square approximation is not valid for rare events meta-analysis. In addition, for the effect heterogeneity situation, confidence interval estimation is considered using a nonparametric bootstrap procedure. The proposed techniques are illustrated using three meta-analytic data sets.


Subject(s)
Risk , Odds Ratio , Computer Simulation , Binomial Distribution