Results 1 - 20 of 5,495
3.
Biom J ; 66(4): e2300156, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38847059

ABSTRACT

How should data be analyzed when the positivity assumption is violated? Several possible solutions exist in the literature. In this paper, we consider propensity score (PS) methods that are commonly used in observational studies to assess causal treatment effects in the context where the positivity assumption is violated. We focus on and examine four specific alternatives to inverse probability weighting (IPW) trimming and truncation: the matching weight (MW), Shannon's entropy weight (EW), overlap weight (OW), and beta weight (BW) estimators. We first specify their target population: the population of patients for whom clinical equipoise holds, that is, for whom there is sufficient PS overlap. Then, we establish the nexus among the different corresponding weights (and estimators); this allows us to highlight the shared properties and theoretical implications of these estimators. Finally, we introduce their augmented estimators, which take advantage of estimating both the propensity score and outcome regression models to enhance the treatment effect estimators in terms of bias and efficiency. We also elucidate the role of the OW estimator as the flagship of all these methods that target the overlap population. Our analytic results demonstrate that OW, MW, and EW are preferable to IPW, and to some cases of BW, when there is a moderate or extreme (stochastic or structural) violation of the positivity assumption. We then evaluate, compare, and confirm the finite-sample performance of the aforementioned estimators via Monte Carlo simulations. Finally, we illustrate these methods using two real-world data examples marked by violations of the positivity assumption.
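The weighting schemes compared above share one structure: a tilting function of the estimated PS divided by the probability of the treatment actually received. The sketch below illustrates that structure with simulated data and a Hajek-type weighted difference in means; the tilting functions are the standard forms for IPW, OW, MW, and EW, while the data-generating model and the logistic propensity model are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch (not the paper's code): balancing weights defined by a
# tilting function h(e) applied to an estimated propensity score e(x).
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
ps_true = 1 / (1 + np.exp(-(2.0 * X[:, 0] + 0.5 * X[:, 1])))   # strong coefficient -> poor overlap
Z = rng.binomial(1, ps_true)
Y = X @ np.array([1.0, 0.5, -0.5]) + 1.0 * Z + rng.normal(size=n)

e = LogisticRegression().fit(X, Z).predict_proba(X)[:, 1]       # estimated PS

tilt = {
    "IPW": np.ones_like(e),                                     # h(e) = 1
    "OW":  e * (1 - e),                                         # overlap weights
    "MW":  np.minimum(e, 1 - e),                                # matching weights
    "EW":  -(e * np.log(e) + (1 - e) * np.log(1 - e)),          # Shannon entropy weights
}

def hajek_estimate(h):
    """Weighted (Hajek) difference in means for tilting function h(e)."""
    w = h / np.where(Z == 1, e, 1 - e)                          # h(e)/e treated, h(e)/(1-e) controls
    mu1 = np.sum(w * Z * Y) / np.sum(w * Z)
    mu0 = np.sum(w * (1 - Z) * Y) / np.sum(w * (1 - Z))
    return mu1 - mu0

for name, h in tilt.items():
    print(f"{name}: {hajek_estimate(h):.3f}")
```

Swapping the tilting function is the only change needed to move between estimators, which is the shared structure the abstract exploits.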


Subject(s)
Biometry , Propensity Score , Biometry/methods , Humans , Causality , Probability
4.
J Refract Surg ; 40(6): e354-e361, 2024 May.
Article in English | MEDLINE | ID: mdl-38848053

ABSTRACT

PURPOSE: To assess the predictive accuracy of new-generation online intraocular lens (IOL) power formulas in eyes with previous myopic laser refractive surgery (LRS) and to evaluate the influence of corneal asphericity on the predictive accuracy. METHODS: The authors retrospectively evaluated 52 patients (78 eyes) with a history of laser in situ keratomileusis (LASIK) or photorefractive keratectomy (PRK) who subsequently underwent cataract surgery. Refractive prediction errors were calculated for 12 no-history new online formulas: 8 formulas with post-LRS versions (Barrett True-K, EVO 2.0, Hoffer QST, and Pearl DGS) using keratometry and posterior/total keratometry measured by IOLMaster 700 and 4 formulas without post-LRS versions (Cooke K6 and Kane) using keratometry and total keratometry. The refractive prediction error, mean absolute error (MAE), and percentages of eyes with prediction errors of ±0.25, ±0.50, ±0.75, ±1.00, and ±1.50 diopters (D) were compared. RESULTS: The MAEs of the 12 formulas were significantly different (F = 83.66, P < .001). The MAEs ranged from 0.62 to 0.94 D and from 1.07 to 1.84 D in the formulas with and without post-LRS versions, respectively. The EVO formula produced the lowest MAE (0.60) and MedAE (0.47), followed by the Barrett True-K (0.69 and 0.50, respectively). Each percentage of eyes with refractive prediction error was also significantly different among the 12 formulas (P < .001). CONCLUSIONS: The EVO and Barrett True-K formulas demonstrate comparable performance to the other existing formulas in eyes with a history of myopic LASIK/PRK. Surgeons should use these formulas with post-LRS versions and input keratometric values whenever possible. [J Refract Surg. 2024;40(6):e354-e361.].
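For readers reproducing this kind of formula comparison, the accuracy metrics reported above (mean and median absolute error, and the percentage of eyes within fixed diopter thresholds) reduce to a few lines of code. The sketch below assumes per-eye predicted and achieved spherical equivalents are already available; the example inputs are made up, not study data.

```python
import numpy as np

def accuracy_summary(predicted_se, achieved_se):
    """Summarize refractive prediction errors for one IOL formula.

    predicted_se / achieved_se: predicted vs. postoperative spherical
    equivalents in diopters (illustrative inputs, not study data).
    """
    err = np.asarray(achieved_se) - np.asarray(predicted_se)     # prediction error per eye
    out = {
        "ME": err.mean(),                                        # mean (signed) error
        "MAE": np.abs(err).mean(),                               # mean absolute error
        "MedAE": np.median(np.abs(err)),                         # median absolute error
    }
    for d in (0.25, 0.50, 0.75, 1.00, 1.50):
        out[f"within ±{d:.2f} D"] = 100 * np.mean(np.abs(err) <= d)
    return out

print(accuracy_summary([-0.1, 0.3, -0.6, 0.2], [0.1, 0.0, -0.2, 0.4]))
```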


Subject(s)
Keratomileusis, Laser In Situ , Lens Implantation, Intraocular , Lenses, Intraocular , Myopia , Optics and Photonics , Photorefractive Keratectomy , Refraction, Ocular , Visual Acuity , Humans , Retrospective Studies , Myopia/surgery , Myopia/physiopathology , Female , Male , Refraction, Ocular/physiology , Middle Aged , Photorefractive Keratectomy/methods , Keratomileusis, Laser In Situ/methods , Adult , Visual Acuity/physiology , Lasers, Excimer/therapeutic use , Cornea/surgery , Cornea/physiopathology , Reproducibility of Results , Biometry/methods , Phacoemulsification , Aged
5.
PLoS One ; 19(6): e0305076, 2024.
Article in English | MEDLINE | ID: mdl-38857255

ABSTRACT

This study aimed to develop and assess the accuracy of a predictive formula for postoperative anterior chamber depth, tilt, and decentration of a low-added segmented refractive intraocular lens. This single-center, retrospective, observational study included the right eyes of 96 patients (mean age: 72.43 ± 6.58 years) who underwent cataract surgery with implantation of a low-added segmented refractive intraocular lens at the Medical University Hospital between July 2019 and January 2021 and were followed up for more than 1 month postoperatively. The participants were divided into an estimation group, used to create the prediction formula, and a validation group, used to verify the accuracy of the formula. Anterior segment optical coherence tomography (CASIA 2, Tomey Corporation, Japan) and swept-source optical coherence tomography biometry (IOLMaster 700, Carl Zeiss Meditec AG) were used to measure the anterior ocular components. A predictive formula was devised for postoperative anterior chamber depth, intraocular lens tilt, and intraocular lens decentration (p < 0.01) in the estimation group. In the validation group, a significant positive correlation was observed between the values estimated with the prediction formula and the measured values for postoperative anterior chamber depth (r = 0.792), amount of intraocular lens tilt (r = 0.610), direction of intraocular lens tilt (r = 0.668), and amount of intraocular lens decentration (r = 0.431) (p < 0.01). In conclusion, our findings indicate that predicting the position of the low-added segmented refractive intraocular lens enables more accurate prediction of postoperative refractive values and better assessment of intraocular lens adaptation.
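The estimation/validation design described above can be sketched as an ordinary multivariable regression fitted in one subgroup and correlated against measurements in the other. In the hedged example below, the predictors (preoperative anterior chamber depth, lens thickness, axial length) and the simulated values are assumptions for illustration only; the paper's actual predictor set and formula are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import pearsonr

# Sketch of the estimation/validation split described above; predictor names
# and all numeric values are illustrative assumptions, not study data.
rng = np.random.default_rng(1)
n = 96
X = rng.normal([3.1, 4.5, 23.5], [0.4, 0.4, 1.0], size=(n, 3))   # pre-op ACD, LT, AL (assumed)
post_acd = 1.1 + 0.6 * X[:, 0] + 0.2 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.15, n)

est, val = slice(0, 64), slice(64, None)                         # estimation vs. validation eyes
model = LinearRegression().fit(X[est], post_acd[est])            # prediction formula
r, p = pearsonr(model.predict(X[val]), post_acd[val])            # predicted vs. measured
print(f"validation r = {r:.2f}, p = {p:.3g}")
```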


Subject(s)
Lens Implantation, Intraocular , Lenses, Intraocular , Tomography, Optical Coherence , Humans , Tomography, Optical Coherence/methods , Aged , Male , Female , Retrospective Studies , Middle Aged , Anterior Chamber/diagnostic imaging , Aged, 80 and over , Biometry/methods , Cataract Extraction , Refraction, Ocular/physiology
6.
PLoS One ; 19(6): e0304169, 2024.
Article in English | MEDLINE | ID: mdl-38857282

ABSTRACT

This study aimed to assess the effect of intraocular pressure (IOP) changes on biometry and intraocular lens (IOL) power calculation in patients diagnosed with primary open-angle glaucoma (POAG) and ocular hypertension (OHT). This prospective non-randomized cohort study enrolled patients with diagnosed POAG and OHT presenting with IOP levels exceeding 25 mmHg. The Thai Clinical Trials Registry number was TCTR20180912007. Optical biometry, encompassing measurements such as central corneal thickness (CCT), keratometry, anterior chamber depth (ACD), and axial length, was conducted before and after IOP reduction. The IOL power was also determined using the SRK/T formula. The main outcomes measured were alterations in biometry and IOL power. Correlations between IOP, biometric parameters, and IOL power were analyzed. In total, 28 eyes were included in the study, with a mean patient age of 65.71 ± 10.2 years. After IOP reduction, all biometric parameters except CCT and ACD exhibited a decrease without reaching statistical significance (all p > 0.05). Meanwhile, IOL power showed a slight increase of 0.214 ± 0.42 diopters (P = 0.035). The correlation between IOP and biometric parameters was found to be weak. However, there was a moderate correlation between IOP and IOL power (r² = 0.267). Notably, IOL power tended to increase by more than 0.5 diopters when IOP decreased by more than 10 mmHg (p < 0.001). In conclusion, changes in IOP among patients with POAG and OHT do not significantly impact biometry and IOL power calculations. Nonetheless, it may be prudent to consider a slight adjustment in IOL power when IOP is lowered by more than 10 mmHg.
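The core analysis is a paired before/after comparison of biometry-derived IOL power within the same eyes, plus a correlation between the size of the IOP reduction and the change in IOL power. A minimal sketch with simulated values follows; the numbers and the assumed linear relationship between IOP drop and power change are illustrative, not the study data, and the SRK/T calculation itself is not reproduced.

```python
import numpy as np
from scipy import stats

# Illustrative paired analysis (not study data): IOL power computed from
# biometry before and after IOP lowering in the same eyes.
rng = np.random.default_rng(2)
n = 28
iop_drop = rng.uniform(5, 20, n)                                  # mmHg reduction (assumed)
iol_before = rng.normal(20.0, 2.0, n)
iol_after = iol_before + 0.02 * iop_drop + rng.normal(0, 0.3, n)  # slight increase with larger drops

t, p = stats.ttest_rel(iol_after, iol_before)                     # paired comparison
r, _ = stats.pearsonr(iop_drop, iol_after - iol_before)           # IOP change vs. IOL power change
print(f"mean change = {np.mean(iol_after - iol_before):+.3f} D, p = {p:.3f}, r^2 = {r**2:.2f}")
```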


Subject(s)
Biometry , Glaucoma, Open-Angle , Intraocular Pressure , Lenses, Intraocular , Ocular Hypertension , Humans , Intraocular Pressure/physiology , Glaucoma, Open-Angle/physiopathology , Male , Female , Aged , Middle Aged , Ocular Hypertension/physiopathology , Prospective Studies , Biometry/methods
7.
Transl Vis Sci Technol ; 13(6): 2, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38837172

ABSTRACT

Purpose: The purpose of this study was to develop a simplified method to approximate the constants minimizing the standard deviation (SD) and the root mean square (RMS) of the prediction error in single-optimized intraocular lens (IOL) power calculation formulas. Methods: The study introduces analytical formulas to determine the optimal constant value for minimizing SD and RMS in single-optimized IOL power calculation formulas. These formulas were tested against several datasets containing biometric measurements from cataractous populations, comprising 10,330 eyes and 4 different IOL models in total. The study evaluated the effectiveness of the proposed method by comparing the outcomes with those obtained using traditional reference methods. Results: In optimizing IOL constants, minor differences between reference and estimated A-constants were found, with the maximum deviation at -0.086 (SD, SRK/T, and Vivinex) and -0.003 (RMS, PEARL DGS, and Vivinex). The largest discrepancy for third-generation formulas was -0.027 mm (SD, Haigis, and Vivinex) and 0.002 mm (RMS, Hoffer Q, and PCB00/SN60WF). Maximum RMS differences were -0.021 and +0.021, both involving Hoffer Q. After minimization, the largest mean prediction error was 0.726 diopters (D; SD) and 0.043 D (RMS), with the highest SD and RMS after adjustment at 0.529 D and 0.875 D, respectively, indicating effective minimization strategies. Conclusions: The study simplifies the process of minimizing SD and RMS in single-optimized IOL power predictions, offering a valuable tool for clinicians. However, it also underscores the complexity of achieving balanced optimization and suggests the need for further research in this area. Translational Relevance: The study presents a novel, clinically practical approach for optimizing IOL power calculations.
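The paper derives analytic expressions for the optimal constant; those are not reproduced here, but the optimization target can be illustrated numerically. The sketch below assumes the per-eye prediction error varies roughly linearly with the lens constant (with an eye-specific sensitivity) and searches for the constant minimizing either the SD or the RMS of the error; all numeric values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch of single-constant optimization: given a function returning per-eye
# prediction errors for a candidate constant, find the constant minimizing SD
# or RMS. The linear error model is an assumption for illustration, not the
# paper's analytic derivation.
rng = np.random.default_rng(3)
n = 500
base_err = rng.normal(0.15, 0.45, n)            # errors at the nominal A-constant (assumed)
sensitivity = rng.normal(0.55, 0.05, n)         # d(error)/d(constant) per eye (assumed)

def prediction_error(const, const0=119.0):
    return base_err - sensitivity * (const - const0)

for label, crit in [("SD", np.std), ("RMS", lambda e: np.sqrt(np.mean(e ** 2)))]:
    res = minimize_scalar(lambda c: crit(prediction_error(c)), bounds=(118.0, 120.0), method="bounded")
    print(f"constant minimizing {label}: {res.x:.3f}")
```

Because RMS² = SD² + (mean error)², the two targets generally lead to slightly different constants, which is the distinction the abstract's SD and RMS results reflect.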


Subject(s)
Lenses, Intraocular , Optics and Photonics , Humans , Optics and Photonics/methods , Biometry/methods , Refraction, Ocular/physiology , Female , Male , Lens Implantation, Intraocular/methods , Aged , Visual Acuity/physiology , Middle Aged
8.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38837900

ABSTRACT

Randomization-based inference using the Fisher randomization test allows for the computation of Fisher-exact P-values, making it an attractive option for the analysis of small, randomized experiments with non-normal outcomes. Two common test statistics used to perform Fisher randomization tests are the difference-in-means between the treatment and control groups and the covariate-adjusted version of the difference-in-means using analysis of covariance. Modern computing allows for fast computation of the Fisher-exact P-value, but confidence intervals have typically been obtained by inverting the Fisher randomization test over a range of possible effect sizes. This test inversion procedure is computationally expensive, limiting the use of randomization-based inference in applied work. A recent paper by Zhu and Liu developed a closed-form expression for the randomization-based confidence interval using the difference-in-means statistic. We extend the result of Zhu and Liu to obtain a closed-form expression for the randomization-based covariate-adjusted confidence interval and give practitioners a sufficient condition, checkable from the observed data, that guarantees that these confidence intervals have correct coverage. Simulations show that our procedure generates randomization-based covariate-adjusted confidence intervals that are robust to non-normality and that can be calculated in nearly the same time as the Fisher-exact P-value itself, thus removing the computational barrier to performing randomization-based inference when adjusting for covariates. We also demonstrate our method on a re-analysis of phase I clinical trial data.
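A Fisher randomization test with the two statistics mentioned above (difference in means and its ANCOVA-adjusted counterpart) can be sketched in a few lines. The Monte Carlo version below approximates the exact p-value by re-randomizing the treatment labels; the closed-form confidence intervals developed in the paper are not reproduced, and the simulated trial is an assumption for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Sketch of a Fisher randomization test (Monte Carlo approximation of the
# exact p-value) with unadjusted and covariate-adjusted statistics.
rng = np.random.default_rng(4)
n = 40
x = rng.normal(size=n)                                   # baseline covariate
z = rng.permutation(np.repeat([0, 1], n // 2))           # completely randomized assignment
y = 1.0 + 0.8 * x + 0.5 * z + rng.standard_t(3, size=n)  # non-normal outcome (assumed model)

def diff_in_means(y, z):
    return y[z == 1].mean() - y[z == 0].mean()

def ancova_estimate(y, z, x):
    X = sm.add_constant(np.column_stack([z, x]))
    return sm.OLS(y, X).fit().params[1]                   # coefficient on treatment

def frt_pvalue(stat_fn, y, z, *args, n_perm=2000):
    observed = stat_fn(y, z, *args)
    perm = np.array([stat_fn(y, rng.permutation(z), *args) for _ in range(n_perm)])
    return np.mean(np.abs(perm) >= np.abs(observed))      # two-sided randomization p-value

print("unadjusted p:", frt_pvalue(diff_in_means, y, z))
print("adjusted p:  ", frt_pvalue(ancova_estimate, y, z, x))
```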


Subject(s)
Computer Simulation , Confidence Intervals , Humans , Biometry/methods , Models, Statistical , Data Interpretation, Statistical , Random Allocation , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/methods
10.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38837902

ABSTRACT

In mobile health, tailoring interventions for real-time delivery is of paramount importance. Micro-randomized trials have emerged as the "gold-standard" methodology for developing such interventions. Analyzing data from these trials provides insights into the efficacy of interventions and their potential moderation by specific covariates. The "causal excursion effect," a novel class of causal estimand, addresses these inquiries. Yet existing research mainly focuses on continuous or binary data, leaving count data largely unexplored. The current work is motivated by the Drink Less micro-randomized trial from the UK, which focuses on a zero-inflated proximal outcome: the number of screen views in the hour following each intervention decision point. Specifically, we revisit the concept of the causal excursion effect for zero-inflated count outcomes and introduce novel estimation approaches that incorporate nonparametric techniques. Bidirectional asymptotics are established for the proposed estimators. Simulation studies are conducted to evaluate the performance of the proposed methods. As an illustration, we also apply these methods to the Drink Less trial data.


Subject(s)
Computer Simulation , Telemedicine , Humans , Telemedicine/statistics & numerical data , Statistics, Nonparametric , Causality , Randomized Controlled Trials as Topic , Models, Statistical , Biometry/methods , Data Interpretation, Statistical
11.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38861372

ABSTRACT

In many randomized placebo-controlled trials with a biomarker-defined subgroup, it is believed that this subgroup has the same or a higher treatment effect compared with its complement. These subgroups are often referred to as the biomarker-positive and biomarker-negative subgroups. Most biomarker-stratified pivotal trials are aimed at demonstrating a significant treatment effect either in the biomarker-positive subgroup or in the overall population. A major shortcoming of this approach is that the treatment can be declared effective in the overall population even though it has no effect in the biomarker-negative subgroup. We use the isotonic assumption about the treatment effects in the two subgroups to construct an efficient way to test for a treatment effect in both the biomarker-positive and biomarker-negative subgroups. A substantial reduction in the required sample size for such a trial compared with existing methods makes evaluating the treatment effect in both subgroups feasible in pivotal trials, especially when the prevalence of the biomarker-positive subgroup is less than 0.5.


Subject(s)
Biomarkers , Randomized Controlled Trials as Topic , Humans , Biomarkers/analysis , Biomarkers/blood , Randomized Controlled Trials as Topic/statistics & numerical data , Sample Size , Treatment Outcome , Biometry/methods , Computer Simulation , Models, Statistical
12.
Biomed Res Int ; 2024: 8112209, 2024.
Article in English | MEDLINE | ID: mdl-38884018

ABSTRACT

Conventional security mechanisms such as keys, PINs, and passwords, used in almost all fields today, have well-known limitations: passwords and PINs can be forgotten, and keys can be lost. To overcome these issues, biometric features, enabled by significant developments in biological digital signal processing, have brought outstanding improvements to authentication systems. Multimodal authentication, which can draw on behavioural or physiological traits, has recently gained considerable attention in biometric systems. A multimodal biometric system that combines data from several biometric modalities improves on the performance of each individual system and is more resistant to spoofing attempts. Apart from the electrocardiogram (ECG) and the iris, many other biometric traits can be captured from the human body, including the face, fingerprint, gait, keystroke dynamics, voice, DNA, palm vein, and hand geometry. ECG has recently been employed in unimodal and multimodal biometric recognition systems as a novel biometric modality. Compared with other biometric traits, ECG intrinsically reflects a person's liveness, making it difficult to spoof. The iris likewise plays an important role in biometric authentication. Based on these observations, we present a multimodal biometric person authentication system. The proposed method includes preprocessing, segmentation, feature extraction, feature fusion, and an ensemble classifier in which majority voting produces the final decision. The comparative analysis shows an overall performance of 96.55%, 96.2%, 96.2%, 96.5%, and 95.65% in terms of precision, F1-score, sensitivity, specificity, and accuracy, respectively.
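Only the final fusion-and-ensemble stage is sketched below: a hard-voting (majority vote) ensemble over fused feature vectors. The synthetic features stand in for the ECG and iris features, and the particular base classifiers are illustrative choices; the paper's preprocessing, segmentation, and feature extraction steps are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Simulated fused ECG + iris feature vectors (placeholder for real features).
X, y = make_classification(n_samples=600, n_features=40, n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", make_pipeline(StandardScaler(), SVC())),
    ],
    voting="hard",                                  # majority voting over base classifiers
)
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te)))
```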


Subject(s)
Biometric Identification , Electrocardiography , Iris , Humans , Electrocardiography/methods , Biometric Identification/methods , Iris/physiology , Iris/anatomy & histology , Algorithms , Biometry/methods , Male , Female
13.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38884127

ABSTRACT

The marginal structural quantile model (MSQM) provides a unique lens to understand the causal effect of a time-varying treatment on the full distribution of potential outcomes. Under the semiparametric framework, we derive the efficient influence function for the MSQM, from which a new doubly robust estimator is proposed for point estimation and inference. We show that the doubly robust estimator is consistent if either of the models associated with treatment assignment or the potential outcome distributions is correctly specified, and is semiparametric efficient if both models are correct. To implement the doubly robust MSQM estimator, we propose to solve a smoothed estimating equation to facilitate efficient computation of the point and variance estimates. In addition, we develop a confounding function approach to investigate the sensitivity of several MSQM estimators when the sequential ignorability assumption is violated. Extensive simulations are conducted to examine the finite-sample performance characteristics of the proposed methods. We apply the proposed methods to the Yale New Haven Health System Electronic Health Record data to study the effect of antihypertensive medications in patients with severe hypertension and to assess the robustness of the findings to unmeasured baseline and time-varying confounding.


Subject(s)
Computer Simulation , Hypertension , Models, Statistical , Humans , Hypertension/drug therapy , Antihypertensive Agents/therapeutic use , Electronic Health Records/statistics & numerical data , Biometry/methods
14.
Biom J ; 66(4): e2300288, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38700021

ABSTRACT

We introduce a new class of zero-or-one inflated power logit (IPL) regression models, which serve as a versatile tool for analyzing bounded continuous data with observations at a boundary. These models are applied to explore the effects of climate change on the distribution of tropical tuna within the North Atlantic Ocean. Our findings suggest that our modeling approach is adequate and capable of handling the outliers in the data. It exhibited superior performance compared with rival models in both diagnostic analysis and robustness of inference. We offer a user-friendly method for fitting IPL regression models in practical applications.


Subject(s)
Tropical Climate , Tuna , Animals , Logistic Models , Atlantic Ocean , Biometry/methods
15.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38708764

ABSTRACT

When studying treatment effects on time-to-event outcomes, it is common that some individuals never experience failure events, which suggests that they have been cured. However, the cure status may not be observed because of censoring, which makes it challenging to define treatment effects. Current methods mainly focus on estimating model parameters in various cure models, ultimately leading to a lack of causal interpretations. To address this issue, we propose 2 causal estimands, the timewise risk difference and the mean survival time difference in the always-uncured, based on principal stratification, as a complement to the treatment effect on cure rates. These estimands allow us to study the treatment effects on failure times in the always-uncured subpopulation. We show that, using a substitutional variable for the potential cure status under an ignorable treatment assignment mechanism, these 2 estimands are identifiable. We also provide estimation methods using mixture cure models. We applied our approach to an observational study that compared the leukemia-free survival rates of different transplantation types for curing acute lymphoblastic leukemia. Our proposed approach yielded insightful results that can inform future treatment decisions.


Subject(s)
Models, Statistical , Precursor Cell Lymphoblastic Leukemia-Lymphoma , Humans , Precursor Cell Lymphoblastic Leukemia-Lymphoma/mortality , Precursor Cell Lymphoblastic Leukemia-Lymphoma/therapy , Precursor Cell Lymphoblastic Leukemia-Lymphoma/drug therapy , Causality , Biometry/methods , Treatment Outcome , Computer Simulation , Disease-Free Survival , Survival Analysis
16.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38708763

ABSTRACT

Time-series data collected from a network of random variables are useful for identifying temporal pathways among the network nodes. Observed measurements may contain multiple sources of signal and noise, including Gaussian signals of interest and non-Gaussian noise such as artifacts, structured noise, and other unobserved factors (eg, genetic risk factors, disease susceptibility). Existing methods, including vector autoregression (VAR) and dynamic causal modeling, do not account for unobserved non-Gaussian components. Furthermore, existing methods cannot effectively distinguish contemporaneous relationships from temporal relations. In this work, we propose a novel method to identify latent temporal pathways using time-series biomarker data collected from multiple subjects. The model adjusts for the non-Gaussian components and separates the temporal network from the contemporaneous network. Specifically, an independent component analysis (ICA) is used to extract the unobserved non-Gaussian components, and the residuals are used to estimate the contemporaneous and temporal networks among the node variables based on the method of moments. The algorithm is fast and can easily scale up. We derive the identifiability and the asymptotic properties of the temporal and contemporaneous networks. We demonstrate the superior performance of our method through extensive simulations and an application to a study of attention-deficit/hyperactivity disorder (ADHD), in which we analyze the temporal relationships between brain regional biomarkers. We find that temporal network edges connected different brain regions, whereas most contemporaneous network edges were bilateral between the same regions and belonged to a subset of the functional connectivity network.
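A rough sketch of the two-step idea, under simplifying assumptions: estimate non-Gaussian components with ICA and regress them out, then read a temporal network off the lag-1 coefficients of a VAR fitted to the residual series and a contemporaneous network off the VAR innovations. This is a stand-in pipeline built from off-the-shelf tools, not the paper's method-of-moments estimator or its identifiability machinery.

```python
import numpy as np
from sklearn.decomposition import FastICA
from statsmodels.tsa.api import VAR

# Simulated multivariate series with Gaussian innovations plus a non-Gaussian
# (Laplace) component; sparse matrix A encodes the temporal pathways.
rng = np.random.default_rng(5)
T, p = 500, 6
A = 0.3 * np.eye(p) + 0.1 * (rng.random((p, p)) < 0.15)
Y = np.zeros((T, p))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A.T + rng.normal(size=p) + rng.laplace(scale=0.5, size=p)

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(Y)                                    # estimated non-Gaussian components
resid = Y - S @ np.linalg.lstsq(S, Y, rcond=None)[0]        # regress them out

fit = VAR(resid).fit(1)
temporal_net = fit.coefs[0]                                 # lag-1 coefficient matrix (temporal edges)
contemporaneous_net = np.corrcoef(fit.resid, rowvar=False)  # innovation correlations (contemporaneous edges)
print(np.round(temporal_net, 2))
```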


Subject(s)
Algorithms , Biomarkers , Computer Simulation , Models, Statistical , Humans , Biomarkers/analysis , Normal Distribution , Attention Deficit Disorder with Hyperactivity , Time Factors , Biometry/methods
17.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732856

ABSTRACT

Biometric authentication plays a vital role in various everyday applications, with increasing demands for reliability and security. However, the use of real biometric data for research raises privacy concerns and data scarcity issues. A promising approach has emerged that uses synthetic biometric data to address unbalanced representation and bias, as well as the limited availability of diverse datasets for the development and evaluation of biometric systems. Methods for the parameterized generation of highly realistic synthetic data are emerging, and the quality metrics needed to show that synthetic data are comparable to real data remain open research tasks. We explore the generation of 3D synthetic face data using game engines' ability to create varied, realistic virtual characters as a possible alternative to other creation methods that maintains reproducibility and ground truth. While synthetic data offer several benefits, including improved resilience against data privacy concerns, the limitations and challenges associated with their use are also addressed. Our work shows consistent behavior when comparing semi-synthetic data, as digital representations of real identities, with the corresponding real datasets. Despite slightly asymmetrical performance compared with a larger database of real samples, promising performance in face data authentication is shown, which lays the foundation for further investigations with digital avatars and the creation and analysis of fully synthetic data. Future directions for improving synthetic biometric data generation and its impact on advancing biometrics research are discussed.


Subject(s)
Face , Video Games , Humans , Face/anatomy & histology , Face/physiology , Biometry/methods , Biometric Identification/methods , Imaging, Three-Dimensional/methods , Male , Female , Algorithms , Reproducibility of Results
18.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38768225

ABSTRACT

Conventional supervised learning usually operates under the premise that data are collected from the same underlying population. However, challenges may arise when integrating new data from different populations, resulting in a phenomenon known as dataset shift. This paper focuses on prior probability shift, where the distribution of the outcome varies across datasets but the conditional distribution of the features given the outcome remains the same. To tackle the challenges posed by such shift, we propose an estimation algorithm that can efficiently combine information from multiple sources. Unlike existing methods that are restricted to discrete outcomes, the proposed approach accommodates both discrete and continuous outcomes. It also handles high-dimensional covariate vectors through variable selection using an adaptive least absolute shrinkage and selection operator (lasso) penalty, producing efficient estimates that possess the oracle property. Moreover, a novel semiparametric likelihood ratio test is proposed to check the validity of prior probability shift assumptions by embedding the null conditional density function into Neyman's smooth alternatives (Neyman, 1937) and testing study-specific parameters. We demonstrate the effectiveness of our proposed method through extensive simulations and a real data example. The proposed methods serve as a useful addition to the repertoire of tools for dealing with dataset shifts.
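The variable-selection ingredient, the adaptive lasso, can be sketched on its own via the standard column-rescaling trick: a plain lasso on rescaled covariates is equivalent to a weighted L1 penalty. The multi-source likelihood construction and the smooth-alternatives test from the paper are not shown, and the simple linear-model example below is an assumption for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

# Adaptive lasso sketch: weights from an initial consistent fit shrink weak
# signals harder, which is what yields the oracle-type selection behavior.
rng = np.random.default_rng(6)
n, p = 300, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                          # only the first 3 covariates matter
y = X @ beta + rng.normal(size=n)

init = LinearRegression().fit(X, y).coef_            # initial (unpenalized) estimate
w = 1.0 / (np.abs(init) + 1e-8)                      # adaptive weights: small for strong signals
X_scaled = X / w                                     # rescaling turns weighted L1 into plain lasso
fit = Lasso(alpha=0.05).fit(X_scaled, y)
beta_hat = fit.coef_ / w                             # map coefficients back to the original scale
print(np.round(beta_hat, 2))
```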


Subject(s)
Algorithms , Computer Simulation , Models, Statistical , Probability , Humans , Likelihood Functions , Biometry/methods , Data Interpretation, Statistical , Supervised Machine Learning
19.
Biom J ; 66(4): e2300084, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38775273

ABSTRACT

The cumulative incidence function is the standard method for estimating the marginal probability of a given event in the presence of competing risks. One basic but important goal in the analysis of competing risks data is the comparison of these curves, for which limited literature exists. We propose a new procedure that lets us not only test the equality of these curves but also group them if they are not equal. The proposed method determines the composition of the groups and automatically selects their number. Simulation studies show the good numerical behavior of the proposed methods for finite sample sizes. The applicability of the proposed method is illustrated using real data.
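The cumulative incidence functions being compared can be estimated nonparametrically with the Aalen-Johansen estimator, sketched from scratch below to keep the dependencies explicit: each cause-specific jump is weighted by the all-cause Kaplan-Meier survival just before the event time. The toy data are made up, and the paper's testing and grouping procedure is not reproduced.

```python
import numpy as np

def cumulative_incidence(time, status, cause):
    """Aalen-Johansen estimate of the cumulative incidence for one cause.

    time: event/censoring times; status: 0 = censored, >0 = cause code.
    Returns the event times for `cause` and the CIF evaluated there.
    """
    time, status = np.asarray(time, float), np.asarray(status)
    order = np.argsort(time)
    time, status = time[order], status[order]

    cif, surv, out_t, out_c = 0.0, 1.0, [], []
    for t in np.unique(time[status > 0]):
        n_at_risk = np.sum(time >= t)
        d_any = np.sum((time == t) & (status > 0))
        d_cause = np.sum((time == t) & (status == cause))
        cif += surv * d_cause / n_at_risk          # jump weighted by all-cause survival just before t
        surv *= 1 - d_any / n_at_risk              # update all-cause Kaplan-Meier survival
        if d_cause > 0:
            out_t.append(t)
            out_c.append(cif)
    return np.array(out_t), np.array(out_c)

# Toy example: cause 1 vs. competing cause 2, with censoring (status 0).
t = [2, 3, 3, 5, 6, 7, 8, 9, 11, 12]
s = [1, 0, 2, 1, 1, 0, 2, 1, 0, 1]
times, cif1 = cumulative_incidence(t, s, cause=1)
print(np.round(cif1, 3))
```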


Subject(s)
Models, Statistical , Humans , Incidence , Biometry/methods , Risk Assessment , Computer Simulation , Data Interpretation, Statistical
20.
Biom J ; 66(4): e2300171, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38785212

ABSTRACT

Statistical and machine learning methods have proved useful in many areas of immunology. In this paper, we address for the first time the problem of predicting the occurrence of class switch recombination (CSR) in B-cells, a problem of interest for understanding the antibody response under immunological challenges. We propose a framework for analyzing antibody repertoire data based on a clonal group (CG) representation, in a way that allows us to predict CSR events using CG-level features as input. We assess and compare the performance of several predictive models (logistic regression, LASSO logistic regression, random forest, and support vector machine) in carrying out this task. The proposed approach can obtain an unweighted average recall of 71% with models based on variable region descriptors and measures of CG diversity during an immune challenge and, most notably, before an immune challenge.
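The model-comparison step can be sketched with off-the-shelf classifiers and cross-validated predictions scored by balanced accuracy, which for a binary outcome equals the unweighted average recall reported above. The simulated feature matrix stands in for the clonal-group descriptors and diversity measures; it is an assumption for illustration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.metrics import balanced_accuracy_score   # unweighted average recall for binary labels
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder for CG-level features (variable region descriptors, diversity measures).
X, y = make_classification(n_samples=400, n_features=15, weights=[0.7, 0.3], random_state=0)

models = {
    "logistic": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "lasso logistic": make_pipeline(StandardScaler(),
                                    LogisticRegressionCV(penalty="l1", solver="liblinear", max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC()),
}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=5)        # out-of-fold predictions
    print(f"{name}: UAR = {balanced_accuracy_score(y, pred):.3f}")
```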


Subject(s)
B-Lymphocytes , Immunoglobulin Class Switching , B-Lymphocytes/immunology , Animals , Biometry/methods , Recombination, Genetic , Antibodies/immunology , Mice , Humans