Results 1-20 of 109
1.
SIAM J Imaging Sci ; 17(1): 273-300, 2024.
Article in English | MEDLINE | ID: mdl-38550750

ABSTRACT

Intensity-based image registration is critical for neuroimaging tasks, such as 3D reconstruction, time-series alignment, and common coordinate mapping. The gradient-based optimization methods commonly used to solve this problem require a careful selection of step length. This limitation imposes substantial time and computational costs. Here we propose a gradient-independent rigid-motion registration algorithm based on the majorization-minimization (MM) principle. Each iteration of our intensity-based MM algorithm reduces to a simple point-set rigid registration problem with a closed-form solution that avoids the step-length issue altogether. The details of the algorithm are presented, and an error bound for its more practical truncated form is derived. The MM algorithm is shown to be more effective than gradient descent on simulated images and Nissl-stained coronal slices of mouse brain. We also compare and contrast the similarities and differences between the MM algorithm and another gradient-free registration algorithm called the block-matching method. Finally, extensions of this algorithm to more complex problems are discussed.
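
The closed-form point-set subproblem mentioned above is the classical Kabsch/Procrustes solution. A minimal sketch in Julia, assuming paired point sets stored as matrix columns; the MM surrogate construction that generates these point pairs from image intensities follows the paper and is not reproduced here.

```julia
using LinearAlgebra, Statistics

# Closed-form rigid registration (Kabsch/Procrustes): find rotation R and
# translation t minimizing Σᵢ ‖R·xᵢ + t − yᵢ‖² for paired point sets.
function rigid_fit(X::AbstractMatrix, Y::AbstractMatrix)   # points as columns
    μx = vec(mean(X, dims=2)); μy = vec(mean(Y, dims=2))
    F = svd((Y .- μy) * (X .- μx)')        # SVD of the cross-covariance
    S = Diagonal([ones(size(X, 1) - 1); sign(det(F.U * F.Vt))])
    R = F.U * S * F.Vt                     # proper rotation (det R = +1)
    t = μy - R * μx
    return R, t
end
```
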

2.
J Comput Graph Stat ; 32(3): 1097-1108, 2023.
Article in English | MEDLINE | ID: mdl-37982129

ABSTRACT

Many problems in classification involve huge numbers of irrelevant features. Variable selection reveals the crucial features, reduces the dimensionality of feature space, and improves model interpretation. In the support vector machine literature, variable selection is achieved by ℓ1 penalties. These convex relaxations seriously bias parameter estimates toward 0 and tend to admit too many irrelevant features. The current paper presents an alternative that replaces penalties by sparse-set constraints. Penalties still appear, but serve a different purpose. The proximal distance principle takes a loss function L(β) and adds the penalty (ρ/2)·dist(β, S_k)², capturing the squared Euclidean distance of the parameter vector β to the sparsity set S_k, where at most k components of β are nonzero. If β_ρ represents the minimum of the objective f_ρ(β) = L(β) + (ρ/2)·dist(β, S_k)², then β_ρ tends to the constrained minimum of L(β) over S_k as ρ tends to ∞. We derive two closely related algorithms to carry out this strategy. Our simulated and real examples vividly demonstrate how the algorithms achieve better sparsity without loss of classification power.
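
A minimal sketch of the proximal distance update, assuming a generic differentiable classification loss supplied as `gradL` (a hypothetical stand-in for the paper's loss); the projection onto S_k simply keeps the k largest-magnitude coefficients.

```julia
# Project onto the sparsity set S_k by keeping the k largest |β_i|.
function project_sparse(β::Vector{Float64}, k::Int)
    out = zeros(length(β))
    idx = partialsortperm(abs.(β), 1:k, rev=true)   # indices of k largest |β_i|
    out[idx] = β[idx]
    return out
end

# One gradient step on f_ρ(β) = L(β) + (ρ/2)·dist(β, S_k)²; the gradient of
# (1/2)·dist² at β is β − P_{S_k}(β) wherever the projection is unique.
proximal_distance_step(β, gradL, ρ, k; s=1e-2) =
    β - s * (gradL(β) + ρ * (β - project_sparse(β, k)))
```
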

3.
Technometrics ; 65(1): 117-126, 2023.
Article in English | MEDLINE | ID: mdl-37448596

ABSTRACT

Building on previous research of Chi and Chi (2022), the current paper revisits estimation in robust structured regression under the L2E criterion. We adopt the majorization-minimization (MM) principle to design a new algorithm for updating the vector of regression coefficients. Our sharp majorization achieves faster convergence than the previous alternating proximal gradient descent algorithm (Chi and Chi, 2022). In addition, we reparameterize the model by substituting precision for scale and estimate precision via a modified Newton's method. This simplifies and accelerates overall estimation. We also introduce distance-to-set penalties to enable constrained estimation under nonconvex constraint sets. This tactic also improves performance in coefficient estimation and structure recovery. Finally, we demonstrate the merits of our improved tactics through a rich set of simulation examples and a real data application.

4.
Proc Natl Acad Sci U S A ; 120(27): e2303168120, 2023 Jul 04.
Article in English | MEDLINE | ID: mdl-37339185

ABSTRACT

We briefly review the majorization-minimization (MM) principle and elaborate on the closely related notion of proximal distance algorithms, a generic approach for solving constrained optimization problems via quadratic penalties. We illustrate how the MM and proximal distance principles apply to a variety of problems from statistics, finance, and nonlinear optimization. Drawing from our selected examples, we also sketch a few ideas pertinent to the acceleration of MM algorithms: a) structuring updates around efficient matrix decompositions, b) path following in proximal distance iteration, and c) cubic majorization and its connections to trust region methods. These ideas are put to the test on several numerical examples, but for the sake of brevity, we omit detailed comparisons to competing methods. The current article, which is a mix of review and current contributions, celebrates the MM principle as a powerful framework for designing optimization algorithms and reinterpreting existing ones.

5.
Bioinformatics ; 39(4)2023 04 03.
Article in English | MEDLINE | ID: mdl-37067496

ABSTRACT

MOTIVATION: In a genome-wide association study, analyzing multiple correlated traits simultaneously is potentially superior to analyzing the traits one by one. Standard methods for multivariate genome-wide association studies operate marker-by-marker and are computationally intensive. RESULTS: We present a sparsity-constrained regression algorithm for multivariate genome-wide association studies based on iterative hard thresholding and implement it in a convenient Julia package, MendelIHT.jl. In simulation studies with up to 100 quantitative traits, iterative hard thresholding matches the true positive rates of GEMMA's linear mixed models and mv-PLINK's canonical correlation analysis while delivering smaller false positive rates and faster execution times. On UK Biobank data with 470,228 variants, MendelIHT completed a three-trait joint analysis (n = 185,656) in 20 h and an 18-trait joint analysis (n = 104,264) in 53 h with an 80 GB memory footprint. In short, MendelIHT enables geneticists to fit a single regression model that simultaneously considers the effects of all SNPs and dozens of traits. AVAILABILITY AND IMPLEMENTATION: Software, documentation, and scripts to reproduce our results are available from https://github.com/OpenMendel/MendelIHT.jl.


Subjects
Genome-Wide Association Study, Software, Algorithms, Computer Simulation, Phenotype, Single Nucleotide Polymorphism
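
As a rough illustration of the core update, here is a sketch of iterative hard thresholding for sparse least squares; the step-size rule and plain Gaussian model are simplifying assumptions, whereas MendelIHT itself handles GLMs, multiple traits, and compressed genotype formats.

```julia
using LinearAlgebra

# Iterative hard thresholding: gradient step, then projection onto the
# set of k-sparse coefficient vectors.
function iht(X::Matrix{Float64}, y::Vector{Float64}, k::Int; iters=100)
    β = zeros(size(X, 2))
    for _ in 1:iters
        g = X' * (y - X * β)                  # negative gradient of ½‖y − Xβ‖²
        s = dot(g, g) / dot(X * g, X * g)     # exact step for least squares
        β .+= s .* g
        keep = partialsortperm(abs.(β), 1:k, rev=true)
        mask = falses(length(β)); mask[keep] .= true
        β[.!mask] .= 0                        # hard-threshold to k entries
    end
    return β
end
```
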
6.
Am J Hum Genet ; 110(2): 314-325, 2023 02 02.
Article in English | MEDLINE | ID: mdl-36610401

ABSTRACT

Admixture estimation plays a crucial role in ancestry inference and genome-wide association studies (GWASs). Computer programs such as ADMIXTURE and STRUCTURE are commonly employed to estimate the admixture proportions of sample individuals. However, these programs can be overwhelmed by the computational burdens imposed by the 10⁵ to 10⁶ samples and millions of markers commonly found in modern biobanks. An attractive strategy is to run these programs on a set of ancestry-informative SNP markers (AIMs) that exhibit substantially different frequencies across populations. Unfortunately, existing methods for identifying AIMs require knowing ancestry labels for a subset of the sample. This supervised learning approach creates a chicken-and-egg scenario. In this paper, we present an unsupervised, scalable framework that seamlessly carries out AIM selection and likelihood-based estimation of admixture proportions. Our simulated and real data examples show that this approach is scalable to modern biobank datasets. OpenADMIXTURE, our Julia implementation of the method, is open source and available for free.


Subjects
Biological Specimen Banks, Genome-Wide Association Study, Humans, Genome-Wide Association Study/methods, Likelihood Functions, Population Groups, Software, Population Genetics
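
For orientation, the likelihood being maximized is the standard binomial admixture model. A sketch under the usual parameterization (admixture proportions Q, allele frequencies F); the AIM-selection step and the optimization algorithm itself are not shown.

```julia
# Binomial log-likelihood of the standard admixture model: genotype g_ij
# counts reference alleles (0, 1, or 2); Q (individuals × populations, rows
# summing to 1) holds admixture proportions; F (populations × SNPs) holds
# allele frequencies, assumed strictly inside (0, 1).
function admixture_loglik(G::Matrix{Int}, Q::Matrix{Float64}, F::Matrix{Float64})
    P = Q * F                     # p_ij = Σ_k q_ik f_kj, individuals × SNPs
    return sum(@. G * log(P) + (2 - G) * log(1 - P))
end
```
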
7.
Algorithms ; 16(10)2023 Oct.
Article in English | MEDLINE | ID: mdl-38529123

ABSTRACT

The Hausdorff distance between two closed sets has important theoretical and practical applications. Yet apart from finite point clouds, there appear to be no generic algorithms for computing this quantity. Because many infinite sets are defined by algebraic equalities and inequalities, this is a huge gap. The current paper constructs Frank-Wolfe and projected gradient ascent algorithms for computing the Hausdorff distance between two compact convex sets. Although these algorithms are guaranteed to go uphill, they can become trapped by local maxima. To avoid this defect, we investigate a homotopy method that gradually deforms two balls into the two target sets. The Frank-Wolfe and projected gradient algorithms are tested on two pairs (A, B) of compact convex sets, where: (1) A is the box [-1, 1] translated by 1 and B is the intersection of the unit ball and the non-negative orthant; and (2) A is the probability simplex and B is the ℓ1 unit ball translated by 1. For problem (2), we find the Hausdorff distance analytically. Projected gradient ascent is more reliable than the Frank-Wolfe algorithm and finds the exact solution of problem (2). Homotopy improves the performance of both algorithms when the exact solution is unknown or unattained.
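
For context, the finite point-cloud case the paper generalizes from admits a brute-force computation. A sketch, assuming points stored as matrix columns:

```julia
using LinearAlgebra

# Hausdorff distance between two finite point clouds (columns of A and B):
# max over the two one-sided distances sup_{a∈A} dist(a, B) and its mirror.
# The paper's Frank-Wolfe / projected gradient machinery replaces this
# enumeration when A and B are infinite sets defined by constraints.
function hausdorff(A::Matrix{Float64}, B::Matrix{Float64})
    d(a, P) = minimum(norm(a - P[:, j]) for j in axes(P, 2))
    hAB = maximum(d(A[:, i], B) for i in axes(A, 2))
    hBA = maximum(d(B[:, j], A) for j in axes(B, 2))
    return max(hAB, hBA)
end
```
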

8.
Optim Lett ; 17(5): 1133-1159, 2023 Jun.
Article in English | MEDLINE | ID: mdl-38516636

ABSTRACT

The task of projecting onto ℓp norm balls is ubiquitous in statistics and machine learning, yet the availability of actionable algorithms for doing so is largely limited to the special cases of p ∈ {0, 1, 2, ∞}. In this paper, we introduce novel, scalable methods for projecting onto the ℓp ball for general p>0. For p≥1, we solve the univariate Lagrangian dual via a dual Newton method. We then carefully design a bisection approach for p<1, presenting theoretical and empirical evidence of zero or a small duality gap in the non-convex case. Our contributions are thoroughly assessed empirically and applied to large-scale regularized multi-task learning and compressed sensing. The code implementing our methods is publicly available on GitHub.
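
For reference, the p = 1 special case has a classical sort-and-threshold solution; a sketch of that baseline follows (the paper's dual Newton and bisection methods replace it for general p > 0).

```julia
# Euclidean projection onto the ℓ1 ball of radius r via the classical
# sort-and-threshold rule (Duchi et al. style).
function project_l1(y::Vector{Float64}, r::Float64)
    sum(abs, y) <= r && return copy(y)              # already inside the ball
    u = sort(abs.(y), rev=true)
    css = cumsum(u)
    k = findlast(j -> u[j] > (css[j] - r) / j, 1:length(u))
    θ = (css[k] - r) / k                            # soft-threshold level
    return sign.(y) .* max.(abs.(y) .- θ, 0.0)
end
```
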

9.
J Med Primatol ; 51(6): 329-344, 2022 12.
Article in English | MEDLINE | ID: mdl-35855511

ABSTRACT

BACKGROUND: Poor nutrition during fetal development programs postnatal kidney function. Understanding the postnatal consequences in nonhuman primates (NHP) is important for translating these findings to human kidney function and disease risk. We hypothesized that the effects of intrauterine growth restriction (IUGR) in NHP persist postnatally, with potential molecular mechanisms revealed by a Western-type diet challenge. METHODS: IUGR juvenile baboons were fed a 7-week Western diet, with kidney biopsies, blood, and urine collected before and after challenge. Transcriptomics and metabolomics were used to analyze the biosamples. RESULTS: The pre-challenge IUGR kidney transcriptome and urine metabolome differed from controls. Post-challenge, sex- and diet-specific responses in urine metabolites and renal signaling pathways were observed. Dysregulated mTOR signaling persisted postnatally in pre-challenge females. Post-challenge, the IUGR male response showed uncoordinated signaling suggesting proximal tubule injury. CONCLUSION: Fetal undernutrition impacts juvenile offspring kidneys at the molecular level, suggesting early-onset blood pressure dysregulation.


Subjects
Fetal Growth Retardation, Kidney, Humans, Animals, Female, Male, Fetal Growth Retardation/etiology, Fetal Growth Retardation/veterinary, Kidney/pathology, Papio, Blood Pressure
10.
PLoS Comput Biol ; 18(6): e1009598, 2022 06.
Article in English | MEDLINE | ID: mdl-35696417

ABSTRACT

Differential sensitivity analysis is indispensable in fitting parameters, understanding uncertainty, and forecasting the results of both thought and lab experiments. Although there are many methods currently available for performing differential sensitivity analysis of biological models, it can be difficult to determine which method is best suited for a particular model. In this paper, we explain a variety of differential sensitivity methods and assess their value in some typical biological models. First, we explain the mathematical basis for three numerical methods: adjoint sensitivity analysis, complex perturbation sensitivity analysis, and forward mode sensitivity analysis. We then carry out four instructive case studies. (a) The CARRGO model for tumor-immune interaction highlights the additional information that differential sensitivity analysis provides beyond traditional naive sensitivity methods, (b) the deterministic SIR model demonstrates the value of using second-order sensitivity in refining model predictions, (c) the stochastic SIR model shows how differential sensitivity can be attacked in stochastic modeling, and (d) a discrete birth-death-migration model illustrates how the complex perturbation method of differential sensitivity can be generalized to a broader range of biological models. Finally, we compare the speed, accuracy, and ease of use of these methods. We find that forward mode automatic differentiation has the quickest computational time, while the complex perturbation method is the simplest to implement and the most generalizable.


Subjects
Biological Models, Stochastic Processes, Uncertainty
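
The complex perturbation (complex-step) method singled out above is compact enough to state exactly; the example function below is illustrative, not one of the paper's case studies.

```julia
# Complex-step differentiation: for real-analytic f,
# Im(f(x + ih))/h ≈ f'(x) with no subtractive cancellation,
# so h can be taken near machine precision.
complex_step(f, x; h=1e-20) = imag(f(x + im * h)) / h

# Example: sensitivity of an exponential-decay model output to its rate.
decay(λ) = exp(-λ * 3.0)
complex_step(decay, 0.5)      # ≈ -3.0 * exp(-1.5), the exact derivative
```
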
11.
Int Stat Rev ; 90(Suppl 1): S52-S66, 2022 Dec.
Article in English | MEDLINE | ID: mdl-37204987

ABSTRACT

Nan Laird has an enormous and growing impact on computational statistics. Her paper with Dempster and Rubin on the expectation-maximisation (EM) algorithm is the second most cited paper in statistics. Her papers and book on longitudinal modelling are nearly as impressive. In this brief survey, we revisit the derivation of some of her most useful algorithms from the perspective of the minorisation-maximisation (MM) principle. The MM principle generalises the EM principle and frees it from the shackles of missing data and conditional expectations. Instead, the focus shifts to the construction of surrogate functions via standard mathematical inequalities. The MM principle can deliver a classical EM algorithm with less fuss or an entirely new algorithm with a faster rate of convergence. In any case, the MM principle enriches our understanding of the EM principle and suggests new algorithms of considerable potential in high-dimensional settings where standard algorithms such as Newton's method and Fisher scoring falter.

12.
Article in English | MEDLINE | ID: mdl-37205013

ABSTRACT

The current paper studies the problem of minimizing a loss f(x) subject to constraints of the form Dx ∈ S, where S is a closed set, convex or not, and D is a matrix that fuses parameters. Fusion constraints can capture smoothness, sparsity, or more general constraint patterns. To tackle this generic class of problems, we combine the Beltrami-Courant penalty method of optimization with the proximal distance principle. The latter is driven by minimization of penalized objectives f(x) + (ρ/2)·dist(Dx, S)² involving large tuning constants ρ and the squared Euclidean distance of Dx from S. The next iterate xₙ₊₁ of the corresponding proximal distance algorithm is constructed from the current iterate xₙ by minimizing the majorizing surrogate function f(x) + (ρ/2)·‖Dx − 𝒫_S(Dxₙ)‖². For fixed ρ, a subanalytic loss f(x), and a subanalytic constraint set S, we prove convergence to a stationary point. Under stronger assumptions, we provide convergence rates and demonstrate linear local convergence. We also construct a steepest descent (SD) variant to avoid costly linear system solves. To benchmark our algorithms, we compare their results to those delivered by the alternating direction method of multipliers (ADMM). Our extensive numerical tests include problems on metric projection, convex regression, convex clustering, total variation image denoising, and projection of a matrix to a good condition number. These experiments demonstrate the superior speed and acceptable accuracy of our steepest descent variant on high-dimensional problems. Julia code to replicate all of our experiments can be found at https://github.com/alanderos91/ProximalDistanceAlgorithms.jl.
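
For the special case of a quadratic loss f(x) = ½‖x − y‖², the surrogate minimization reduces to a linear solve. A sketch, with `project_S` a hypothetical stand-in for the projection onto S:

```julia
using LinearAlgebra

# One MM update of the proximal distance algorithm for f(x) = ½‖x − y‖²:
# the surrogate f(x) + (ρ/2)‖Dx − 𝒫_S(Dxₙ)‖² is minimized exactly by
# solving the normal equations below.
function proxdist_update(xₙ, y, D, ρ, project_S)
    z = project_S(D * xₙ)                        # anchor point 𝒫_S(Dxₙ)
    return (I + ρ * D' * D) \ (y + ρ * D' * z)
end
```
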

13.
Biometrika ; 109(4): 1047-1066, 2022 Dec.
Article in English | MEDLINE | ID: mdl-38094986

ABSTRACT

This paper addresses the task of estimating a covariance matrix under a patternless sparsity assumption. In contrast to existing approaches based on thresholding or shrinkage penalties, we propose a likelihood-based method that regularizes the distance from the covariance estimate to a symmetric sparsity set. This formulation avoids unwanted shrinkage induced by more common norm penalties, and enables optimization of the resulting nonconvex objective by solving a sequence of smooth, unconstrained subproblems. These subproblems are generated and solved via the proximal distance version of the majorization-minimization principle. The resulting algorithm executes rapidly, gracefully handles settings where the number of parameters exceeds the number of cases, yields a positive-definite solution, and enjoys desirable convergence properties. Empirically, we demonstrate that our approach outperforms competing methods across several metrics, for a suite of simulated experiments. Its merits are illustrated on international migration data and a case study on flow cytometry. Our findings suggest that the marginal and conditional dependency networks for the cell signalling data are more similar than previously concluded.

14.
IEEE Trans Comput Imaging ; 8: 838-850, 2022.
Article in English | MEDLINE | ID: mdl-37065711

ABSTRACT

This paper discusses phase retrieval algorithms for maximum likelihood (ML) estimation from measurements following independent Poisson distributions in very low-count regimes, e.g., 0.25 photon per pixel. To maximize the log-likelihood of the Poisson ML model, we propose a modified Wirtinger flow (WF) algorithm using a step size based on the observed Fisher information. This approach eliminates all parameter tuning except the number of iterations. We also propose a novel curvature for majorize-minimize (MM) algorithms with a quadratic majorizer. We show theoretically that our proposed curvature is sharper than the curvature derived from the supremum of the second derivative of the Poisson ML cost function. We compare the proposed algorithms (WF, MM) with existing optimization methods, including WF using other step-size schemes, quasi-Newton methods such as LBFGS and alternating direction method of multipliers (ADMM) algorithms, under a variety of experimental settings. Simulation experiments with a random Gaussian matrix, a canonical DFT matrix, a masked DFT matrix and an empirical transmission matrix demonstrate the following. 1) As expected, algorithms based on the Poisson ML model consistently produce higher quality reconstructions than algorithms derived from Gaussian noise ML models when applied to low-count data. Furthermore, incorporating regularizers, such as corner-rounded anisotropic total variation (TV) that exploit the assumed properties of the latent image, can further improve the reconstruction quality. 2) For unregularized cases, our proposed WF algorithm with Fisher information for step size converges faster (in terms of cost function and PSNR vs. time) than other WF methods, e.g., WF with empirical step size, backtracking line search, and optimal step size for the Gaussian noise model; it also converges faster than the LBFGS quasi-Newton method. 3) In regularized cases, our proposed WF algorithm converges faster than WF with backtracking line search, LBFGS, MM and ADMM.

15.
SIAM J Matrix Anal Appl ; 42(2): 859-882, 2021.
Article in English | MEDLINE | ID: mdl-34776610

ABSTRACT

This paper studies the problem of maximizing the sum of traces of matrix quadratic forms on a product of Stiefel manifolds. This orthogonal trace-sum maximization (OTSM) problem generalizes many interesting problems such as generalized canonical correlation analysis (CCA), Procrustes analysis, and cryo-electron microscopy of Nobel Prize fame. For these applications finding global solutions is highly desirable, but it has been unclear how to find even a stationary point, let alone test its global optimality. Through a close inspection of Ky Fan's classical result [Proc. Natl. Acad. Sci. USA, 35 (1949), pp. 652-655] on the variational formulation of the sum of the largest eigenvalues of a symmetric matrix, and a semidefinite programming (SDP) relaxation of the latter, we first provide a simple method to certify global optimality of a given stationary point of OTSM. This method only requires testing whether a symmetric matrix is positive semidefinite. A by-product of this analysis is an unexpected strong duality between the results of Shapiro and Botha [SIAM J. Matrix Anal. Appl., 9 (1988), pp. 378-383] and Zhang and Singer [Linear Algebra Appl., 524 (2017), pp. 159-181]. After showing that a popular algorithm for generalized CCA and Procrustes analysis may generate oscillating iterates, we propose a simple fix that provably guarantees convergence to a stationary point. The combination of our algorithm and certificate reveals novel global optima of various instances of OTSM.
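
The numerical core of the certificate is the positive semidefiniteness test; a minimal sketch follows (constructing the certificate matrix from a stationary point follows the paper and is not shown).

```julia
using LinearAlgebra

# Test whether a symmetric matrix is positive semidefinite via its
# smallest eigenvalue, with a small tolerance for rounding error.
is_psd(M::AbstractMatrix; tol=1e-10) = eigmin(Symmetric(M)) >= -tol
```
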

16.
Bioinformatics ; 37(24): 4756-4763, 2021 12 11.
Article in English | MEDLINE | ID: mdl-34289008

ABSTRACT

MOTIVATION: Current methods for genotype imputation and phasing exploit the volume of data in haplotype reference panels and rely on hidden Markov models (HMMs). Existing programs all have essentially the same imputation accuracy, are computationally intensive, and generally require prephasing the typed markers. RESULTS: We introduce a novel data-mining method for genotype imputation and phasing that substitutes highly efficient linear algebra routines for HMM calculations. This strategy, embodied in our Julia program MendelImpute.jl, avoids explicit assumptions about recombination and population structure while delivering similar prediction accuracy, better memory usage, and run times an order of magnitude or more faster than the fastest competing method. MendelImpute operates on both dosage data and unphased genotype data and simultaneously imputes missing genotypes and phase at both the typed and untyped SNPs (single nucleotide polymorphisms). Finally, MendelImpute naturally extends to global and local ancestry estimation and lends itself to new strategies for data compression and hence faster data transport and sharing. AVAILABILITY AND IMPLEMENTATION: Software, documentation, and scripts to reproduce our results are available from https://github.com/OpenMendel/MendelImpute.jl. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subjects
Data Compression, Software, Genotype, Haplotypes, Single Nucleotide Polymorphism
17.
PLoS One ; 16(5): e0251242, 2021.
Article in English | MEDLINE | ID: mdl-34014947

ABSTRACT

The SARS-CoV-2 pandemic led to closure of nearly all K-12 schools in the United States of America in March 2020. Although reopening K-12 schools for in-person schooling is desirable for many reasons, officials understand that risk reduction strategies and detection of cases are imperative in creating a safe return to school. Furthermore, consequences of reclosing recently opened schools are substantial and impact teachers, parents, and ultimately educational experiences in children. To address competing interests in meeting educational needs with public safety, we compare the impact of physical separation through school cohorts on SARS-CoV-2 infections against policies acting at the level of individual contacts within classrooms. Using an age-stratified Susceptible-Exposed-Infected-Removed model, we explore influences of reduced class density, transmission mitigation, and viral detection on cumulative prevalence. We consider several scenarios over a 6-month period including (1) multiple rotating cohorts in which students cycle through in-person instruction on a weekly basis, (2) parallel cohorts with in-person and remote learning tracks, (3) the impact of a hypothetical testing program with ideal and imperfect detection, and (4) varying levels of aggregate transmission reduction. Our mathematical model predicts that reducing the number of contacts through cohorts produces a larger effect than diminishing transmission rates per contact. Specifically, the latter approach requires dramatic reduction in transmission rates in order to achieve a comparable effect in minimizing infections over time. Further, our model indicates that surveillance programs using less sensitive tests may be adequate in monitoring infections within a school community by both keeping infections low and allowing for a longer period of instruction. Lastly, we underscore the importance of factoring infection prevalence in deciding when a local outbreak of infection is serious enough to require reverting to remote learning.


Subjects
COVID-19/transmission, Contact Tracing/methods, Pandemics, Population Surveillance/methods, Schools, Adolescent, Child, Humans, Theoretical Models, United States
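
For orientation, a minimal single-group SEIR integrator of the kind that, once age-stratified and endowed with school cohort structure, underlies these comparisons; parameter values here are placeholders, not the study's calibrated values.

```julia
# Forward-Euler SEIR in compartment fractions: β is the transmission rate,
# σ the exposed-to-infectious rate, γ the recovery rate.
function seir(; β=0.3, σ=1/3, γ=1/7, S=0.999, E=0.0, I=0.001, R=0.0,
                days=180, dt=0.1)
    for _ in 1:round(Int, days / dt)
        newE, newI, newR = β * S * I, σ * E, γ * I
        S -= dt * newE
        E += dt * (newE - newI)
        I += dt * (newI - newR)
        R += dt * newR
    end
    return (S=S, E=E, I=I, R=R)      # final compartment fractions
end
```
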
18.
BMC Bioinformatics ; 22(1): 228, 2021 May 03.
Article in English | MEDLINE | ID: mdl-33941078

ABSTRACT

BACKGROUND: Statistical geneticists employ simulation to estimate the power of proposed studies, test new analysis tools, and evaluate properties of causal models. Although there are existing trait simulators, there is ample room for modernization. For example, most phenotype simulators are limited to Gaussian traits or traits transformable to normality, while ignoring qualitative traits and realistic, non-normal trait distributions. Also, modern computer languages, such as Julia, that accommodate parallelization and cloud-based computing are now mainstream but rarely used in older applications. To meet the challenges of contemporary big studies, it is important for geneticists to adopt new computational tools. RESULTS: We present TraitSimulation, an open-source Julia package that makes it trivial to quickly simulate phenotypes under a variety of genetic architectures. This package is integrated into our OpenMendel suite for easy downstream analyses. Julia was purpose-built for scientific programming and provides tremendous speed and memory efficiency, easy access to multi-CPU and GPU hardware, and to distributed and cloud-based parallelization. TraitSimulation is designed to encourage flexible trait simulation, including via the standard devices of applied statistics, generalized linear models (GLMs) and generalized linear mixed models (GLMMs). TraitSimulation also accommodates many study designs: unrelateds, sibships, pedigrees, or a mixture of all three. (Of course, for data with pedigrees or cryptic relationships, the simulation process must include the genetic dependencies among the individuals.) We consider an assortment of trait models and study designs to illustrate integrated simulation and analysis pipelines. Step-by-step instructions for these analyses are available in our electronic Jupyter notebooks on Github. These interactive notebooks are ideal for reproducible research. CONCLUSION: The TraitSimulation package has three main advantages. (1) It leverages the computational efficiency and ease of use of Julia to provide extremely fast, straightforward simulation of even the most complex genetic models, including GLMs and GLMMs. (2) It can be operated entirely within, but is not limited to, the integrated analysis pipeline of OpenMendel. And finally (3), by allowing a wider range of more realistic phenotype models, TraitSimulation brings power calculations and diagnostic tools closer to what investigators might see in real-world analyses.


Subjects
Cloud Computing, Genetic Testing, Aged, Computer Simulation, Humans, Pedigree, Phenotype
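
As a flavor of what trait simulation involves, here is a stripped-down sketch for unrelated individuals under a Gaussian linear model; names are illustrative, and the package's own API additionally covers GLMMs, pedigrees, and non-Gaussian traits.

```julia
using Distributions

# Simulate a quantitative trait: a handful of causal SNPs with fixed
# effects plus Gaussian noise. G is an individuals × SNPs dosage matrix.
function simulate_trait(G::Matrix{Float64}, causal::Vector{Int},
                        effects::Vector{Float64}; σ=1.0)
    η = G[:, causal] * effects                  # genetic linear predictor
    return η .+ rand(Normal(0, σ), size(G, 1))  # quantitative phenotype
end
```
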
19.
PLoS One ; 16(3): e0247046, 2021.
Article in English | MEDLINE | ID: mdl-33651796

ABSTRACT

Interacting Particle Systems (IPSs) are used to model spatio-temporal stochastic systems in many disparate areas of science. We design an algorithmic framework that reduces IPS simulation to the simulation of well-mixed Chemical Reaction Networks (CRNs). This framework minimizes the number of associated reaction channels and decouples the computational cost of the simulations from the size of the lattice. Decoupling allows our software to make use of a wide class of techniques typically reserved for well-mixed CRNs. We implement the direct stochastic simulation algorithm in the open source programming language Julia. We also apply our algorithms to several complex spatial stochastic phenomena, including a rock-paper-scissors game, cancer growth in response to immunotherapy, and lipid oxidation dynamics. Our approach aids in standardizing mathematical models and in generating hypotheses based on concrete mechanistic behavior across a wide range of observed spatial phenomena.


Subjects
Computer Simulation, Algorithms, Software, Stochastic Processes
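
The well-mixed CRN simulators targeted by this reduction are variants of Gillespie's direct method; a minimal sketch, with `rates` and `stoich` as user-supplied stand-ins rather than the package's interface:

```julia
# Gillespie direct method: `rates(x)` returns the propensity of each
# reaction at state x, and stoich[r] the state change of reaction r.
function gillespie(x0::Vector{Int}, rates, stoich::Vector{Vector{Int}}, T)
    x, t = copy(x0), 0.0
    while t < T
        a = rates(x)
        a0 = sum(a)
        a0 <= 0 && break                         # no reaction can fire
        t += -log(rand()) / a0                   # exponential waiting time
        r = min(searchsortedfirst(cumsum(a), rand() * a0), length(a))
        x .+= stoich[r]                          # apply reaction r
    end
    return x, t
end
```
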
20.
medRxiv ; 2021 Mar 19.
Article in English | MEDLINE | ID: mdl-32793918

ABSTRACT

Preprint version of item 17 above; abstract identical to the published article.
