Results 1 - 20 of 302
1.
Int J Drug Policy ; : 104469, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38880700

ABSTRACT

INTRODUCTION: The introduction of new direct-acting antivirals for hepatitis C virus (HCV) infection has enabled the formulation of an HCV elimination strategy led by the World Health Organisation (WHO). Guidelines for elimination of HCV target a reduction in incidence, but incidence is difficult to measure and must be estimated. METHODS: Serial cross-sectional bio-behavioural sero-surveys provide information on an individual's infection status and duration of exposure and how these change over time. These data can be used to estimate the rate of first infection through appropriate statistical models. This study utilised updated HCV seroprevalence information from the Unlinked Anonymous Monitoring survey, an annual survey in England, Wales and Northern Ireland monitoring the prevalence of blood-borne viruses in people who inject drugs. Flexible parametric and semiparametric approaches for estimating incidence rates by exposure time and survey year, including fractional polynomials and splines, were implemented and compared. RESULTS: Incidence rates peaked in those recently initiating injecting drug use at approximately 0.20 infections per person-year, followed by a rapid reduction over the subsequent few years of injecting to approximately 0.05 infections per person-year. There was evidence of a rise in incidence rates for recent initiates between 2011 and 2020, from 0.17 infections per person-year (95% CI, 0.16-0.19) to 0.26 infections per person-year (0.23-0.30). In those injecting for longer durations, incidence rates were stable over time. CONCLUSIONS: Fractional polynomials provided an adequate fit with relatively few parameters, but splines may be preferable to ensure flexibility, in particular to detect short-term changes in the rate of first infection over time that may result from treatment effects. Although chronic HCV prevalence has declined with treatment scale-up over 2016-2020, there is no evidence yet of a corresponding fall in the rate of first infection. Seroprevalence and risk-behaviour data can be used to estimate and monitor HCV incidence, providing insight into progress towards WHO-defined elimination of HCV.
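The fractional polynomial approach mentioned in this abstract can be illustrated with a small sketch. The incidence-like curve, variable names, and the FP2 selection loop below are illustrative assumptions, not the paper's model or the UAM survey data; only the standard fractional-polynomial power set and repeated-power rule are taken as given.

```python
import numpy as np

# Standard fractional polynomial (FP) power set; p = 0 is read as log(t).
POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)

def fp_term(t, p):
    return np.log(t) if p == 0 else t**p

# Toy FP2 fit of an incidence-like curve over injecting duration t > 0:
# high rate for recent initiates, declining with time since initiation.
t = np.linspace(0.25, 15, 60)          # years since first injection (hypothetical)
rate = 0.05 + 0.15 * np.exp(-t)        # illustrative shape, not survey estimates

best = None
for i, p1 in enumerate(POWERS):
    for p2 in POWERS[i:]:
        x1 = fp_term(t, p1)
        # Repeated-power rule: the second FP2 term gains a log factor.
        x2 = x1 * np.log(t) if p2 == p1 else fp_term(t, p2)
        X = np.column_stack([np.ones_like(t), x1, x2])
        beta = np.linalg.lstsq(X, rate, rcond=None)[0]
        rss = np.sum((X @ beta - rate) ** 2)
        if best is None or rss < best[0]:
            best = (rss, (p1, p2))
print("best-fitting FP2 powers:", best[1])
```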

2.
Ann Data Sci ; 11(3): 1031-1050, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38855634

ABSTRACT

This work concerns the effective personalized prediction of longitudinal biomarker trajectories, motivated by a study of targeted cancer therapy for patients with chronic myeloid leukemia (CML). Continuous monitoring with a confirmed biomarker of residual disease is a key component of CML management for early prediction of disease relapse. However, the longitudinal biomarker measurements have highly heterogeneous trajectories between subjects (patients), with various shapes and patterns. The trajectory is believed to be clinically related to the development of treatment resistance, but knowledge about the underlying mechanism remains limited. To address this challenge, we propose a novel Bayesian approach to modeling the distribution of subject-specific longitudinal trajectories. It exploits flexible Bayesian learning to accommodate complex changing patterns over time and non-linear covariate effects, and allows for real-time prediction for both in-sample and out-of-sample subjects. The generated information can help make clinical decisions and consequently enhance personalized treatment management in precision medicine.

3.
Sci Rep ; 14(1): 13372, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38862705

ABSTRACT

A relatively recent approach in molecular graph theory for analyzing chemical networks and structures is the modified polynomial. It emphasizes the characteristics of molecules through a polynomial-based procedure and presents numerical descriptors in algebraic form. Quantitative Structure-Property Relationship (QSPR) studies use Modified Polynomials (M-Polynomials) as a mathematical tool: M-polynomials are used to establish connections between a material's various properties and its structural characteristics. In this study, we calculated several modified polynomials and gave a polynomial description of the magnesium iodide structure. In particular, we computed M-polynomial-based first, second, and modified Zagreb indices; M-polynomial-based Randic and inverse Randic indices are also computed in this work.
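For reference, the M-polynomial machinery this abstract refers to can be summarized as follows (standard definitions, not specific to the magnesium iodide computation):

```latex
% M-polynomial of a graph G, with m_{ij} the number of edges uv
% such that {deg(u), deg(v)} = {i, j}:
M(G; x, y) = \sum_{i \le j} m_{ij}(G)\, x^i y^j
% Degree-based indices are recovered by operators acting on M,
% with D_x = x \,\partial/\partial x and D_y = y \,\partial/\partial y:
M_1(G) = \left.(D_x + D_y)\, M(G; x, y)\right|_{x = y = 1} \quad \text{(first Zagreb)}
M_2(G) = \left.(D_x D_y)\, M(G; x, y)\right|_{x = y = 1} \quad \text{(second Zagreb)}
```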

4.
J Appl Stat ; 51(7): 1251-1270, 2024.
Article in English | MEDLINE | ID: mdl-38835825

ABSTRACT

The accelerated hazards model is one of the most commonly used models for regression analysis of failure time data, especially when, for example, the hazard functions may have a monotonicity property. Correspondingly, a large literature has been established for its estimation and inference when right-censored data are observed. Although several methods have also been developed for its inference based on interval-censored data, they apply only to limited situations or rely on assumptions such as independent censoring. In this paper, we consider the situation where one observes case K interval-censored data, the type of failure time data that occurs most often in, for example, medical research such as clinical trials or periodic follow-up studies. For inference, we propose a sieve borrow-strength method; in particular, it allows for informative censoring. The asymptotic properties of the proposed estimators are established. Simulation studies demonstrate that the proposed inference procedure performs well. The method is applied to a real data set arising from an AIDS clinical trial.

5.
ISA Trans ; 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38926019

ABSTRACT

We present a novel numerical approach for solving nonlinear constrained optimal control problems (NCOCPs). Instead of directly solving the NCOCPs, we start by linearizing the constraints and dynamic system, which results in a sequence of sub-problems. For each sub-problem, we use a finite number of Chebyshev polynomials to approximate the control and state vectors. To eliminate the errors at non-collocation points caused by conventional collocation methods, we additionally approximate the coefficient functions involved in the linear constraints and dynamic system by Chebyshev polynomials. By leveraging the properties of Chebyshev polynomials, the approximate sub-problem is transformed into an equivalent nonlinear optimization problem with linear equality constraints. Consequently, any feasible point of the approximate sub-problem satisfies the constraints and dynamic system over the entire time scale. To validate the efficacy of the new method, we solve three examples and assess the accuracy of the method through the computation of its approximation error. Numerical results show that our approach achieves lower approximation error than the Chebyshev pseudo-spectral method. The proposed method is particularly suitable for scenarios that require high-precision approximation, such as aerospace and precision instrument production.
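A minimal sketch of the Chebyshev-approximation ingredient, using NumPy's Chebyshev utilities; the coefficient function below is a made-up stand-in, not one of the paper's examples, and the sketch shows only the basis-fitting step, not the full sub-problem solver:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Fit a (hypothetical) coefficient function with a finite Chebyshev series
# at Chebyshev nodes, then check the error at NON-collocation points --
# the kind of off-grid error the paper's extra approximation step targets.
f = lambda t: np.exp(-t) * np.sin(3 * t)
deg = 12
nodes = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))
coef = C.chebfit(nodes, f(nodes), deg)      # interpolatory fit at the nodes

t_fine = np.linspace(-1, 1, 1001)           # almost all non-collocation points
err = np.max(np.abs(C.chebval(t_fine, coef) - f(t_fine)))
print(f"max off-grid error: {err:.2e}")
```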

6.
Heliyon ; 10(7): e28302, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38689983

ABSTRACT

Within the scope of this research, we introduce a novel category of bi-univalent functions. Horadam polynomials are used to characterize these functions via series based on the Miller-Ross-type Poisson distribution. Functions from these new categories are used to construct estimates for the Fekete-Szego functional, as well as estimates of the Taylor-Maclaurin coefficients |l2| and |l3|, for each of the new subclasses. We then present some additional results that follow from our initial findings.
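For context, the Horadam polynomials invoked above are commonly defined by the following recurrence (a standard definition from the literature; the specific parameter choices used in the paper are not reproduced here):

```latex
h_1(x) = a, \qquad h_2(x) = b x, \qquad
h_n(x) = p x\, h_{n-1}(x) + q\, h_{n-2}(x) \quad (n \ge 3),
```

with the Fekete-Szego functional for coefficients $l_2, l_3$ taking the usual form $\lvert l_3 - \mu l_2^2 \rvert$ for a real or complex parameter $\mu$.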

7.
Heliyon ; 10(7): e28888, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38617904

ABSTRACT

Sturm-Liouville problems are among the major achievements of the spectral theory of ordinary differential operators, and Sturm-Liouville boundary value problems appear in many key applications in the natural sciences. All the eigenvalues of the standard Sturm-Liouville problem are guaranteed to be real and simple, and the related eigenfunctions form a basis in a suitable Hilbert space. This article uses the weighted residual collocation technique to numerically compute the eigenpairs of both regular and singular Sturm-Liouville problems. Bernstein polynomials over [0,1] have been used to develop a weighted residual collocation approach with improved accuracy. The properties of Bernstein polynomials and the differentiation formula based on the Bernstein operational matrix are used to reduce the given singular boundary value problems to a matrix-based linear algebraic system. With this in mind, such a polynomial-based collocation scheme is studied for Sturm-Liouville problems. The main reasons to use the collocation technique are its affordability, ease of use, well-conditioned matrices, and flexibility. The weighted residual collocation method is found to be particularly appealing because Bernstein polynomials vanish at the two interval ends, providing extra versatility. A multitude of test problems are offered along with computed errors to demonstrate how the suggested method behaves; the numerical algorithm, its applicability to particular situations, and the convergence behaviour and precision of the technique are described in detail.
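As a rough illustration of Bernstein collocation on the simplest Sturm-Liouville problem, the sketch below solves -y'' = λy, y(0) = y(1) = 0, whose eigenvalues are (kπ)²; it is a minimal stand-in under stated assumptions, not the paper's weighted residual scheme or its operational matrix:

```python
import numpy as np
from scipy.linalg import eig
from scipy.special import comb

def bern(i, n, x):
    """Bernstein basis B_{i,n}(x) on [0,1]; zero for i outside 0..n."""
    if i < 0 or i > n:
        return np.zeros_like(x)
    return comb(n, i) * x**i * (1 - x)**(n - i)

def bern_dd(i, n, x):
    """Second derivative of B_{i,n}, via the standard recurrence."""
    return n * (n - 1) * (bern(i - 2, n - 2, x)
                          - 2 * bern(i - 1, n - 2, x)
                          + bern(i, n - 2, x))

n = 20                              # polynomial degree
x = np.arange(1, n) / n             # n-1 interior collocation points
# B_{1,n}..B_{n-1,n} all vanish at x = 0 and x = 1, so the Dirichlet
# boundary conditions hold exactly -- the endpoint property noted above.
idx = range(1, n)
A = np.array([[-bern_dd(j, n, xi) for j in idx] for xi in x])
B = np.array([[bern(j, n, xi) for j in idx] for xi in x])
lam = np.sort(np.real(eig(A, B)[0]))
print(lam[:3])                      # approx. 9.87, 39.48, 88.83 = (k*pi)^2
```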

8.
Sci Rep ; 14(1): 8683, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622192

ABSTRACT

In this paper, two problems involving nonlinear time-fractional hyperbolic partial differential equations (PDEs) and time-fractional pseudo-hyperbolic PDEs with nonlocal conditions are presented. A collocation technique based on shifted Chebyshev polynomials of the second kind combined with the residual power series algorithm (CTSCSK-RPSA) is the main method for solving these problems. Moreover, error analysis theory is provided in detail. Numerical solutions obtained using CTSCSK-RPSA are compared with existing techniques from the literature. CTSCSK-RPSA is an accurate, simple, and convenient method for obtaining solutions of linear and nonlinear physical and engineering problems.

9.
Heliyon ; 10(7): e28945, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38601649

ABSTRACT

In this paper, a connection is established between the Jones polynomial of generalized weaving knots of type W(3,n,m) and the Chebyshev polynomial of the first kind. Consequently, it is proved that the coefficients of the Jones polynomial of weaving knots are essentially the Whitney numbers of Lucas lattices. Additionally, an explicit formula for the coefficients of the Alexander polynomial of weaving knots W(3,n) is introduced, and it is proven that these coefficients satisfy Fox's trapezoidal conjecture.
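For readers unfamiliar with them, the Chebyshev polynomials of the first kind that appear in this connection are defined by the standard recurrence (general background, not a result of the paper):

```latex
T_0(x) = 1, \qquad T_1(x) = x, \qquad
T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x), \qquad
T_n(\cos\theta) = \cos(n\theta).
```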

10.
Heliyon ; 10(5): e27260, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38562493

ABSTRACT

Volterra integro-partial differential equations with weakly singular kernels (VIPDEWSK) are utilized to model diverse physical phenomena. A matrix collocation method is proposed for determining the approximate solution of this class of functional equations. The method employs shifted Chebyshev polynomials of the fifth kind (SCPFK) to construct two-dimensional pseudo-operational matrices of integration, avoiding the need for explicit integration and thereby speeding up computations. Error bounds are examined in a Chebyshev-weighted space, providing insights into approximation accuracy. The approach is applied to several experimental examples, and the results are compared with those obtained using the Bernoulli wavelets and Legendre wavelets methods.

11.
Polymers (Basel) ; 16(6)2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38543347

ABSTRACT

A novel method is proposed to quickly predict the tensile strength of carbon/epoxy composites with resin-missing defects. The univariate Chebyshev prediction model (UCPM) was developed using the dimension reduction method and Chebyshev polynomials. To enhance computational efficiency and reduce the manual modeling workload, a parameterization script for the finite element model was written in Python during model construction. To validate the model, specimens with different defect sizes were prepared using the vacuum-assisted resin infusion (VARI) process, the mechanical properties of the specimens were tested, and the model predictions were compared with the experimental results. Additionally, the impact of the polynomial order (second through ninth) on the predictive accuracy of the UCPM was examined, and the performance of the model was evaluated using statistical errors. The results demonstrate that the prediction model has high accuracy, with a maximum prediction error of 5.20% relative to the experimental results. A low order resulted in underfitting, while increasing the order improved the prediction accuracy of the UCPM; however, if the order is too high, overfitting may occur, leading to a decrease in prediction accuracy.
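The order/underfitting/overfitting trade-off described above can be reproduced in miniature with a generic Chebyshev surrogate; everything here (the response curve, noise level, and sample sizes) is invented for illustration and is unrelated to the paper's finite element data:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(3)
size = np.linspace(-1, 1, 10)                     # normalized defect size
true = lambda s: 800 - 120 * (s + 1) + 15 * np.sin(2 * s)  # hypothetical strength (MPa)
train = true(size) + 5 * rng.standard_normal(size.size)    # noisy "FE results"

hold_x = np.linspace(-1, 1, 101)                  # held-out evaluation grid
for order in (2, 5, 9):
    c = C.chebfit(size, train, order)
    err = np.max(np.abs(C.chebval(hold_x, c) - true(hold_x)))
    print(order, round(float(err), 1))            # low order underfits;
                                                  # near-interpolating order fits noise
```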

12.
Sensors (Basel) ; 24(5)2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38474984

ABSTRACT

Fourier ptychographic microscopy, as a computational imaging method, can reconstruct high-resolution images but suffers from optical aberration, which degrades imaging quality. For this reason, this paper proposes a network model that simulates the forward imaging process in the TensorFlow framework, using samples and coherent transfer functions as the input. The proposed model improves on the Wirtinger flow algorithm: it retains the central idea, simplifies the calculation process, and optimizes the update through backpropagation. In addition, Zernike polynomials are used to accurately estimate the aberration. Simulation and experimental results show that this method can effectively improve the accuracy of aberration correction, maintain good correction performance in complex scenes, and reduce the influence of optical aberration on imaging quality.
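To make the Zernike step concrete, here is a hedged toy example of estimating aberration coefficients by least squares over a few low-order Zernike modes; the mode list, coefficients, and noise are assumptions for illustration, and this is not the paper's TensorFlow model:

```python
import numpy as np

def zernike_basis(rho, theta):
    """A few low-order Zernike modes on the unit disk (unnormalized)."""
    return np.stack([
        np.ones_like(rho),           # piston
        rho * np.cos(theta),         # tilt x
        rho * np.sin(theta),         # tilt y
        2 * rho**2 - 1,              # defocus
        rho**2 * np.cos(2 * theta),  # astigmatism 0/90
        rho**2 * np.sin(2 * theta),  # astigmatism 45
    ], axis=-1)

rng = np.random.default_rng(0)
rho = np.sqrt(rng.uniform(0, 1, 2000))        # area-uniform samples on the disk
theta = rng.uniform(0, 2 * np.pi, 2000)
true_c = np.array([0.0, 0.1, -0.05, 0.3, 0.08, -0.02])     # hypothetical aberration

Z = zernike_basis(rho, theta)
wavefront = Z @ true_c + 0.01 * rng.standard_normal(2000)  # noisy measurement
est_c, *_ = np.linalg.lstsq(Z, wavefront, rcond=None)
print(np.round(est_c, 3))                     # recovers true_c up to noise
```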

13.
Theor Popul Biol ; 157: 55-85, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38552964

ABSTRACT

In this article, discrete and stochastic changes in (effective) population size are incorporated into the spectral representation of a biallelic diffusion process for drift and small mutation rates. A forward algorithm inspired by Hidden-Markov-Model (HMM) literature is used to compute exact sample allele frequency spectra for three demographic scenarios: single changes in (effective) population size, boom-bust dynamics, and stochastic fluctuations in (effective) population size. An approach for fully agnostic demographic inference from these sample allele spectra is explored, and sufficient statistics for stepwise changes in population size are found. Further, convergence behaviours of the polymorphic sample spectra for population size changes on different time scales are examined and discussed within the context of inference of the effective population size. Joint visual assessment of the sample spectra and the temporal coefficients of the spectral decomposition of the forward diffusion process is found to be important in determining departure from equilibrium. Stochastic changes in (effective) population size are shown to shape sample spectra particularly strongly.


Subjects
Algorithms, Gene Frequency, Population Density, Stochastic Processes, Population Genetics, Genetic Models, Markov Chains, Humans
14.
Heliyon ; 10(1): e23453, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38169955

ABSTRACT

This article introduces novel numerical approaches utilizing both standard and nonstandard finite difference methods to solve one-dimensional Bratu problems. Using the quasilinearization technique, the original problem is converted into a sequence of linear problems. Chebyshev polynomials are employed to approximate the second derivative of the function y(x), after which the Sumudu transform is applied to obtain a new form of trial function. The obtained trial function is then substituted into the linearized and discretized Bratu equations. We discuss the convergence of the schemes and compare the numerical outcomes with those derived using other relevant methods. We further modify one of the new schemes and apply it to solve a boundary value problem with associated Robin conditions. The results show that the proposed schemes yield accurate approximations to the solutions of the problems considered.
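A bare-bones version of the quasilinearization idea on the classical 1-D Bratu problem y'' + λe^y = 0, y(0) = y(1) = 0 is sketched below with plain central differences; it omits the paper's Chebyshev/Sumudu trial-function machinery entirely and is only meant to show the linearize-and-iterate loop:

```python
import numpy as np

lam, n = 1.0, 100
h = 1.0 / n
y = np.zeros(n + 1)                       # initial guess on a uniform grid

# Quasilinearization: at iterate y_k, replace exp(y) by
# exp(y_k) * (1 + y - y_k), giving a linear BVP solved each sweep.
for _ in range(20):
    e = np.exp(y[1:-1])
    main = -2.0 / h**2 + lam * e          # tridiagonal system for interior nodes
    off = np.ones(n - 2) / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    rhs = -lam * e * (1.0 - y[1:-1])
    y_new = np.zeros(n + 1)
    y_new[1:-1] = np.linalg.solve(A, rhs)
    if np.max(np.abs(y_new - y)) < 1e-10:
        y = y_new
        break
    y = y_new

print(y[n // 2])    # midpoint of the lower solution branch, ~0.14 for lam = 1
```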

15.
J Anim Breed Genet ; 141(3): 291-303, 2024 May.
Article in English | MEDLINE | ID: mdl-38062881

ABSTRACT

Feed efficiency plays a major role in the overall profitability and sustainability of the beef cattle industry, as it is directly related to reducing animal input demand and methane emissions. Traditionally, the average daily feed intake and weight gain are used to calculate feed efficiency traits. However, feed efficiency traits can be analysed longitudinally using random regression models (RRMs), which allow fitting random genetic and environmental effects over time by considering the covariance pattern between the daily records. Therefore, the objectives of this study were to: (1) propose genomic evaluations for dry matter intake (DMI), body weight gain (BWG), residual feed intake (RFI) and residual weight gain (RWG) data collected during an 84-day feedlot test period via RRMs; (2) compare the goodness-of-fit of RRMs using Legendre polynomials (LP) and B-spline functions; (3) evaluate the behaviour of the genetic parameters for feed efficiency traits and their implications for new selection strategies. The datasets were provided by the EMBRAPA-GENEPLUS beef cattle breeding program and included 2920 records for DMI, 2696 records for BWG and 4675 genotyped animals. Genetic parameters and genomic breeding values (GEBVs) were estimated by RRMs under ssGBLUP for Nellore cattle using orthogonal LPs and B-splines. Models were compared based on the deviance information criterion (DIC). The ranking of the average GEBV of each test week and the overall GEBV average were compared by the percentage of individuals in common and the Spearman correlation coefficient (top 1%, 5%, 10% and 100%). The highest goodness-of-fit was obtained with the linear B-spline function with heterogeneous residual variance. The heritability estimates across the test period for DMI, BWG, RFI and RWG ranged from 0.06 to 0.21, 0.11 to 0.30, 0.03 to 0.26 and 0.07 to 0.27, respectively. DMI and RFI presented within-trait genetic correlations ranging from low to high magnitude across different performance test-days. In contrast, BWG and RWG presented negative genetic correlations between the first 3 weeks and the other days of the performance tests. DMI and RFI presented high ranking similarity between the GEBV average of week eight and the overall GEBV average, with Spearman correlations and percentages of individuals selected in common ranging from 0.95 to 1.00 and 93 to 100, respectively. Week 11 presented the highest Spearman correlations (ranging from 0.94 to 0.98) and percentages of individuals selected in common (ranging from 85 to 94) of BWG and RWG with the average GEBV of the entire test period. In conclusion, the RRM using linear B-splines is a feasible alternative for the genomic evaluation of feed efficiency. Heritability estimates of DMI, RFI, BWG and RWG indicate enough additive genetic variance to achieve a moderate response to selection. A new selection strategy can be adopted by reducing the performance test to 56 days for DMI and RFI selection and 77 days for BWG and RWG selection.


Subjects
Genome, Genomics, Humans, Cattle/genetics, Animals, Phenotype, Weight Gain/genetics, Genotype, Food Intake/genetics, Animal Feed
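As general background on the Legendre-polynomial covariates used in random regression models (a textbook construction, with toy numbers that are not the EMBRAPA-GENEPLUS data): test days are rescaled to [-1, 1], and the values of the first few Legendre polynomials at each day serve as regression covariates.

```python
import numpy as np

days = np.arange(1, 85)                           # 84-day feedlot test
t = 2 * (days - days.min()) / (days.max() - days.min()) - 1   # rescale to [-1, 1]
order = 2                                         # e.g. quadratic Legendre fit
Phi = np.polynomial.legendre.legvander(t, order)  # (84, order+1) covariate matrix

# Toy per-animal fit: recover a smooth intake curve from noisy daily records.
rng = np.random.default_rng(1)
true_coef = np.array([9.5, 0.8, -0.4])            # hypothetical DMI curve (kg/day)
dmi = Phi @ true_coef + 0.5 * rng.standard_normal(days.size)
coef = np.linalg.lstsq(Phi, dmi, rcond=None)[0]
print(np.round(coef, 2))                          # approx. [9.5, 0.8, -0.4]
```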
16.
Heliyon ; 9(11): e21401, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38027690

ABSTRACT

In theoretical chemistry, topological indices are commonly employed to model the physico-chemical properties of chemical compounds. Mathematicians frequently use Zagreb indices to calculate a chemical compound's strain energy, melting point, boiling temperature, distortion, and stability. The current global pandemic caused by the novel coronavirus SARS-CoV-2, the cause of COVID-19, is a significant public health concern; various therapy modalities have been advised, and the lack of adequate guidance has made the situation worse. Based on earlier studies, researchers are examining compounds that might be used as SARS and MERS therapies. In several quantitative structure-property and structure-activity relationship (QSPR and QSAR) studies, a variety of physiochemical properties are successfully represented by topological indices, a type of molecular descriptor that assigns numerical values determined by a substance's molecular structure. This study investigates several degree-based irregularity topological indices for various antiviral medicines. To evaluate the effectiveness of the generated topological indices, a QSPR analysis was also carried out relating the various topological indices to the physiochemical properties of these antiviral medicines. The results, obtained by a curve-fitting approach, show a substantial association between the studied topological indices and the physiochemical properties of potential antiviral medicines.
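One widely used irregularity-based index of the kind studied here is the Albertson index, irr(G) = Σ over edges uv of |deg(u) − deg(v)|; the sketch below computes it for a made-up toy graph (not one of the paper's drug structures):

```python
import networkx as nx

# Hypothetical molecular skeleton as a simple graph (illustration only).
G = nx.Graph([(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)])

# Albertson irregularity index: sum of |deg(u) - deg(v)| over all edges.
irr = sum(abs(G.degree[u] - G.degree[v]) for u, v in G.edges)
print(irr)   # 6 for this toy graph
```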

17.
Comput Biol Med ; 167: 107635, 2023 12.
Article in English | MEDLINE | ID: mdl-37952306

ABSTRACT

This study examines geometric models of the corneal surface that can be used to reduce, in reasonable time, the dimensionality of datasets of normal anterior corneas. Polynomial models (P) like Zernike polynomials (ZP) and spherical harmonic polynomials (SHP) were obvious candidates, along with their rational-function (R) counterparts, namely Zernike rational functions (ZR) and spherical harmonic rational functions (SHR, a new model). Knowing that both SHP and ZR are more accurate than ZP for modeling normal and keratoconus corneas, it was expected that both spherical harmonic (SH) models (SHP and SHR) would be more accurate than their Zernike (Z) counterparts (ZP and ZR, respectively), and both rational (R) models (SHR and ZR) more accurate than their polynomial counterparts (SHP and ZP, respectively), for a low-dimensional space (coefficient number J < 30). This was the case. The SH factor contributed more to accuracy than the R factor. Considering the corneal processing time as a function of J, P models were processed in quasi-linear time with a quasi-null slope, and rational models in polynomial time. Z models were faster than SH models, and increasingly so in their R versions. In sum, for corneal dimensionality reduction, SHR is the most accurate model, but its processing time becomes prohibitive unless the best coefficient combination is identified beforehand. ZP is the fastest model and is reasonably accurate with normal corneas for exploratory tasks. SHP is the best compromise between accuracy and speed.


Subjects
Cornea, Keratoconus, Humans, Corneal Topography/methods, Algorithms, Statistical Models
18.
IEEE Trans Inf Theory ; 69(3): 1695-1738, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37842015

ABSTRACT

In this paper, we consider asymptotically exact support recovery in the context of high dimensional and sparse Canonical Correlation Analysis (CCA). Our main results describe four regimes of interest based on information theoretic and computational considerations. In regimes of "low" sparsity we describe a simple, general, and computationally easy method for support recovery, whereas in a regime of "high" sparsity, it turns out that support recovery is information theoretically impossible. For the sake of information theoretic lower bounds, our results also demonstrate a non-trivial requirement on the "minimal" size of the nonzero elements of the canonical vectors that is required for asymptotically consistent support recovery. Subsequently, the regime of "moderate" sparsity is further divided into two subregimes. In the lower of the two sparsity regimes, we show that polynomial time support recovery is possible by using a sharp analysis of a co-ordinate thresholding [1] type method. In contrast, in the higher end of the moderate sparsity regime, appealing to the "Low Degree Polynomial" Conjecture [2], we provide evidence that polynomial time support recovery methods are inconsistent. Finally, we carry out numerical experiments to compare the efficacy of various methods discussed.

19.
Mon Hefte Math ; 202(4): 773-789, 2023.
Article in English | MEDLINE | ID: mdl-37846214

ABSTRACT

Let V be a valuation ring of a global field K. We show that for all positive integers k and 1

20.
Biom J ; 65(8): e2300069, 2023 12.
Article in English | MEDLINE | ID: mdl-37775940

ABSTRACT

The marginality principle guides analysts to avoid omitting lower-order terms from models in which higher-order terms are included as covariates. Lower-order terms are viewed as "marginal" to higher-order terms. We consider how this principle applies to three cases: regression models that may include the ratio of two measured variables; polynomial transformations of a measured variable; and factorial arrangements of defined interventions. For each case, we show that which terms or transformations are considered to be lower-order, and therefore marginal, depends on the scale of measurement, which is frequently arbitrary. Understanding the implications of this point leads to an intuitive understanding of the curse of dimensionality. We conclude that the marginality principle may be useful to analysts in some specific cases but caution against invoking it as a context-free recipe.


Subjects
Algorithms, Regression Analysis
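The scale-dependence point in the abstract above admits a short demonstration: a response that is purely quadratic on one measurement scale acquires a linear ("lower-order") term after an arbitrary shift of that scale. The data and shift below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 500)
y = 2 * x**2 + 0.1 * rng.standard_normal(500)   # no linear term on THIS scale

for shift in (0.0, 5.0):                         # re-express x on a shifted scale
    xs = x + shift
    X = np.column_stack([np.ones_like(xs), xs, xs**2])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    print(shift, np.round(b, 2))
# shift=0 -> linear coefficient ~0; shift=5 -> linear coefficient ~ -20,
# so which terms count as "lower-order" depends on an arbitrary scale choice.
```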