Results 1 - 20 of 61,121
1.
PLoS One; 19(7): e0306028, 2024.
Article in English | MEDLINE | ID: mdl-38950055

ABSTRACT

Even with the powerful statistical parameters derived from the eXtreme Gradient Boosting (XGB) algorithm, it would be advantageous to define predictive accuracy at the level of a specific case, particularly when the model output is used to guide clinical decision-making. The probability density function (PDF) of the derived intracranial pressure predictions enables the computation of a definite integral around a point estimate, representing the event's probability within a range of values. Seven hold-out test cases used for the external validation of an XGB model underwent retinal vascular pulse and intracranial pressure measurement using modified photoplethysmography and lumbar puncture, respectively. The definite integral ±1 cm water from the median (DIICP) demonstrated a negative and highly significant correlation (-0.5213±0.17, p < 0.004) with the absolute difference between the measured and predicted median intracranial pressure (DiffICPmd). The concordance between the arterial and venous probability density functions was estimated using the two-sample Kolmogorov-Smirnov statistic, extending the distribution agreement across all data points. This parameter showed a statistically significant and positive correlation (0.4942±0.18, p < 0.001) with DiffICPmd. Two cautionary subset cases (Case 8 and Case 9), in which measured and predicted intracranial pressure disagreed, were compared with the seven hold-out test cases. Arterial predictions from both cautionary subset cases converged on a uniform distribution, in contrast to all other cases, where distributions converged on either log-normal or closely related skewed distributions (gamma, logistic, beta). The mean±standard error of the arterial DIICP from Cases 8 and 9 (3.83±0.56%) was lower than that of the hold-out test cases (14.14±1.07%); the between-group difference was statistically significant (p < 0.03). Although the sample size in this analysis was limited, these results support a dual and complementary analysis approach based on independently derived retinal arterial and venous non-invasive intracranial pressure predictions. The results suggest that plotting the PDF and calculating the lower-order moments, the arterial DIICP, and the two-sample Kolmogorov-Smirnov statistic may provide individualized predictive accuracy parameters.
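A minimal sketch of the two case-level accuracy parameters described above, using synthetic prediction samples (the log-normal family, sample sizes, and all values are illustrative assumptions, not the study's data): DIICP as the definite integral of a fitted PDF over the median ±1 cm H2O, and the two-sample Kolmogorov-Smirnov statistic as the arterial-venous concordance measure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-case prediction samples (cm H2O), standing in for the
# arterial and venous ICP prediction distributions.
arterial = rng.lognormal(mean=np.log(14), sigma=0.15, size=500)
venous = rng.lognormal(mean=np.log(15), sigma=0.20, size=500)

# DIICP: definite integral of the fitted PDF over median +/- 1 cm H2O.
shape, loc, scale = stats.lognorm.fit(arterial, floc=0)
dist = stats.lognorm(shape, loc=loc, scale=scale)
median = dist.median()
diicp = dist.cdf(median + 1) - dist.cdf(median - 1)

# Concordance between arterial and venous predictions via the two-sample
# Kolmogorov-Smirnov statistic.
res = stats.ks_2samp(arterial, venous)
print(f"DIICP = {diicp:.1%}, KS = {res.statistic:.3f} (p = {res.pvalue:.3f})")
```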


Subjects
Intracranial Pressure, Machine Learning, Probability, Humans, Intracranial Pressure/physiology, Female, Male, Algorithms, Adult, Middle Aged
2.
Int J Epidemiol; 53(4), 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38996447

ABSTRACT

BACKGROUND: Empirical evaluation of inverse probability weighting (IPW) for self-selection bias correction is inaccessible without the full source population. We aimed to: (i) investigate how self-selection biases frequency and association measures and (ii) assess self-selection bias correction using IPW in a cohort with register linkage. METHODS: The source population included 17 936 individuals invited to the Copenhagen Aging and Midlife Biobank during 2009-11 (ages 49-63 years); 7185 (40.1%) participated. Register data were obtained for every invited person from 7 years before invitation to the end of 2020. The association between education and mortality was estimated using Cox regression models among participants, IPW participants and the source population. RESULTS: Participants had higher socioeconomic position and fewer hospital contacts before baseline than the source population. Frequency measures of participants approached those of the source population after IPW. Compared with primary/lower secondary education, upper secondary, short tertiary, bachelor and master/doctoral education were associated with a reduced risk of death among participants (adjusted hazard ratio [95% CI]: 0.60 [0.46; 0.77], 0.68 [0.42; 1.11], 0.37 [0.25; 0.54] and 0.28 [0.18; 0.46], respectively). IPW changed the estimates marginally (0.59 [0.45; 0.77], 0.57 [0.34; 0.93], 0.34 [0.23; 0.50], 0.24 [0.15; 0.39]), but not consistently towards those of the source population (0.57 [0.51; 0.64], 0.43 [0.32; 0.60], 0.38 [0.32; 0.47], 0.22 [0.16; 0.29]). CONCLUSIONS: Frequency measures of study participants may not reflect the source population in the presence of self-selection, but the impact on association measures can be limited. IPW may be useful for (self-)selection bias correction, but the returned results can still reflect residual or other biases and random errors.
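A hedged sketch of the IPW workflow the abstract describes: model participation from register covariates known for all invited individuals, weight participants by the inverse of their estimated participation probability, and fit a weighted Cox model. The synthetic data, variable names, and the scikit-learn/lifelines tooling are assumptions for illustration, not the study's pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
# Hypothetical register data for all invited individuals.
df = pd.DataFrame({
    "education": rng.integers(0, 5, 1000),
    "hospital_contacts": rng.poisson(1.0, 1000),
    "participated": rng.binomial(1, 0.4, 1000),
    "time": rng.exponential(10, 1000),        # follow-up time (years)
    "death": rng.binomial(1, 0.2, 1000),      # event indicator
})

# Participation probabilities estimated from register covariates...
X = df[["education", "hospital_contacts"]]
p_participate = LogisticRegression().fit(X, df["participated"]).predict_proba(X)[:, 1]

# ...inverted into weights so participants stand in for the source population.
mask = df["participated"].to_numpy() == 1
participants = df.loc[mask, ["education", "time", "death"]].copy()
participants["ipw"] = 1.0 / p_participate[mask]

# IPW Cox regression of mortality on education among participants.
cph = CoxPHFitter()
cph.fit(participants, duration_col="time", event_col="death",
        weights_col="ipw", robust=True)
cph.print_summary()
```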


Subjects
Mortality, Proportional Hazards Models, Socioeconomic Factors, Humans, Female, Male, Middle Aged, Denmark/epidemiology, Mortality/trends, Selection Bias, Educational Status, Probability, Registries
3.
Hum Brain Mapp; 45(10): e26759, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38989632

ABSTRACT

The inferior frontal sulcus (ifs) is a prominent sulcus on the lateral frontal cortex, separating the middle frontal gyrus from the inferior frontal gyrus. The morphology of the ifs can be difficult to distinguish from that of adjacent sulci, which are often misidentified as continuations of the ifs. The morphological variability of the ifs and its relationship to surrounding sulci were examined in 40 healthy human subjects (i.e., 80 hemispheres). The sulci were identified and labeled on the native cortical surface meshes of individual subjects, permitting proper intra-sulcal assessment. Two main morphological patterns of the ifs were identified across hemispheres: in Type I, the ifs was a single continuous sulcus, and in Type II, the ifs was discontinuous and appeared in two segments. The morphology of the ifs could be further subdivided into nine subtypes based on the presence of anterior and posterior sulcal extensions. The ifs was often observed to connect, either superficially or completely, with surrounding sulci, and seldom appeared as an independent sulcus. The spatial variability of the ifs and its various morphological configurations were quantified in the form of surface spatial probability maps, which are made publicly available in the standard fsaverage space. These maps demonstrated that the ifs generally occupied a consistent position across hemispheres and across individuals. The normalized mean sulcal depths associated with the main morphological types were also computed. The present study provides the first detailed description of the ifs as a sulcal complex composed of segments and extensions that can be clearly differentiated from adjacent sulci. These descriptions, together with the spatial probability maps, are critical for the accurate identification of the ifs in anatomical and functional neuroimaging studies investigating the structural characteristics and functional organization of this region in the human brain.
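A surface spatial probability map of the kind described can be summarized as the vertex-wise proportion of hemispheres in which the ifs label is present. A minimal sketch on the fsaverage mesh, with random placeholder labels in place of the study's manual annotations:

```python
import numpy as np

N_VERTICES = 163842   # vertices per hemisphere of the fsaverage surface
N_HEMIS = 80          # 40 subjects x 2 hemispheres

# Placeholder binary labels: 1 where the ifs was drawn on a given hemisphere
# (random here; real labels would come from the registered annotations).
labels = np.random.default_rng(0).binomial(1, 0.02, size=(N_HEMIS, N_VERTICES))

# Probability map: per-vertex frequency of the label across hemispheres.
prob_map = labels.mean(axis=0)
print((prob_map > 0.5).sum(), "vertices labeled in a majority of hemispheres")
```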


Subjects
Brain Mapping, Magnetic Resonance Imaging, Humans, Male, Female, Adult, Brain Mapping/methods, Frontal Lobe/anatomy & histology, Frontal Lobe/diagnostic imaging, Young Adult, Image Processing, Computer-Assisted/methods, Probability
4.
BMC Med Res Methodol; 24(1): 147, 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39003440

ABSTRACT

BACKGROUND: Decision analytic models and meta-analyses often rely on survival probabilities that are digitized from published Kaplan-Meier (KM) curves. However, manually extracting these probabilities from KM curves is time-consuming, expensive, and error-prone. We developed an efficient and accurate algorithm that automates extraction of survival probabilities from KM curves. METHODS: The automated digitization algorithm processes images in JPG or PNG format, converts them to the hue, saturation, and lightness color space, and uses optical character recognition to detect axis locations and labels. It also uses a k-medoids clustering algorithm to separate multiple overlapping curves in the same figure. To validate performance, we generated survival plots from random time-to-event data with sample sizes of 25, 50, 150, 250, and 1000 individuals split into 1, 2, or 3 treatment arms. We assumed an exponential distribution and applied random censoring. We compared automated digitization with manual digitization performed by well-trained researchers. We calculated the root mean squared error (RMSE) at 100 time points for both methods. The algorithm's performance was also evaluated by Bland-Altman analysis of the agreement between automated and manual digitization on a real-world set of published KM curves. RESULTS: The automated digitizer accurately identified survival probabilities over time in the simulated KM curves. The average RMSE for automated digitization was 0.012, while manual digitization had an average RMSE of 0.014. Its performance was negatively correlated with the number of curves in a figure and the presence of censoring markers. In real-world scenarios, automated digitization and manual digitization showed very close agreement. CONCLUSIONS: The algorithm streamlines the digitization process and requires minimal user input. It effectively digitized KM curves in simulated and real-world scenarios, demonstrating accuracy comparable to conventional manual digitization. The algorithm has been developed as an open-source R package and as a Shiny application, available on GitHub: https://github.com/Pechli-Lab/SurvdigitizeR and https://pechlilab.shinyapps.io/SurvdigitizeR/ .
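The simulation-based RMSE evaluation can be reproduced in outline: step-interpolate the digitized points and compare them with the true exponential survival curve at 100 time points. The rates, grid, and digitization noise below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def rmse_at_grid(t_grid, true_surv, digitized_t, digitized_s):
    """RMSE between a true survival curve and step-interpolated digitized points."""
    # Step-function interpolation: carry the last digitized probability forward.
    idx = np.searchsorted(digitized_t, t_grid, side="right") - 1
    est = np.where(idx >= 0, digitized_s[np.clip(idx, 0, None)], 1.0)
    return np.sqrt(np.mean((est - true_surv) ** 2))

rng = np.random.default_rng(1)
t_grid = np.linspace(0, 30, 100)           # 100 evaluation time points
true_surv = np.exp(-0.1 * t_grid)          # exponential survival, rate 0.1
digitized_t = np.sort(rng.uniform(0, 30, 40))
digitized_s = np.exp(-0.1 * digitized_t) + rng.normal(0, 0.01, 40)  # digitization noise
print(f"RMSE = {rmse_at_grid(t_grid, true_surv, digitized_t, digitized_s):.4f}")
```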


Subjects
Algorithms, Humans, Kaplan-Meier Estimate, Survival Analysis, Probability
5.
Sci Rep; 14(1): 15467, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969702

ABSTRACT

In this article we address two related issues in the learning of probabilistic sequences of events: first, which features make the sequence of events generated by a stochastic chain more difficult to predict; second, how to model the procedures employed by different learners to identify the structure of sequences of events. Playing the role of a goalkeeper in a video game, participants were told to predict, step by step, the successive directions (left, center, or right) to which the penalty kicker would send the ball. The sequence of kicks was driven by a stochastic chain with memory of variable length. Results showed that at least three features play a role in the first issue: (1) the shape of the context tree summarizing the dependencies between present and past directions; (2) the entropy of the stochastic chain used to generate the sequences of events; (3) the existence or not of a deterministic periodic sequence underlying the sequences of events. Moreover, evidence suggests that the best learners rely less on their own past choices to identify the structure of the sequences of events.
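A toy illustration of a stochastic chain with memory of variable length: a context tree maps the longest matching suffix of past directions to the distribution of the next kick. The tree shape and probabilities below are invented for illustration, not those used in the experiment.

```python
import numpy as np

# Context tree over directions L, C, R: memory length 1 after L or R,
# memory length 2 after C (hypothetical values).
tree = {
    ("L",): {"L": 0.1, "C": 0.3, "R": 0.6},
    ("R",): {"L": 0.6, "C": 0.3, "R": 0.1},
    ("L", "C"): {"L": 0.7, "C": 0.1, "R": 0.2},
    ("C", "C"): {"L": 1/3, "C": 1/3, "R": 1/3},
    ("R", "C"): {"L": 0.2, "C": 0.1, "R": 0.7},
}

def next_dist(history):
    """Distribution attached to the longest suffix of the history in the tree."""
    for k in range(min(2, len(history)), 0, -1):
        ctx = tuple(history[-k:])
        if ctx in tree:
            return tree[ctx]
    return {"L": 1/3, "C": 1/3, "R": 1/3}  # used only before enough history exists

rng = np.random.default_rng(0)
seq = ["C"]
for _ in range(200):
    d = next_dist(seq)
    seq.append(rng.choice(list(d), p=list(d.values())))

# Conditional entropy of each context (bits): higher means harder to predict.
for ctx, d in tree.items():
    h = -sum(p * np.log2(p) for p in d.values() if p > 0)
    print(ctx, f"H = {h:.2f} bits")
```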


Subjects
Video Games, Humans, Male, Female, Adult, Learning, Probability, Young Adult, Stochastic Processes
6.
PLoS One; 19(7): e0305264, 2024.
Article in English | MEDLINE | ID: mdl-39028741

ABSTRACT

This study aimed to assess and compare the probability of tuberculosis (TB) transmission based on five dynamic models: the Wells-Riley equation, two models proposed by Rudnick & Milton based on air changes per hour (ACH) and liters per second per person (L/s/p), the model proposed by Issarow et al., and the applied Susceptible-Exposed-Infected-Recovered (SEIR) TB transmission model. This study also aimed to determine the impact of model parameters on such probabilities in three Thai prisons. A cross-sectional study was conducted using data from 985 prison cells. The TB transmission probability for each cell was calculated using the parameters relevant to the specific model formula, and the magnitude of model agreement was examined with Spearman's rank correlation and Bland-Altman plots. Subsequently, a multiple linear regression analysis was conducted to investigate the influence of each model parameter on the estimated probability. Results revealed that the median (quartiles 1 and 3) TB transmission probability among these cells was 0.052 (0.017, 0.180). Compared with the pioneering Wells-Riley model, the remaining models projected discrepant TB transmission probabilities, from less to more commensurate with the degree of modification from the pioneering model, as follows: Rudnick & Milton (ACH), Issarow et al., Rudnick & Milton (L/s/p), and the applied SEIR model. The ventilation rate and the number of infectious TB patients in each cell or zone had the greatest impact on the estimated TB transmission probability in most models. Additionally, the number of inmates in each cell, the area per person in square meters, and the inmate turnover rate were identified as high-impact parameters in the applied SEIR model. All stakeholders must urgently address these influential parameters to reduce TB transmission in prisons. Moreover, further studies are required to determine the models' relative validity in accurately predicting TB incidence in prison settings.
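For reference, the Wells-Riley equation that anchors these comparisons gives the infection probability as P = 1 - exp(-Iqpt/Q). A sketch with illustrative (not study-derived) parameter values:

```python
import math

def wells_riley(I, q, p, t, Q):
    """Wells-Riley infection probability.

    I: number of infectors in the space
    q: quanta generation rate per infector (quanta/h)
    p: pulmonary ventilation rate of a susceptible (m^3/h)
    t: exposure time (h)
    Q: outdoor-air ventilation rate of the space (m^3/h)
    """
    return 1.0 - math.exp(-I * q * p * t / Q)

# Hypothetical cell: 2 infectious inmates, 1.25 quanta/h each, 0.36 m^3/h
# breathing rate, 12 h overnight confinement, 120 m^3/h of outdoor air.
print(f"P(transmission) = {wells_riley(I=2, q=1.25, p=0.36, t=12, Q=120):.3f}")
```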


Subjects
Prisons, Probability, Tuberculosis, Humans, Thailand/epidemiology, Tuberculosis/transmission, Tuberculosis/epidemiology, Cross-Sectional Studies, Male, Southeast Asian People
7.
Biom J; 66(4): e2300156, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38847059

ABSTRACT

How should data be analyzed when the positivity assumption is violated? Several possible solutions exist in the literature. In this paper, we consider propensity score (PS) methods that are commonly used in observational studies to assess causal treatment effects in contexts where the positivity assumption is violated. We focus on and examine four specific alternatives to inverse probability weighting (IPW) trimming and truncation: the matching weight (MW), Shannon's entropy weight (EW), overlap weight (OW), and beta weight (BW) estimators. We first specify their target population, the population of patients for whom clinical equipoise holds, that is, for whom there is sufficient PS overlap. Then, we establish the nexus among the different corresponding weights (and estimators); this allows us to highlight the shared properties and theoretical implications of these estimators. Finally, we introduce their augmented estimators, which take advantage of estimating both the propensity score and outcome regression models to enhance the treatment effect estimators in terms of bias and efficiency. We also elucidate the role of the OW estimator as the flagship of all these methods that target the overlap population. Our analytic results demonstrate that OW, MW, and EW are preferable to IPW, and to some cases of BW, when there is a moderate or extreme (stochastic or structural) violation of the positivity assumption. We then evaluate, compare, and confirm the finite-sample performance of the aforementioned estimators via Monte Carlo simulations. Finally, we illustrate these methods using two real-world data examples marked by violations of the positivity assumption.
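The weights being compared can all be written as tilting functions h(e) of the propensity score. The definitions below follow the standard weighting literature (the paper's beta-weight family involves tuning parameters; one symmetric member with exponent nu is shown, and nu = 1 reduces to the overlap weight):

```python
import numpy as np

def balancing_weights(e, z, nu=2.0):
    """Weights w = h(e)/e for treated (z=1) and h(e)/(1-e) for control (z=0),
    for several tilting functions h of the propensity score e."""
    h = {
        "IPW": np.ones_like(e),                              # h(e) = 1
        "OW": e * (1 - e),                                   # overlap
        "MW": np.minimum(e, 1 - e),                          # matching
        "EW": -(e * np.log(e) + (1 - e) * np.log(1 - e)),    # Shannon entropy
        "BW": (e * (1 - e)) ** nu,                           # one beta member
    }
    denom = np.where(z == 1, e, 1 - e)
    return {name: hv / denom for name, hv in h.items()}

# Near-violation of positivity: some units have e close to 0 or 1, which
# inflates IPW but leaves the overlap-targeting weights bounded.
e = np.array([0.02, 0.30, 0.50, 0.70, 0.98])
z = np.array([1, 0, 1, 0, 1])
for name, w in balancing_weights(e, z).items():
    print(f"{name}: {np.round(w, 2)}")
```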


Subjects
Biometry, Propensity Score, Biometry/methods, Humans, Causality, Probability
8.
Stat Med; 43(18): 3463-3483, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38853711

ABSTRACT

Analysis of integrated data often requires record linkage in order to join together the data residing in separate sources. When linkage errors cannot be avoided, owing to the lack of a unique identity key that can link the records unequivocally, standard statistical techniques may produce misleading inference if the linked data are treated as if they were true observations. In this paper, we propose methods for categorical data analysis based on linked data that are not prepared by the analyst, such that neither the match-key variables nor the unlinked records are available. The adjustment is based on the proportion of false links in the linked file, and our approach allows the probabilities of correct linkage to vary across the records without requiring that this probability be estimable for each individual record. It also accommodates the general situation where unmatched records that cannot possibly be correctly linked exist in all the sources. The proposed methods are studied by simulation and applied to real data.
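The paper's estimators are not reproduced in the abstract; a common simplified starting point for this kind of adjustment (an assumption here, not necessarily the authors' model) is the exchangeable-error mixture, in which a link is correct with probability lam and otherwise pairs an independent draw from the marginal, so the observed joint table can be inverted:

```python
import numpy as np

lam = 0.9   # assumed proportion of correct links in the linked file

# Observed 2x2 joint distribution from the linked data.
P_obs = np.array([[0.30, 0.10],
                  [0.15, 0.45]])
p_x = P_obs.sum(axis=1)   # marginals are unaffected by exchangeable false links
p_y = P_obs.sum(axis=0)

# P_obs = lam * P_true + (1 - lam) * outer(p_x, p_y)  =>  solve for P_true.
P_adj = (P_obs - (1 - lam) * np.outer(p_x, p_y)) / lam
print(P_adj, P_adj.sum())
```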


Subjects
Computer Simulation, Medical Record Linkage, Models, Statistical, Humans, Medical Record Linkage/methods, Data Interpretation, Statistical, Probability
9.
Stat Med; 43(18): 3524-3538, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38863133

ABSTRACT

Moderate calibration, meaning that the expected event probability among observations with predicted probability z equals z, is a desired property of risk prediction models. Current graphical and numerical techniques for evaluating moderate calibration of risk prediction models are mostly based on smoothing or grouping the data. Moreover, there is no widely accepted inferential method for the null hypothesis that a model is moderately calibrated. In this work, we discuss recently developed methods, and propose novel ones, for the assessment of moderate calibration for binary responses. The methods are based on the limiting distributions of functions of standardized partial sums of prediction errors converging to the corresponding laws of Brownian motion. The novel method relies on well-known properties of the Brownian bridge, which enable joint inference on the mean and moderate calibration, leading to a unified "bridge" test for detecting miscalibration. Simulation studies indicate that the bridge test is more powerful, often substantially so, than the alternative test. As a case study, we consider a prediction model for short-term mortality after a heart attack, where we provide suggestions on the graphical presentation and interpretation of results. Moderate calibration can thus be assessed without arbitrary grouping of data or methods that require parameter tuning.
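A sketch of the underlying idea (not the paper's exact statistic): order observations by predicted risk, form standardized partial sums of prediction errors, which behave like Brownian motion under moderate calibration, and refer a bridge-type supremum to the limiting Kolmogorov distribution.

```python
import numpy as np
from scipy.stats import kstwobign

def bridge_statistic(y, p):
    """Sup of a bridged partial-sum process of prediction errors, with
    observations ordered by predicted risk (uniform time grid for simplicity)."""
    order = np.argsort(p)
    resid = y[order] - p[order]
    scale = np.sqrt(np.sum(p * (1 - p)))       # total Bernoulli variance
    partial = np.cumsum(resid) / scale         # ~ Brownian motion if calibrated
    bridge = partial - np.linspace(0, 1, len(y)) * partial[-1]
    return np.max(np.abs(bridge))

rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.90, 2000)
y = rng.binomial(1, p)                         # outcomes generated as calibrated
stat = bridge_statistic(y, p)
print(f"sup|bridge| = {stat:.3f}, approx p = {1 - kstwobign.cdf(stat):.3f}")
```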


Subjects
Computer Simulation, Models, Statistical, Humans, Risk Assessment/methods, Myocardial Infarction/mortality, Statistics, Nonparametric, Calibration, Probability
10.
Environ Monit Assess; 196(7): 647, 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38907768

ABSTRACT

In this study, the current distribution probability of Ephedra gerardiana (Somalata), a medicinally potent species of the Himalayas, was assessed, and its spatial distribution change was forecast until the year 2100 under three Shared Socioeconomic Pathways. We used the maximum entropy model (MaxEnt) on 274 spatially filtered occurrence data points obtained from GBIF and other publications, with 19 bioclimatic variables as predictors. The area under the curve, Continuous Boyce Index, True Skill Statistic, and kappa values were used to evaluate and validate the model. SSP5-8.5, a fossil-fuel-driven scenario, produced the greatest habitat decline for E. gerardiana, driving its niche towards higher altitudes. The Nepal Himalayas showed the greatest loss of suitable habitat for the species, whereas it gained area in Bhutan. In India, regions of Himachal Pradesh, Uttarakhand, Jammu and Kashmir, and Sikkim showed the strongest negative response to climate change by the year 2100. Mean annual temperature, isothermality, diurnal temperature range, and precipitation seasonality were the most influential variables isolated by the model in defining the species' habitat. The results provide evidence of the effects of climate change on the distribution of endemic species in the study area under different scenarios of emissions and anthropogenic coupling. Notably, the area of consideration encompasses several protected areas, which will become more vulnerable to increased climate variability, and regulating their boundaries may become a necessary step to conserve the region's biodiversity in the future.


Subjects
Climate Change, Ecosystem, Nepal, India, Bhutan, Ephedra, Environmental Monitoring, Probability, Socioeconomic Factors, Models, Theoretical
11.
PLoS One; 19(6): e0304345, 2024.
Article in English | MEDLINE | ID: mdl-38857287

ABSTRACT

Irreversible electroporation induces permanent permeabilization of the lipid membranes of vesicles, resulting in vesicle rupture upon the application of a pulsed electric field. Electrofusion is a phenomenon wherein neighboring vesicles can be induced to fuse by exposing them to a pulsed electric field. We focus on how the frequency of direct current (DC) pulses of the electric field affects rupture and electrofusion in cell-sized giant unilamellar vesicles (GUVs) prepared in a physiological buffer. The average time, probability, and kinetics of rupture and electrofusion in GUVs were explored at frequencies of 500, 800, 1050, and 1250 Hz. The average time to rupture of many 'single GUVs' decreases with increasing frequency, whereas electrofusion shows the opposite trend. At 500 Hz, the rupture probability is 0.45 ± 0.02, while the electrofusion probability is 0.71 ± 0.01. At 1250 Hz, however, the rupture probability increases to 0.69 ± 0.03, whereas the electrofusion probability decreases to 0.46 ± 0.03. Considering kinetics, at 500 Hz the rate constant of rupture is (0.8 ± 0.1) × 10⁻² s⁻¹ and the rate constant of fusion is (2.4 ± 0.1) × 10⁻² s⁻¹; in contrast, at 1250 Hz the rate constant of rupture is (2.3 ± 0.8) × 10⁻² s⁻¹ and the rate constant of electrofusion is (1.0 ± 0.1) × 10⁻² s⁻¹. These results are discussed by considering the electrical model of the lipid bilayer and the energy barrier of a prepore.
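The reported rate constants follow from first-order kinetics: under a single rate constant k, the fraction of intact (unruptured or unfused) GUVs decays as exp(-kt). A fitting sketch with invented time-course values of a similar magnitude:

```python
import numpy as np
from scipy.optimize import curve_fit

def intact_fraction(t, k):
    # First-order decay of the intact-GUV fraction.
    return np.exp(-k * t)

t = np.array([0, 20, 40, 60, 80, 100, 120])                   # s (hypothetical)
frac = np.array([1.00, 0.63, 0.41, 0.24, 0.16, 0.09, 0.06])   # hypothetical counts
(k,), _ = curve_fit(intact_fraction, t, frac, p0=(0.01,))
print(f"k = {k:.2e} s^-1")   # same order as the reported ~1e-2 s^-1 constants
```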


Subjects
Electroporation, Unilamellar Liposomes, Unilamellar Liposomes/chemistry, Kinetics, Electroporation/methods, Probability, Membrane Fusion
12.
Korean J Anesthesiol; 77(3): 316-325, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38835136

ABSTRACT

The statistical significance of a clinical trial analysis result is determined by a mathematical calculation and probability based on null hypothesis significance testing. However, statistical significance does not always align with meaningful clinical effects; thus, assigning clinical relevance to statistical significance is unreasonable. A statistical result incorporating a clinically meaningful difference is a better approach to present statistical significance. Thus, the minimal clinically important difference (MCID), which requires integrating minimum clinically relevant changes from the early stages of research design, has been introduced. As a follow-up to the previous statistical round article on P values, confidence intervals, and effect sizes, in this article, we present hands-on examples of MCID and various effect sizes and discuss the terms statistical significance and clinical relevance, including cautions regarding their use.


Subjects
Minimal Clinically Important Difference, Humans, Probability, Research Design, Clinical Trials as Topic/methods, Data Interpretation, Statistical, Confidence Intervals
13.
PLoS One; 19(6): e0303432, 2024.
Article in English | MEDLINE | ID: mdl-38848327

ABSTRACT

In this study, a statistical test of Biblical books was conducted using recently discovered probability models for text homogeneity and text change-point detection. Translations of the Biblical books into Tigrigna and Amharic (major languages spoken in Eritrea and Ethiopia) and English were studied. A Zipf-Mandelbrot distribution with a parameter range of 0.55 to 0.88 was obtained for these three Bibles. According to the statistical analysis of text homogeneity, the translation of the Bible in each of these three languages is a heterogeneous concatenation of different books or genres. Furthermore, an in-depth examination of the text segmentation of part of a single genre, the English Bible letters, revealed that the Pauline letters are heterogeneous concatenations of two homogeneous segments.
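For reference, the rank-frequency form of the Zipf-Mandelbrot law is f(r) = C/(r + q)^s. A fitting sketch on a stand-in corpus (substitute the full text of a Bible translation for a meaningful estimate; the snippet and starting values are placeholders):

```python
import numpy as np
from collections import Counter
from scipy.optimize import curve_fit

def zipf_mandelbrot(r, C, q, s):
    """Rank-frequency form of the Zipf-Mandelbrot law: f(r) = C / (r + q)**s."""
    return C / (r + q) ** s

# Stand-in corpus; replace with a full Bible text for a real analysis.
text = ("in the beginning god created the heaven and the earth and the earth "
        "was without form and void and darkness was upon the face of the deep").split()
freqs = np.array(sorted(Counter(text).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1, dtype=float)

rel = freqs / freqs.sum()
(C, q, s), _ = curve_fit(zipf_mandelbrot, ranks, rel, p0=(rel[0], 1.0, 1.0), maxfev=20000)
print(f"fitted q = {q:.2f}, s = {s:.2f}")
```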


Subjects
Bible, Models, Statistical, Humans, Probability, Language, Ethiopia
14.
Sci Rep; 14(1): 12772, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38834671

ABSTRACT

The diagnosis of acute appendicitis and concurrent surgery referral is primarily based on clinical presentation, laboratory results, and radiological imaging. However, this approach yields negative appendectomy rates as high as 10-15%. Hence, in the present study, we aimed to develop a machine learning (ML) model designed to reduce the number of negative appendectomies in pediatric patients with a high clinical probability of acute appendicitis. The model was developed and validated on a registry of 551 pediatric patients with suspected acute appendicitis who underwent surgical treatment. Clinical, anthropometric, and laboratory features were included for model training and analysis. Three machine learning algorithms were tested (random forest, eXtreme Gradient Boosting, logistic regression) and model explainability was obtained. The random forest model provided the best predictions, achieving mean specificity and sensitivity of 0.17 ± 0.01 and 0.997 ± 0.001 for detection of acute appendicitis, respectively. Furthermore, the model outperformed the appendicitis inflammatory response (AIR) score across most sensitivity-specificity combinations. Finally, the random forest model again provided the best predictions for discrimination between complicated appendicitis and either uncomplicated acute appendicitis or no appendicitis at all, with a joint mean sensitivity of 0.994 ± 0.002 and specificity of 0.129 ± 0.009. In conclusion, the developed ML model might spare as many as 17% of patients with a high clinical probability of acute appendicitis from unnecessary surgery, while missing the needed surgery in only 0.3% of cases. Additionally, it showed better diagnostic accuracy than the AIR score, as well as good accuracy in predicting complicated acute appendicitis over uncomplicated and negative cases bundled together. This may be useful in centers that advocate conservative treatment of uncomplicated appendicitis. Nevertheless, external validation is needed to support these findings.
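The trade-off the abstract reports (sensitivity 0.997 at specificity 0.17) comes from tuning the decision threshold so that essentially no true appendicitis is missed. A sketch on synthetic data (the features, class balance, and threshold grid are assumptions, not the registry):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the clinical registry (class 1 = appendicitis).
X, y = make_classification(n_samples=551, n_features=15,
                           weights=[0.15, 0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

# Lower the threshold until sensitivity approaches 1; patients predicted
# negative at that threshold are candidates for sparing from surgery.
for thr in np.linspace(0.5, 0.05, 10):
    pred = proba >= thr
    sens = (pred & (y_te == 1)).sum() / (y_te == 1).sum()
    spec = (~pred & (y_te == 0)).sum() / (y_te == 0).sum()
    print(f"thr={thr:.2f}  sensitivity={sens:.3f}  specificity={spec:.3f}")
```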


Subjects
Appendectomy, Appendicitis, Machine Learning, Humans, Appendicitis/surgery, Appendicitis/diagnosis, Child, Female, Male, Adolescent, Child, Preschool, Acute Disease, Probability, Sensitivity and Specificity, Algorithms
15.
Sci Rep; 14(1): 14557, 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38914736

ABSTRACT

This study aims to develop an abnormal body temperature probability (ABTP) model for dairy cattle, utilizing environmental and physiological data. The model is designed to improve the management of heat stress impacts, providing an early warning system that helps farm managers safeguard dairy cattle welfare and farm productivity in response to climate change. The study employs the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm to analyze environmental and physiological data from 320 dairy cattle, identifying key factors influencing body temperature anomalies. This method supports the development of several models, including the Lyman-Kutcher-Burman (LKB), Logistic, Schultheiss, and Poisson models, which are evaluated for their ability to predict abnormal body temperatures in dairy cattle. The study successfully validated these models, with a focus on the temperature-humidity index (THI) as a critical determinant. The models demonstrated high accuracy, as measured by the area under the curve (AUC) and other performance metrics such as the Brier score and the Hosmer-Lemeshow (HL) test. The results highlight the robustness of the models in capturing the nuances of heat stress impacts on dairy cattle. By integrating advanced technologies and novel predictive models, the study offers effective measures for the early detection and management of abnormal body temperatures, improving cattle welfare and farm productivity under changing climatic conditions, and underscores the value of using multiple models to accurately predict and address heat stress in livestock.
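A sketch of the THI-driven logistic component: a commonly used cattle THI formula (one of several published forms; the study's exact formula is not given in the abstract) feeds an L1-penalized (LASSO-style) logistic regression. All data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def thi(temp_c, rh):
    """A commonly used temperature-humidity index for cattle (assumed form)."""
    return 0.8 * temp_c + (rh / 100.0) * (temp_c - 14.4) + 46.4

rng = np.random.default_rng(0)
temp = rng.uniform(15, 38, 320)
rh = rng.uniform(30, 95, 320)
X = np.column_stack([thi(temp, rh), temp, rh])
# Hypothetical labels: abnormal body temperature becomes likely at high THI.
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 72) / 3)))

# L1 penalty (LASSO-style) shrinks redundant predictors towards zero.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
x_new = [[thi(32.0, 80.0), 32.0, 80.0]]
print("coefficients:", model.coef_)
print("ABTP at 32 C, 80% RH:", model.predict_proba(x_new)[0, 1].round(3))
```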


Subjects
Body Temperature, Dairying, Animals, Cattle, Body Temperature/physiology, Dairying/methods, Risk Factors, Cattle Diseases/diagnosis, Cattle Diseases/physiopathology, Heat Stress Disorders/veterinary, Heat Stress Disorders/physiopathology, Female, Climate Change, Probability, Risk Assessment/methods
16.
BMC Med Res Methodol; 24(1): 116, 2024 May 18.
Article in English | MEDLINE | ID: mdl-38762731

ABSTRACT

BACKGROUND: Extended illness-death models (a specific class of multistate models) are a useful tool to analyse situations like hospital-acquired infections, ventilation-associated pneumonia, and transfers between hospitals. The main components of these models are hazard rates and transition probabilities. Calculation of different measures and their interpretation can be challenging due to their complexity. METHODS: By assuming time-constant hazards, the complexity of these models becomes manageable and closed mathematical forms for transition probabilities can be derived. Using these forms, we created a tool in R to visualize transition probabilities via stacked probability plots. RESULTS: In this article, we present this tool and give some insights into its theoretical background. Using published examples, we give guidelines on how this tool can be used. Our goal is to provide an instrument that helps obtain a deeper understanding of a complex multistate setting. CONCLUSION: While multistate models (in particular extended illness-death models), can be highly complex, this tool can be used in studies to both understand assumptions, which have been made during planning and as a first step in analysing complex data structures. An online version of this tool can be found at https://eidm.imbi.uni-freiburg.de/ .
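For the simplest irreversible illness-death model (0 = initial, 1 = ill, 2 = dead) with time-constant hazards, the transition probabilities have closed forms, which is what makes the stacked probability plots tractable. A sketch with invented hazards (and assuming λ01 + λ02 ≠ λ12):

```python
import numpy as np

l01, l02, l12 = 0.08, 0.02, 0.15     # hypothetical constant hazards per day
assert abs(l01 + l02 - l12) > 1e-12  # the closed form below assumes this

t = np.linspace(0, 60, 7)
p00 = np.exp(-(l01 + l02) * t)
p01 = l01 / (l01 + l02 - l12) * (np.exp(-l12 * t) - np.exp(-(l01 + l02) * t))
p02 = 1 - p00 - p01                  # the three probabilities stack to one

for ti, a, b, c in zip(t, p00, p01, p02):
    print(f"t={ti:4.0f}  P00={a:.3f}  P01={b:.3f}  P02={c:.3f}")
```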


Subjects
Probability, Humans, Cross Infection/prevention & control, Cross Infection/epidemiology, Models, Statistical, Proportional Hazards Models, Pneumonia, Ventilator-Associated/mortality, Pneumonia, Ventilator-Associated/epidemiology, Pneumonia, Ventilator-Associated/prevention & control, Mobile Applications/statistics & numerical data, Algorithms
17.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38768225

ABSTRACT

Conventional supervised learning usually operates under the premise that data are collected from the same underlying population. However, challenges may arise when integrating new data from different populations, resulting in a phenomenon known as dataset shift. This paper focuses on prior probability shift, where the distribution of the outcome varies across datasets but the conditional distribution of features given the outcome remains the same. To tackle the challenges posed by such shift, we propose an estimation algorithm that can efficiently combine information from multiple sources. Unlike existing methods that are restricted to discrete outcomes, the proposed approach accommodates both discrete and continuous outcomes. It also handles high-dimensional covariate vectors through variable selection using an adaptive least absolute shrinkage and selection operator penalty, producing efficient estimates that possess the oracle property. Moreover, a novel semiparametric likelihood ratio test is proposed to check the validity of prior probability shift assumptions by embedding the null conditional density function into Neyman's smooth alternatives (Neyman, 1937) and testing study-specific parameters. We demonstrate the effectiveness of our proposed method through extensive simulations and a real data example. The proposed methods serve as a useful addition to the repertoire of tools for dealing with dataset shifts.
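Prior probability shift has a simple operational consequence (a standard identity, not the paper's estimator): since p(x | y) is shared across datasets, a classifier's posteriors can be reweighted by the ratio of new to old outcome priors.

```python
import numpy as np

def shift_posterior(post, prior_old, prior_new):
    """Adjust class posteriors for prior probability shift:
    p'(y|x) is proportional to p(y|x) * prior_new(y) / prior_old(y)."""
    w = post * (np.asarray(prior_new) / np.asarray(prior_old))
    return w / w.sum(axis=1, keepdims=True)

# A classifier trained where 10% of outcomes are positive, applied where 40% are.
post = np.array([[0.70, 0.30],
                 [0.95, 0.05]])   # rows of [p(y=0|x), p(y=1|x)] under the old prior
print(shift_posterior(post, prior_old=[0.9, 0.1], prior_new=[0.6, 0.4]))
```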


Subjects
Algorithms, Computer Simulation, Models, Statistical, Probability, Humans, Likelihood Functions, Biometry/methods, Data Interpretation, Statistical, Supervised Machine Learning
18.
PLoS Comput Biol; 20(5): e1011999, 2024 May.
Article in English | MEDLINE | ID: mdl-38691544

ABSTRACT

Bayesian decision theory (BDT) is frequently used to model normative performance in perceptual, motor, and cognitive decision tasks where the possible outcomes of actions are associated with rewards or penalties. The resulting normative models specify how decision makers should encode and combine information about uncertainty and value, step by step, in order to maximize their expected reward. When the prior, likelihood, and posterior are probabilities, the Bayesian computation requires only simple arithmetic operations. We focus on visual cognitive tasks where the Bayesian computations are carried out not on probabilities but on probability density functions (pdfs), and where these pdfs are derived from samples. We break the BDT model into a series of computations and test human ability to carry out each of these computations in isolation. We test three necessary properties of normative use of pdf information derived from a sample: accuracy, additivity, and influence. Influence measures allow us to assess how much weight each point in the sample is assigned in making decisions, and to compare normative weighting of samples to actual weighting, point by point. We find that human decision makers violate accuracy and additivity systematically, but that the cost of these failures would be minor in common decision tasks. However, a comparison of the measured influence of each sample point with normative influence measures demonstrates that individuals' use of sample information is markedly different from the predictions of BDT. We show that the normative BDT model takes into account the geometric symmetries of the pdf while the human decision maker does not. An alternative model basing decisions on a single extreme sample point provided a better account of participants' data than the normative BDT model.


Subjects
Bayes Theorem, Decision Making, Humans, Decision Making/physiology, Computational Biology/methods, Probability, Female, Male, Decision Theory, Adult, Models, Statistical, Cognition/physiology
19.
PLoS One; 19(5): e0301415, 2024.
Article in English | MEDLINE | ID: mdl-38809831

ABSTRACT

Epidemic or pathogen emergence is the phenomenon by which a poorly transmissible pathogen finds its evolutionary pathway to become a mutant that can cause an epidemic. Many mathematical models of pathogen emergence rely on branching processes. Here, we discuss pathogen emergence using Markov chains, for a more tractable analysis, generalizing previous work by Kendall and Bartlett on disease invasion. We discuss the probability of emergence failure for early epidemics, when the number of infected individuals is small and the number of susceptible individuals is virtually unlimited. Our formalism addresses both directly transmitted and vector-borne diseases, in the cases where the original pathogen is (1) one mutational step away from the epidemic strain, and (2) undergoing a long chain of neutral mutations that do not change the epidemiology. We obtain analytic results for the probabilities of emergence failure and two features transcending the transmission mechanism. First, the reproduction number of the original pathogen is determinant for the probability of pathogen emergence, more important than the mutation rate or the transmissibility of the emerged pathogen. Second, the probability of mutation within infected individuals must be sufficiently high for a pathogen undergoing neutral mutations to start an epidemic, with the mutation threshold again depending on the basic reproduction number of the original pathogen. Finally, we discuss the parameterization of models of pathogen emergence, using SARS-CoV-1 as an example of zoonotic emergence and HIV as an example of the emergence of drug resistance. We also discuss the assumptions of our models and the implications for epidemiology.
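The branching-process baseline that such Markov-chain analyses generalize is the textbook extinction calculation: the probability of emergence failure starting from one infected individual is the smallest fixed point of the offspring probability generating function, q = G(q). A sketch for two classic offspring laws with mean R0 (illustrative, not the paper's model):

```python
import numpy as np

def extinction_prob(pgf, tol=1e-12):
    """Smallest fixed point of the offspring pgf, q = G(q), by iteration from 0."""
    q = 0.0
    while True:
        q_new = pgf(q)
        if abs(q_new - q) < tol:
            return q_new
        q = q_new

R0 = 1.8
offspring_pgfs = {
    "Poisson": lambda s: np.exp(R0 * (s - 1.0)),
    "geometric": lambda s: 1.0 / (1.0 + R0 * (1.0 - s)),  # extinction prob = 1/R0
}
for name, g in offspring_pgfs.items():
    q = extinction_prob(g)
    print(f"{name}: P(emergence failure) = {q:.3f}, P(emergence) = {1 - q:.3f}")
```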


Subjects
Epidemics, Markov Chains, Mutation, Humans, Models, Theoretical, COVID-19/epidemiology, COVID-19/transmission, COVID-19/virology, Basic Reproduction Number, Probability, Animals
20.
Evol Psychol; 22(2): 14747049241254725, 2024.
Article in English | MEDLINE | ID: mdl-38807479

ABSTRACT

In order to explain helping strangers in need in terms of reciprocal altruism, it is necessary to ensure that the help is reciprocated and that the costs of helping are thus compensated. The helped person's competence and willingness to make sacrifices for their benefactor are important cues for ensuring a return on help, because reciprocity would not be possible if the person being helped had neither the competence nor the inclination to give back in the future. In this study, we used vignettes and manipulated the cause of suffering strangers' difficulties and their prosociality to investigate participants' compassion for, and willingness to help, the stranger. In Study 1, we measured willingness to help using hypothetical helping behaviors designed to vary in cost. In Study 2, we measured willingness to help using the checkbox method, in which participants were asked to sequentially check 10 × 10 checkboxes on a webpage, thereby paying a small but real cost. In both studies, the controllability of the cause and the prosociality of the target were found to independently affect compassion. These two factors also independently affected willingness to help, as measured by both the hypothetical questions and the checkbox method. We conclude by discussing the reasons for the independent processing of the competence and behavioral-tendency cues.


Subjects
Altruism, Empathy, Helping Behavior, Humans, Male, Female, Adult, Young Adult, Probability, Interpersonal Relations, Adolescent