Results 1 - 20 of 61,079
1.
Phys Rev E ; 109(4-1): 044305, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38755869

ABSTRACT

Humans are exposed to sequences of events in the environment, and the interevent transition probabilities in these sequences can be modeled as a graph or network. Many real-world networks are organized hierarchically, and while much is known about how humans learn basic transition graph topology, whether and to what degree humans can learn hierarchical structures in such graphs remains unknown. We probe mental estimates of transition probabilities via the surprisal effect: humans react more slowly to less expected transitions. Using mean-field predictions and numerical simulations, we show that surprisal effects are stronger for finer-level than for coarser-level hierarchical transitions, and that surprisal effects at coarser levels are difficult to detect for limited learning times or in small samples. Using a serial response experiment with human participants (n=100), we confirm our predictions by detecting a surprisal effect at the finer level of the hierarchy but not at the coarser level. We then evaluate the presence of a trade-off in learning, whereby humans who learned the finer level of the hierarchy better also tended to learn the coarser level worse, and vice versa. This study elucidates the processes by which humans learn sequential events in hierarchical contexts. More broadly, our work charts a road map for future investigation of the neural underpinnings and behavioral manifestations of graph learning.
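A minimal sketch (assumptions mine, not the authors' code) of the surprisal computation on a toy two-level modular graph: a random walk crosses between two modules only rarely, so between-module transitions carry higher surprisal, -log2 p, than within-module ones. The graph, module sizes, and probabilities are all illustrative.

# Toy surprisal simulation: 10 nodes, two modules of 5 (assumed structure).
import numpy as np

rng = np.random.default_rng(0)
p_within, p_between = 0.95, 0.05          # assumed transition probabilities
module = lambda node: node // 5           # module membership of each node

def step(node):
    if rng.random() < p_within:           # stay in the current module
        choices = [n for n in range(10) if module(n) == module(node) and n != node]
    else:                                 # jump to the other module
        choices = [n for n in range(10) if module(n) != module(node)]
    nxt = rng.choice(choices)
    p = (p_within / 4) if module(nxt) == module(node) else (p_between / 5)
    return nxt, -np.log2(p)               # surprisal of the observed transition

node, surprisals = 0, []
for _ in range(1000):
    node, s = step(node)
    surprisals.append(s)
print(f"mean surprisal: {np.mean(surprisals):.2f} bits")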


Subject(s)
Learning , Humans , Male , Female , Models, Theoretical , Probability , Adult
2.
Ulster Med J ; 93(1): 18-23, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38707974

ABSTRACT

Verbal probability expressions such as 'likely' and 'possible' are commonly used to communicate uncertainty in diagnosis, treatment effectiveness, and the risk of adverse events. Probability terms that are interpreted consistently can be used to standardize risk communication. A systematic review was conducted of research studies that evaluated the numeric meanings of probability terms. Terms with consistent numeric interpretation across studies were selected and used to construct a Visual Risk Scale. Five probability terms showed reliable interpretation by laypersons and healthcare professionals in empirical studies: 'Very Likely' was interpreted as a 90% chance (range 80 to 95%); 'Likely/Probable,' 70% (60 to 80%); 'Possible,' 40% (30 to 60%); 'Unlikely,' 20% (10 to 30%); and 'Very Unlikely,' 10% (5 to 15%). The corresponding frequency terms were Very Frequently, Frequently, Often, Infrequently, and Rarely, respectively. Probability terms should be presented with their corresponding numeric ranges during discussions with patients. Numeric values should be presented as X-in-100 natural frequency statements, even for low values, and not as percentages, X-in-1000, X-in-Y, odds, fractions, 1-in-X, or number needed to treat (NNT). A Visual Risk Scale was developed for use in clinical shared decision making.
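The reported term-to-number mapping lends itself to a simple lookup that formats risks as X-in-100 natural frequency statements, as the review recommends. A minimal sketch in Python (the function name and structure are mine; the numeric values come from the abstract):

# Term-to-number mapping from the review, formatted as X-in-100 frequencies.
TERMS = {  # term: (central %, low %, high %)
    "Very Likely":     (90, 80, 95),
    "Likely/Probable": (70, 60, 80),
    "Possible":        (40, 30, 60),
    "Unlikely":        (20, 10, 30),
    "Very Unlikely":   (10,  5, 15),
}

def natural_frequency(term: str) -> str:
    central, low, high = TERMS[term]
    return (f"'{term}' means about {central} in 100 "
            f"(range {low} to {high} in 100)")

print(natural_frequency("Possible"))
# 'Possible' means about 40 in 100 (range 30 to 60 in 100)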


Subject(s)
Communication , Probability , Humans , Risk Assessment/methods , Uncertainty , Physician-Patient Relations
3.
PLoS One ; 19(5): e0303042, 2024.
Article in English | MEDLINE | ID: mdl-38709744

ABSTRACT

Probabilistic hesitant fuzzy sets (PHFSs) are superior to hesitant fuzzy sets (HFSs) in avoiding the loss of preference information among decision makers (DMs). Owing to this benefit, PHFSs have been extensively investigated. In probabilistic hesitant fuzzy environments, correlation coefficients have become a focal point of research. As research progresses, we discovered that a few unresolved issues remain concerning the correlation coefficients of PHFSs. To overcome the limitations of existing correlation coefficients for PHFSs, we propose new correlation coefficients in this study. In addition, we present a multi-criteria group decision-making (MCGDM) method under unknown weights based on the newly proposed correlation coefficients. Moreover, considering DMs' propensity to express evaluations using linguistic variables, we propose a method for transforming the DMs' linguistic evaluation information into probabilistic hesitant fuzzy information within the newly proposed MCGDM method. To demonstrate the applicability of the proposed correlation coefficients and MCGDM method, we applied them to a comprehensive clinical evaluation of orphan drugs. Finally, the reliability, feasibility, and efficacy of the newly proposed correlation coefficients and MCGDM method were validated.


Subject(s)
Fuzzy Logic , Humans , Orphan Drug Production , Decision Making , Probability , Algorithms
4.
PLoS Comput Biol ; 20(5): e1011999, 2024 May.
Article in English | MEDLINE | ID: mdl-38691544

ABSTRACT

Bayesian decision theory (BDT) is frequently used to model normative performance in perceptual, motor, and cognitive decision tasks where the possible outcomes of actions are associated with rewards or penalties. The resulting normative models specify how decision makers should encode and combine information about uncertainty and value, step by step, in order to maximize their expected reward. When prior, likelihood, and posterior are probabilities, the Bayesian computation requires only simple arithmetic operations such as multiplication and addition. We focus on visual cognitive tasks where Bayesian computations are carried out not on probabilities but on probability density functions (pdfs), and where these pdfs are derived from samples. We break the BDT model into a series of computations and test human ability to carry out each of these computations in isolation. We test three properties necessary for normative use of pdf information derived from a sample: accuracy, additivity, and influence. Influence measures allow us to assess how much weight each point in the sample is assigned in making decisions, and to compare the normative weighting of samples to the actual weighting, point by point. We find that human decision makers violate accuracy and additivity systematically, but that the cost of these failures would be minor in common decision tasks. However, a comparison of the measured influence of each sample point with its normative influence demonstrates that individuals' use of sample information is markedly different from the predictions of BDT. We show that the normative BDT model takes into account the geometric symmetries of the pdf while the human decision maker does not. An alternative model basing decisions on a single extreme sample point provided a better account of participants' data than the normative BDT model.
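A minimal sketch of the normative sample-based computation the paper tests (the payoff landscape and noise parameters are my assumptions, not the paper's task): the expected reward of each candidate decision is the equal-weight average of rewards over the displaced error sample, so every sample point has the same normative influence.

# Normative decision from a sample-based pdf: choose the aim point that
# maximizes the sample-average reward (all points weighted equally).
import numpy as np

rng = np.random.default_rng(1)
errors = rng.normal(0.0, 1.0, size=200)       # sample from the (unknown) error pdf

def reward(x):                                # hypothetical payoff landscape
    return np.where(np.abs(x) < 1.0, 1.0, 0.0) - np.where(x > 2.0, 2.0, 0.0)

aims = np.linspace(-3, 3, 301)
expected = [reward(a + errors).mean() for a in aims]  # equal-weight sample average
best = aims[int(np.argmax(expected))]
print(f"normative aim point: {best:.2f}")     # shifted away from the penalty side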


Subject(s)
Bayes Theorem , Decision Making , Humans , Decision Making/physiology , Computational Biology/methods , Probability , Female , Male , Decision Theory , Adult , Models, Statistical , Cognition/physiology
5.
Crit Rev Toxicol ; 54(4): 252-289, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38753561

ABSTRACT

INTRODUCTION: Causal epidemiology for regulatory risk analysis seeks to evaluate how removing or reducing exposures would change disease occurrence rates. We define interventional probability of causation (IPoC) as the change in probability of a disease (or other harm) occurring over a lifetime or other specified time interval that would be caused by a specified change in exposure, as predicted by a fully specified causal model. We define the closely related concept of causal assigned share (CAS) as the predicted fraction of disease risk that would be removed or prevented by a specified reduction in exposure, holding other variables fixed. Traditional approaches used to evaluate the preventable-risk implications of epidemiological associations, including the population attributable fraction (PAF) and the Bradford Hill considerations, cannot reveal whether removing a risk factor would reduce disease incidence. We argue that modern formal causal models, coupled with causal artificial intelligence (CAI) and realistically partial and imperfect knowledge of underlying disease mechanisms, show great promise for determining and quantifying IPoC and CAS for exposures and diseases of practical interest. METHODS: We briefly review key CAI concepts and terms and then apply them to define IPoC and CAS. We present steps to quantify IPoC using a fully specified causal Bayesian network (BN) model. Useful bounds for quantitative IPoC and CAS calculations are derived for a two-stage clonal expansion (TSCE) model of carcinogenesis and illustrated by applying them to benzene and formaldehyde based on available epidemiological and partial mechanistic evidence. RESULTS: Causal BN models for benzene and risk of acute myeloid leukemia (AML), incorporating mechanistic, toxicological, and epidemiological findings, show that prolonged high-intensity exposure to benzene can increase risk of AML (IPoC of up to 7e-5, CAS of up to 54%). By contrast, no causal pathway leading from formaldehyde exposure to increased risk of AML was identified, consistent with much previous mechanistic, toxicological, and epidemiological evidence; therefore, the IPoC and CAS for formaldehyde-induced AML are likely to be zero. CONCLUSION: We conclude that the IPoC approach can differentiate between likely and unlikely causal factors and can provide useful upper bounds for IPoC and CAS for some exposures and diseases of practical importance. For causal factors, IPoC can help to estimate the quantitative impacts on health risks of reducing exposures, even in situations where mechanistic evidence is realistically incomplete and individual-level exposure-response parameters are uncertain. This illustrates the strength that can be gained for causal inference by using causal models to generate testable hypotheses and then obtaining toxicological data to test the hypotheses implied by the models and, where necessary, to refine them. This virtuous cycle provides additional insight into causal determinations that may not be available from weight-of-evidence considerations alone.
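For a fully specified causal model, the two quantities defined above reduce to a simple difference and ratio of predicted risks. A minimal sketch (illustrative numbers chosen to echo the benzene/AML bounds reported; not the authors' Bayesian network code):

# IPoC: predicted change in lifetime risk caused by a change in exposure.
# CAS: predicted fraction of risk removed by the exposure reduction.
def ipoc(risk_exposed: float, risk_reduced: float) -> float:
    return risk_exposed - risk_reduced

def cas(risk_exposed: float, risk_reduced: float) -> float:
    return (risk_exposed - risk_reduced) / risk_exposed

# Illustrative risks only, chosen to echo the reported benzene/AML bounds:
r1, r0 = 1.3e-4, 6.0e-5
print(ipoc(r1, r0))   # 7e-05  -> IPoC of up to 7e-5
print(cas(r1, r0))    # 0.538  -> CAS of about 54%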


Subject(s)
Benzene , Formaldehyde , Leukemia, Myeloid, Acute , Humans , Benzene/toxicity , Leukemia, Myeloid, Acute/epidemiology , Leukemia, Myeloid, Acute/chemically induced , Formaldehyde/toxicity , Causality , Probability , Risk Assessment , Environmental Exposure , Risk Factors
6.
Article in English | MEDLINE | ID: mdl-38743847

ABSTRACT

INTRODUCTION: Pediatric ankle injuries are a common presentation in the emergency department (ED). A quarter of pediatric ankle fractures show no radiographic evidence of a fracture, and physicians often take non-weight bearing and tenderness to suggest an occult fracture. This study aims to predict the probability of an occult fracture from radiographic soft-tissue swelling on initial ED radiographs. METHODS: This is a retrospective study at a Level 1 pediatric trauma center from 2021 to 2022. Soft-tissue swelling between the lateral malleolus and the skin was measured on radiographs, and weight-bearing status was documented. Statistical analysis was conducted using Stata software. DISCUSSION: The study involved 32 patients with a suspected occult fracture, of whom 8 (25%) were diagnosed with a fracture on follow-up radiographs. The probability of an occult fracture was calculated as a function of ankle swelling in millimeters (mm) using a computer-generated predictive model. False-negative and false-positive rates were plotted as a function of the degree of ankle swelling. CONCLUSION: The magnitude of ankle soft-tissue swelling measured on initial ED radiographs is predictive of an occult fracture. Although weight-bearing status alone was not a sign of occult fracture, it improved the predictive accuracy of soft-tissue swelling.
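The abstract does not name the predictive model; a logistic regression of fracture status on swelling (mm) plus weight-bearing status is one plausible form. A minimal sketch with synthetic placeholder data (not the study's data):

# Hedged sketch of a swelling-based fracture probability model.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[4, 1], [6, 0], [9, 0], [12, 0], [5, 1], [15, 0]])  # [swelling mm, weight-bearing (1 = yes)]
fracture = np.array([0, 0, 1, 1, 0, 1])                            # synthetic outcomes

model = LogisticRegression().fit(X, fracture)
print(model.predict_proba([[10, 0]])[:, 1])  # P(occult fracture | 10 mm, non-weight-bearing)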


Subject(s)
Ankle Fractures , Edema , Fractures, Closed , Radiography , Humans , Ankle Fractures/diagnostic imaging , Retrospective Studies , Male , Female , Child , Edema/diagnostic imaging , Fractures, Closed/diagnostic imaging , Adolescent , Emergency Service, Hospital , Weight-Bearing , Probability , Child, Preschool , Predictive Value of Tests
7.
Sci Rep ; 14(1): 10226, 2024 05 03.
Article in English | MEDLINE | ID: mdl-38702379

ABSTRACT

Tracheal pooling for Mycoplasma hyopneumoniae (M. hyopneumoniae) DNA detection allows for decreased diagnostic cost, one of the main constraints in surveillance programs. The objectives of this study were to estimate the sensitivity of pooled-sample testing for the detection of M. hyopneumoniae in tracheal samples and to develop probability-of-detection estimates for tracheal samples pooled by 3, 5, and 10. A total of 48 M. hyopneumoniae PCR-positive field samples were each pooled by 3, 5, and 10 using M. hyopneumoniae DNA-negative field samples and tested in triplicate. The sensitivity was estimated at 0.96 (95% credible interval [Cred. Int.]: 0.93, 0.98) for pools of 3, 0.95 (95% Cred. Int.: 0.92, 0.98) for pools of 5, and 0.93 (95% Cred. Int.: 0.89, 0.96) for pools of 10. All pool sizes tested PCR-positive whenever the individual tracheal sample Ct value was < 33. Additionally, there was no significant decrease in the probability of detecting at least one M. hyopneumoniae-infected pig for any pool size (3, 5, or 10) of tracheal swabs. Furthermore, this manuscript applies the probability-of-detection estimates to various real-life diagnostic testing scenarios. Combining an increased total number of animals sampled with pooling can be a cost-effective way to maximize the performance of M. hyopneumoniae surveillance programs.
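Under a simple binomial model (my assumption; the study used a Bayesian estimation approach), the probability of detecting at least one infected pig from N tracheal samples tested in pools of size k can be sketched using the pooled-test sensitivities estimated above:

# Detection probability for pooled surveillance under a binomial model.
def p_detect(n_samples: int, pool_size: int, prevalence: float, pool_se: float) -> float:
    n_pools = n_samples // pool_size
    p_pool_has_positive = 1 - (1 - prevalence) ** pool_size
    p_pool_tests_positive = pool_se * p_pool_has_positive
    return 1 - (1 - p_pool_tests_positive) ** n_pools

# 30 samples, 10% prevalence, sensitivities from the study: detection stays
# roughly constant across pool sizes, consistent with the abstract.
for k, se in [(3, 0.96), (5, 0.95), (10, 0.93)]:
    print(k, round(p_detect(30, k, 0.10, se), 3))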


Subject(s)
Mycoplasma hyopneumoniae , Pneumonia of Swine, Mycoplasmal , Trachea , Mycoplasma hyopneumoniae/isolation & purification , Mycoplasma hyopneumoniae/genetics , Animals , Trachea/microbiology , Swine , Pneumonia of Swine, Mycoplasmal/diagnosis , Pneumonia of Swine, Mycoplasmal/microbiology , Polymerase Chain Reaction/methods , DNA, Bacterial/analysis , Sensitivity and Specificity , Specimen Handling/methods , Probability
8.
PLoS One ; 19(5): e0299255, 2024.
Article in English | MEDLINE | ID: mdl-38722923

ABSTRACT

Despite the huge importance that centrality metrics have in understanding the topology of a network, too little is known about the effects that small alterations in the topology of the input graph induce in the norm of the vector that stores the node centralities. If these effects are small, it could be possible to avoid re-calculating the vector of centrality metrics whenever minimal changes occur in the network topology, which would allow for significant computational savings. Hence, after formalising the notion of centrality, three of the most basic metrics were considered (i.e., Degree, Eigenvector, and Katz centrality). To perform the simulations, two probabilistic failure models were used to describe alterations in network topology: Uniform (i.e., each node can be independently deleted from the network with a fixed probability) and Best Connected (i.e., the probability that a node is removed depends on its degree). Our analysis suggests that small variations in the topology of the input graph induce small variations in Degree centrality, independently of the topological features of the input graph; conversely, both Eigenvector and Katz centralities can be extremely sensitive to changes in the topology of the input graph. In other words, if the input graph has certain specific features, even small changes in its topology can have catastrophic effects on the Eigenvector or Katz centrality.
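A minimal sketch of the Uniform failure model described above (the topology and deletion probability are illustrative): delete each node independently with a fixed probability, then compare the norms of the Degree and Eigenvector centrality vectors before and after.

# Uniform node-failure perturbation of centrality vectors.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(200, 2, seed=0)   # an assumed test topology
q = 0.02                                        # per-node deletion probability

survivors = [v for v in G if rng.random() > q]
H = G.subgraph(survivors)

for name, cent in [("degree", nx.degree_centrality),
                   ("eigenvector", nx.eigenvector_centrality_numpy)]:
    before = np.array(sorted(cent(G).values()))
    after = np.array(sorted(cent(H).values()))
    # compare vector norms rather than node-by-node values (sizes differ)
    print(name, np.linalg.norm(before), np.linalg.norm(after))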


Subject(s)
Algorithms , Computer Simulation , Models, Theoretical , Models, Statistical , Probability
9.
J Oleo Sci ; 73(5): 675-681, 2024.
Article in English | MEDLINE | ID: mdl-38692891

ABSTRACT

Protein soils must be removed for both appearance and hygienic reasons. They are denatured by heat treatment or bleaching and are cleaned using enzymes. Among the various types of protein soils, blood soils are the most noticeable and are known to be denatured by heat and by oxidative bleaching. We verified herein that the detergency of heat- and oxidatively denatured hemoglobin is greatly improved by an enzyme immersion treatment before washing with SDS, and that the process can be analyzed using the probability density functional method. The probability density functional method evaluates cleaning power by assuming that the adhesion and cleaning forces of soils are not uniquely determined but instead follow intensity distributions, an approach whose usefulness was recently demonstrated. This analysis showed that the cleaning power of the enzyme immersion treatment improved because the soil adhesive force decreased as the denatured protein was degraded, even though the cleaning power of the SDS itself remained unchanged; the values were consistent with those from the cleaning test. In conclusion, the probability density functional method can be used to analyze the enzymatic degradation of denatured protein soils and the resulting changes in their detergency.


Subject(s)
Protein Denaturation , Sodium Dodecyl Sulfate/chemistry , Oxidation-Reduction , Hot Temperature , Hemoglobins/chemistry , Soil/chemistry , Probability
10.
BMC Med Res Methodol ; 24(1): 116, 2024 May 18.
Article in English | MEDLINE | ID: mdl-38762731

ABSTRACT

BACKGROUND: Extended illness-death models (a specific class of multistate models) are a useful tool to analyse situations like hospital-acquired infections, ventilator-associated pneumonia, and transfers between hospitals. The main components of these models are hazard rates and transition probabilities. Calculation of different measures and their interpretation can be challenging due to their complexity. METHODS: By assuming time-constant hazards, the complexity of these models becomes manageable and closed mathematical forms for transition probabilities can be derived. Using these forms, we created a tool in R to visualize transition probabilities via stacked probability plots. RESULTS: In this article, we present this tool and give some insights into its theoretical background. Using published examples, we give guidelines on how this tool can be used. Our goal is to provide an instrument that helps obtain a deeper understanding of a complex multistate setting. CONCLUSION: While multistate models (in particular extended illness-death models) can be highly complex, this tool can be used in studies both to understand assumptions made during planning and as a first step in analysing complex data structures. An online version of this tool can be found at https://eidm.imbi.uni-freiburg.de/ .
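As an illustration of the closed forms that time-constant hazards permit, consider a simple illness-death model with states 0 (initial), 1 (intermediate), and 2 (absorbing). A minimal sketch with assumed hazards (not the tool's R code):

# Closed-form transition probabilities under constant hazards.
import numpy as np

l01, l02, l12 = 0.10, 0.02, 0.30    # assumed constant transition hazards

def p00(t): return np.exp(-(l01 + l02) * t)
def p01(t): return l01 * (np.exp(-(l01 + l02) * t) - np.exp(-l12 * t)) / (l12 - l01 - l02)
def p02(t): return 1.0 - p00(t) - p01(t)

t = 5.0
print(p00(t), p01(t), p02(t))       # the layers of a stacked probability plot at time t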


Subject(s)
Probability , Humans , Cross Infection/prevention & control , Cross Infection/epidemiology , Models, Statistical , Proportional Hazards Models , Pneumonia, Ventilator-Associated/mortality , Pneumonia, Ventilator-Associated/epidemiology , Pneumonia, Ventilator-Associated/prevention & control , Mobile Applications/statistics & numerical data , Algorithms
11.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38768225

ABSTRACT

Conventional supervised learning usually operates under the premise that data are collected from the same underlying population. However, challenges may arise when integrating new data from different populations, resulting in a phenomenon known as dataset shift. This paper focuses on prior probability shift, where the distribution of the outcome varies across datasets but the conditional distribution of features given the outcome remains the same. To tackle the challenges posed by such shift, we propose an estimation algorithm that can efficiently combine information from multiple sources. Unlike existing methods that are restricted to discrete outcomes, the proposed approach accommodates both discrete and continuous outcomes. It also handles high-dimensional covariate vectors through variable selection using an adaptive least absolute shrinkage and selection operator (LASSO) penalty, producing efficient estimates that possess the oracle property. Moreover, a novel semiparametric likelihood ratio test is proposed to check the validity of prior probability shift assumptions by embedding the null conditional density function into Neyman's smooth alternatives (Neyman, 1937) and testing study-specific parameters. We demonstrate the effectiveness of our proposed method through extensive simulations and a real data example. The proposed methods serve as a useful addition to the repertoire of tools for dealing with dataset shifts.


Subject(s)
Algorithms , Computer Simulation , Models, Statistical , Probability , Humans , Likelihood Functions , Biometry/methods , Data Interpretation, Statistical , Supervised Machine Learning
12.
PLoS One ; 19(5): e0297792, 2024.
Article in English | MEDLINE | ID: mdl-38722936

ABSTRACT

Intuitively, combining multiple sources of evidence should lead to more accurate decisions than considering single sources of evidence individually. In practice, however, the proper computation may be difficult, or may require additional data that are inaccessible. Here, based on the concept of conditional independence, we consider expressions that can serve either as recipes for integrating evidence based on limited data, or as statistical benchmarks for characterizing evidence integration processes. Consider three events, A, B, and C. We find that, if A and B are conditionally independent with respect to C, then the probability that C occurs given that both A and B are known, P(C|A, B), can be easily calculated without the need to measure the full three-way dependency between A, B, and C. This simplified approach can be used in two general ways: to generate predictions by combining multiple (conditionally independent) sources of evidence, or to test whether separate sources of evidence are functionally independent of each other. These applications are demonstrated with four computer-simulated examples, which include detecting a disease based on repeated diagnostic testing, inferring biological age based on multiple biomarkers of aging, discriminating two spatial locations based on multiple cue stimuli (multisensory integration), and examining how behavioral performance in a visual search task depends on selection histories. Besides providing a sound prescription for predicting outcomes, this methodology may be useful for analyzing experimental data of many types.
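A minimal sketch of the recipe described above: when A and B are conditionally independent given C (and given not-C), the combined posterior P(C|A, B) follows from the two pairwise posteriors and the prior by multiplying posterior odds, so the full three-way dependency never needs to be measured. Function names and numbers are illustrative:

# Combine two conditionally independent sources of evidence via posterior odds:
# O(C|A,B) = O(C|A) * O(C|B) / O(C).
def posterior_both(p_c_given_a: float, p_c_given_b: float, p_c: float) -> float:
    odds = lambda p: p / (1.0 - p)
    o = odds(p_c_given_a) * odds(p_c_given_b) / odds(p_c)
    return o / (1.0 + o)

# Two diagnostic tests that each alone give a 60% probability of disease,
# against a 10% prior, combine to a much higher probability:
print(posterior_both(0.60, 0.60, 0.10))   # approx. 0.95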


Subject(s)
Computer Simulation , Humans , Probability , Models, Statistical , Aging/physiology
13.
J Speech Lang Hear Res ; 67(5): 1490-1513, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38573844

ABSTRACT

PURPOSE: Children with developmental language disorder (DLD) tend to interpret noncanonical sentences like passives using event probability (EP) information regardless of structure (e.g., by interpreting "The dog was chased by the squirrel" as "The dog chased the squirrel"). Verbs are a major source of EP information in adults and children with typical development (TD), who know that "chase" implies an unequal relationship among participants. Individuals with DLD have poor verb knowledge and verb-based sentence processing. Yet, they also appear to rely more on EP information than their peers. This paradox raises two questions: (a) How do children with DLD use verb-based EP information alongside other information in online passive sentence interpretation? (b) How does verb vocabulary knowledge support EP information use? METHOD: We created novel EP biases by showing animations of agents with consistent action tendencies (e.g., clumsy vs. helpful actions). We then used eye tracking to examine how this EP information was used during online passive sentence processing. Participants were 4- to 5-year-old children with DLD (n = 20) and same-age peers with TD (n = 20). RESULTS: In Experiment 1, children with DLD quickly integrated verb-based EP information with morphosyntax close to the verb but failed to do so with distant morphosyntax. In Experiment 2, the quality of children's sentence-specific verb vocabulary knowledge was positively associated with the use of EP information in both groups. CONCLUSION: Depending on the morphosyntactic context, children with DLD and TD used EP information differently, but verb vocabulary knowledge aided its use. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.25491805.


Subject(s)
Language Development Disorders , Vocabulary , Humans , Female , Male , Child, Preschool , Language Development Disorders/psychology , Child Language , Probability , Eye-Tracking Technology , Comprehension
14.
Environ Monit Assess ; 196(5): 482, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38683463

ABSTRACT

Flooding of the Damodar River is well known to the riverine society of the basin and to eastern India as a whole. The study aims to estimate the spatio-temporal probability of floods and identify susceptible zones in the Lower Damodar Basin (LDB). A flood frequency analysis of an approximately 90-year hydrological series is performed using the Log-Pearson Type III model. The frequency ratio model has also been applied to determine the spatial context of flooding. This reveals the extent to which the LDB could be inundated in response to peak discharge conditions, especially during the monsoon season. The findings indicate that 36.64% of the LDB falls under the high to very high flood susceptibility categories, revealing an increasing downstream flood vulnerability trend. Hydro-geomorphic factors contribute substantially to the susceptibility of the LDB to high-magnitude floods. A significant shift in flood recurrence intervals, from biennial occurrences in the pre-dam period to decadal or vicennial occurrences in the post-dam period, is observed. Despite a reduction in high-magnitude flood incidents due to dam and barrage construction, irregular flood events persist. Holistically, the effects of flooding in the LDB region are both positive and negative. The analytical results of this research could serve to identify flood-prone zones and guide the development of flood resilience policies, thereby promoting sustainability within the LDB floodplain.
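A minimal sketch of a Log-Pearson Type III frequency analysis (synthetic annual peak discharges, not the LDB series): fit a Pearson Type III distribution to the log-transformed peaks and read off return-period quantiles.

# Log-Pearson Type III flood frequency analysis on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
peaks = rng.lognormal(mean=8.0, sigma=0.5, size=90)   # 90 years of annual peaks (synthetic)

logq = np.log10(peaks)
skew, loc, scale = stats.pearson3.fit(logq)           # fit Pearson III to log-discharge

for T in (2, 10, 50, 100):                            # return periods in years
    p_nonexceed = 1.0 - 1.0 / T
    q_T = 10 ** stats.pearson3.ppf(p_nonexceed, skew, loc=loc, scale=scale)
    print(f"{T}-year flood: {q_T:,.0f} m^3/s")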


Subject(s)
Environmental Monitoring , Floods , Rivers , India , Environmental Monitoring/methods , Rivers/chemistry , Probability , Spatio-Temporal Analysis , Hydrology
15.
Nature ; 629(8012): 624-629, 2024 May.
Article in English | MEDLINE | ID: mdl-38632401

ABSTRACT

The cost of drug discovery and development is driven primarily by failure [1], with only about 10% of clinical programmes eventually receiving approval [2-4]. We previously estimated that human genetic evidence doubles the success rate from clinical development to approval [5]. In this study we leverage the growth in genetic evidence over the past decade to better understand the characteristics that distinguish clinical success and failure. We estimate the probability of success for drug mechanisms with genetic support is 2.6 times greater than those without. This relative success varies among therapy areas and development phases, and improves with increasing confidence in the causal gene, but is largely unaffected by genetic effect size, minor allele frequency or year of discovery. These results indicate we are far from reaching peak genetic insights to aid the discovery of targets for more effective drugs.


Subject(s)
Clinical Trials as Topic , Drug Approval , Drug Discovery , Treatment Outcome , Humans , Alleles , Clinical Trials as Topic/economics , Clinical Trials as Topic/statistics & numerical data , Drug Approval/economics , Drug Discovery/economics , Drug Discovery/methods , Drug Discovery/statistics & numerical data , Drug Discovery/trends , Gene Frequency , Genetic Predisposition to Disease , Molecular Targeted Therapy , Probability , Time Factors , Treatment Failure
16.
Cogn Sci ; 48(4): e13436, 2024 04.
Article in English | MEDLINE | ID: mdl-38564245

ABSTRACT

We report the results of one visual-world eye-tracking experiment and two referent selection tasks in which we investigated the effects of information structure in the form of prosody and word order manipulation on the processing of subject pronouns er and der in German. Factors such as subjecthood, focus, and topicality, as well as order of mention have been linked to an increased probability of certain referents being selected as the pronoun's antecedent and described as increasing this referent's prominence, salience, or accessibility. The goal of this study was to find out whether pronoun processing is primarily guided by linguistic factors (e.g., grammatical role) or nonlinguistic factors (e.g., first-mention), and whether pronoun interpretation can be described in terms of referents' "prominence" / "accessibility" / "salience." The results showed an overall subject preference for er, whereas der was affected by the object role and focus marking. While focus increases the attentional load and enhances memory representation for the focused referent making the focused referent more available, ultimately it did not affect the final interpretation of er, suggesting that "prominence" or the related concepts do not explain referent selection preferences. Overall, the results suggest a primacy of linguistic factors in determining pronoun resolution.


Subject(s)
Emotions , Linguistics , Male , Humans , Eye-Tracking Technology , Probability
17.
Cogn Sci ; 48(4): e13437, 2024 04.
Article in English | MEDLINE | ID: mdl-38564270

ABSTRACT

Statistical learning enables humans to involuntarily process and utilize different kinds of patterns from the environment. However, the cognitive mechanisms underlying the simultaneous acquisition of multiple regularities from different perceptual modalities remain unclear. A novel multidimensional serial reaction time task was developed to test 40 participants' ability to learn simple first-order and complex second-order relations between uni-modal visual and cross-modal audio-visual stimuli. Using the difference in reaction times between sequenced and random stimuli as the index of domain-general statistical learning, a significant difference and dissociation of learning occurred between the initial and final learning phases. Furthermore, we used negative and positive correlations between occurrence frequency and reaction time to indicate implicit and explicit learning, respectively, and found that learning simple uni-modal patterns involved an implicit-to-explicit segue, while acquiring complex cross-modal patterns required an explicit-to-implicit segue, resulting in an X-shaped crossing of regularity learning. Thus, we propose an X-way hypothesis to elucidate the dynamic interplay between the implicit and explicit systems at two distinct stages of acquiring various regularities in a multidimensional probability space.


Subject(s)
Learning , Humans , Probability , Reaction Time
18.
Environ Geochem Health ; 46(5): 165, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38592368

ABSTRACT

Soil pollution around Pb-Zn smelters has attracted widespread attention around the world. In this study, we compiled a database of eight potentially toxic elements (PTEs; Pb, Zn, Cd, As, Cr, Ni, Cu, and Mn) in the soil of Pb-Zn smelting areas by screening research papers published from 2000 to 2023. Pollution assessment and risk screening of the eight PTEs were carried out using the geo-accumulation index (Igeo), the potential ecological risk index (PERI), and a health risk assessment model, with Monte Carlo simulation employed to further evaluate the probabilistic health risks. The results suggested that the mean values of all eight PTEs exceeded the corresponding values in the upper crust, and more than 60% of the study sites had serious Pb and Cd pollution (Igeo > 4), with Brazil, Belgium, China, France, and Slovenia having higher levels of pollution than other regions. In addition, PTEs in smelting areas caused serious ecological risk (PERI = 10912.12), with Cd the main contributor to PERI (86.02%). The average hazard index (HI) of the eight PTEs for adults and children was 7.19 and 9.73, respectively, and the average total carcinogenic risk (TCR) was 4.20 × 10⁻³ and 8.05 × 10⁻⁴, respectively. Pb and As are the main contributors to non-carcinogenic risk, while Cu and As are the main contributors to carcinogenic risk. The probability of non-carcinogenic risk in adults and children was 84.05% and 97.57%, while that of carcinogenic risk was 92.56% and 79.73%, respectively. In summary, there are high ecological and health risks from PTEs in the soil of Pb-Zn smelting areas, and Pb, Cd, As, and Cu are the key elements causing contamination and risk, which require attention and control. This study is expected to provide guidance for soil remediation in Pb-Zn smelting areas.
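A minimal sketch of two of the indices used above. Igeo follows the standard formula Igeo = log2(Cn / (1.5 Bn)); the Monte Carlo step propagates uncertainty in exposure parameters into a hazard quotient, echoing the probabilistic assessment. All concentrations, exposure parameters, and the reference dose are illustrative assumptions, not the study's values.

# Geo-accumulation index and a toy Monte Carlo hazard-quotient calculation.
import numpy as np

def igeo(conc: float, background: float) -> float:
    return np.log2(conc / (1.5 * background))

print(igeo(conc=500.0, background=17.0))    # illustrative Pb case; Igeo > 4 = heavily polluted

rng = np.random.default_rng(0)
n = 100_000
conc = rng.lognormal(np.log(500.0), 0.4, n)        # soil Pb, mg/kg (assumed distribution)
ing_rate = rng.triangular(50, 100, 200, n) * 1e-6  # soil ingestion, kg/day (assumed)
add = conc * ing_rate / 70.0                       # average daily dose for a 70 kg adult
hq = add / 3.5e-3                                  # reference dose, mg/kg-day (assumed)
print((hq > 1).mean())                             # simulated probability of non-carcinogenic risk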


Subject(s)
Cadmium , Lead , Adult , Child , Humans , Lead/toxicity , Carcinogenesis , Carcinogens , Environmental Pollution , Probability , Risk Assessment , Soil , Zinc
19.
Stat Med ; 43(13): 2672-2694, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38622063

ABSTRACT

Propensity score methods, such as inverse probability-of-treatment weighting (IPTW), have been increasingly used for covariate balancing in both observational studies and randomized trials, allowing the control of both systematic and chance imbalances. Approaches using IPTW are based on two steps: (i) estimation of the individual propensity scores (PS), and (ii) estimation of the treatment effect by applying the PS weights. Thus, a variance estimator that accounts for both steps is crucial for correct inference. Using a variance estimator which ignores the first step leads to overestimated variance when the estimand is the average treatment effect (ATE), and to under- or overestimated variance when targeting the average treatment effect on the treated (ATT). In this article, we emphasize the importance of using an IPTW variance estimator that correctly accounts for the uncertainty in PS estimation. We present a comprehensive tutorial for obtaining unbiased variance estimates, proposing and applying a unifying formula for different types of PS weights (ATE, ATT, matching, and overlap weights). This formula can be derived either via the linearization approach or via M-estimation. Extensive R code is provided along with the corresponding large-sample theory. We perform simulation studies to illustrate the behavior of the estimators under different treatment and outcome prevalences and demonstrate the appropriate behavior of the analytical variance estimator. We also use a reproducible analysis of observational lung cancer data as an illustrative example, estimating the effect of receiving a PET-CT scan on the receipt of surgery.
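A minimal sketch of the two IPTW steps described above, in Python rather than the paper's R code: (i) estimate propensity scores, (ii) weight outcomes to estimate the ATE. The proper variance estimator that accounts for step (i) is the subject of the tutorial and is not reproduced here; the data-generating process below is illustrative.

# Two-step IPTW point estimation of the ATE on simulated data (true ATE = 1).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))
ps_true = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
a = rng.binomial(1, ps_true)                     # treatment assignment
y = 1.0 * a + x[:, 0] + rng.normal(size=n)       # outcome with treatment effect 1

ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]   # step (i)
w = a / ps + (1 - a) / (1 - ps)                              # ATE weights, step (ii)
ate = np.average(y[a == 1], weights=w[a == 1]) - np.average(y[a == 0], weights=w[a == 0])
print(ate)                                                   # close to 1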


Subject(s)
Propensity Score , Humans , Observational Studies as Topic , Computer Simulation , Probability , Randomized Controlled Trials as Topic , Models, Statistical , Lung Neoplasms
20.
J Health Econ ; 95: 102875, 2024 May.
Article in English | MEDLINE | ID: mdl-38598916

ABSTRACT

This paper assesses analytical strategies that respect the bounded-count nature of health outcomes encountered often in empirical applications. Absent in the literature is a comprehensive discussion and critique of strategies for analyzing and understanding such data. The paper's goal is to provide an in-depth consideration of prominent issues arising in and strategies for undertaking such analyses, emphasizing the merits and limitations of various analytical tools empirical researchers may contemplate. Three main topics are covered. First, bounded-count health outcomes' measurement properties are reviewed and their implications assessed. Second, issues arising when bounded-count outcomes are the objects of concern in evaluations are described. Third, the (conditional) probability and moment structures of bounded-count outcomes are derived and corresponding specification and estimation strategies presented with particular attention to partial effects. Many questions may be asked of such data in health research and a researcher's choice of analytical method is often consequential.


Subject(s)
Outcome Assessment, Health Care , Humans , Data Interpretation, Statistical , Models, Statistical , Probability