Results 1 - 20 of 309
1.
Cortex ; 177: 130-149, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38852224

ABSTRACT

Although event-related potential (ERP) research on language processing has capitalized on key, theoretically influential components such as the N400 and P600, their measurement properties, especially the variability in their temporal and spatial parameters, have rarely been examined. The current study examined the measurement properties of the N400 and P600 effects elicited by semantic and syntactic anomalies, respectively, during sentence processing. We used a bootstrap resampling procedure to randomly draw many thousands of resamples varying in sample size and stimulus count from a larger sample of 187 participants and 40 stimulus sentences of each type per condition. Our resampling investigation focused on three issues: (a) statistical power; (b) variability in the magnitudes of the effects; and (c) variability in the temporal and spatial profiles of the effects. At the level of grand averages, the N400 and P600 effects were both robust and substantial. However, across resamples, there was a high degree of variability in effect magnitudes, onset times, and scalp distributions, which may be greater than is currently appreciated in the literature, especially for the P600 effects. These results provide a useful basis for designing future studies using these two well-established ERP components. At the same time, the results also highlight challenges that need to be addressed in future research (e.g., how best to analyze the ERP data without engaging in such questionable research practices as p-hacking).
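The participant-and-stimulus resampling scheme described in this abstract can be sketched as follows. This is a minimal illustration on synthetic condition-difference amplitudes with invented effect sizes and a simple one-sample t-test as the significance criterion; it is not the authors' pipeline, and the numbers of participants and stimuli are the only values taken from the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "full sample": 187 participants x 40 items of condition-difference
# amplitudes (e.g., anomalous minus control, in microvolts). All values invented.
n_participants, n_items = 187, 40
true_effect = 2.0                      # hypothetical mean effect (uV)
subj_sd, item_sd, noise_sd = 1.5, 1.0, 4.0
subj_dev = rng.normal(0, subj_sd, n_participants)
item_dev = rng.normal(0, item_sd, n_items)
data = (true_effect + subj_dev[:, None] + item_dev[None, :]
        + rng.normal(0, noise_sd, (n_participants, n_items)))

def resample_once(data, n_subj, n_stim, rng):
    """Draw a bootstrap resample of participants and stimuli and test the effect."""
    subj_idx = rng.choice(data.shape[0], size=n_subj, replace=True)
    item_idx = rng.choice(data.shape[1], size=n_stim, replace=True)
    per_subject = data[np.ix_(subj_idx, item_idx)].mean(axis=1)
    t, p = stats.ttest_1samp(per_subject, 0.0)
    return per_subject.mean(), p

for n_subj, n_stim in [(20, 20), (40, 40), (80, 40)]:
    results = [resample_once(data, n_subj, n_stim, rng) for _ in range(2000)]
    effects = np.array([r[0] for r in results])
    power = np.mean([r[1] < 0.05 for r in results])
    print(f"N={n_subj:3d}, stimuli={n_stim:2d}: power={power:.2f}, "
          f"mean effect={effects.mean():.2f} uV, SD across resamples={effects.std():.2f}")
```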

2.
Sensors (Basel) ; 24(12)2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38931715

ABSTRACT

Lithium, a critical natural resource integral to modern technology, has influenced diverse industries since it rose to industrial prominence in the mid-twentieth century. Of particular interest is lithium-7, the most prevalent lithium isotope on Earth, which plays a vital role in applications such as batteries, metal alloys, medicine, and nuclear research. However, its extraction presents significant environmental and logistical challenges. This article explores the potential for lithium exploration on the Moon, driven by its value as a resource and by the prospect of cost reductions afforded by the Moon's lower gravity, which holds promise for future space exploration endeavors. The presence of lithium in the solar wind and its implications for material transport across celestial bodies are also of interest. Drawing on a limited dataset collected during the Apollo missions (Apollo 12, 15, 16, and 17) and leveraging artificial intelligence techniques and sample expansion through bootstrapping, this study develops predictive models for lithium-7 concentration based on spectral patterns. The study areas encompass the Aitken crater, Hadley Rima, and the Taurus-Littrow Valley, with higher lithium concentrations observed in basaltic lunar regions. This research bridges lunar geology and the formation of the solar system, providing insights into celestial resources and enhancing our understanding of space. The data used in this study were obtained from the imaging sensors (infrared, visible, and ultraviolet) of the Clementine satellite. The study also addresses statistical analysis, sample-quality validation, resampling and bootstrapping, and the training and validation of supervised machine learning models. Analysis of the Clementine data in the near-infrared (NIR) and ultraviolet-visible (UVVIS) spectra revealed evidence of the presence of lithium-7 (Li-7) on the lunar surface. The distribution of Li-7 on the lunar surface is non-uniform, with concentrations varying across regions of the Moon, supporting the initial hypothesis associating surface Li-7 concentration with exposure to the solar wind. While a direct numerical relationship between lunar topography and Li-7 concentration has not been established, owing to morphological diversity and methodological limitations, preliminary results suggest significant economic and technological potential in lunar lithium exploration and extraction.
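A hedged sketch of the general workflow the abstract describes (bootstrap-based sample expansion of a small training set followed by supervised prediction of Li-7 concentration from spectral features): all features, values, and the choice of a random-forest regressor are invented placeholders, not the authors' models or Clementine data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Tiny invented stand-in for Apollo-site training data: reflectance ratios from
# Clementine-style UVVIS/NIR bands (columns) and Li-7 concentration in ppm (target).
X_small = rng.uniform(0.0, 1.0, size=(12, 5))            # 12 samples, 5 band ratios
y_small = 10 + 25 * X_small[:, 0] - 8 * X_small[:, 3] + rng.normal(0, 1.5, 12)

def bootstrap_ensemble_predict(X, y, X_new, n_boot=200):
    """Bootstrap 'sample expansion': fit one model per resample, aggregate predictions."""
    preds = np.empty((n_boot, len(X_new)))
    for b in range(n_boot):
        idx = rng.choice(len(X), size=len(X), replace=True)
        model = RandomForestRegressor(n_estimators=50, random_state=b)
        model.fit(X[idx], y[idx])
        preds[b] = model.predict(X_new)
    # Mean prediction plus a percentile interval reflecting resampling variability.
    return preds.mean(axis=0), np.percentile(preds, [2.5, 97.5], axis=0)

X_new = rng.uniform(0.0, 1.0, size=(3, 5))               # new spectra to score
mean_pred, (lo, hi) = bootstrap_ensemble_predict(X_small, y_small, X_new)
for m, l, h in zip(mean_pred, lo, hi):
    print(f"predicted Li-7: {m:5.1f} ppm  (95% bootstrap interval {l:5.1f}-{h:5.1f})")
```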

3.
Cureus ; 16(4): e59151, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38803738

ABSTRACT

Background: In applied sciences, statistical models are pivotal for uncovering relationships in complex datasets. The applied linear model establishes associative links between variables. While qualitative predictors are essential, their integration into linear models poses challenges. The dummy-variable approach transforms qualitative variables into binary ones for regression analysis. Multilayer feedforward neural networks (MLFFNNs) can be used to validate regression models, and fuzzy regression offers an alternative way to address the ambiguity of qualitative predictors. This study aims to enhance the integration of qualitative predictors in applied linear models through statistical methodologies. Material and methods: The study design involves the transformation of qualitative predictors into dummy variables, the bootstrapping technique to improve parameter estimates, a multilayer feedforward neural network, and fuzzy regression. The programming language R was used as the analysis tool. Results: The multiple linear regression model demonstrates precision and a significant fit (p<0.05), with an R-squared value of 0.95 and a mean square error (MSE) of 9.97. Comparing actual and predicted values, fuzzy regression exhibits superior predictability over linear regression. The MLFFNN yields a reduced network MSE of 0.362, indicating enhanced prediction precision for the derived models. Conclusion: This study presents a precise methodology for integrating qualitative variables into linear regression, supported by a combination of statistical methodologies that enhance predictive modeling. By integrating fuzzy linear regression, MLFFNNs, and bootstrapping, the proposed technique emerges as the most effective approach for modeling and prediction. These findings underscore the efficacy of this method in seamlessly integrating qualitative variables into linear models, ultimately enhancing accuracy and prediction capabilities.
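The dummy-variable coding and coefficient bootstrapping described above can be illustrated with a short sketch. The data, effect sizes, and use of scikit-learn in Python (rather than the authors' R code) are assumptions for illustration only; the fuzzy-regression and neural-network components are not reproduced here.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Invented data: one numeric predictor and one qualitative predictor with 3 levels.
n = 120
df = pd.DataFrame({
    "x": rng.normal(0, 1, n),
    "group": rng.choice(["A", "B", "C"], size=n),
})
effects = {"A": 0.0, "B": 2.0, "C": -1.0}
df["y"] = 3 + 1.5 * df["x"] + df["group"].map(effects) + rng.normal(0, 1, n)

# Dummy-variable coding: drop the first level to avoid collinearity with the intercept.
X = pd.get_dummies(df[["x", "group"]], columns=["group"], drop_first=True).astype(float)
names = list(X.columns)

# Bootstrap the coefficient estimates to get empirical intervals.
n_boot = 2000
coefs = np.empty((n_boot, X.shape[1]))
for b in range(n_boot):
    idx = rng.choice(n, size=n, replace=True)
    coefs[b] = LinearRegression().fit(X.iloc[idx], df["y"].iloc[idx]).coef_

for j, name in enumerate(names):
    lo, hi = np.percentile(coefs[:, j], [2.5, 97.5])
    print(f"{name:10s}  est={coefs[:, j].mean():6.2f}   95% bootstrap CI [{lo:6.2f}, {hi:6.2f}]")
```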

4.
Anal Chim Acta ; 1305: 342597, 2024 May 29.
Article in English | MEDLINE | ID: mdl-38677839

ABSTRACT

BACKGROUND: Measurement uncertainty is increasingly used in pure and applied analytical chemistry to support decision-making in commercial transactions and technical-scientific applications. Until recently, measurement uncertainty was assumed to reduce to analytical uncertainty; over the last two decades, however, the uncertainty arising from sampling has also been taken into account. Even so, the second version of the Eurachem guide, published in 2019, assumes that the frequency distribution is approximately normal or can be normalized through logarithmic transformation, without treating data that deviate from normality. RESULTS: Here, six examples (four from the Eurachem guide) were treated by classical ANOVA and submitted to an innovative nonparametric approach for estimating the uncertainty contribution arising from sampling. Based on the bootstrap method, confidence intervals were used to guarantee metrological compatibility between the uncertainty ratios obtained from the traditional parametric tests and those from the proposed nonparametric methodology. SIGNIFICANCE AND NOVELTY: The present study proposes an innovative methodology, based on nonparametric statistics (NONPANOVA) and median-absolute-deviation concepts, to fill this gap in the literature. Supplementary material based on Excel spreadsheets was developed to assist users in the statistical treatment of their own real examples.
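The following is a loose sketch, not the published NONPANOVA procedure: it shows one way a MAD-based (robust) decomposition of duplicate-design data into sampling and analytical components could be combined with bootstrap confidence intervals for the uncertainty ratio. The design layout, consistency factor, and decomposition formulas are standard textbook choices assumed here, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated duplicate design (the classic Eurachem layout): for each of 10 sampling
# targets, two samples are taken and each sample is analysed twice. Values invented.
n_targets = 10
true_value = 100.0
s_sampling, s_analysis = 5.0, 2.0
target_means = true_value + rng.normal(0, 8.0, n_targets)
samples = target_means[:, None] + rng.normal(0, s_sampling, (n_targets, 2))
results = samples[:, :, None] + rng.normal(0, s_analysis, (n_targets, 2, 2))

MAD_TO_SD = 1.4826  # consistency factor for normally distributed data

def robust_components(results):
    """MAD-based stand-ins for analytical and sampling standard deviations."""
    # Analytical: spread of duplicate analyses within each sample.
    analytical_diff = (results[:, :, 0] - results[:, :, 1]).ravel()
    s_anal = MAD_TO_SD * np.median(np.abs(analytical_diff - np.median(analytical_diff))) / np.sqrt(2)
    # Sampling: spread of duplicate sample means within each target, minus the analytical part.
    sample_means = results.mean(axis=2)
    sampling_diff = sample_means[:, 0] - sample_means[:, 1]
    s_between = MAD_TO_SD * np.median(np.abs(sampling_diff - np.median(sampling_diff))) / np.sqrt(2)
    s_samp = np.sqrt(max(s_between**2 - s_anal**2 / 2, 0.0))
    return s_samp, s_anal

# Bootstrap over sampling targets to get a confidence interval for the ratio
# of sampling to analytical uncertainty.
ratios = []
for _ in range(2000):
    idx = rng.choice(n_targets, size=n_targets, replace=True)
    s_samp, s_anal = robust_components(results[idx])
    ratios.append(s_samp / s_anal)
lo, hi = np.percentile(ratios, [2.5, 97.5])
print(f"u(sampling)/u(analysis): point={np.median(ratios):.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```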

5.
Lang Learn Dev ; 20(1): 19-39, 2024.
Article in English | MEDLINE | ID: mdl-38645571

ABSTRACT

To learn new words, particularly verbs, child learners have been shown to benefit from the linguistic contexts in which the words appear. However, cross-linguistic differences affect how this process unfolds. One previous study found that children's abilities to learn a new verb differed across Korean and English as a function of the sentence in which the verb occurred (Arunachalam et al., 2013). The authors hypothesized that the properties of word order and argument drop, which vary systematically in these two languages, were driving the differences. In the current study, we pursued this finding to ask whether the difference persists later in development, or whether children acquiring different languages come to appear more similar as their linguistic knowledge and learning capacities increase. Preschool-aged monolingual English learners (N = 80) and monolingual Korean learners (N = 64) were presented with novel verbs in contexts that varied in word order and argument drop, accompanied by visual stimuli. We assessed their learning by measuring accuracy in a forced-choice pointing task, and we measured eye gaze during the learning phase as an indicator of the processes by which they mapped the novel verbs to meaning. Unlike previous studies, which identified differences between English- and Korean-learning 2-year-olds in a similar task, our results revealed similarities between the two language groups with these older preschoolers. We interpret our results as evidence that over the course of early childhood, children become adept at learning from a larger variety of contexts, such that differences between learners of different languages are attenuated.

6.
Methods Mol Biol ; 2744: 375-390, 2024.
Article in English | MEDLINE | ID: mdl-38683332

ABSTRACT

DNA barcoding has largely established itself as a mainstay for rapid molecular taxonomic identification in both academic and applied research. The use of DNA barcoding as a molecular identification method depends on a "DNA barcode gap": the separation between the maximum within-species difference and the minimum between-species difference. Previous work indicates that the presence of a gap hinges on sampling effort for focal taxa and their close relatives. Furthermore, both theory and empirical work indicate that a gap may not occur for related pairs of biological species. Here, we present a novel evaluation approach in the form of an easily calculated set of nonparametric metrics to quantify the extent of proportional overlap in inter- and intraspecific distributions of pairwise differences among target species and their conspecifics. The metrics are based on a simple count of the number of overlapping records for a species falling within the bounds of the maximum intraspecific distance and the minimum interspecific distance. Our approach takes advantage of the asymmetric directionality inherent in pairwise genetic distance distributions, which has not been exploited previously in the DNA barcoding literature. We apply the metrics to the predatory diving beetle genus Agabus as a case study because this group poses significant identification challenges due to its morphological uniformity, despite both relative sampling ease and well-established taxonomy. Results show that target species and their nearest-neighbor species were tightly clustered and therefore difficult to distinguish. Such findings demonstrate that DNA barcoding can fail to fully resolve species in certain cases. Moving forward, we suggest that the proposed metrics be integrated into a common framework and reported in any study that uses DNA barcoding for identification. In so doing, the importance of the DNA barcode gap and its components for the success of DNA-based identification using DNA barcodes can be better appreciated.
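A sketch of the count-based overlap idea described above: for one target species, count how many intraspecific and interspecific distances fall between the maximum intraspecific and minimum interspecific distance, keeping the two directions separate. The metric names, example distances, and exact formulas here are illustrative assumptions, not the authors' published definitions.

```python
import numpy as np

def barcode_gap_overlap(intra, inter):
    """
    Count-based overlap metrics for one target species.

    intra: pairwise distances among conspecific records
    inter: pairwise distances from the target species to its nearest-neighbour species
    Returns the gap (min interspecific minus max intraspecific) and the proportions
    of intraspecific and interspecific distances lying inside the overlap zone
    (both proportions are 0 when a clean barcode gap exists).
    """
    intra, inter = np.asarray(intra, float), np.asarray(inter, float)
    max_intra, min_inter = intra.max(), inter.min()
    gap = min_inter - max_intra                      # positive => barcode gap present
    p_intra_overlap = np.mean(intra >= min_inter)    # conspecific pairs inside the overlap zone
    p_inter_overlap = np.mean(inter <= max_intra)    # heterospecific pairs inside the overlap zone
    return gap, p_intra_overlap, p_inter_overlap

# Invented K2P-style distances for a hypothetical Agabus-like case with no clean gap.
intra = [0.002, 0.004, 0.011, 0.015, 0.019, 0.022]
inter = [0.012, 0.016, 0.021, 0.030, 0.034]
gap, p_intra, p_inter = barcode_gap_overlap(intra, inter)
print(f"gap = {gap:.3f}  (negative means the distributions overlap)")
print(f"intraspecific records in overlap zone: {p_intra:.0%}")
print(f"interspecific records in overlap zone: {p_inter:.0%}")
```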


Subject(s)
DNA Barcoding, Taxonomic; DNA Barcoding, Taxonomic/methods; Animals; Coleoptera/genetics; Coleoptera/classification; DNA/genetics; DNA/analysis; Species Specificity
7.
J Exp Child Psychol ; 244: 105933, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38657522

ABSTRACT

Cheating is a pervasive unethical behavior. Existing research involving young children has mainly focused on contextual factors affecting cheating behavior, whereas cognitive factors have been relatively understudied. This study investigated the unique role of verbal and performance intelligence in young children's cheating behavior (N = 50; mean age = 5.73 years; 25 boys). Bootstrapped hierarchical logistic regression showed that children's Verbal IQ scores were significantly and negatively correlated with their cheating behavior above and beyond the contributions of age, gender, and Performance IQ scores. Children with higher Verbal IQ scores were less inclined to cheat. However, neither children's Performance IQ nor their Total IQ scores had a significant unique correlation with cheating. These findings suggest that intelligence plays a significant role in children's cheating but that this role is limited to verbal intelligence. In addition, this study highlights the need for researchers to go beyond contextual factors in studying the early development of cheating behavior.
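A sketch of the kind of bootstrapped logistic regression described above (resampling children and re-estimating the Verbal IQ coefficient after adjusting for age, gender, and Performance IQ). The data and effect sizes are synthetic, and the single-block model here ignores the hierarchical (blockwise-entry) aspect of the authors' analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Synthetic stand-in for the study's data (N = 50): age, gender, Performance IQ,
# Verbal IQ, and a binary cheating outcome. Effect sizes are invented.
n = 50
age = rng.uniform(4.5, 7.0, n)
gender = rng.integers(0, 2, n)                         # 0 = girl, 1 = boy
piq = rng.normal(100, 15, n)
viq = rng.normal(100, 15, n)
logit = 0.5 - 0.04 * (viq - 100) + 0.1 * (age - 5.7)   # higher Verbal IQ -> less cheating
cheated = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([age, gender, piq, viq]))

# Bootstrap the adjusted Verbal IQ coefficient (last column) by resampling children.
boot_coefs = []
for _ in range(1000):
    idx = rng.choice(n, size=n, replace=True)
    try:
        fit = sm.Logit(cheated[idx], X[idx]).fit(disp=0)
        boot_coefs.append(fit.params[-1])
    except Exception:
        continue                                       # skip resamples with separation issues
lo, hi = np.percentile(boot_coefs, [2.5, 97.5])
print(f"Verbal IQ coefficient: 95% bootstrap CI [{lo:.3f}, {hi:.3f}] "
      f"({'excludes' if hi < 0 or lo > 0 else 'includes'} zero)")
```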


Subject(s)
Intelligence; Humans; Male; Female; Child, Preschool; Child; Deception; Child Behavior/psychology; Verbal Behavior
8.
Front Psychol ; 15: 1304485, 2024.
Article in English | MEDLINE | ID: mdl-38440243

ABSTRACT

Syncopation, the occurrence of a musical event on a metrically weak position preceding a rest on a metrically strong position, represents an important challenge in the study of the mapping between rhythm and meter. In this contribution, we present the hypothesis that syncopation is an effective strategy to elicit the bootstrapping of a multi-layered, hierarchically organized metric structure from a linear rhythmic surface. The hypothesis is inspired by a parallel with the problem of linearization in natural language syntax, which is the problem of how hierarchically organized phrase-structure markers are mapped onto linear sequences of words. The hypothesis has important consequences for the role of meter in music perception and cognition and, more particularly, for its role in the relationship between rhythm and bodily entrainment.

9.
J Electromyogr Kinesiol ; 75: 102872, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38458102

ABSTRACT

The number of motor units included in calculations of mean firing rates varies widely in the literature. It is unknown how the number of decomposed motor units included in the calculation of firing rate per participant compares to the total number of active motor units in the muscle, and whether this differs between males and females. Bootstrapped distributions and confidence intervals (CIs) of mean motor unit firing rates decomposed from the tibialis anterior were used to represent the total number of active motor units for individual participants in trials from 20 to 100 % of maximal voluntary contraction. Bootstrapped distributions of mean firing rates were constructed using different numbers of motor units, from one to the maximum number for each participant, and compared to the CIs. A probability measure for each number of motor units included in the firing rate calculation was computed and then averaged across all individuals. The number of motor units required for similar levels of probability increased as contraction intensity increased (p < 0.001). Higher levels of probability also required larger numbers of motor units (p < 0.001). There was no effect of sex (p ≥ 0.97) for any comparison. This methodology should be repeated in other muscles and in older populations.
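The bootstrapping logic described above can be sketched as follows: build a bootstrap confidence interval for a participant's overall mean firing rate from all decomposed motor units, then estimate how often a mean based on only k units falls inside it. Firing-rate values and the number of decomposed units are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic mean firing rates (pulses per second) for the motor units decomposed
# from one participant at one contraction intensity. All values are invented.
unit_rates = rng.normal(14.0, 3.0, size=30)

# Bootstrap confidence interval for the participant's overall mean firing rate,
# treated here as the reference built from the full set of decomposed units.
boot_means = [rng.choice(unit_rates, size=unit_rates.size, replace=True).mean()
              for _ in range(5000)]
ci_lo, ci_hi = np.percentile(boot_means, [2.5, 97.5])

# For each subset size k, estimate the probability that a mean computed from only
# k randomly chosen units falls inside that reference interval.
for k in (1, 3, 5, 10, 20, unit_rates.size):
    subset_means = np.array([rng.choice(unit_rates, size=k, replace=False).mean()
                             for _ in range(5000)])
    prob = np.mean((subset_means >= ci_lo) & (subset_means <= ci_hi))
    print(f"{k:2d} motor units: P(mean within 95% CI) = {prob:.2f}")
```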


Subject(s)
Muscle Contraction; Muscle, Skeletal; Male; Female; Humans; Aged; Muscle, Skeletal/physiology; Muscle Contraction/physiology; Motor Neurons/physiology; Recruitment, Neurophysiological/physiology; Electromyography; Isometric Contraction/physiology
10.
Pharm Stat ; 2024 Mar 17.
Article in English | MEDLINE | ID: mdl-38494795

ABSTRACT

In vitro dissolution testing is a regulatory-required critical quality measure for solid-dose pharmaceutical drug products. Setting acceptance criteria that meet compendial requirements is necessary for a product to be filed and approved for marketing. Statistical approaches for analyzing dissolution data, setting specifications, and visualizing results can vary according to product requirements, a company's practices, and scientific judgement. This paper provides a general description of the steps taken in the evaluation and setting of in vitro dissolution specifications at release and on stability.

11.
Sensors (Basel) ; 24(6)2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38544148

ABSTRACT

Parkinson's disease is one of the major neurodegenerative diseases that affects the postural stability of patients, especially during gait initiation. There is currently an increasing demand for new non-pharmacological tools that can easily classify healthy/affected patients as well as the degree of evolution of the disease. The experimental characterization of gait initiation (GI) is usually done through the simultaneous acquisition of about 20 variables, resulting in very large datasets. Dimension-reduction tools are therefore suitable, considering the complexity of the physiological processes involved. Principal component analysis (PCA) is very powerful at reducing the dimensionality of large datasets and emphasizing correlations between variables. In this paper, PCA was enhanced with bootstrapping and applied to the study of GI to identify the three major sets of variables influencing the postural control disability of Parkinsonian patients during GI. We show that the combination of these methods can lead to a significant improvement in the unsupervised classification of healthy/affected patients using a Gaussian mixture model, since it leads to reduced confidence intervals on the estimated parameters. The benefits of this method for identifying and studying the efficiency of potential treatments are not addressed in this paper but could be explored in future work.
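A sketch of the combination described above: bootstrap the PCA to quantify the variability of its estimates, then feed the component scores to a two-component Gaussian mixture model for unsupervised healthy/affected classification. The feature set, group separation, and number of retained components are placeholder assumptions, not the authors' gait-initiation data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)

# Synthetic stand-in for gait-initiation features (rows = trials/participants,
# columns = ~20 biomechanical variables); two latent groups ("healthy"/"affected").
n_per_group, n_vars = 40, 20
healthy = rng.normal(0.0, 1.0, (n_per_group, n_vars))
affected = rng.normal(0.8, 1.2, (n_per_group, n_vars))
X = StandardScaler().fit_transform(np.vstack([healthy, affected]))

# Bootstrap the PCA to see how stable the explained-variance estimates are.
explained = np.array([
    PCA(n_components=3).fit(X[rng.choice(len(X), len(X), replace=True)]).explained_variance_ratio_
    for _ in range(1000)
])
for i in range(3):
    lo, hi = np.percentile(explained[:, i], [2.5, 97.5])
    print(f"PC{i+1} explained variance: 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")

# Unsupervised two-class Gaussian mixture model on the first three components.
scores = PCA(n_components=3).fit_transform(X)
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(scores)
print("cluster sizes:", np.bincount(labels))
```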


Subject(s)
Gait Disorders, Neurologic; Parkinson Disease; Humans; Principal Component Analysis; Confidence Intervals; Parkinson Disease/therapy; Gait/physiology; Postural Balance/physiology
12.
Eur J Pharm Sci ; 196: 106745, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38471596

ABSTRACT

f2, with or without bootstrapping, is the most common method to compare in vitro dissolution profiles, but methods to compare dissolution profiles have become less harmonized. The objective was to compare outcomes from bootstrap f2 and f2 (i.e., non-bootstrapped f2) using a large set of in vitro dissolution data. Non-parametric bootstrapping was performed on the 104 profile comparisons that did not meet the percent coefficient of variation (CV%) criteria for using average dissolution data. Bootstrap f2 was taken as the lower limit of the 90 % confidence interval of the bootstrapped f2 values. There was concordance between bootstrap f2 and f2 in 92 of the 104 comparisons (88 %). There were no false positives; however, 12 % were false negatives. Inspection of these discordant pairs suggests that bootstrap f2 serves as a conservative approach to assessing profile similarity, particularly when a value of 50 is used as the decision criterion.
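The f2 similarity factor and the bootstrap f2 described above can be sketched as follows, using the standard formula f2 = 50·log10(100/√(1 + mean squared difference)) on mean profiles and the 5th percentile of bootstrapped f2 values as the lower limit of the 90% confidence interval. The unit-level dissolution data are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def f2(ref_mean, test_mean):
    """Similarity factor f2 computed on mean dissolution profiles (% dissolved)."""
    ref_mean, test_mean = np.asarray(ref_mean, float), np.asarray(test_mean, float)
    msd = np.mean((ref_mean - test_mean) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# Invented unit-level dissolution data: 12 units per product, 4 time points (% dissolved).
ref = np.clip(rng.normal([35, 60, 80, 92], 4.0, size=(12, 4)), 0, 100)
test = np.clip(rng.normal([30, 55, 77, 90], 5.0, size=(12, 4)), 0, 100)

print(f"plain f2 on mean profiles: {f2(ref.mean(0), test.mean(0)):.1f}")

# Non-parametric bootstrap: resample units within each product, recompute f2,
# and take the lower limit of the 90% confidence interval as "bootstrap f2".
boot = []
for _ in range(5000):
    r = ref[rng.choice(12, 12, replace=True)].mean(axis=0)
    t = test[rng.choice(12, 12, replace=True)].mean(axis=0)
    boot.append(f2(r, t))
lower_90 = np.percentile(boot, 5.0)   # lower bound of a two-sided 90% interval
print(f"bootstrap f2 (5th percentile): {lower_90:.1f}  (similar if >= 50)")
```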

13.
J Food Prot ; 87(5): 100264, 2024 May.
Article in English | MEDLINE | ID: mdl-38493872

ABSTRACT

A surrogate is commonly used for process validations. The industry often sets the target log cycle reduction for the test microorganism (surrogate), LCRTest, equal to the desired log cycle reduction for the target microorganism (pathogen), LCRTarget. When the surrogate is too conservative, with far greater resistance than the pathogen, the food may be overprocessed, with quality and cost consequences. In aseptic processing, the Institute for Thermal Processing Specialists recommends using the relative resistance (DTarget)/(DTest) to calculate LCRTest (the product of LCRTarget and the relative resistance). This method uses the mean values of DTarget and DTest and does not consider estimation variability. We defined the kill ratio (KR) as the inverse of the relative resistance. The industry uses an extremely conservative KR of 1 in the validation of food processes for low-moisture foods, which ensures an adequate pathogen reduction but can result in quality degradation. This study suggests an approach based on bootstrap sampling to determine a conservative KR, leading to practical recommendations that consider experimental and biological variability in food matrices. Previously collected thermal inactivation kinetics data for Salmonella spp. (target organism) and Enterococcus faecium (test organism) in non-fat dried milk (NFDM) and whole milk powder (WMP) at 85, 90, and 95°C were used to calculate the mean KR. Bootstrapping was performed on mean inactivation rates to obtain a distribution of 1000 bootstrap KR values for each treatment. Based on the minimum temperature used in the industrial process and the acceptable level of risk (e.g., 1, 5, or 10% of samples that would not achieve LCRTest), a conservative KR value can be estimated. Consistently, KR increased with temperature, and KR for WMP was higher than for NFDM. Food industries may use this framework, based on the minimum processing temperature and acceptable level of risk, for process validations that minimize quality degradation.
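A sketch of the bootstrap approach to a risk-based kill ratio described above: resample replicate D-values for the pathogen and the surrogate, form the KR distribution, and read off a low percentile matching the acceptable level of risk. The D-values are invented and the percentile-direction convention is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Invented replicate D-values (minutes) at one temperature for the pathogen
# (Salmonella, "target") and the surrogate (E. faecium, "test") in a milk powder.
d_target = np.array([1.8, 2.1, 1.9, 2.3, 2.0])    # pathogen is less heat resistant
d_test = np.array([3.4, 3.9, 3.6, 4.1, 3.7])       # surrogate is more heat resistant

# Kill ratio as defined above: KR = D_test / D_target (inverse of the relative resistance).
point_kr = d_test.mean() / d_target.mean()

# Bootstrap both sets of replicates to build a KR distribution that reflects
# experimental and biological variability.
kr_boot = np.array([
    rng.choice(d_test, d_test.size, replace=True).mean()
    / rng.choice(d_target, d_target.size, replace=True).mean()
    for _ in range(1000)
])

# A conservative KR is a low percentile of this distribution: choosing the 5th
# percentile means roughly 5% of resampled KR values fall below the value used.
for risk in (1, 5, 10):
    print(f"acceptable risk {risk:2d}%: conservative KR = {np.percentile(kr_boot, risk):.2f}")
print(f"point-estimate KR = {point_kr:.2f}")
```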


Subject(s)
Colony Count, Microbial; Food Contamination; Food Microbiology; Hot Temperature; Humans; Food Contamination/analysis; Food Handling/methods; Consumer Product Safety; Kinetics
14.
Biom J ; 66(2): e2300063, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38519877

ABSTRACT

Variable selection is usually performed to increase interpretability, as sparser models are easier to understand than full models. However, a focus on sparsity is not always suitable, for example, when features are related due to contextual similarities or high correlations. Here, it may be more appropriate to identify groups and their predictive members, a task that can be accomplished with bi-level selection procedures. To investigate whether such techniques lead to increased interpretability, group exponential LASSO (GEL), sparse group LASSO (SGL), composite minimax concave penalty (cMCP), and, as a reference method, the least absolute shrinkage and selection operator (LASSO) were used to select predictors in time-to-event, regression, and classification tasks in bootstrap samples from a cohort of 1001 patients. Different groupings based on prior knowledge, correlation structure, and random assignment were compared in terms of selection relevance, group consistency, and collinearity tolerance. The results show that the bi-level selection methods are superior to LASSO on all criteria. cMCP demonstrated superiority in selection relevance, while SGL was convincing in group consistency. An all-round capacity was achieved by GEL: the approach jointly selected correlated and content-related predictors while maintaining high selection relevance. This method seems recommendable when variables are grouped and interpretation is of primary interest.
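The bootstrap evaluation scheme described above (selection frequency and group consistency across bootstrap samples) is sketched below. Because the bi-level penalties (GEL, SGL, cMCP) require dedicated packages (e.g., grpreg or SGL in R), plain LASSO, the paper's reference method, stands in here; the data, groupings, and all-or-none consistency summary are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)

# Synthetic cohort: 15 predictors in 3 content-related groups of 5; only group 0
# carries signal. Group labels stand in for "prior knowledge" groupings.
n, p = 300, 15
groups = np.repeat([0, 1, 2], 5)
X = StandardScaler().fit_transform(rng.normal(size=(n, p)))
beta = np.where(groups == 0, 1.0, 0.0)
y = X @ beta + rng.normal(0, 1, n)

# Bootstrap selection frequencies for the reference method (LASSO). Group consistency
# is summarised as how often a group is selected or dropped as a whole.
n_boot = 200
selected = np.zeros((n_boot, p), dtype=bool)
for b in range(n_boot):
    idx = rng.choice(n, n, replace=True)
    model = LassoCV(cv=5, random_state=0).fit(X[idx], y[idx])
    selected[b] = np.abs(model.coef_) > 1e-8

freq = selected.mean(axis=0)
for g in np.unique(groups):
    members = groups == g
    consistency = np.mean(selected[:, members].all(axis=1) | (~selected[:, members]).all(axis=1))
    print(f"group {g}: mean selection frequency={freq[members].mean():.2f}, "
          f"all-or-none group consistency={consistency:.2f}")
```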

15.
Front Public Health ; 12: 1250343, 2024.
Article in English | MEDLINE | ID: mdl-38525341

ABSTRACT

Background: The COVID-19 pandemic has proved deadly all over the globe; however, one of the most lethal outbreaks occurred in Ecuador. Aims: This study aims to highlight the pandemic's impact on the most affected countries worldwide in terms of excess deaths per capita and per day. Methods: An ecological study of all-cause mortality recorded in Ecuador was performed. To calculate the excess deaths relative to the historical average for the same dates in 2017, 2018, and 2019, we developed a bootstrap method based on the mean as the measure of central tendency. A Poisson fitting analysis was used to identify trends in officially recorded all-cause deaths and COVID-19 deaths. A bootstrapping technique was used to emulate the sampling distribution of our expected-deaths estimator μ̂deaths by simulating the data-generation and model-fitting processes daily since the first confirmed case. Results: In Ecuador, 115,070 deaths were reported during 2020, of which 42,453 constituted excess mortality when compared with the 2017-2019 period. Ecuador is the country with the highest recorded excess mortality in the world within the shortest timespan. On a single day, Ecuador recorded 1,120 deaths (6 per 100,000), 408% above the expected number of fatalities. Conclusion: Adjusting for population size and time, the country hardest hit by the COVID-19 pandemic was Ecuador. The excess mortality rate shows that the SARS-CoV-2 virus spread rapidly in Ecuador, especially in the coastal region. Our results and the proposed new methodology could help capture the true number of deaths during the initial phase of a pandemic.
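A sketch of the expected-deaths bootstrap described above: resample historical daily counts, form the sampling distribution of the mean (the expected-deaths estimator), and express the observed count as an excess over that expectation. The counts are invented, chosen only to be broadly consistent with the figures quoted in the abstract, and the Poisson trend fitting is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(10)

# Invented daily all-cause death counts for the same calendar week across the
# 2017-2019 reference years (7 days x 3 years) and one observed 2020 daily count.
baseline_counts = rng.poisson(220, size=21)
observed_2020 = 1120

# Bootstrap sampling distribution of the expected-deaths estimator (the mean of
# the historical counts), then express the excess against that expectation.
boot_expected = np.array([
    rng.choice(baseline_counts, baseline_counts.size, replace=True).mean()
    for _ in range(10000)
])
lo, hi = np.percentile(boot_expected, [2.5, 97.5])
expected = boot_expected.mean()
print(f"expected deaths/day: {expected:.0f}  (95% bootstrap interval {lo:.0f}-{hi:.0f})")
print(f"excess on the peak day: {observed_2020 - expected:.0f} "
      f"(+{100 * (observed_2020 - expected) / expected:.0f}% above expectation)")
```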


Subject(s)
COVID-19; Pandemics; Humans; Ecuador/epidemiology; COVID-19/epidemiology; Disease Outbreaks; Population Density
16.
Anal Bioanal Chem ; 416(5): 1249-1267, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38289355

ABSTRACT

Non-targeted analysis (NTA) is an increasingly popular technique for characterizing undefined chemical analytes. Generating quantitative NTA (qNTA) concentration estimates requires the use of training data from calibration "surrogates," which can yield diminished predictive performance relative to targeted analysis. To evaluate performance differences between targeted and qNTA approaches, we defined new metrics that convey predictive accuracy, uncertainty (using 95% inverse confidence intervals), and reliability (the extent to which confidence intervals contain true values). We calculated and examined these newly defined metrics across five quantitative approaches applied to a mixture of 29 per- and polyfluoroalkyl substances (PFAS). The quantitative approaches spanned a traditional targeted design using chemical-specific calibration curves to a generalizable qNTA design using bootstrap-sampled calibration values from "global" chemical surrogates. As expected, the targeted approaches performed best, with major benefits realized from matched calibration curves and internal standard correction. In comparison to the benchmark targeted approach, the most generalizable qNTA approach (using "global" surrogates) showed a decrease in accuracy by a factor of ~4, an increase in uncertainty by a factor of ~1000, and a decrease in reliability by ~5%, on average. Using "expert-selected" surrogates (n = 3) instead of "global" surrogates (n = 25) for qNTA yielded improvements in predictive accuracy (by ~1.5×) and uncertainty (by ~70×) but at the cost of further-reduced reliability (by ~5%). Overall, our results illustrate the utility of qNTA approaches for a subclass of emerging contaminants and present a framework on which to develop new approaches for more complex use cases.

17.
Int J Gynaecol Obstet ; 165(3): 1114-1121, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38193307

ABSTRACT

OBJECTIVE: To reconsider the classical use of "pH < 7.0 and/or a base deficit ≥12 mmol/L" as markers of the risk of neonatal hypoxic-ischemic encephalopathy (HIE), recalling various criticisms of these markers in favor of neonatal eucapnic pH, which appears to be a better marker of this risk. METHODS: Fifty-five cases of acidemia with pH < 7.00, including eight cases of HIE, were collected from a cohort at the Nice University Hospital. We compared the receiver operating characteristic curves established from the positive likelihood ratio (+LR) for each of: umbilical cord artery pH (pHa), neonatal eucapnic pH (pH euc-n) in isolation (not matched to pHa), and pHa matched to its own pH euc-n. RESULTS: The areas under the curve (AUC) are identical for pHa and pH euc-n; the AUC for the matched pair pHa-pH euc-n appears superior, but not significantly so, because of the small size of our cohort. However, using the bootstrap method, the partial AUC for a sensitivity greater than 75% indicates the significant superiority (P < 0.01) of the matched pHa-pH euc-n approach. CONCLUSION: The originality of this study lies in the use of two methodologic approaches: (1) standardized partial analysis of the AUCs of the pHa curve and of the curve for pHa matched to its own pH euc-n, and (2) the bootstrap statistical technique, which allowed us to conclude (P < 0.01) that cord pH coupled with its eucapnic correction is better for diagnosing metabolic acidosis and for predicting the risk of HIE.


Subject(s)
Fetal Blood; Hypoxia-Ischemia, Brain; Humans; Hydrogen-Ion Concentration; Infant, Newborn; Female; Fetal Blood/chemistry; ROC Curve; Acidosis; Male; Pregnancy; Area Under Curve; Umbilical Arteries; Predictive Value of Tests; Biomarkers/blood
18.
Behav Res Methods ; 56(2): 750-764, 2024 Feb.
Article in English | MEDLINE | ID: mdl-36814007

ABSTRACT

Mediation analysis in repeated measures studies can shed light on the mechanisms through which experimental manipulations change the outcome variable. However, the literature on interval estimation for the indirect effect in the 1-1-1 single mediator model is sparse. Most simulation studies to date evaluating mediation analysis in multilevel data considered scenarios that do not match the expected numbers of level 1 and level 2 units typically encountered in experimental studies, and no study to date has compared resampling and Bayesian methods for constructing intervals for the indirect effect in this context. We conducted a simulation study to compare statistical properties of interval estimates of the indirect effect obtained using four bootstrap and two Bayesian methods in the 1-1-1 mediation model with and without random effects. Bayesian credibility intervals had coverage closest to the nominal value and no instances of excessive Type I error rates, but lower power than resampling methods. Findings indicated that the pattern of performance for resampling methods often depended on the presence of random effects. We provide suggestions for selecting an interval estimator for the indirect effect depending on the most important statistical property for a given study, as well as code in R for implementing all methods evaluated in the simulation study. Findings and code from this project will hopefully support the use of mediation analysis in experimental research with repeated measures.
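A sketch of a percentile bootstrap confidence interval for the indirect effect a·b in a single-mediator model. For brevity this ignores the random-effects structure of the 1-1-1 design (a proper multilevel analysis needs mixed-model software), and all data and path coefficients are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(11)

# Synthetic single-level mediation data: X -> M -> Y with invented path coefficients.
# (The 1-1-1 model adds random effects per participant; that structure is omitted here.)
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # a-path
y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # b-path plus direct effect

def indirect_effect(x, m, y):
    a = LinearRegression().fit(x.reshape(-1, 1), m).coef_[0]
    b = LinearRegression().fit(np.column_stack([m, x]), y).coef_[0]  # b adjusted for x
    return a * b

boot = np.array([
    indirect_effect(x[idx], m[idx], y[idx])
    for idx in (rng.choice(n, n, replace=True) for _ in range(5000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect_effect(x, m, y):.3f}, "
      f"95% percentile bootstrap CI [{lo:.3f}, {hi:.3f}]")
```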


Subject(s)
Mediation Analysis; Models, Statistical; Humans; Bayes Theorem; Computer Simulation; Multilevel Analysis
19.
Ultramicroscopy ; 257: 113891, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38043363

ABSTRACT

Electron magnetic circular dichroism (EMCD) is a powerful technique for estimating element-specific magnetic moments of materials at the nanoscale, with the potential to reach atomic resolution in transmission electron microscopes. However, the fundamentally weak EMCD signal complicates quantification of magnetic moments, which requires very high precision, especially in the denominator of the sum rules. Here, we apply a statistical resampling technique known as bootstrapping to an experimental EMCD dataset to produce an empirical estimate of the noise-dependent error distribution resulting from application of the EMCD sum rules to bcc iron in a 3-beam orientation. We observe clear experimental evidence that noisy EMCD signals preferentially bias the estimation of magnetic moments, and further support this with error distributions produced by Monte-Carlo simulations. Finally, we propose guidelines for recognizing and minimizing this bias in the estimation of magnetic moments.
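A generic Monte-Carlo illustration of the mechanism the abstract points to: when the denominator of a sum-rule-like ratio is a weak, noisy integrated signal, the ratio estimate is preferentially biased, and the bias grows with the noise level. The numbers are invented and this is not the authors' bootstrap analysis of EMCD spectra.

```python
import numpy as np

rng = np.random.default_rng(12)

# Toy stand-in for an EMCD-style sum rule: the quantity of interest is a ratio
# (here 1.0 / 5.0 = 0.2) whose denominator is a weak, noisy integrated signal.
true_num, true_den = 1.0, 5.0
true_ratio = true_num / true_den

for noise_sd in (0.25, 0.5, 1.0):
    # Monte-Carlo replication of noisy measurements of numerator and denominator.
    num = rng.normal(true_num, noise_sd, 50_000)
    den = rng.normal(true_den, noise_sd, 50_000)
    ratios = num / den
    print(f"noise sd={noise_sd:.2f}: mean ratio={ratios.mean():.4f} "
          f"(true {true_ratio:.3f}), bias={ratios.mean() - true_ratio:+.4f}, "
          f"spread={ratios.std():.3f}")
```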

20.
Psych J ; 13(1): 31-43, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38105573

ABSTRACT

The present research examined whether Mandarin-speaking children could use function words to learn novel verbs and recognize those verbs in a new sentential context. In Experiment 1, 3- to 6-year-old children were taught two novel verbs supported by the verb marker "zài." The 5- and 6-year-old children successfully used the function word "zài" to learn the novel verbs, but the 3- and 4-year-olds failed to interpret the novel words as verbs. In Experiments 2 and 3, the children had to recognize the newly learned verbs in new sentences containing a different function word (either a different verb-biased marker, "le," or a non-verb-biased marker, "shì"). Results showed that the 5-year-old children could recognize the newly learned verbs with the other verb-biased marker "le," but only the 6-year-old children could recognize them with the non-verb-biased marker "shì." The study verified that Mandarin-speaking children can use the function word "zài" to identify a novel word as a verb and revealed that this ability emerges between the ages of 4 and 5 years. In addition, the ability to extend a newly learned verb across different morphosyntactic markers develops between 5 and 6 years of age.


Subject(s)
Language Development; Learning; Humans; Child, Preschool; Child; Language