Results 1 - 20 of 34
1.
Article in English | MEDLINE | ID: mdl-38676427

ABSTRACT

Pairwise likelihood is a limited-information method widely used to estimate latent variable models, including factor analysis of categorical data. It can often avoid evaluating high-dimensional integrals and, thus, is computationally more efficient than relying on the full likelihood. Despite its computational advantage, the pairwise likelihood approach can still be demanding for large-scale problems that involve many observed variables. We tackle this challenge by employing an approximation of the pairwise likelihood estimator, which is derived from an optimization procedure relying on stochastic gradients. The stochastic gradients are constructed by subsampling the pairwise log-likelihood contributions, for which the subsampling scheme controls the per-iteration computational complexity. The stochastic estimator is shown to be asymptotically equivalent to the pairwise likelihood one. However, finite-sample performance can be improved by compounding the sampling variability of the data with the uncertainty introduced by the subsampling scheme. We demonstrate the performance of the proposed method using simulation studies and two real data applications.
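The subsampling idea can be illustrated on a deliberately simple toy problem: stochastic gradient descent on a pairwise least-squares objective for a common mean, where each iteration subsamples a few pairwise contributions. This is a sketch of the general scheme only, not the factor-analysis likelihood of the paper; all names and settings are illustrative.

```python
import random

def pairwise_sgd_mean(x, n_pairs_per_iter=5, iters=2000, lr=0.05, seed=0):
    """Estimate the mean of x by SGD on a pairwise least-squares
    objective, subsampling pairwise contributions each iteration.
    The subsample size controls the per-iteration cost, mirroring
    the subsampled pairwise log-likelihood construction."""
    rng = random.Random(seed)
    n = len(x)
    theta = 0.0
    for t in range(1, iters + 1):
        grad = 0.0
        for _ in range(n_pairs_per_iter):
            i, j = rng.randrange(n), rng.randrange(n)
            # gradient of 0.5*((x_i-theta)^2 + (x_j-theta)^2) w.r.t. theta
            grad += -(x[i] - theta) - (x[j] - theta)
        grad /= n_pairs_per_iter
        theta -= (lr / t**0.5) * grad  # decaying step size
    return theta

data = [1.0, 2.0, 3.0, 4.0, 5.0]
est = pairwise_sgd_mean(data)  # converges toward the sample mean
```

The minimiser of the full pairwise objective here is the sample mean; the stochastic iterate approaches it while touching only a handful of pairs per step.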

2.
Psychometrika ; 89(1): 267-295, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38383880

ABSTRACT

Ensuring fairness in instruments like survey questionnaires or educational tests is crucial. One way to address this is by a Differential Item Functioning (DIF) analysis, which examines whether different subgroups respond differently to a particular item, controlling for their overall latent construct level. DIF analysis is typically conducted to assess measurement invariance at the item level. Traditional DIF analysis methods require knowing the comparison groups (reference and focal groups) and anchor items (a subset of DIF-free items). Such prior knowledge may not always be available, and psychometric methods have been proposed for DIF analysis when one piece of information is unknown. More specifically, when the comparison groups are unknown while anchor items are known, latent DIF analysis methods have been proposed that estimate the unknown groups by latent classes. When anchor items are unknown while comparison groups are known, methods have also been proposed, typically under a sparsity assumption, namely that the number of DIF items is not too large. However, DIF analysis when both pieces of information are unknown has not received much attention. This paper proposes a general statistical framework under this setting. In the proposed framework, we model the unknown groups by latent classes and introduce item-specific DIF parameters to capture the DIF effects. Assuming the number of DIF items is relatively small, an L1-regularised estimator is proposed to simultaneously identify the latent classes and the DIF items. A computationally efficient Expectation-Maximisation (EM) algorithm is developed to solve the non-smooth optimisation problem for the regularised estimator. The performance of the proposed method is evaluated by simulation studies and an application to item response data from a real-world educational test.
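An L1-regularised M-step of this kind typically reduces, coordinate-wise, to a soft-thresholding update that sets small estimated DIF effects exactly to zero. A generic sketch of that operator (the numbers and names are illustrative, not the paper's implementation):

```python
def soft_threshold(z, lam):
    """Proximal operator of lam*|.|: shrinks z toward zero and sets
    it exactly to zero when |z| <= lam. This is how an L1 penalty
    zeroes out small estimated DIF effects during estimation."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# unpenalised estimates of item-specific DIF effects (toy numbers)
raw_dif = [0.02, -0.8, 0.05, 1.1, -0.03]
penalised = [soft_threshold(d, 0.1) for d in raw_dif]
# small effects are set to zero; large ones are shrunk by 0.1
```

Items whose penalised DIF parameter is exactly zero are treated as DIF-free, which is how the regularised estimator simultaneously selects the DIF items.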


Subjects
Psychometrics, Psychometrics/methods, Humans, Statistical Models, Surveys and Questionnaires/standards, Educational Measurement/methods, Computer Simulation
3.
Psychometrika ; 88(4): 1097-1122, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37550561

ABSTRACT

Establishing the invariance property of an instrument (e.g., a questionnaire or test) is a key step for establishing its measurement validity. Measurement invariance is typically assessed by differential item functioning (DIF) analysis, i.e., detecting DIF items whose response distribution depends not only on the latent trait measured by the instrument but also on the group membership. DIF analysis is confounded by the group difference in the latent trait distributions. Many DIF analyses require knowing several anchor items that are DIF-free in order to draw inferences on whether each of the rest is a DIF item, where the anchor items are used to identify the latent trait distributions. When no prior information on anchor items is available, or some anchor items are misspecified, item purification methods and regularized estimation methods can be used. The former iteratively purifies the anchor set by a stepwise model selection procedure, and the latter selects the DIF-free items by a LASSO-type regularization approach. Unfortunately, unlike the methods based on a correctly specified anchor set, these methods are not guaranteed to provide valid statistical inference (e.g., confidence intervals and p-values). In this paper, we propose a new method for DIF analysis under a multiple indicators and multiple causes (MIMIC) model for DIF. This method adopts a minimal L1 norm condition for identifying the latent trait distributions. Without requiring prior knowledge about an anchor set, it can accurately estimate the DIF effects of individual items and further draw valid statistical inferences for quantifying the uncertainty. Specifically, the inference results allow us to control the type-I error for DIF detection, which may not be possible with item purification and regularized estimation methods. We conduct simulation studies to evaluate the performance of the proposed method and compare it with the anchor-set-based likelihood ratio test approach and the LASSO approach. The proposed method is applied to analysing the three personality scales of the Eysenck Personality Questionnaire-Revised (EPQ-R).


Subjects
Psychometrics, Psychometrics/methods, Surveys and Questionnaires, Likelihood Functions, Uncertainty
4.
Psychometrika ; 88(2): 527-553, 2023 06.
Article in English | MEDLINE | ID: mdl-37002429

ABSTRACT

Researchers have widely used exploratory factor analysis (EFA) to learn the latent structure underlying multivariate data. Rotation and regularised estimation are two classes of methods in EFA that are often used to find interpretable loading matrices. In this paper, we propose a new family of oblique rotations based on component-wise Lp (0 < p ≤ 1) loss functions that is closely related to an Lp-regularised estimator. We develop model selection and post-selection inference procedures based on the proposed rotation method. When the true loading matrix is sparse, the proposed method tends to outperform traditional rotation and regularised estimation methods in terms of statistical accuracy and computational cost. Since the proposed loss functions are nonsmooth, we develop an iteratively reweighted gradient projection algorithm for solving the optimisation problem. We also develop theoretical results that establish the statistical consistency of the estimation, model selection, and post-selection inference. We evaluate the proposed method and compare it with regularised estimation and traditional rotation methods via simulation studies. We further illustrate it using an application to the Big Five personality assessment.
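The iteratively-reweighted idea for a nonsmooth component-wise Lp loss can be sketched on the simplest possible problem: a one-dimensional Lp location fit, where each |r|^p term is majorised by a weighted quadratic and the weighted problem has a closed-form solution. This is an illustration of the general technique under my own epsilon smoothing and naming, not the rotation criterion or the algorithm of the paper.

```python
def lp_location(x, p=1.0, iters=100, eps=1e-8):
    """Minimise sum_i |x_i - c|^p over c by iteratively reweighted
    least squares: each |r|^p term is replaced by w*r^2 with weight
    w = |r|^(p-2) (eps-smoothed), and the weighted least-squares
    problem is solved in closed form. For p=1 the minimiser is
    (approximately) the median."""
    c = sum(x) / len(x)  # start from the mean
    for _ in range(iters):
        w = [(abs(xi - c) + eps) ** (p - 2) for xi in x]
        c = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    return c

data = [0.0, 1.0, 2.0, 3.0, 50.0]  # one large outlier
c1 = lp_location(data, p=1.0)      # close to the median, robust
c2 = sum(data) / len(data)         # mean = 11.2, pulled by the outlier
```

The same reweighting trick is what makes nonsmooth Lp criteria tractable by smooth optimisation steps.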


Subjects
Algorithms, Psychometrics, Computer Simulation
5.
Psychometrika ; 87(4): 1473-1502, 2022 12.
Article in English | MEDLINE | ID: mdl-35524934

ABSTRACT

Latent variable models have been playing a central role in psychometrics and related fields. In many modern applications, the inference based on latent variable models involves one or several of the following features: (1) the presence of many latent variables, (2) the observed and latent variables being continuous, discrete, or a combination of both, (3) constraints on parameters, and (4) penalties on parameters to impose model parsimony. The estimation often involves maximizing an objective function based on a marginal likelihood/pseudo-likelihood, possibly with constraints and/or penalties on parameters. Solving this optimization problem is highly non-trivial, due to the complexities brought by the features mentioned above. Although several efficient algorithms have been proposed, a unified computational framework that takes all these features into account has been lacking. In this paper, we fill this gap. Specifically, we provide a unified formulation for the optimization problem and then propose a quasi-Newton stochastic proximal algorithm. Theoretical properties of the proposed algorithm are established. Its computational efficiency and robustness are shown by simulation studies under various settings for latent variable model estimation.
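The proximal-gradient building block of such algorithms can be shown on a scalar toy problem: minimising 0.5(x-b)^2 + lam*|x|, whose exact solution is the soft-threshold of b. A sketch under my own naming, not the quasi-Newton stochastic algorithm of the paper:

```python
def prox_l1(z, t):
    """Proximal operator of t*|.| (soft-thresholding)."""
    return max(abs(z) - t, 0.0) * (1.0 if z >= 0 else -1.0)

def proximal_gradient(b, lam, step=0.5, iters=200):
    """Minimise f(x) + g(x) with smooth f(x) = 0.5*(x-b)^2 and
    nonsmooth penalty g(x) = lam*|x| via the proximal gradient
    update x <- prox_{step*g}(x - step*f'(x))."""
    x = 0.0
    for _ in range(iters):
        x = prox_l1(x - step * (x - b), step * lam)
    return x

x_hat = proximal_gradient(b=2.0, lam=0.5)  # exact answer: 2.0 - 0.5 = 1.5
```

Penalties and constraints enter only through the proximal/projection step, which is what lets one formulation cover many penalised latent variable estimators.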


Subjects
Algorithms, Theoretical Models, Likelihood Functions, Psychometrics, Computer Simulation
6.
Neuroscience ; 494: 51-68, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35158017

ABSTRACT

Neuronal apoptosis is a feature of secondary injury after traumatic brain injury (TBI). Evidence implies that excess calcium (Ca2+) ions and reactive oxygen species (ROS) play critical roles in apoptosis. In reaction to increased ROS, the redox-sensitive cation channel transient receptor potential ankyrin 1 (TRPA1) allows Ca2+ ions to enter cells. However, the effect of TBI on the expression of TRPA1 and the role of TRPA1 in TBI are unclear. In the present study, TBI in the mouse brain was simulated using the weight-drop model. The process of neuronal oxidative stress was simulated in HT22 neuronal cells by treatment with hydrogen peroxide. We found that TRPA1 was significantly upregulated in neurons at 24 h after TBI. Neuronal apoptosis was increased in the in vivo and in vitro models; however, this increase was reduced by the functional inhibition of TRPA1 in both models. After TBI, TRPA1 was upregulated via nuclear factor, erythroid 2 like 2 (Nrf2) in neurons. TRPA1-mediated neuronal apoptosis after TBI might be achieved in part through the CaMKII/ERK/AKT signaling pathway. To sum up, TBI-triggered TRPA1 upregulation in neurons is mediated by Nrf2, and the functional blockade of TRPA1 attenuates neuronal apoptosis and improves neuronal dysfunction, partially mediated through the activation of the calcium/calmodulin-dependent protein kinase II (CaMKII)/extracellular signal-regulated kinase (ERK)/protein kinase B (AKT) signaling pathway. Our results suggest that functional blockade of TRPA1 might be a promising therapeutic intervention related to ROS and Nrf2 in TBI.


Subjects
Traumatic Brain Injuries, TRPA1 Cation Channel, Animals, Apoptosis, Traumatic Brain Injuries/metabolism, Calcium/metabolism, Calcium-Calmodulin-Dependent Protein Kinase Type 2/metabolism, Mice, NF-E2-Related Factor 2/metabolism, Oxidative Stress, Proto-Oncogene Proteins c-akt/metabolism, Reactive Oxygen Species/metabolism, Signal Transduction, TRPA1 Cation Channel/metabolism
7.
Am J Alzheimers Dis Other Demen ; 37: 15333175211070912, 2022.
Article in English | MEDLINE | ID: mdl-35041557

ABSTRACT

Objective: To assess whether diabetes alone or in association with Apolipoprotein E (APOE) ε4 genotype increases the risk of Alzheimer's Disease (AD) diagnosis. Methods: A retrospective cohort study of 33,456 participants from the National Alzheimer's Coordinating Center database. Results: Participants with one or two APOE ε4 alleles had 2.71 (CI:2.55-2.88) and 9.37 (CI:8.14-10.78) times higher odds of AD diagnosis, respectively, relative to those with zero ε4 alleles. In contrast, diabetic participants showed 1.07 (CI:0.96-1.18) times higher odds of AD relative to nondiabetics. Diabetes did not exacerbate the odds of AD in APOE ε4 carriers. APOE ε4 carriage was correlated with declines in long-term memory and verbal fluency, which were strongly correlated with conversion to AD. However, diabetes was correlated with working memory decline, which had a relatively weak correlation with AD. Conclusions: Unlike APOE ε4, there was little evidence that diabetes was a risk factor for AD.


Subjects
Alzheimer Disease, Diabetes Mellitus, Alleles, Alzheimer Disease/genetics, Apolipoprotein E4/genetics, Apolipoproteins E/genetics, Diabetes Mellitus/epidemiology, Diabetes Mellitus/genetics, Genotype, Humans, Retrospective Studies, Risk Factors
8.
Clin Neurol Neurosurg ; 212: 107079, 2022 01.
Article in English | MEDLINE | ID: mdl-34871991

ABSTRACT

BACKGROUND AND OBJECTIVE: Cerebral contusion (CC) is one of the most serious injury types in patients with traumatic brain injury (TBI). Traumatic intraparenchymal hematoma (TICH) expansion severely affects the patient's prognosis. In this study, the baseline data, imaging features, and laboratory examinations of patients with CC were summarized and analyzed to develop and validate a nomogram predictive model assessing the risk factors for TICH expansion. METHODS: In total, 258 patients who met the CC inclusion criteria between July 2018 and July 2021 were included and retrospectively analyzed. TICH expansion was defined as an increase in hematoma volume ≥ 30% relative to the primary volume or an absolute hematoma increase ≥ 5 ml at CT review. RESULTS: Univariate and binary logistic regression analyses were performed to screen out the independent predictors significantly correlated with TICH expansion: age, subdural hematoma (SDH), contusion site, multihematoma fuzzy sign (MFS), contusion volume, and traumatic coagulation abnormalities (TCA). Based on these, the nomogram model was established. The differences between contusion volume and the Glasgow Outcome Scale (GOS) were analyzed by nonparametric tests. A larger contusion volume was associated with a poor prognosis. CONCLUSION: This study established a nomogram model to predict TICH expansion in patients with CC. Meanwhile, the study found that the risk of bleeding tended to decrease when the hematoma volume was > 15 ml, but a larger initial hematoma volume indicated a worse prognosis. We advocate the use of predictive models for TICH expansion risk assessment in hospitalized CC patients, which is low-cost and easy to apply, especially in acute settings.


Subjects
Brain Contusion/diagnosis, Traumatic Intracranial Hemorrhage/diagnosis, Neurological Models, Nomograms, Adult, Aged, Brain Contusion/diagnostic imaging, Female, Humans, Traumatic Intracranial Hemorrhage/diagnostic imaging, Male, Middle Aged, Practice Guidelines as Topic, Prognosis, Retrospective Studies, Young Adult
9.
Psychometrika ; 85(4): 1052-1075, 2020 12.
Article in English | MEDLINE | ID: mdl-33346883

ABSTRACT

Problem solving has been recognized as a central skill that today's students need to thrive and shape their world. As a result, the measurement of problem-solving competency has received much attention in education in recent years. A popular tool for the measurement of problem solving is simulated interactive tasks, which require students to uncover some of the information needed to solve the problem through interactions with a computer-simulated environment. A computer log file records a student's problem-solving process in detail, including his/her actions and the time stamps of these actions. It thus provides rich information for the measurement of students' problem-solving competency. On the other hand, extracting useful information from log files is a challenging task, due to their complex data structure. In this paper, we show how log file process data can be viewed as a marked point process, based on which we propose a continuous-time dynamic choice model. The proposed model can serve as a measurement model for scaling students along the latent traits of problem-solving competency and action speed, based on data from one or multiple tasks. A real data example is given based on data from the Programme for International Student Assessment (PISA) 2012.


Subjects
Problem Solving, Students, Computer Simulation, Female, Humans, Male, Psychometrics
10.
Psychometrika ; 85(4): 996-1012, 2020 12.
Article in English | MEDLINE | ID: mdl-33346885

ABSTRACT

The likelihood ratio test (LRT) is widely used for comparing the relative fit of nested latent variable models. Following Wilks' theorem, the LRT is conducted by comparing the LRT statistic with its asymptotic distribution under the restricted model, a chi-square distribution with degrees of freedom equal to the difference in the number of free parameters between the two nested models under comparison. For models with latent variables such as factor analysis, structural equation models and random effects models, however, it is often found that the chi-square approximation does not hold. In this note, we show how the regularity conditions of Wilks' theorem may be violated, using three examples of models with latent variables. In addition, a more general theory for the LRT is given that provides the correct asymptotic theory for these LRTs. This general theory was first established in Chernoff (Ann Math Stat 25:573-578, 1954) and discussed in both van der Vaart (Asymptotic statistics, Cambridge University Press, Cambridge, 2000) and Drton (Ann Stat 37:979-1012, 2009), but it does not seem to have received enough attention. We illustrate this general theory with the three examples.
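Mechanically, the standard (regularity-conditions-satisfied) LRT computes 2·(loglik_full − loglik_restricted) and refers it to a chi-square distribution. A sketch for the df = 1 case, where the chi-square survival function has the closed form erfc(sqrt(x/2)); the log-likelihood values below are toy numbers, not from the note:

```python
import math

def lrt_pvalue_df1(loglik_restricted, loglik_full):
    """Likelihood ratio test with one extra free parameter: returns
    the LRT statistic and its p-value from the df=1 chi-square
    survival function erfc(sqrt(x/2)). Valid only when Wilks'
    regularity conditions hold, which is exactly what the note
    shows can fail for latent variable models."""
    stat = 2.0 * (loglik_full - loglik_restricted)
    return stat, math.erfc(math.sqrt(stat / 2.0))

stat, p = lrt_pvalue_df1(loglik_restricted=-105.3, loglik_full=-102.1)
# stat = 6.4; the chi2(1) critical value at the 5% level is 3.84, so p < 0.05
```

When the true parameter sits on the boundary or the model is singular, this chi-square reference distribution is the wrong one, which is the point of the note.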


Subjects
Theoretical Models, Humans, Likelihood Functions, Psychometrics
11.
Psychometrika ; 85(2): 358-372, 2020 06.
Article in English | MEDLINE | ID: mdl-32451743

ABSTRACT

We revisit a singular value decomposition (SVD) algorithm given in Chen et al. (Psychometrika 84:124-146, 2019b) for exploratory item factor analysis (IFA). This algorithm estimates a multidimensional IFA model by SVD and was used to obtain a starting point for joint maximum likelihood estimation in Chen et al. (2019b). Thanks to the analytic and computational properties of the SVD, this algorithm guarantees a unique solution and has a computational advantage over other exploratory IFA methods. Its computational advantage becomes significant when the numbers of respondents, items, and factors are all large. This algorithm can be viewed as a generalization of principal component analysis to binary data. In this note, we provide the statistical underpinning of the algorithm. In particular, we show its statistical consistency under the same double asymptotic setting as in Chen et al. (2019b). We also demonstrate how this algorithm provides a scree plot for investigating the number of factors and provide its asymptotic theory. Further extensions of the algorithm are discussed. Finally, simulation studies suggest that the algorithm has good finite-sample performance.
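The basic recipe — center a binary response matrix, take its SVD, and inspect the singular values as a scree plot — can be sketched on synthetic data. This mirrors the spirit of the algorithm only; the exact estimator and its scaling steps are as given in the note, and all names below are mine.

```python
import numpy as np

rng = np.random.default_rng(0)
n, j, k = 500, 20, 2          # respondents, items, true factors

# synthetic binary responses from a rank-k probability structure
A = rng.normal(size=(n, k))
B = rng.normal(size=(j, k))
prob = 1.0 / (1.0 + np.exp(-(A @ B.T)))      # logistic link
Y = (rng.random((n, j)) < prob).astype(float)

# center columns, then SVD; the leading singular values dominate
Yc = Y - Y.mean(axis=0)
s = np.linalg.svd(Yc, compute_uv=False)
scree = s / s[0]   # normalised singular values for a scree plot
```

A sharp drop in `scree` after the k-th value is the visual cue for choosing the number of factors.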


Subjects
Algorithms, Computer Simulation, Factor Analysis, Principal Component Analysis, Psychometrics
12.
Br J Math Stat Psychol ; 73(2): 237-260, 2020 05.
Article in English | MEDLINE | ID: mdl-31418456

ABSTRACT

Intensive longitudinal studies are becoming progressively more prevalent across many social science areas, and especially in psychology. New technologies such as smartphones, fitness trackers, and the Internet of Things make it much easier than in the past to collect data for intensive longitudinal studies, providing an opportunity to look deep into the underlying characteristics of individuals under a high temporal resolution. In this paper, we introduce a new modelling framework for latent curve analysis that is more suitable for the analysis of intensive longitudinal data than existing latent curve models. Specifically, through the modelling of an individual-specific continuous-time latent process, some unique features of intensive longitudinal data are better captured, including intensive measurements in time and unequally spaced time points of observations. Technically, the continuous-time latent process is modelled by a Gaussian process model. This model can be regarded as a semi-parametric extension of the classical latent curve models and falls under the framework of structural equation modelling. Procedures for parameter estimation and statistical inference are provided under an empirical Bayes framework and evaluated by simulation studies. We illustrate the use of the proposed model through the analysis of an ecological momentary assessment data set.
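The key ingredient — a covariance kernel evaluated at arbitrary, unequally spaced observation times — can be sketched with a squared-exponential kernel. The kernel choice, parameter names, and jitter value are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def rbf_kernel(times, variance=1.0, lengthscale=2.0, jitter=1e-8):
    """Squared-exponential covariance of a continuous-time Gaussian
    process evaluated at (possibly unequally spaced) time points;
    the jitter keeps the matrix numerically positive definite."""
    t = np.asarray(times, dtype=float).reshape(-1, 1)
    d2 = (t - t.T) ** 2                     # squared time gaps
    K = variance * np.exp(-0.5 * d2 / lengthscale**2)
    return K + jitter * np.eye(len(times))

# unequally spaced momentary-assessment times for one individual
times = [0.0, 0.7, 1.1, 3.5, 8.2]
K = rbf_kernel(times)
L = np.linalg.cholesky(K)   # succeeds because K is positive definite
```

Because the kernel is a function of the time gaps themselves, irregular sampling needs no special handling — exactly the feature that makes a continuous-time latent process attractive here.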


Subjects
Statistical Models, Psychology/statistics & numerical data, Affect, Algorithms, Bayes Theorem, Borderline Personality Disorder/psychology, Computer Simulation, Statistical Data Interpretation, Major Depressive Disorder/psychology, Dysthymic Disorder/psychology, Humans, Likelihood Functions, Longitudinal Studies, Normal Distribution, Probability, Stochastic Processes, Time Factors
13.
Br J Math Stat Psychol ; 73(1): 44-71, 2020 02.
Article in English | MEDLINE | ID: mdl-30511445

ABSTRACT

In this paper, we explore the use of the stochastic EM algorithm (Celeux & Diebolt (1985) Computational Statistics Quarterly, 2, 73) for large-scale full-information item factor analysis. Innovations have been made in its implementation, including an adaptive-rejection-based Gibbs sampler for the stochastic E step, a proximal gradient descent algorithm for the optimization in the M step, and diagnostic procedures for determining the burn-in size and the stopping of the algorithm. These developments are based on the theoretical results of Nielsen (2000, Bernoulli, 6, 457), as well as advanced sampling and optimization techniques. The proposed algorithm is computationally efficient and virtually tuning-free, making it scalable to large-scale data with many latent traits (e.g. more than five latent traits) and easy to use for practitioners. Standard errors of parameter estimation are also obtained based on the missing-information identity (Louis, 1982, Journal of the Royal Statistical Society, Series B, 44, 226). The performance of the algorithm is evaluated through simulation studies and an application to the analysis of the IPIP-NEO personality inventory. Extensions of the proposed algorithm to other latent variable models are discussed.


Subjects
Algorithms, Factor Analysis, Stochastic Processes, Computer Simulation, Humans, Regression Analysis
14.
J Vis ; 19(6): 6, 2019 06 03.
Article in English | MEDLINE | ID: mdl-31173631

ABSTRACT

A representation of shape that is low dimensional and stable across minor disruptions is critical for object recognition. Computer vision research suggests that such a representation can be supported by the medial axis-a computational model for extracting a shape's internal skeleton. However, few studies have shown evidence of medial axis processing in humans, and even fewer have examined how the medial axis is extracted in the presence of disruptive contours. Here, we tested whether human skeletal representations of shape reflect the medial axis transform (MAT), a computation sensitive to all available contours, or a pruned medial axis, which ignores contours that may be considered "noise." Across three experiments, participants (N = 2062) were shown complete, perturbed, or illusory two-dimensional shapes on a tablet computer and were asked to tap the shapes anywhere once. When directly compared with another viable model of shape perception (based on principal axes), participants' collective responses were better fit by the medial axis, and a direct test of boundary avoidance suggested that this result was not likely because of a task-specific cognitive strategy (Experiment 1). Moreover, participants' responses reflected a pruned computation in shapes with small or large internal or external perturbations (Experiment 2) and under conditions of illusory contours (Experiment 3). These findings extend previous work by suggesting that humans extract a relatively stable medial axis of shapes. A relatively stable skeletal representation, reflected by a pruned model, may be well equipped to support real-world shape perception and object recognition.


Subjects
Computer Simulation, Form Perception/physiology, Ocular Vision/physiology, Humans
15.
Front Psychol ; 10: 486, 2019.
Article in English | MEDLINE | ID: mdl-30936843

ABSTRACT

Complex problem-solving (CPS) ability has been recognized as a central 21st century skill. Individuals' processes of solving crucial complex problems may contain substantial information about their CPS ability. In this paper, we consider the prediction of the duration and final outcome (i.e., success/failure) of solving a complex problem during the task completion process, by making use of process data recorded in computer log files. Solving this problem may help answer questions like "how much information about an individual's CPS ability is contained in the process data?", "what CPS patterns will yield a higher chance of success?", and "what CPS patterns predict the remaining time for task completion?" We propose an event history analysis model for this prediction problem. The trained prediction model may provide us with a better understanding of individuals' problem-solving patterns, which may eventually lead to a good design of automated interventions (e.g., providing hints) for the training of CPS ability. A real data example from the 2012 Programme for International Student Assessment (PISA) is provided for illustration.

16.
Psychometrika ; 84(1): 124-146, 2019 03.
Article in English | MEDLINE | ID: mdl-30456747

ABSTRACT

Joint maximum likelihood (JML) estimation is one of the earliest approaches to fitting item response theory (IRT) models. This procedure treats both the item and person parameters as unknown but fixed model parameters and estimates them simultaneously by solving an optimization problem. However, the JML estimator is known to be asymptotically inconsistent for many IRT models when the sample size goes to infinity while the number of items remains fixed. Consequently, in the psychometrics literature, this estimator is less preferred than the marginal maximum likelihood (MML) estimator. In this paper, we re-investigate the JML estimator for high-dimensional exploratory item factor analysis, from both statistical and computational perspectives. In particular, we establish a notion of statistical consistency for a constrained JML estimator, under an asymptotic setting in which both the numbers of items and people grow to infinity and many responses may be missing. A parallel computing algorithm is proposed for this estimator that can scale to very large datasets. Via simulation studies, we show that when the dimensionality is high, the proposed estimator yields similar or even better results than those from the MML estimator, but can be obtained computationally much more efficiently. An illustrative real data example is provided based on the revised version of Eysenck's Personality Questionnaire (EPQ-R).
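The JML idea — treat both person and item parameters as fixed unknowns and optimise them jointly — can be sketched for the simplest Rasch-type model by alternating gradient steps on the joint log-likelihood. This is a toy illustration of the general approach, not the constrained estimator or the parallel algorithm of the paper; the model, step sizes, and centering constraint are my own choices.

```python
import numpy as np

def jml_rasch(Y, iters=200, lr=0.1):
    """Joint maximum likelihood for a Rasch-type model with
    P(Y_ij = 1) = sigmoid(theta_i - b_j). Person abilities theta
    and item difficulties b are treated as fixed parameters and
    updated by alternating gradient ascent on the joint
    log-likelihood."""
    n, j = Y.shape
    theta = np.zeros(n)
    b = np.zeros(j)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        resid = Y - p                        # score residuals
        theta += lr * resid.sum(axis=1) / j  # person step
        b -= lr * resid.sum(axis=0) / n      # item step
        theta -= theta.mean()                # identifiability constraint
    return theta, b

rng = np.random.default_rng(1)
true_theta = rng.normal(size=200)
true_b = np.array([-1.0, 0.0, 1.0, 0.5, -0.5])
p = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_b[None, :])))
Y = (rng.random(p.shape) < p).astype(float)
theta_hat, b_hat = jml_rasch(Y)
```

With few items the person estimates are noisy — the inconsistency the abstract describes — while letting both dimensions grow, as in the paper's double asymptotics, is what restores consistency.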


Subjects
Factor Analysis, Likelihood Functions, Algorithms, Computer Simulation, Statistical Data Interpretation, Female, Humans, Monte Carlo Method, Personality, Personality Tests, Psychometrics/methods, Surveys and Questionnaires
17.
Br J Math Stat Psychol ; 72(1): 108-135, 2019 02.
Article in English | MEDLINE | ID: mdl-30277574

ABSTRACT

Personalized learning refers to instruction in which the pace of learning and the instructional approach are optimized for the needs of each learner. With the latest advances in information technology and data science, personalized learning is becoming possible for anyone with a personal computer, supported by a data-driven recommendation system that automatically schedules the learning sequence. The engine of such a recommendation system is a recommendation strategy that, based on data from other learners and the performance of the current learner, recommends suitable learning materials to optimize certain learning outcomes. A powerful engine achieves a balance between making the best possible recommendations based on the current knowledge and exploring new learning trajectories that may potentially pay off. Building such an engine is a challenging task. We formulate this problem within the Markov decision framework and propose a reinforcement learning approach to solving the problem.
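The exploration–exploitation balance described here is the classic bandit trade-off; an epsilon-greedy sketch over a few candidate learning materials gives the flavour. This is a generic stand-in, not the paper's Markov decision formulation, and the success probabilities are invented.

```python
import random

def epsilon_greedy(success_probs, rounds=5000, eps=0.1, seed=42):
    """Recommend one of several learning materials per round:
    with probability eps explore a random material, otherwise
    exploit the one with the best estimated success rate so far."""
    rng = random.Random(seed)
    k = len(success_probs)
    counts = [0] * k
    values = [0.0] * k   # running mean reward per material
    for _ in range(rounds):
        if rng.random() < eps:
            a = rng.randrange(k)                        # explore
        else:
            a = max(range(k), key=lambda i: values[i])  # exploit
        reward = 1.0 if rng.random() < success_probs[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]   # update estimate
    return counts, values

counts, values = epsilon_greedy([0.3, 0.5, 0.7])
best = max(range(3), key=lambda i: counts[i])
```

Over many rounds the engine concentrates its recommendations on the material with the highest success rate while still occasionally exploring alternatives that may potentially pay off.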


Subjects
Algorithms, Computer-Assisted Instruction/methods, Learning, Reinforcement (Psychology), Computer Simulation, Decision Making, Educational Status, Humans, Markov Chains, Software
18.
J Nutr Sci ; 7: e24, 2018.
Article in English | MEDLINE | ID: mdl-30258573

ABSTRACT

Globally, the prevalence of childhood obesity has increased substantially at an alarming rate. This study investigated associations between dietary patterns and overweight/obesity in 3- to 6-year-old children. Recruited children were from four prefecture-level cities in Eastern China. Childhood overweight and obesity were defined according to the WHO Child Growth Standards. Individual dietary patterns were assessed by a comprehensive self-administered FFQ using thirty-five food items. Using factor analysis, two dietary patterns were derived: the traditional Chinese pattern was characterised by high consumption of cereals, vegetables and fresh juices, while the modern pattern was characterised by high consumption of Western fast food, Chinese fast food, sweets/sugary foods and carbonated beverages. The associations of dietary patterns with overweight/obesity were evaluated by logistic regression models. Data from 8900 preschool children from thirty-five kindergartens recruited from March to June 2015 were used in the final analysis. Adherence to the modern dietary pattern was positively associated with children's age, while adherence to the traditional dietary pattern was positively associated with maternal education; these associations were statistically significant. After adjustment, we found that being in the highest tertile of either identified dietary pattern was not significantly associated with overweight and obesity. Dietary patterns are not associated with overweight/obesity in Chinese preschool children. Prospective studies are needed to establish a causal link between dietary patterns and childhood obesity.

19.
Appl Psychol Meas ; 42(1): 3-4, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29881109
20.
Life Sci ; 207: 110-116, 2018 Aug 15.
Article in English | MEDLINE | ID: mdl-29859985

ABSTRACT

AIMS: Endothelial-to-mesenchymal transition (EndMT) contributes to diabetic cardiac fibrosis, but the underlying mechanisms are poorly understood. In this study, we aimed to investigate the role of miR-328 in EndMT mediated by high glucose (HG) and the signaling pathways implicated in human umbilical vein endothelial cells (HUVECs). MATERIALS AND METHODS: EndMT of HUVECs was determined by immunofluorescent staining and western blot of the markers CD31 and α-SMA. Real-time polymerase chain reaction was used to detect mRNA expression of miR-328 and transforming growth factor ß1 (TGF-ß1). SB431542 was used to study the relation of miR-328 and TGF-ß1 during EndMT induced by HG. Over-expression and inhibition of miR-328 were achieved by transduction of miR-328 and antagomiR-328. The effects of miR-328 on the expression of type I and III collagen, p-MEK1/2, and p-ERK1/2 were examined by western blot. KEY FINDINGS: The level of miR-328 was significantly up-regulated in HG-induced EndMT. MiR-328 showed an independent capability of inducing EndMT, which was not related to TGF-ß1, and this effect was abrogated by antagomiR-328. MiR-328 affected type I collagen in a time- and dose-dependent manner and enhanced protein expression of type I and III collagens. Further investigation revealed significantly higher expression of p-MEK1/2 and p-ERK1/2 in HUVECs transduced with miR-328, and lower expression of p-MEK1/2 and p-ERK1/2 in cells transduced with antagomiR-328. SIGNIFICANCE: These results suggest a novel role for miR-328 in HG-induced EndMT; the MEK1/2-ERK1/2 pathway is likely to be involved in the associated effects. Our findings may suggest antagomiR-328 as an alternative agent in the prevention of HG-induced EndMT.


Subjects
Endothelium/metabolism, Epithelial-Mesenchymal Transition, Human Umbilical Vein Endothelial Cells/cytology, MicroRNAs/metabolism, Collagen Type I/metabolism, Collagen Type III/metabolism, Extracellular Signal-Regulated MAP Kinases/metabolism, Glucose/pharmacology, Humans, Signal Transduction, Transforming Growth Factor beta1/metabolism, Up-Regulation