Results 1 - 20 of 60
1.
Arch Gerontol Geriatr ; 117: 105259, 2024 02.
Article in English | MEDLINE | ID: mdl-37952423

ABSTRACT

OBJECTIVE: To examine the associations between individual chronic diseases and multidimensional frailty comprising physical, psychological, and social frailty. METHODS: Dutch individuals (N = 47,768) aged ≥ 65 years completed a general health questionnaire sent by the Public Health Services (response rate 58.5%), including data concerning self-reported chronic diseases, multidimensional frailty, and sociodemographic characteristics. Multidimensional frailty was assessed with the Tilburg Frailty Indicator (TFI). Total frailty and each frailty domain were regressed onto background characteristics and the six most prevalent chronic diseases: diabetes mellitus, cancer, hypertension, arthrosis, urinary incontinence, and severe back disorder. Multimorbidity was defined as the presence of combinations of these six diseases. RESULTS: The six chronic diseases had medium and strong associations with total (f² = 0.122) and physical frailty (f² = 0.170), respectively, and weak associations with psychological (f² = 0.023) and social frailty (f² = 0.008). The effects of the six diseases on the frailty variables differed strongly across diseases, with urinary incontinence and severe back disorder impairing frailty most. No synergistic effects were found; the effect of a disease on frailty did not become noticeably stronger in the presence of another disease. CONCLUSIONS: Chronic diseases, in particular urinary incontinence and severe back disorder, were associated with frailty. We thus recommend assigning different weights to individual chronic diseases in a measure of multimorbidity that aims to examine effects of multimorbidity on multidimensional frailty. Because there were no synergistic effects of chronic diseases, the measure does not need to include interactions between diseases.
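The f2 effect sizes reported in this abstract are Cohen's f², which can be computed directly from a regression model's explained variance: f² = (R²_full − R²_reduced) / (1 − R²_full). A minimal sketch (the example R² values are hypothetical, not taken from the study):

```python
def cohens_f2(r2_full, r2_reduced=0.0):
    """Cohen's f^2 for a (set of) predictor(s): the increment in explained
    variance relative to the variance left unexplained by the full model."""
    if not 0.0 <= r2_reduced <= r2_full < 1.0:
        raise ValueError("require 0 <= r2_reduced <= r2_full < 1")
    return (r2_full - r2_reduced) / (1.0 - r2_full)

# Hypothetical numbers: a model explaining 20% of the variance in total
# frailty, versus 10% without the six chronic-disease predictors.
print(round(cohens_f2(0.20, 0.10), 3))
```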


Subject(s)
Frailty , Urinary Incontinence , Humans , Aged , Frail Elderly , Multimorbidity , Surveys and Questionnaires , Geriatric Assessment/methods , Chronic Disease , Urinary Incontinence/epidemiology
2.
Psychol Methods ; 2023 Dec 25.
Article in English | MEDLINE | ID: mdl-38147039

ABSTRACT

Self-report scales are widely used in psychology to compare means in latent constructs across groups, experimental conditions, or time points. However, for these comparisons to be meaningful and unbiased, the scales must demonstrate measurement invariance (MI) across the compared time points or (experimental) groups. MI testing determines whether the latent constructs are measured equivalently across groups or time, which is essential for meaningful comparisons. We conducted a systematic review of 426 psychology articles with openly available data to (a) examine common practices in the conducting and reporting of MI testing, (b) assess whether we could reproduce the reported MI results, and (c) conduct MI tests for the comparisons that enabled sufficiently powerful MI testing. We identified 96 articles that contained a total of 929 comparisons. Results showed that only 4% of the 929 comparisons underwent MI testing, and the tests were generally poorly reported. None of the reported MI tests were reproducible, and only 26% of the 174 newly performed MI tests reached sufficient (scalar) invariance, with MI failing completely in 58% of tests. Exploratory analyses suggested that in nearly half of the comparisons where configural invariance was rejected, the number of factors differed between groups. These results indicate that MI tests are rarely conducted and poorly reported in psychological studies. We observed frequent violations of MI, suggesting that reported differences between (experimental) groups may not be solely attributable to group differences in the latent constructs. We offer recommendations aimed at improving reporting and computational reproducibility practices in psychology.

3.
Behav Res Methods ; 2023 Nov 10.
Article in English | MEDLINE | ID: mdl-37950113

ABSTRACT

Preregistration has gained traction as one of the most promising solutions to improve the replicability of scientific effects. In this project, we compared 193 psychology studies that earned a Preregistration Challenge prize or preregistration badge to 193 related studies that were not preregistered. In contrast to our theoretical expectations and prior research, we did not find that preregistered studies had a lower proportion of positive results (Hypothesis 1), smaller effect sizes (Hypothesis 2), or fewer statistical errors (Hypothesis 3) than non-preregistered studies. Supporting our Hypotheses 4 and 5, we found that preregistered studies more often contained power analyses and typically had larger sample sizes than non-preregistered studies. Finally, concerns about the publishability and impact of preregistered studies seem unwarranted, as preregistered studies did not take longer to publish and scored better on several impact measures. Overall, our data indicate that preregistration has beneficial effects in the realm of statistical power and impact, but we did not find robust evidence that preregistration prevents p-hacking and HARKing (Hypothesizing After the Results are Known).

4.
Psychon Bull Rev ; 30(4): 1609-1620, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36635588

ABSTRACT

Employing two vignette studies, we examined how psychology researchers interpret the results of a set of four experiments that all test a given theory. In both studies, we found that participants' belief in the theory increased with the number of statistically significant results, and that the result of a direct replication had a stronger effect on belief in the theory than the result of a conceptual replication. In Study 2, we additionally found that participants' belief in the theory was lower when they assumed the presence of p-hacking, but that belief in the theory did not differ between preregistered and non-preregistered replication studies. In analyses of individual participant data from both studies, we examined the heuristics academics use to interpret the results of four experiments. Only a small proportion (Study 1: 1.6%; Study 2: 2.2%) of participants used the normative method of Bayesian inference, whereas many of the participants' responses were in line with generally dismissed and problematic vote-counting approaches. Our studies demonstrate that many psychology researchers overestimate the evidence in favor of a theory if one or more results from a set of replication studies are statistically significant, highlighting the need for better statistical education.
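The normative Bayesian method mentioned here can be illustrated with a simple likelihood-ratio calculation over a set of four experiments. A sketch under illustrative assumptions (each experiment has power 0.80 if the theory is true and a false-positive rate of .05 if it is false; these numbers are not from the article):

```python
def likelihood_ratio(results, power=0.80, alpha=0.05):
    """Likelihood ratio P(results | theory true) / P(results | theory false)
    for a list of independent experiments; True = statistically significant."""
    p_h1 = 1.0
    p_h0 = 1.0
    for significant in results:
        p_h1 *= power if significant else (1 - power)
        p_h0 *= alpha if significant else (1 - alpha)
    return p_h1 / p_h0

# Three significant results and one nonsignificant direct replication still
# constitute strong evidence for the theory under these assumptions:
print(round(likelihood_ratio([True, True, True, False]), 1))
```

Vote counting, by contrast, would treat the mixed 3-out-of-4 pattern as ambiguous, which is why the abstract describes it as a dismissed and problematic heuristic.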


Subject(s)
Heuristics , Politics , Humans , Bayes Theorem , Psychology
5.
BMC Geriatr ; 22(1): 7, 2022 01 03.
Article in English | MEDLINE | ID: mdl-34979945

ABSTRACT

BACKGROUND: Multidimensional frailty, including physical, psychological, and social components, is associated with disability, lower quality of life, increased healthcare utilization, and mortality. In order to prevent or delay frailty, more knowledge of its determinants is necessary; one of these determinants is lifestyle. The aim of this study was to determine the association between the lifestyle factors smoking, alcohol use, nutrition, and physical activity, and multidimensional frailty. METHODS: This cross-sectional study was conducted in two samples comprising a total of 45,336 Dutch community-dwelling individuals aged 65 years or older. These samples completed a questionnaire including questions about smoking, alcohol use, physical activity, and sociodemographic factors (both samples), and nutrition (one sample). Multidimensional frailty was assessed with the Tilburg Frailty Indicator (TFI). RESULTS: Higher alcohol consumption, more physical activity, healthier nutrition, and less smoking were associated with less total, physical, psychological, and social frailty, after controlling for the effects of the other lifestyle factors and the sociodemographic characteristics of the participants (age, gender, marital status, education, income). The effects of physical activity on total and physical frailty were up to considerable in size, whereas the effects of the other lifestyle factors on frailty were small. CONCLUSIONS: The four lifestyle factors were not only associated with physical frailty but also with psychological and social frailty. The different associations of frailty domains with lifestyle factors emphasize the importance of assessing frailty broadly and thus of paying attention to the multidimensional nature of this concept. The findings offer healthcare professionals starting points for interventions aimed at preventing or delaying the onset of frailty, so that community-dwelling older people can age in place with a good quality of life.


Subject(s)
Frailty , Independent Living , Aged , Cross-Sectional Studies , Frail Elderly , Frailty/diagnosis , Frailty/epidemiology , Geriatric Assessment , Humans , Life Style , Quality of Life , Sociodemographic Factors , Surveys and Questionnaires
6.
Depress Anxiety ; 39(2): 134-146, 2022 02.
Article in English | MEDLINE | ID: mdl-34951503

ABSTRACT

BACKGROUND: Although cognitive behavioral therapy (CBT) is effective in the treatment of anxiety disorders, few evidence-based alternatives exist. Autonomy enhancing treatment (AET) aims to decrease vulnerability to anxiety disorders by targeting underlying autonomy deficits, and may therefore have effects on anxiety similar to those of CBT while yielding broader effects. METHODS: A multicenter cluster-randomized clinical trial was conducted including 129 patients with DSM-5 anxiety disorders, on average 33.66 years of age (SD = 12.57), 91 (70.5%) female, and most (92.2%) born in the Netherlands. Participants were randomized to 15 weeks of group-based AET or group-based CBT and completed questionnaires on anxiety, general psychopathology, depression, quality of life, autonomy-connectedness, and self-esteem pre-, mid-, and posttreatment, and after 3, 6, and 12 months (six measurements). RESULTS: Contrary to the hypotheses, effects on the broader outcome measures did not differ between AET and CBT (d = .16 or smaller at post-test). Anxiety reduction was similar across conditions (d = .059 at post-test), and neither therapy was superior in the long term. CONCLUSION: This was the first randomized clinical trial comparing AET to CBT. The added value of AET does not seem to lie in greater effectiveness on broader outcome measures or in the long term compared with CBT. However, the study supports the effectiveness of AET and thereby contributes to extended treatment options for anxiety disorders.


Subject(s)
Anxiety Disorders , Cognitive Behavioral Therapy , Adult , Anxiety/therapy , Anxiety Disorders/therapy , Female , Humans , Male , Quality of Life/psychology , Self Concept , Treatment Outcome
7.
J Am Med Dir Assoc ; 22(3): 607.e1-607.e6, 2021 03.
Article in English | MEDLINE | ID: mdl-32883597

ABSTRACT

OBJECTIVE: To predict mortality with the Tilburg Frailty Indicator (TFI) in a sample of community-dwelling older people, using a follow-up of 7 years. DESIGN: Longitudinal. SETTING AND PARTICIPANTS: 479 Dutch community-dwelling people aged 75 years or older. MEASUREMENTS: The TFI, a self-report questionnaire, was used to collect data about total, physical, psychological, and social frailty. The municipality of Roosendaal (a town in the Netherlands) provided the mortality dates. RESULTS: Total, physical, and psychological frailty predicted mortality, with unadjusted hazard ratios of 1.295, 1.168, and 1.194, and areas under the receiver operating characteristic curves of 0.664, 0.671, and 0.567, respectively. After adjustment for age and gender, the areas under the curves for total, physical, and psychological frailty were 0.704, 0.702, and 0.652, respectively. Analyses using individual components of the TFI showed that difficulty in walking and unexplained weight loss predicted mortality. CONCLUSIONS AND IMPLICATIONS: This study demonstrated the predictive validity of the TFI for mortality in community-dwelling older people: physical and psychological frailty predicted mortality, and of the individual TFI components, difficulty in walking predicted mortality most consistently. For identifying frailty, using the integral instrument is recommended, because total, physical, psychological, and social frailty and their components have proven their value in predicting adverse outcomes of frailty, for example, increased health care use and a lower quality of life.


Subject(s)
Frailty , Aged , Frail Elderly , Frailty/diagnosis , Geriatric Assessment , Humans , Netherlands/epidemiology , Psychometrics , Quality of Life , Surveys and Questionnaires
8.
PLoS Biol ; 18(12): e3000937, 2020 12.
Article in English | MEDLINE | ID: mdl-33296358

ABSTRACT

Researchers face many, often seemingly arbitrary, choices in formulating hypotheses, designing protocols, collecting data, analyzing data, and reporting results. Opportunistic use of "researcher degrees of freedom" aimed at obtaining statistical significance increases the likelihood of obtaining and publishing false-positive results and overestimated effect sizes. Preregistration is a mechanism for reducing such degrees of freedom by specifying designs and analysis plans before observing the research outcomes. The effectiveness of preregistration may depend, in part, on whether the process facilitates sufficiently specific articulation of such plans. In this preregistered study, we compared 2 formats of preregistration available on the OSF: Standard Pre-Data Collection Registration and Prereg Challenge Registration (now called "OSF Preregistration," http://osf.io/prereg/). The Prereg Challenge format was a "structured" workflow with detailed instructions and an independent review to confirm completeness; the "Standard" format was "unstructured" with minimal direct guidance to give researchers flexibility for what to prespecify. Results of comparing random samples of 53 preregistrations from each format indicate that the "structured" format restricted the opportunistic use of researcher degrees of freedom better (Cliff's Delta = 0.49) than the "unstructured" format, but neither eliminated all researcher degrees of freedom. We also observed very low concordance among coders about the number of hypotheses (14%), indicating that they are often not clearly stated. We conclude that effective preregistration is challenging, and registration formats that provide effective guidance may improve the quality of research.


Subject(s)
Data Collection/methods , Research Design/statistics & numerical data , Data Collection/standards , Data Collection/trends , Humans , Quality Control , Registries/statistics & numerical data , Research Design/trends
9.
J Intell ; 8(4)2020 Oct 02.
Article in English | MEDLINE | ID: mdl-33023250

ABSTRACT

In this meta-study, we analyzed 2442 effect sizes from 131 meta-analyses in intelligence research, published from 1984 to 2014, to estimate the average effect size, median power, and evidence for bias. We found that the average effect size in intelligence research was a Pearson's correlation of 0.26, and the median sample size was 60. Furthermore, across primary studies, we found a median power of 11.9% to detect a small effect, 54.5% to detect a medium effect, and 93.9% to detect a large effect. We documented differences in average effect size and median estimated power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics). On average, across all meta-analyses (but not in every meta-analysis), we found evidence for small-study effects, potentially indicating publication bias and overestimated effects. We found no differences in small-study effects between different study types. We also found no convincing evidence for the decline effect, US effect, or citation bias across meta-analyses. We concluded that intelligence research does show signs of low power and publication bias, but that these problems seem less severe than in many other scientific fields.
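The power figures above can be approximated with the Fisher z transformation of the correlation coefficient. A stdlib-only sketch for a two-sided α = .05 test, evaluated at the reported median sample size of 60 (the reported medians were computed across studies with varying n, so the numbers here will only roughly agree):

```python
import math
from statistics import NormalDist

def power_correlation(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0 for a
    Pearson correlation, via the Fisher z (normal) approximation."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    ncp = math.atanh(r) * math.sqrt(n - 3)  # noncentrality parameter
    return (1 - nd.cdf(z_crit - ncp)) + nd.cdf(-z_crit - ncp)

# Cohen's conventional small/medium/large correlations at n = 60:
for r in (0.10, 0.30, 0.50):
    print(r, round(power_correlation(r, 60), 3))
```

At n = 60 the approximation gives roughly 12% power for a small correlation, consistent with the very low median power reported for small effects.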

10.
PLoS One ; 15(7): e0236079, 2020.
Article in English | MEDLINE | ID: mdl-32735597

ABSTRACT

In this preregistered study, we investigated whether the statistical power of a study is higher when researchers are asked to perform a formal power analysis before collecting data. We compared the sample size descriptions from two sources: (i) a sample of preregistrations created according to the guidelines for the Center for Open Science Preregistration Challenge (PCRs), together with a sample of institutional review board (IRB) proposals from Tilburg School of Behavior and Social Sciences, both of which include a recommendation to do a formal power analysis; and (ii) a sample of preregistrations created according to the guidelines for Open Science Framework Standard Pre-Data Collection Registrations (SPRs), in which no guidance on sample size planning is given. We found that the PCRs and IRB proposals (72%) more often included sample size decisions based on power analyses than the SPRs (45%). However, this did not result in larger planned sample sizes. The determined sample size of the PCRs and IRB proposals (Md = 90.50) was not higher than the determined sample size of the SPRs (Md = 126.00; W = 3389.5, p = 0.936). Typically, power analyses in the registrations were conducted with G*Power, assuming a medium effect size, α = .05, and a power of .80. Only 20% of the power analyses contained enough information to fully reproduce the results, and only 62% of these power analyses pertained to the main hypothesis test in the preregistration. Therefore, we see ample room for improvement in the quality of the registrations and offer several recommendations to that end.
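The typical power-analysis inputs mentioned above (medium effect size, α = .05, power = .80) pin down a concrete minimum sample size. A sketch using the standard normal approximation for a two-sided two-sample t test (the exact noncentral-t computation used by tools such as G*Power typically yields a per-group n one or two higher):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided two-sample
    t test (normal approximation; noncentral-t gives slightly more)."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)
    z_b = nd.inv_cdf(power)
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)

print(n_per_group(0.5))  # medium effect size d = 0.5
```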


Subject(s)
Ethics Committees, Research , Sample Size , Statistics as Topic/methods
11.
Psychol Bull ; 146(10): 922-940, 2020 10.
Article in English | MEDLINE | ID: mdl-32700942

ABSTRACT

We examined the evidence for heterogeneity (of effect sizes) when only minor changes to sample population and settings were made between studies, and explored the association between heterogeneity and average effect size in a sample of 68 meta-analyses from 13 preregistered multilab direct replication projects in social and cognitive psychology. Among the many examined effects, examples include the Stroop effect, the "verbal overshadowing" effect, and various priming effects such as "anchoring" effects. We found limited heterogeneity; 48/68 (71%) meta-analyses had nonsignificant heterogeneity, and most (49/68; 72%) were most likely to have zero to small heterogeneity. Power to detect small heterogeneity (as defined by Higgins, Thompson, Deeks, & Altman, 2003) was low for all projects (mean 43%), but good to excellent for medium and large heterogeneity. Our findings thus show little evidence of widespread heterogeneity in direct replication studies in social and cognitive psychology, suggesting that minor changes in sample population and settings are unlikely to affect research outcomes in these fields of psychology. We also found strong correlations between observed average effect sizes (standardized mean differences and log odds ratios) and heterogeneity in our sample. Our results suggest that heterogeneity and moderation of effects are unlikely for an average true effect size of zero, but increasingly likely for larger average true effect sizes.


Subject(s)
Meta-Analysis as Topic , Psychology/statistics & numerical data , Female , Humans , Motor Activity , Reproducibility of Results , Stroop Test/statistics & numerical data
12.
PLoS One ; 15(5): e0233107, 2020.
Article in English | MEDLINE | ID: mdl-32459806

ABSTRACT

To determine the reproducibility of psychological meta-analyses, we investigated whether we could reproduce 500 primary study effect sizes drawn from 33 published meta-analyses based on the information given in the meta-analyses, and whether recomputations of primary study effect sizes altered the overall results of the meta-analysis. Results showed that almost half (k = 224) of all sampled primary effect sizes could not be reproduced based on the reported information in the meta-analysis, mostly because of incomplete or missing information on how effect sizes from primary studies were selected and computed. Overall, this led to small discrepancies in the computation of mean effect sizes, confidence intervals and heterogeneity estimates in 13 out of 33 meta-analyses. We provide recommendations to improve transparency in the reporting of the entire meta-analytic process, including the use of preregistration, data and workflow sharing, and explicit coding practices.
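Recomputing a primary study's effect size from its reported test statistics, as attempted here, often comes down to standard conversion formulas. A sketch of two common ones, Cohen's d and the point-biserial correlation r from an independent-samples t statistic (the example t value and group sizes are hypothetical):

```python
import math

def d_from_t(t, n1, n2):
    """Cohen's d from an independent-samples t statistic and group sizes."""
    return t * math.sqrt(1 / n1 + 1 / n2)

def r_from_t(t, n1, n2):
    """Point-biserial correlation from the same t statistic."""
    df = n1 + n2 - 2
    return math.copysign(math.sqrt(t ** 2 / (t ** 2 + df)), t)

# A hypothetical primary study reporting t(58) = 2.10 with 30 per group:
print(round(d_from_t(2.10, 30, 30), 3), round(r_from_t(2.10, 30, 30), 3))
```

When a meta-analysis does not report which formula (or which reported statistic) it used, recomputations like these can fail to match, which is the reproducibility problem the abstract describes.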


Subject(s)
Psychology/methods , Confidence Intervals , Meta-Analysis as Topic , Reproducibility of Results
13.
PLoS One ; 14(4): e0215052, 2019.
Article in English | MEDLINE | ID: mdl-30978228

ABSTRACT

Publication bias is a substantial problem for the credibility of research in general and of meta-analyses in particular, as it yields overestimated effects and may suggest the existence of non-existing effects. Although there is consensus that publication bias exists, how strongly it affects different scientific literatures is currently less well known. We examined evidence of publication bias in a large-scale data set of primary studies that were included in 83 meta-analyses published in Psychological Bulletin (representing meta-analyses from psychology) and 499 systematic reviews from the Cochrane Database of Systematic Reviews (CDSR; representing meta-analyses from medicine). Publication bias was assessed on all homogeneous subsets (3.8% of all subsets of meta-analyses published in Psychological Bulletin) of primary studies included in meta-analyses, because publication bias methods do not have good statistical properties if the true effect size is heterogeneous. Publication bias tests did not reveal evidence for bias in the homogeneous subsets. Overestimation was minimal but statistically significant, providing evidence of publication bias that appeared to be similar in both fields. However, a Monte Carlo simulation study revealed that the creation of homogeneous subsets resulted in challenging conditions for publication bias methods, since the number of effect sizes in a subset was rather small (median number of effect sizes = 6). Our findings are consistent with scenarios ranging from no publication bias to, in the most extreme case, only 5% of statistically nonsignificant effect sizes being published. These and other findings, in combination with the small percentages of statistically significant primary effect sizes (28.9% and 18.9% for subsets published in Psychological Bulletin and the CDSR, respectively), lead us to conclude that evidence for publication bias in the studied homogeneous subsets is weak, but suggestive of mild publication bias in both psychology and medicine.
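Small-study effects of the kind such tests look for are commonly assessed with Egger's regression test, one of several available publication-bias methods (the abstract does not specify which tests were applied, so this is a generic illustration): the standardized effect y/se is regressed on precision 1/se, and an intercept that departs from zero signals funnel-plot asymmetry. A stdlib-only sketch:

```python
from statistics import NormalDist

def egger_test(effects, std_errors):
    """Egger's regression test for small-study effects: OLS of the
    standardized effect (y/se) on precision (1/se); returns the intercept
    and a two-sided p value for intercept != 0 (normal approximation;
    a t(k-2) reference is often used instead for small k)."""
    ys = [y / se for y, se in zip(effects, std_errors)]
    xs = [1 / se for se in std_errors]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    intercept = my - slope * mx
    resid_var = sum((y - intercept - slope * x) ** 2
                    for x, y in zip(xs, ys)) / (k - 2)
    se_intercept = (resid_var * (1 / k + mx ** 2 / sxx)) ** 0.5
    z = intercept / se_intercept
    return intercept, 2 * NormalDist().cdf(-abs(z))

# Hypothetical effect sizes and standard errors from six primary studies:
b0, p = egger_test([0.20, 0.10, 0.40, 0.30, 0.00, 0.25],
                   [0.10, 0.15, 0.20, 0.12, 0.25, 0.18])
print(round(b0, 3), round(p, 3))
```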


Subject(s)
Data Interpretation, Statistical , Medicine , Psychology , Publication Bias , Data Management , Databases, Factual , Humans , Monte Carlo Method , Quality Control , Selection Bias
14.
Psychol Sci ; 30(4): 576-586, 2019 04.
Article in English | MEDLINE | ID: mdl-30789796

ABSTRACT

We examined the percentage of p values (.05 < p ≤ .10) reported as marginally significant in 44,200 articles, across nine psychology disciplines, published in 70 journals belonging to the American Psychological Association between 1985 and 2016. Using regular expressions, we extracted 42,504 p values between .05 and .10. Almost 40% of p values in this range were reported as marginally significant, although there were considerable differences between disciplines. The practice is most common in organizational psychology (45.4%) and least common in clinical psychology (30.1%). Contrary to what was reported by previous researchers, our results showed no evidence of an increasing trend in any discipline; in all disciplines, the percentage of p values reported as marginally significant was decreasing or constant over time. We recommend against reporting these results as marginally significant because of the low evidential value of p values between .05 and .10.
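The regular-expression extraction described above can be sketched as follows (the pattern is illustrative; the article's exact expressions are not reproduced here):

```python
import re

# Matches e.g. "p = .063", "p<0.08", "P = 0.09" (illustrative pattern).
P_VALUE = re.compile(r"\bp\s*[=<>]\s*(0?\.\d+)", re.IGNORECASE)

def marginal_p_values(text):
    """Extract reported p values and keep those with .05 < p <= .10."""
    values = (float(m.group(1)) for m in P_VALUE.finditer(text))
    return [p for p in values if 0.05 < p <= 0.10]

sentence = "Main effect, p = .063; interaction, p < .001; trend, P = 0.09."
print(marginal_p_values(sentence))
```

A real extraction pipeline would also need to handle inequality signs separately (a value reported as "p < .10" is an upper bound, not an exact value) and exclude p values quoted from other papers.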


Subject(s)
Psychology, Clinical , Psychology , Research/statistics & numerical data , Research/standards , Bias , Humans , Prevalence , Societies, Scientific
15.
Int J Aging Hum Dev ; 88(3): 250-265, 2019 04.
Article in English | MEDLINE | ID: mdl-29482331

ABSTRACT

This study examined the effects of secrecy on quality of life in a sample consisting of older adults (>50 years; N = 301). Three key components of secrecy were examined with the Tilburg Secrecy Scale-25 (TSS25; possession of a secret, self-concealment, and cognitive preoccupation). The TSS25 distinguishes between the tendency to conceal personal information (self-concealment) and the tendency to worry or ruminate about the secret (cognitive preoccupation), thereby enabling investigation of the effects of secrecy on quality of life in detail. Confirming previous findings in younger samples, we found a positive effect of possession of a secret on quality of life, after controlling for both TSS25's self-concealment and cognitive preoccupation. This suggests that keeping secrets may have a positive association with quality of life in older adults as well, as long as they do not have the tendency to self-conceal and are not cognitively preoccupied with their secret.


Subject(s)
Aging/psychology , Confidentiality/psychology , Quality of Life/psychology , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Surveys and Questionnaires
16.
Psychol Methods ; 24(1): 116-134, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30489099

ABSTRACT

One of the main goals of meta-analysis is to test for and estimate the heterogeneity of effect sizes. We examined the effect of publication bias on the Q test and assessments of heterogeneity as a function of true heterogeneity, publication bias, true effect size, number of studies, and variation of sample sizes. The present study has two main contributions and is relevant to all researchers conducting meta-analyses. First, we show when and how publication bias affects the assessment of heterogeneity. The expected values of the heterogeneity measures H² and I² were analytically derived, and the power and Type I error rate of the Q test were examined in a Monte Carlo simulation study. Our results show that the effect of publication bias on the Q test and the assessment of heterogeneity is large, complex, and nonlinear. Publication bias can both dramatically decrease and increase heterogeneity in true effect size, particularly if the number of studies is large and the population effect size is small. We therefore conclude that the Q test of homogeneity and the heterogeneity measures H² and I² are generally not valid when publication bias is present. Our second contribution is the introduction of a web application, Q-sense, which can be used to determine the impact of publication bias on the assessment of heterogeneity within a given meta-analysis and to assess the robustness of the meta-analytic estimate to publication bias. Furthermore, we apply Q-sense to 2 published meta-analyses, showing how publication bias can result in invalid estimates of effect size and heterogeneity.
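Cochran's Q and the heterogeneity measures H² and I² discussed here follow directly from the inverse-variance weights. A minimal sketch of the standard definitions (the example effect sizes are hypothetical):

```python
def q_statistic(effects, variances):
    """Cochran's Q: weighted squared deviations from the fixed-effect mean,
    with inverse-variance weights."""
    w = [1 / v for v in variances]
    mean = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    return sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, effects))

def heterogeneity(effects, variances):
    """Return (Q, H2, I2); H2 = Q/df, I2 = (Q - df)/Q truncated at 0."""
    q = q_statistic(effects, variances)
    df = len(effects) - 1
    h2 = q / df
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, h2, i2

# Five hypothetical effect sizes with their sampling variances:
print(heterogeneity([0.1, 0.3, 0.2, 0.6, 0.4], [0.05, 0.04, 0.06, 0.05, 0.04]))
```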


Subject(s)
Data Interpretation, Statistical , Meta-Analysis as Topic , Publication Bias , Humans , Normal Distribution
17.
Soc Sci Res ; 77: 79-87, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30466880

ABSTRACT

Analytical sociology explains macro-level outcomes by referring to micro-level behaviors, and its hypotheses thus take macro-level entities (e.g. groups) as their units of analysis. The statistical analysis of these macro-level units is problematic, since macro units are often few in number, leading to low statistical power. Additionally, micro-level processes take place within macro units, but tests on macro-level units cannot adequately deal with these processes. Consequently, much analytical sociology focuses on testing micro-level predictions. We propose a better alternative; a method to test macro hypotheses on micro data, using randomization tests. The advantages of our method are (i) increased statistical power, (ii) possibilities to control for micro covariates, and (iii) the possibility to test macro hypotheses without macro units. We provide a heuristic description of our method and illustrate it with data from a published study. Data and R-scripts for this paper are available in the Open Science Framework (https://osf.io/scfx3/).
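A generic version of such a randomization test reassigns micro-level observations across macro-level groups and compares the observed statistic with its permutation distribution. This sketch shows the general idea only, not the authors' exact procedure (their R scripts are on the OSF):

```python
import random

def randomization_test(groups, statistic, n_permutations=10_000, seed=1):
    """Two-sided permutation p value: how often a random reassignment of
    observations to groups yields a statistic at least as extreme as the
    observed one. `groups` is a list of lists of micro-level observations."""
    rng = random.Random(seed)
    observed = statistic(groups)
    pooled = [x for g in groups for x in g]
    sizes = [len(g) for g in groups]
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        shuffled, i = [], 0
        for s in sizes:
            shuffled.append(pooled[i:i + s])
            i += s
        if abs(statistic(shuffled)) >= abs(observed):
            count += 1
    return (count + 1) / (n_permutations + 1)

# Macro-level hypothesis: the two groups differ in mean.
diff = lambda gs: sum(gs[0]) / len(gs[0]) - sum(gs[1]) / len(gs[1])
print(randomization_test([[5, 6, 7, 8], [1, 2, 2, 3]], diff, 2000))
```

Because the reference distribution is built from the data themselves, the test needs no distributional assumptions about the few macro units, which is the source of the power gain the abstract describes.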

18.
Res Synth Methods ; 10(2): 225-239, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30589219

ABSTRACT

The effect sizes of studies included in a meta-analysis often do not share a common true effect size, due to differences in, for instance, the design of the studies; this variation is quantified by the between-study variance. Estimates of the between-study variance are usually imprecise. Hence, reporting a confidence interval together with a point estimate of the amount of between-study variance facilitates interpretation of the meta-analytic results. Two methods recommended for creating such a confidence interval are the Q-profile and generalized Q-statistic methods, both of which make use of the Q-statistic. These methods are exact if the assumptions underlying the random-effects model hold, but these assumptions are usually violated in practice, such that the methods' confidence intervals are approximate rather than exact. We illustrate by means of two Monte Carlo simulation studies with the odds ratio as effect size measure that coverage probabilities of both methods can be substantially below the nominal coverage rate in situations that are representative of meta-analyses in practice. We also show that these too-low coverage probabilities are caused by violations of the assumptions of the random-effects model (i.e., normal sampling distributions of the effect size measure and known sampling variances) and are especially prevalent if the sample sizes in the primary studies are small.
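The Q-profile method referred to here inverts the generalized Q statistic: the confidence bounds for the between-study variance τ² are the values at which Q(τ²) equals the χ² quantiles with k − 1 degrees of freedom. A stdlib-only sketch with hypothetical data (the χ² quantiles for df = 4 are hardcoded for this k = 5 example; in practice one would take them from a χ² routine):

```python
def q_gen(tau2, effects, variances):
    """Generalized Q: weighted squared deviations from the weighted mean,
    with random-effects weights 1/(v_i + tau2)."""
    w = [1 / (v + tau2) for v in variances]
    mean = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    return sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, effects))

def solve_tau2(target, effects, variances, upper=100.0, tol=1e-8):
    """Find tau2 with Q(tau2) == target by bisection (Q decreases in tau2).
    Returns 0.0 if even tau2 = 0 gives Q below target (bound truncated at 0)."""
    if q_gen(0.0, effects, variances) <= target:
        return 0.0
    lo, hi = 0.0, upper
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if q_gen(mid, effects, variances) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# k = 5 hypothetical studies; chi-square(4) quantiles: 0.484 (2.5%), 11.143 (97.5%).
effects = [0.10, 0.30, 0.35, 0.60, 0.80]
variances = [0.04, 0.03, 0.05, 0.04, 0.06]
ci = (solve_tau2(11.143, effects, variances),
      solve_tau2(0.484, effects, variances))
print(ci)  # (lower, upper) bounds for tau^2
```

Note that the exactness claim in the abstract applies to exactly this construction: it holds only if the within-study variances are known and the sampling distributions are normal.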


Subject(s)
Confidence Intervals , Meta-Analysis as Topic , Models, Statistical , Statistics as Topic , Computer Simulation , Monte Carlo Method , Normal Distribution , Odds Ratio , Probability , Research Design , Sample Size
19.
Psychoneuroendocrinology ; 96: 52-60, 2018 10.
Article in English | MEDLINE | ID: mdl-29902667

ABSTRACT

BACKGROUND: Maternal psychological distress during pregnancy is related to adverse child behavioral and emotional outcomes later in life, such as ADHD and anxiety/depression. The underlying mechanisms, however, are still largely unknown. The hypothalamic-pituitary-adrenal (HPA) axis, with its most important effector hormone cortisol, has been proposed as a mechanism, but results have been inconsistent. The current study investigated the association between maternal psychological distress (i.e., anxiety and depressive symptoms) and maternal cortisol levels during pregnancy using a mixed models approach. METHOD: In each of the three pregnancy trimesters, mothers (N = 170) collected four salivary samples on two consecutive days. Mothers reported symptoms of anxiety and depression three times during pregnancy (at 13.3 ± 1.1, 20.2 ± 1.5, and 33.8 ± 1.5 weeks of pregnancy, respectively) using the anxiety subscale of the Symptom Checklist (SCL-90), the Spielberger State and Trait Anxiety Inventory (STAI), and the Edinburgh Postnatal Depression Scale (EPDS). Specific fears and worries during pregnancy were measured with the short version of the Pregnancy Related Anxiety Questionnaire (PRAQ-R). RESULTS: We found a significant effect of the SCL-90 anxiety subscale on cortisol levels at awakening (p = .008), indicating that mothers with higher anxiety showed lower cortisol at awakening. Maternal psychological variables explained 10.5% of the person-level variance in awakening cortisol, but none in the overall diurnal cortisol model. CONCLUSION: More research is necessary to unravel the underlying mechanisms of the association between maternal psychological distress and cortisol, and the search for mechanisms other than the HPA axis should be continued and extended.


Subject(s)
Pregnancy/psychology , Stress, Psychological/physiopathology , Adult , Anxiety/metabolism , Depression/physiopathology , Depression/psychology , Emotions/physiology , Female , Humans , Hydrocortisone/analysis , Hypothalamo-Hypophyseal System , Mothers , Pituitary-Adrenal System , Pregnancy/physiology , Pregnancy Complications/physiopathology , Pregnancy Complications/psychology , Prenatal Exposure Delayed Effects/physiopathology , Saliva/chemistry , Stress, Psychological/metabolism
20.
Arch Gerontol Geriatr ; 76: 114-119, 2018.
Article in English | MEDLINE | ID: mdl-29494871

ABSTRACT

PURPOSE: This study aimed to determine the predictive value of the Brazilian Tilburg Frailty Indicator (TFI) for adverse health outcomes (falls, hospitalization, disability, and death) over a follow-up period of twelve months. METHODS: This longitudinal study was carried out with a sample of people using primary health care services in Rio de Janeiro, Brazil. At baseline the sample consisted of 963 people aged 60 years and older. A subset of all respondents participated again one year later (n = 640, 66.6% response rate). We used the TFI, Katz's scale for assessing ADL disability, and the Lawton Scale for assessing IADL disability. Falls, hospitalization, and death were also assessed using a questionnaire. RESULTS: The prevalence of frailty was 44.2%, and the mean score on the TFI was 4.4 (SD = 3.0). In the bivariate analyses, frail participants had a higher risk than non-frail participants of loss of functional capacity in ADL (OR = 3.03, 95% CI 1.45-6.29) and in IADL (OR = 1.51, 95% CI 1.05-2.17), falls (OR = 2.08, 95% CI 1.21-3.58), hospitalization (OR = 1.83, 95% CI 1.10-3.06), and death (HR = 2.73, 95% CI 1.04-7.19). Controlling for the sociodemographic variables, the frailty domains together improved the prediction of hospitalization, falls, and loss of functional capacity in ADL, but not of loss of functional capacity in IADL. CONCLUSION: The TFI is a good predictor of adverse health outcomes among older users of primary care services in Brazil and appears to be an adequate and easy-to-administer tool for monitoring their health conditions.


Subject(s)
Frailty/diagnosis , Geriatric Assessment/methods , Health Status Indicators , Accidental Falls/statistics & numerical data , Aged , Aged, 80 and over , Brazil/epidemiology , Disability Evaluation , Female , Follow-Up Studies , Frail Elderly/statistics & numerical data , Frailty/epidemiology , Frailty/physiopathology , Hospitalization/statistics & numerical data , Humans , Male , Middle Aged , Prevalence , Primary Health Care , Reproducibility of Results