Results 1 - 20 of 81
1.
J Surv Stat Methodol ; 11(5): 1011-1031, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37975065

ABSTRACT

In push-to-web surveys that use postal mail to contact sampled cases, participation is contingent on the mail being opened and the survey invitations being delivered. The design of the mailings is crucial to the success of the survey. We address the question of how to design invitation mailings that can grab potential respondents' attention and sway them to be interested in the survey in a short window of time. In the household screening stage of a national survey, the American Family Health Study, we experimentally tested three mailing design techniques for recruiting respondents: (1) a visible cash incentive in the initial mailing, (2) a second incentive for initial nonrespondents, and (3) use of Priority Mail in the nonresponse follow-up mailing. We evaluated the three techniques' overall effects on response rates as well as how they differentially attracted respondents with different characteristics. We found that all three techniques were useful in increasing the screening response rates, but there was little evidence that they had differential effects on sample subgroups that could help to reduce nonresponse biases.

2.
PLoS One ; 18(5): e0285848, 2023.
Article in English | MEDLINE | ID: mdl-37200348

ABSTRACT

OBJECTIVE: The All of Us Research Program collects data from multiple information sources, including health surveys, to build a national longitudinal research repository that researchers can use to advance precision medicine. Missing survey responses pose challenges to study conclusions. We describe missingness in All of Us baseline surveys. STUDY DESIGN AND SETTING: We extracted survey responses between May 31, 2017, and September 30, 2020. Missing percentages for groups historically underrepresented in biomedical research were compared to those for represented groups. Associations of missing percentages with age, health literacy score, and survey completion date were evaluated. We used negative binomial regression to evaluate the effects of participant characteristics on the number of missed questions out of the total eligible questions for each participant. RESULTS: The dataset analyzed contained data for 334,183 participants who submitted at least one baseline survey. Almost all (97.0%) of the participants completed all baseline surveys, and only 541 (0.2%) participants skipped all questions in at least one of the baseline surveys. The median skip rate was 5.0% of the questions, with an interquartile range (IQR) of 2.5% to 7.9%. Historically underrepresented groups were associated with higher missingness (incidence rate ratio (IRR) [95% CI]: 1.26 [1.25, 1.27] for Black/African American compared to White). Missing percentages were similar by survey completion date, participant age, and health literacy score. Skipping specific questions was associated with higher missingness (IRRs [95% CI]: 1.39 [1.38, 1.40] for skipping income, 1.92 [1.89, 1.95] for skipping education, 2.19 [2.09, 2.30] for skipping sexual and gender questions). CONCLUSION: Surveys in the All of Us Research Program will form an essential component of the data researchers can use to perform their analyses. Missingness was low in All of Us baseline surveys, but group differences exist. Additional statistical methods and careful analysis of surveys could help mitigate challenges to the validity of conclusions.
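The incidence rate ratios reported above can be illustrated with a minimal rate comparison. The sketch below computes an IRR with a Wald 95% confidence interval on the log scale; the skip counts are hypothetical, not the All of Us figures, and the published analysis used full negative binomial regression with covariates rather than this two-group shortcut.

```python
from math import exp, log, sqrt

def irr_ci(skips_a, eligible_a, skips_b, eligible_b, z=1.96):
    """Incidence rate ratio for two groups of question-skip counts,
    with a Wald CI computed on the log scale (Poisson-count SE)."""
    irr = (skips_a / eligible_a) / (skips_b / eligible_b)
    se = sqrt(1 / skips_a + 1 / skips_b)  # SE of log(IRR)
    lo, hi = exp(log(irr) - z * se), exp(log(irr) + z * se)
    return irr, lo, hi

# Hypothetical counts: group A skips 120 of 1,000 eligible questions,
# group B skips 95 of 1,000.
irr, lo, hi = irr_ci(120, 1000, 95, 1000)
```

An IRR above 1 with a CI excluding 1 would indicate significantly higher missingness in group A; with these illustrative counts the interval straddles 1.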


Subject(s)
Population Health , Humans , Surveys and Questionnaires , Health Surveys , Sexual Behavior
3.
PLoS One ; 18(3): e0282591, 2023.
Article in English | MEDLINE | ID: mdl-36893179

ABSTRACT

Although the potential for participant selection bias is readily acknowledged in the momentary data collection literature, very little is known about uptake rates in these studies or about differences between the people who participate and those who do not. This study analyzed data from an existing Internet panel of older people (aged 50 and older) who were offered participation in a momentary study (n = 3,169), which made it possible to compute uptake rates and to compare many characteristics by participation status. Momentary studies present participants with brief surveys multiple times a day over several days; these surveys ask about immediate or recent experiences. A 29.1% uptake rate was observed when all respondents were considered, whereas a 39.2% uptake rate was found when individuals who did not have eligible smartphones (necessary for ambulatory data collection) were eliminated from the analyses. Taking into account the participation rate for being in this Internet panel, we estimate the uptake rate for the general population to be about 5%. A consistent pattern of differences emerged between those who accepted the invitation to participate and those who did not (in univariate analyses): participants were more likely to be female, younger, have higher income, have higher levels of education, rate their health as better, be employed, not be retired, not be disabled, have better self-rated computer skills, and have participated in more prior Internet surveys (all p < .0026). Many variables were not associated with uptake, including race, Big Five personality scores, and subjective well-being. For several of the predictors, the magnitude of the effects on uptake was substantial. These results indicate the possibility that, depending upon the associations being investigated, participant selection bias could be present in momentary data collection studies.


Subject(s)
Ecological Momentary Assessment , Research Design , Humans , Female , Aged , Middle Aged , Male , Selection Bias , Surveys and Questionnaires , Smartphone
4.
J Surv Stat Methodol ; 11(1): 124-140, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36714299

ABSTRACT

Survey researchers have carefully modified their data collection operations for various reasons, including the rising costs of data collection and the ongoing Coronavirus disease (COVID-19) pandemic, both of which have made in-person interviewing difficult. For large national surveys that require household (HH) screening to determine survey eligibility, cost-efficient screening methods that do not include in-person visits need additional evaluation and testing. A new study, known as the American Family Health Study (AFHS), recently initiated data collection with a national probability sample, using a sequential mixed-mode mail/web protocol for push-to-web US HH screening (targeting persons aged 18-49 years). To better understand optimal approaches for this type of national screening effort, we embedded two randomized experiments in the AFHS data collection. The first tested the use of bilingual respondent materials where mailed invitations to the screener were sent in both English and Spanish to 50 percent of addresses with a high predicted likelihood of having a Spanish speaker and 10 percent of all other addresses. We found that the bilingual approach did not increase the response rate of high-likelihood Spanish-speaking addresses, but consistent with prior work, it increased the proportion of eligible Hispanic respondents identified among completed screeners, especially among addresses predicted to have a high likelihood of having Spanish speakers. The second tested a form of nonresponse follow-up, where a subsample of active sampled HHs that had not yet responded to the screening invitations was sent a priority mailing with a $5 incentive, adding to the $2 incentive provided for all sampled HHs in the initial screening invitation. We found this approach to be quite valuable for increasing the screening survey response rate.

5.
J Surv Stat Methodol ; 10(5): 1172-1182, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36397764

ABSTRACT

Ownership of a bank account is an objective measure and should be relatively easy to elicit via survey questions. Yet, depending on the interview mode, the wording of the question and its placement within the survey may influence respondents' answers. The Health and Retirement Study (HRS) asset module, as administered online to members of the Understanding America Study (UAS), yielded substantially lower rates of reported bank account ownership than either a single question on ownership in the Current Population Survey (CPS) or the full asset module administered to HRS panelists (both interviewer-administered surveys). We designed and implemented an experiment in the UAS comparing the original HRS question eliciting bank account ownership with two alternative versions that were progressively simplified. We document strong evidence that the original question leads to systematic underestimation of bank account ownership. In contrast, the proportion of bank account owners obtained from the simplest alternative version of the question is very similar to the population benchmark estimate. We investigate treatment effect heterogeneity by cognitive ability and financial literacy. We find that questionnaire simplification affects responses of individuals with higher cognitive ability substantially less than those with lower cognitive ability. Our results suggest that high-quality data from surveys start from asking the right questions, which should be as simple and precise as possible and carefully adapted to the mode of interview.

6.
Article in English | MEDLINE | ID: mdl-36120182

ABSTRACT

Introduction: Updating the mode of data collection may affect response rates or survey results. The ongoing, national Monitoring the Future (MTF) panel study has traditionally used mailed paper surveys. In 2018, MTF experimented with a web-push data collection design for young adults ages 19-30, concluding that the web-push design improved response rates and did not change substance use estimates after controlling for sociodemographic characteristics (Patrick et al., 2021). The current study sought to replicate the web-push experiment with MTF adults ages 35 to 60 in 2020. Methods: In 2020, the MTF panel study included an experiment to test a web-push protocol for respondents ages 35 to 60 (N = 14,379). Participants were randomized to the web-push (i.e., a web survey invitation, with paper surveys available for non-respondents) or traditional MTF (i.e., mailed paper surveys) data collection condition. Results: Results indicated no significant difference in overall response rate for the web-push vs. standard MTF conditions in this age group. Differences in reported estimates of past 30-day substance use prevalence by condition were not significant after adjusting for sociodemographic characteristics. In multivariable models, participants in the web-push condition were less likely to respond via web (than paper) if they were Black, smoked cigarettes in the past 30 days, were unmarried, or did not have a college degree. Conclusions: Overall, the move to the web-push design had minimal impact on response rates and substance use prevalence estimates for this age group. However, in the web-push condition, sociodemographic differences were associated with mode of response.

7.
Longit Life Course Stud ; 13(4): 621-646, 2022 05 01.
Article in English | MEDLINE | ID: mdl-35900891

ABSTRACT

Longitudinal surveys traditionally conducted by interviewers are facing increasing pressures to explore alternatives such as sequential mixed-mode designs, which start with a cheaper self-administered mode (online) then follow up using more expensive methods such as telephone or face-to-face interviewing. Using a designed experiment conducted as part of the 2018 wave of the Health and Retirement Study (HRS) in the US, we compare a sequential mixed-mode design (web then telephone) with the standard telephone-only protocol. Using an intent-to-treat analysis, we focus on response quality and response distributions for several domains key to HRS: physical and psychological health, financial status, expectations and family composition. Respondents assigned to the sequential mixed-mode (web) had slightly higher missing data rates and more focal responses than those assigned to telephone-only. However, we find no evidence of differential quality in verifying and updating roster information. We find slightly lower rates of asset ownership reported by those assigned to the web mode. Conditional on ownership, we find no detectable mode effects on the value of assets. We find more negative (pessimistic) expectations for those assigned to the web mode. We find little evidence of poorer health reported by those assigned to the web mode. We find that effects of mode assignment on measurement are present, but for most indicators the effects are small. Finding ways to remediate the differences in item-missing data and focal values should help reduce mode effects in mixed-mode surveys or those transitioning from interviewer- to self-administration.


Subject(s)
Data Accuracy , Telephone , Surveys and Questionnaires , Longitudinal Studies
9.
Field methods ; 34(1): 3-19, 2022.
Article in English | MEDLINE | ID: mdl-35360526

ABSTRACT

Event history calendars (EHCs) are frequently used in social measurement to capture important information about the time ordering of events in people's lives, and enable inference about the relationships of the events with other outcomes of interest. To date, EHCs have primarily been designed for face-to-face or telephone survey interviewing, and few calendar tools have been developed for more private, self-administered modes of data collection. Web surveys offer benefits in terms of both self-administration, which can reduce social desirability bias, and timeliness. We developed and tested a web application enabling the calendar-based measurement of contraceptive method use histories. These measures provide valuable information for researchers studying family planning and fertility behaviors. This study describes the development of the web application, and presents a comparison of data collected from online panels using the application with data from a benchmark face-to-face survey collecting similar measures (the National Survey of Family Growth).

10.
J Surv Stat Methodol ; 10(1): 149-160, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35083357

ABSTRACT

Given the promise of the web push plus e-mail survey design for providing cost-effective and high-quality data (Patrick et al. 2018, 2019) as an alternative to a paper-and-pencil mailed survey design for the longitudinal Monitoring the Future (MTF) study, the current study sought to further enhance the web push condition. The MTF sample is based on US nationally representative samples of 12th grade students surveyed annually. The MTF control group for the current study included participants who completed the in-school baseline survey in the 12th grade and were selected to participate in their first follow-up survey in 2017 via mailed surveys (N = 1,222). A supplementary sample (N = ∼2,450) was assigned to one of two sequential mixed-mode conditions. Those in condition 1 (N = 1,198), or mail push, were invited to complete mailed surveys and later given a web survey option. Those in condition 2 (N = 1,173), or enhanced web push, were invited to complete a web survey (the same as in the 2014 study, but with the addition of text messages and quick response (QR) codes, and with the web survey optimized for mobile devices) and then later given a mailed survey option. Research aims were to examine response rates across conditions, as well as how responses were distributed across modes (paper, web), devices (computer, smartphone, tablet), and methods of accessing the web survey (hand-entered URL, QR code, e-mail link, SMS link). Response rates differed significantly: 34.2 percent for the MTF control group, 35.4 percent for mail push, and 42.05 percent for enhanced web push. The higher response rate in the enhanced web push condition suggests that the additional strategies were effective at bringing in more respondents. Key estimates produced by the enhanced web push condition did not differ from those of the MTF control group.

11.
J Crit Care ; 64: 160-164, 2021 08.
Article in English | MEDLINE | ID: mdl-33906105

ABSTRACT

PURPOSE: To measure the rate of recall of study participation and study attrition in survivors of acute respiratory distress syndrome (ARDS). MATERIALS/METHODS: In this ancillary study of the Re-evaluation of Systemic Early neuromuscular blockade (ROSE) trial, we measured the rate of study participation recall 3 months following discharge and subsequent study attrition at 6 months. We compared patient and hospital characteristics and long-term outcomes by recall status. As surrogate decision-makers provided initial consent, we measured the rate of patient reconsent and its association with study recall. RESULTS: Of 487 patients evaluated, recall status was determined in 386 (82.7%). Among these, 287 (74.4%) patients recalled participation in the ROSE trial, while 99 (25.6%) did not. There was no significant difference in 6-month attrition between patients who recalled study participation (9.1%) and those who did not (12.1%) (p = 0.38). Patient characteristics were similar between groups, except for SOFA scores, ventilator-free days, and length of stay. 330 (68%) patients were reconsented. Compared to those not reconsented, significantly more patients who were reconsented recalled study participation (78% vs. 66%; p = 0.01). CONCLUSIONS: One in four ARDS survivors does not recall their participation in a clinical trial during hospitalization 3 months following hospital discharge, although recall did not influence 6-month attrition. However, more patients recall study participation if reconsent is obtained.


Subject(s)
Respiratory Distress Syndrome , Survivors , Clinical Trials as Topic , Humans , Mental Recall , Patient Discharge , Respiratory Distress Syndrome/therapy , Survivors/psychology
12.
Addiction ; 116(1): 191-199, 2021 01.
Article in English | MEDLINE | ID: mdl-32533797

ABSTRACT

AIMS: The experiment tested the effects of a web-push survey research protocol, compared with the standard mailed paper-and-pencil protocol, among young adults aged 19-30 years in the 'Monitoring the Future' (MTF) longitudinal study. DESIGN, SETTING AND PARTICIPANTS: The US-based MTF study has measured substance use trends among young adults in panel samples followed biennially, using consistent mailed survey procedures from 1977 to 2017. In 2018, young adult participants in the MTF longitudinal component scheduled to be surveyed at ages 19-30 in 2018 (from high school senior cohorts of 2006-17, n = 14 709) were randomly assigned to receive the standard mail/paper survey procedures or new web-push procedures. MEASUREMENTS: Primary outcomes were responding to the survey and prevalence estimates for past 30-day use of alcohol, cigarettes, marijuana and illicit drugs. FINDINGS: The web-push response rate was 39.07% [95% confidence interval (CI) = 37.889, 40.258]; this was significantly better than the standard MTF response rate of 35.12% (95% CI = 33.964, 36.285). After adjusting for covariates, the web-push condition was associated with a 19% increase in the odds of responding compared with standard MTF (adjusted odds ratio = 1.188; 95% CI = 1.096, 1.287). Substance use prevalence estimates were very similar and differences became negligible when using attrition weights and controlling for socio-demographic characteristics. CONCLUSIONS: The web-push protocol produced a higher response rate than the mailed pencil and paper protocol in the Monitoring the Future panel study, without substantially affecting estimates of substance use once attrition weights and socio-demographic variables were factored in.
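The response-rate intervals reported above are consistent with a simple Wald (normal-approximation) confidence interval for a proportion. The sketch below approximately reproduces the web-push interval under the assumption of roughly 6,500 sampled cases per arm; the abstract reports only the total n = 14,709 randomized across the two conditions, so the per-arm n is an assumption, not a published figure.

```python
from math import sqrt

def wald_ci(p, n, z=1.96):
    """Normal-approximation 95% CI for a proportion p observed in n cases."""
    se = sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Web-push arm: 39.07% response rate; per-arm n of ~6,500 is assumed.
lo, hi = wald_ci(0.3907, 6500)
# This lands close to the published interval (37.889, 40.258).
```

That the reconstructed bounds match the published ones to within a few hundredths of a percentage point suggests the study's CIs follow this standard form.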


Subject(s)
Internet/statistics & numerical data , Postal Service/statistics & numerical data , Research Design/statistics & numerical data , Substance-Related Disorders/epidemiology , Surveys and Questionnaires/statistics & numerical data , Adult , Female , Humans , Longitudinal Studies , Male , Prevalence , Young Adult
13.
Addiction ; 116(5): 1144-1151, 2021 05.
Article in English | MEDLINE | ID: mdl-32888343

ABSTRACT

BACKGROUND AND AIMS: Increasing numbers of school-based drug surveys are transitioning data collection from paper-and-pencil to electronic tablets, which may produce a survey mode effect and a consequent discontinuity in time trends for population estimates of drug prevalence. This study tested whether (a) overall, self-reported drug use prevalence is higher on electronic tablets versus paper-and-pencil surveys, (b) socio-demographics moderate survey mode effects and (c) levels of missing data are lower for the electronic tablet versus the paper-and-pencil mode. DESIGN: A randomized controlled experiment. SETTING: Results are nationally representative of students in the contiguous United States. PARTICIPANTS: A total of 41 866 8th, 10th and 12th grade students who participated in the 2019 Monitoring the Future school-based survey administration. INTERVENTION AND COMPARATOR: Surveys were administered with electronic tablets (intervention) to students in a randomly selected half of schools and in paper-and-pencil format (comparator) in the other half. MEASUREMENTS: Primary outcome was the total number of positive drug use responses. Secondary outcomes were the percentage of respondents completing all drug questions, the percentage of drug questions unanswered and the mean number of missing drug items. FINDINGS: The relative risks (RRs) for the total number of positive drug use responses on electronic tablets versus paper-and-pencil surveys were small, and their 95% confidence intervals (CIs) included one for the reporting intervals of lifetime (RR = 1.03; 95% CI, 0.93-1.14), past 12 months (RR = 1.01; 95% CI, 0.91-1.11) and past 30 days (RR = 1.05; 95% CI, 0.93-1.20), and for heavy use (RR = 1.10; 95% CI, 0.93-1.29). Multiplicative interaction tests indicated no moderation of these relative risks by race (white versus non-white), population density, census region, public/private school, year of school participation, survey version or non-complete drug responses.
Levels of missing data were significantly lower for electronic tablets versus paper-and-pencil surveys. CONCLUSIONS: Adolescent drug prevalence estimates in the United States differed little across electronic tablet versus paper-and-pencil survey modes, and showed little to no effect modification by socio-demographics. Levels of missing data were lower for electronic tablets.


Subject(s)
Pharmaceutical Preparations , Adolescent , Humans , Prevalence , Schools , Students , Surveys and Questionnaires , United States/epidemiology
14.
Int J Womens Dermatol ; 6(4): 290-293, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33015289

ABSTRACT

BACKGROUND: Patient satisfaction is a proxy for quality clinical care. Understanding the factors that drive patient satisfaction scores is important because they are publicly reported, may be used in determining hospital and physician compensation, and may allow patients to preselect physicians. OBJECTIVE: This single-center survey study of adult patients at the Michigan Medicine outpatient dermatology clinics aimed to investigate how patients respond differently to theoretical dermatologic scenarios with varying dermatologist gender. METHODS: Each questionnaire contained one of four clinical scenarios illustrating overall positive or negative encounters with a male or female dermatologist, followed by questions derived from the Press Ganey survey to assess patient satisfaction. RESULTS: A total of 452 completed questionnaires were collected. There were statistically significant differences in overall patient satisfaction scores between positive versus negative female and positive versus negative male dermatologists, but there were no differences in scores between positive female and positive male dermatologists or between negative female and negative male dermatologists. There were also no differences in overall scores after controlling for patient demographic characteristics or patient-dermatologist gender concordance. CONCLUSION: Previous studies have suggested that male physicians receive better patient satisfaction scores compared to female physicians. However, our study found that, in response to hypothetical scenarios of positive and negative dermatology encounters, dermatologist gender did not affect any domain of patient satisfaction scores. Limitations include the use of hypothetical patient-dermatologist encounters and possible lack of generalizability because the study was conducted at one academic center in southeast Michigan with a predominantly Caucasian patient population.

15.
BMC Med Res Methodol ; 20(1): 251, 2020 10 08.
Article in English | MEDLINE | ID: mdl-33032535

ABSTRACT

BACKGROUND: In health research, population estimates are generally obtained from probability-based surveys. In market research, surveys are frequently conducted using volunteer web panels. Propensity score adjustment (PSA) is often used at the analysis stage to try to remove bias in the web survey, but empirical evidence of its effectiveness is mixed. We assess the ability of PSA to remove bias in the context of sensitive sexual health research and the potential of web panel surveys to replace or supplement probability surveys. METHODS: Four web panel surveys asked a subset of questions from the third British National Survey of Sexual Attitudes and Lifestyles (Natsal-3). Five propensity scores were generated for each web survey. The scores were developed from progressively larger sets of variables, beginning with demographic variables only and ending with demographic, sexual identity, lifestyle, attitudinal and sexual behaviour variables together. The surveys were weighted to match Natsal-3 based on propensity score quintiles. The performance of each survey and weighting was assessed by calculating the average 'absolute' odds ratio (the inverse of the odds ratio if less than 1) across 22 pre-specified sexual behaviour outcomes of interest, comparing the weighted web survey with Natsal-3. The average standard error across odds ratios was examined to assess the impact of weighting upon variance. RESULTS: Propensity weighting reduced bias relative to Natsal-3 as more variables were added for males, but had little effect for females, and variance increased for some surveys. Surveys with more biased estimates before propensity weighting showed greater reductions in bias from adjustment. Inconsistencies in performance were evident across surveys and outcomes. For most surveys and outcomes any reduction in bias was only partial, and for some outcomes the bias increased.
CONCLUSIONS: Even after propensity weighting using a rich range of information, including some sexual behaviour variables, some bias remained and variance increased for some web surveys. Whilst our findings support the use of PSA for web panel surveys, the reduction in bias is likely to be partial and unpredictable, consistent with the findings from market research. Our results do not support the use of volunteer web panels to generate unbiased population health estimates.
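The quintile-based propensity weighting described above can be sketched in a few lines: each web-panel unit is weighted by the ratio of the reference sample's share to the web sample's share of its propensity score quintile. This is a simplified illustration using synthetic propensity scores, not the Natsal-3 analysis or its actual score models.

```python
import random

def quintile_weights(ref_scores, web_scores):
    """One weight per web-sample unit: reference-sample share divided by
    web-sample share of that unit's pooled propensity score quintile."""
    pooled = sorted(ref_scores + web_scores)
    n = len(pooled)
    cuts = [pooled[n * k // 5] for k in (1, 2, 3, 4)]  # quintile cutpoints
    def q(s):
        return sum(s >= c for c in cuts)  # quintile index 0..4
    ref_share = [0.0] * 5
    web_share = [0.0] * 5
    for s in ref_scores:
        ref_share[q(s)] += 1 / len(ref_scores)
    for s in web_scores:
        web_share[q(s)] += 1 / len(web_scores)
    return [ref_share[q(s)] / web_share[q(s)] for s in web_scores]

# Synthetic scores: the web panel is deliberately skewed toward higher
# propensity values, mimicking volunteer self-selection.
random.seed(1)
ref = [random.betavariate(2, 2) for _ in range(500)]
web = [random.betavariate(3, 2) for _ in range(500)]
weights = quintile_weights(ref, web)
```

A useful invariant of this scheme is that the weights sum to the web sample size, so the weighted web sample matches the reference sample's quintile distribution without changing its total size.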


Subject(s)
Health Behavior , Sexual Behavior , Female , Health Surveys , Humans , Male , Propensity Score , Selection Bias
16.
Surv Pract ; 12(1)2019.
Article in English | MEDLINE | ID: mdl-31867145

ABSTRACT

This study examines the two-year follow-up (data collected in 2016 at modal age 21/22) of an original mixed-mode longitudinal survey experiment (data collected at modal age 19/20 in 2014). The study compares participant retention in the experimental conditions to retention in the standard Monitoring the Future (MTF) control condition (participants who completed an in-school baseline survey in 12th grade in 2012 or 2013 and were selected to participate in the first follow-up survey by mail in 2014, N=2,451). A supplementary sample of participants who completed the 12th grade baseline survey in 2012 or 2013 but were not selected to participate in the main MTF follow-up (N=4,950) was recruited and randomly assigned to one of three experimental conditions: 1: Mail Push, 2: Web Push, 3: Web Push + Email in 2014 and again in 2016. Results from the first experiment indicated that Condition 3 (Web Push + Email) was promising based on similar response rates and lower costs (Patrick et al. 2018). The current study examines how experimental condition and type of 2014 response were associated with response in 2016, the extent to which response mode and device type changed from 2014 to 2016, and cumulative cost comparisons across conditions. Results indicated that responding via web in 2014 was associated with greater odds of participation again in 2016 regardless of condition; respondents tended to respond in the same mode, although the "push" condition did move respondents toward web over paper; device type varied between waves; and the cumulative cost savings of Web Push + Email grew larger compared to the MTF Control. The web push strategy is therefore promising for maintaining respondent engagement while reducing cost.

17.
Public Opin Q ; 83(Suppl 1): 210-235, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31337924

ABSTRACT

The rising penetration of smartphones now gives researchers the chance to collect data from smartphone users through passive mobile data collection via apps. Examples of passively collected data include geolocation, physical movements, online behavior and browser history, and app usage. However, to passively collect data from smartphones, participants need to agree to download a research app to their smartphone. This leads to concerns about nonconsent and nonparticipation. In the current study, we assess the circumstances under which smartphone users are willing to participate in passive mobile data collection. We surveyed 1,947 members of a German nonprobability online panel who own a smartphone using vignettes that described hypothetical studies where data are automatically collected by a research app on a participant's smartphone. The vignettes varied the levels of several dimensions of the hypothetical study, and respondents were asked to rate their willingness to participate in such a study. Willingness to participate in passive mobile data collection is strongly influenced by the incentive promised for study participation but also by other study characteristics (sponsor, duration of data collection period, option to switch off the app) as well as respondent characteristics (privacy and security concerns, smartphone experience).

18.
Public Opin Q ; 83(Suppl 1): 289-308, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31337925

ABSTRACT

Numerous surveys link interview data to administrative records, conditional on respondent consent, in order to explore new and innovative research questions. Optimizing the linkage consent rate is a critical step toward realizing the scientific advantages of record linkage and minimizing the risk of linkage consent bias. Linkage consent rates have been shown to be particularly sensitive to certain design features, such as where the consent question is placed in the questionnaire and how the question is framed. However, the interaction of these design features and their relative contributions to the linkage consent rate have never been jointly studied, raising the practical question of which design feature (or combination of features) should be prioritized from a consent rate perspective. We address this knowledge gap by reporting the results of a placement and framing experiment embedded within separate telephone and Web surveys. We find a significant interaction between placement and framing of the linkage consent question on the consent rate. The effect of placement was larger than the effect of framing in both surveys, and the effect of framing was only evident in the Web survey when the consent question was placed at the end of the questionnaire. Both design features had negligible impact on linkage consent bias for a series of administrative variables available for consenters and non-consenters. We conclude this research note with guidance on the optimal administration of the linkage consent question.

19.
Soc Sci Res ; 82: 113-125, 2019 08.
Article in English | MEDLINE | ID: mdl-31300072

ABSTRACT

Social processes that change quickly are difficult to study, because they require frequent survey measurement. Weekly, daily, or even hourly measurement may be needed depending on the topic. With more frequent measurement comes the prospect of more complex patterns of missing data. The mechanisms creating the missing data may be varied, ranging from technical issues such as lack of an Internet connection to refusal to complete a requested survey. We examine one approach to mitigating the damage of these missing data - a follow-up or closeout interview that is completed after the frequent measurement. The Relationship Dynamics and Social Life (RDSL) study used this approach. The study asked women weekly about their attitudes and behaviors related to sexual relationships and pregnancy. The surveys were carried out for 130 weeks and concluded with a closeout interview. We explore the patterns of missing data in the RDSL study and then examine associations between the data collected in the closeout survey and key variables collected in the weekly survey. We then assess the extent to which data from the closeout survey are useful in repairing the damage caused by missing data.
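The repair strategy described above, using a closeout interview to patch weeks the weekly survey missed, can be sketched as follows. The weekly series, the binary variable, and the closeout recall value are all invented for illustration; the RDSL analysis is more involved.

```python
# Weekly binary reports over 7 weeks; None marks a missed weekly survey.
weekly = [1, None, 0, None, None, 1, 0]

# Retrospective answer from the closeout interview covering the missed
# weeks (assumed here to be a single recalled value).
closeout_recall = 1

def repaired_series(series, fill_value):
    """Fill missed weeks with the closeout recall value."""
    return [fill_value if w is None else w for w in series]

def observed_mean(series):
    """Complete-case mean: ignores missed weeks entirely."""
    obs = [w for w in series if w is not None]
    return sum(obs) / len(obs)

complete_case = observed_mean(weekly)
repaired = repaired_series(weekly, closeout_recall)
repaired_mean = sum(repaired) / len(repaired)

print(f"complete-case mean: {complete_case:.2f}")
print(f"repaired mean:      {repaired_mean:.2f}")
```

The gap between the two means is the kind of damage the paper evaluates: when missed weeks differ systematically from observed ones, complete-case estimates are biased, and closeout data can partially correct them.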


Subject(s)
Data Accuracy , Data Collection/standards , Follow-Up Studies , Interviews as Topic/standards , Longitudinal Studies , Research Design/standards , Adult , Aged , Aged, 80 and over , Data Collection/statistics & numerical data , Female , Humans , Interviews as Topic/statistics & numerical data , Male , Middle Aged , Research Design/statistics & numerical data
20.
Epidemiology ; 30(4): 597-608, 2019 07.
Article in English | MEDLINE | ID: mdl-31045611

ABSTRACT

BACKGROUND: The All of Us Research Program is building a national longitudinal cohort and collecting data from multiple information sources (e.g., biospecimens, electronic health records, and mobile/wearable technologies) to advance precision medicine. Participant-provided information, collected via surveys, will complement and augment these information sources. We report the process used to develop and refine the initial three surveys for this program. METHODS: The All of Us survey development process included: (1) prioritization of domains for scientific needs, (2) examination of existing validated instruments, (3) content creation, (4) evaluation and refinement via cognitive interviews and online testing, (5) content review by key stakeholders, and (6) launch in the All of Us electronic participant portal. All content was translated into Spanish. RESULTS: We conducted cognitive interviews in English and Spanish with 169 participants, and 573 individuals completed online testing. Feedback led to over 40 item content changes. Lessons learned included: (1) validated survey instruments performed well in diverse populations reflective of All of Us; (2) parallel evaluation of multiple languages can ensure optimal survey deployment; (3) recruitment challenges in diverse populations required multiple strategies; and (4) key stakeholders improved integration of surveys into larger Program context. CONCLUSIONS: This efficient, iterative process led to successful testing, refinement, and launch of three All of Us surveys. Reuse of All of Us surveys, available at http://researchallofus.org, may facilitate large consortia targeting diverse populations in English and Spanish to capture participant-provided information to supplement other data, such as genetic, physical measurements, or data from electronic health records.


Subject(s)
Health Surveys/methods , Precision Medicine , Adolescent , Adult , Aged , Aged, 80 and over , Factor Analysis, Statistical , Female , Humans , Longitudinal Studies , Male , Middle Aged , Pilot Projects , Qualitative Research , Translations , United States , Young Adult