1.
J Surv Stat Methodol ; 11(5): 1089-1109, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38028817

ABSTRACT

Random-digit dialing (RDD) telephone surveys are challenged by declining response rates and increasing costs. Many surveys that were traditionally conducted via telephone are seeking cost-effective alternatives, such as address-based sampling (ABS) with self-administered web or mail questionnaires. At a fraction of the cost of both telephone and ABS surveys, opt-in web panels are an attractive alternative. The 2019-2020 National Alcohol Survey (NAS) employed three methods: (1) an RDD telephone survey (traditional NAS method); (2) an ABS push-to-web survey; and (3) an opt-in web panel. The study reported here evaluated differences in the three data-collection methods, which we will refer to as "mode effects," on alcohol consumption and health topics. To evaluate mode effects, multivariate regression models were developed predicting these characteristics, and the presence of a mode effect on each outcome was determined by the significance of the three-level effect (RDD-telephone, ABS-web, opt-in web panel) in each model. Those results were then used to adjust for mode effects and produce a "telephone-equivalent" estimate for the ABS and panel data sources. The study found that ABS-web and RDD were similar for most estimates but exhibited differences for sensitive questions including getting drunk and experiencing depression. The opt-in web panel exhibited more differences between it and the other two survey modes. One notable example is the reporting of drinking alcohol at least 3-4 times per week, which was 21 percent for RDD-phone, 24 percent for ABS-web, and 34 percent for opt-in web panel. The regression model adjusts for mode effects, improving comparability with past surveys conducted by telephone; however, the models result in higher variance of the estimates. This method of adjusting for mode effects has broad applications to mode and sample transitions throughout the survey research industry.
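The mode-effect adjustment described above can be sketched as follows: fit a model that includes a three-level mode indicator alongside covariates, then predict every respondent's outcome with the mode dummies set to the RDD-telephone reference level. This is a minimal toy illustration with simulated data, not the study's actual model or coefficients; all variable names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: three collection modes coded as dummies relative to
# the RDD-telephone reference, plus one covariate (e.g. age).
n = 900
mode = np.repeat([0, 1, 2], n // 3)   # 0=RDD-phone, 1=ABS-web, 2=opt-in panel
age = rng.uniform(18, 70, n)

# Simulated outcome with mode effects (panel respondents report more).
y = 0.2 + 0.01 * age + 0.03 * (mode == 1) + 0.13 * (mode == 2) \
    + rng.normal(0, 0.1, n)

# Design matrix: intercept, ABS-web dummy, opt-in-panel dummy, age.
X = np.column_stack([np.ones(n), mode == 1, mode == 2, age])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Telephone-equivalent" estimate: predict each respondent's outcome with
# the mode dummies zeroed out, as if everyone were interviewed by RDD phone.
X_phone = X.copy()
X_phone[:, 1:3] = 0
adjusted = (X_phone @ beta).mean()
raw = y.mean()
print(f"raw mean: {raw:.3f}, telephone-equivalent mean: {adjusted:.3f}")
```

The adjusted mean strips out the estimated mode effects, which is what restores comparability with earlier telephone surveys; the extra model-based step is also why the abstract notes higher variance in the adjusted estimates.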

2.
3.
Res Social Adm Pharm ; 17(5): 921-929, 2021 05.
Article in English | MEDLINE | ID: mdl-32800458

ABSTRACT

Population-based surveys have long been a key tool for health researchers, policy makers, and program managers. The addition of bio-measures, including physical measures and specimen collection, to self-reported health and health behaviors can increase the value of the research for the health sciences. At the same time, these bio-measures are likely to increase the perceived burden and intrusiveness to the respondent. Relatively little research has been reported on respondents' willingness to participate in surveys that involve physical measures and specimen collection, or on whether there is any associated non-response bias. This paper explores the willingness of respondents to participate in surveys that involve physical measures and biomarkers. A Census-balanced sample of nearly 2000 adults from a national mobile panel of persons residing in the U.S. was interviewed, and willingness to participate in six specific bio-measures was assessed. The survey finds that respondents' willingness to participate is highly correlated across these specific bio-measures. This suggests there is a general propensity towards (or against) bio-measures among potential respondents, despite some differences in willingness to participate in the more sensitive, intrusive, or burdensome biomarkers. The study also finds that this general propensity is correlated with a number of key measures of health and illness, suggesting that the inclusion of biomarkers in health surveys may introduce bias in key measures that needs to be balanced against the value of the additional information.
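The finding that willingness is correlated across bio-measures, consistent with a single underlying propensity, can be illustrated with a toy simulation. Nothing here comes from the study itself: the latent-propensity structure, thresholds, and sample are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model: one latent propensity per respondent drives willingness
# to participate in six bio-measures; more intrusive measures have a higher
# threshold, so fewer respondents agree to them.
n, k = 2000, 6
propensity = rng.normal(0, 1, n)        # shared latent propensity
burden = np.linspace(-0.5, 1.0, k)      # increasing intrusiveness thresholds
willing = (propensity[:, None] + rng.normal(0, 1, (n, k))) > burden  # 0/1

rates = willing.mean(axis=0)            # willingness falls with intrusiveness
corr = np.corrcoef(willing.T)           # pairwise correlations among measures
off_diag = corr[~np.eye(k, dtype=bool)]
print(f"willingness rates: {np.round(rates, 2)}")
print(f"mean pairwise correlation: {off_diag.mean():.2f}")
```

Because every item shares the same latent propensity, all pairwise correlations come out positive even though agreement rates differ by item, mirroring the pattern the abstract reports.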


Subject(s)
Health Behavior , Adult , Biomarkers , Health Surveys , Humans , Self Report , Surveys and Questionnaires
4.
Front Psychol ; 6: 1578, 2015.
Article in English | MEDLINE | ID: mdl-26539138

ABSTRACT

This study investigates how an onscreen virtual agent's dialog capability and facial animation affect survey respondents' comprehension and engagement in "face-to-face" interviews, using questions from US government surveys whose results have far-reaching impact on national policies. In the study, 73 laboratory participants were randomly assigned to one of four interviewing conditions, in which the virtual agent had either high or low dialog capability (implemented through a Wizard of Oz design) and high or low facial animation, based on motion capture from a human interviewer. Respondents, whose faces were visible to the Wizard (and video-recorded) during the interviews, answered 12 questions about housing, employment, and purchases on the basis of fictional scenarios designed to allow measurement of comprehension accuracy, defined as the fit between responses and US government definitions. Respondents answered more accurately with the high-dialog-capability agents, requesting clarification more often, particularly for ambiguous scenarios; they also treated the high-dialog-capability interviewers more socially, looking at the interviewer more and judging them to be more personal and less distant. Greater interviewer facial animation did not affect response accuracy, but it led to more displays of engagement (verbal and visual acknowledgments, and smiles) and to the virtual interviewer being rated as less natural. The pattern of results suggests that a virtual agent's dialog capability and facial animation differently affect survey respondents' experience of interviews, behavioral displays, and comprehension, and thus the accuracy of their responses. The results also suggest design considerations for building survey interviewing agents, which may differ depending on the kinds of survey questions (sensitive or not) that are asked.

5.
Am J Public Health ; 105(5): e43-50, 2015 May.
Article in English | MEDLINE | ID: mdl-25790399

ABSTRACT

OBJECTIVES: We explored changes in sexual orientation question item completion in a large statewide health survey. METHODS: We used 2003 to 2011 California Health Interview Survey data to investigate sexual orientation item nonresponse and sexual minority self-identification trends in a cross-sectional sample representing the noninstitutionalized California household population aged 18 to 70 years (n = 182 812 adults). RESULTS: Asians, Hispanics, limited-English-proficient respondents, and those interviewed in non-English languages showed the greatest declines in sexual orientation item nonresponse. Asian women, regardless of English-proficiency status, had the highest odds of item nonresponse. Spanish interviews produced more nonresponse than English interviews and Asian-language interviews produced less nonresponse when we controlled for demographic factors and survey cycle. Sexual minority self-identification increased in concert with the item nonresponse decline. CONCLUSIONS: Sexual orientation nonresponse declines and the increase in sexual minority identification suggest greater acceptability of sexual orientation assessment in surveys. Item nonresponse rate convergence among races/ethnicities, language proficiency groups, and interview languages shows that sexual orientation can be measured in surveys of diverse populations.


Subject(s)
Data Collection/statistics & numerical data , Ethnicity/statistics & numerical data , Racial Groups/statistics & numerical data , Sexual Behavior/ethnology , Adolescent , Adult , Asian/statistics & numerical data , California/epidemiology , Cross-Sectional Studies , Female , Health Surveys , Hispanic or Latino/statistics & numerical data , Humans , Male , Middle Aged , Socioeconomic Factors , White People/statistics & numerical data , Young Adult