Results 1 - 17 of 17
1.
Pain Manag Nurs ; 25(3): e214-e222, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38431504

ABSTRACT

PURPOSE: To assess the matching and content validity of a pain quality pictogram tool with a Hmong community. DESIGN: A Qualtrics survey was administered to two groups of participants. METHODS: Sixty Hmong participants (n = 49 limited English proficiency (LEP) and bilingual Hmong community members in group 1; n = 11 bilingual Hmong healthcare practitioners in group 2) participated in this study. Hmong community members in group 1 were asked to identify the pain pictogram that best matched a pre-recorded Hmong pain quality phrase. The practitioners in group 2 were asked to evaluate how well each pain pictogram represented the pre-recorded Hmong pain quality phrase it was intended to measure. To assess matching, we measured agreement between the pain concept in the phrase and the pictogram intended to represent it, using group 1. A content validity index (CVI) was calculated to assess the content validity of the tool using group 2. RESULTS: Among the community participants, 8 of the 15 pictograms were matched with the intended phrase almost perfectly, and 3 were matched by a substantial majority. There were no differences in matching by participant gender or language proficiency. Among practitioners, 11 of 15 pain pictograms met the CVI threshold of 0.70 for all three dimensions (i.e., representativeness, relevance, and comprehension). CONCLUSION: Findings support including most of the pain pictograms in the tool but suggest specific areas for improvement. CLINICAL IMPLICATIONS: Findings provide insights for redesigning the selected pain pictogram tool to be used in clinical settings with LEP Hmong patients.
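As a concrete illustration of the validation step: the item-level content validity index (I-CVI) is conventionally computed as the proportion of experts rating an item favorably (e.g., 3 or 4 on a 4-point scale). The sketch below assumes hypothetical ratings from 11 practitioners on a single dimension; the pictogram labels and scores are invented, not taken from the study.

```python
# Hypothetical item-level CVI (I-CVI): the proportion of experts
# rating an item 3 or 4 on a 4-point scale.
THRESHOLD = 0.70  # acceptance threshold used in the study

# Invented ratings from 11 practitioners for two illustrative pictograms.
ratings = {
    "pictogram_A": [4, 4, 3, 4, 2, 4, 3, 4, 4, 3, 4],
    "pictogram_B": [3, 2, 4, 3, 2, 3, 4, 2, 3, 2, 3],
}

for item, scores in ratings.items():
    i_cvi = sum(s >= 3 for s in scores) / len(scores)
    verdict = "retain" if i_cvi >= THRESHOLD else "revise"
    print(f"{item}: I-CVI = {i_cvi:.2f} -> {verdict}")
```

An index of this kind would be computed separately for each of the three dimensions the practitioners evaluated (representativeness, relevance, and comprehension).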


Subject(s)
Limited English Proficiency , Multilingualism , Pain Measurement , Psychometrics , Humans , Male , Female , Adult , Psychometrics/instrumentation , Psychometrics/methods , Psychometrics/standards , Surveys and Questionnaires , Middle Aged , Pain Measurement/methods , Pain Measurement/standards , Reproducibility of Results , Health Personnel/statistics & numerical data , Health Personnel/psychology , Pain/psychology
2.
Public Opin Q ; 87(Suppl 1): 480-506, 2023.
Article in English | MEDLINE | ID: mdl-37705920

ABSTRACT

Interviewers' postinterview evaluations of respondents' performance (IEPs) are paradata, used to describe the quality of the data obtained from respondents. IEPs are driven by a combination of factors, including respondents' and interviewers' sociodemographic characteristics and what actually transpires during the interview. However, relatively few studies examine how IEPs are associated with features of the response process, including facets of the interviewer-respondent interaction and patterns of responding that index data quality. We examine whether features of the response process (various respondents' behaviors and response quality indicators) are associated with IEPs in a survey with a diverse set of respondents focused on barriers and facilitators to participating in medical research. We also examine whether there are differences in IEPs across respondents' and interviewers' sociodemographic characteristics. Our results show that both respondents' behaviors and response quality indicators predict IEPs, indicating that IEPs reflect what transpires in the interview. In addition, interviewers appear to approach the task of evaluating respondents with differing frameworks, as evidenced by the variation in IEPs attributable to interviewers and associations between IEPs and interviewers' gender. Further, IEPs were associated with respondents' education and ethnoracial identity, net of respondents' behaviors, response quality indicators, and sociodemographic characteristics of respondents and interviewers. Future research should continue to build on studies that examine the correlates of IEPs to better inform whether, when, and how to use IEPs as paradata about the quality of the data obtained.
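One way to quantify "variation in IEPs attributable to interviewers" is the intraclass correlation (ICC) from a random-intercept model. A minimal sketch on simulated data follows; the variable names and magnitudes are hypothetical, not drawn from the study.

```python
# Hypothetical data: 30 interviewers, 25 evaluations each.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_int, per_int = 30, 25
df = pd.DataFrame({"interviewer": np.repeat(np.arange(n_int), per_int)})
intercepts = rng.normal(0, 0.5, n_int)          # interviewer-level shifts
df["iep"] = 3 + intercepts[df["interviewer"]] + rng.normal(0, 1, len(df))

res = smf.mixedlm("iep ~ 1", df, groups="interviewer").fit()
var_between = res.cov_re.iloc[0, 0]  # interviewer variance component
var_within = res.scale               # residual variance
print(f"ICC = {var_between / (var_between + var_within):.2f}")
```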

3.
Res Social Adm Pharm ; 18(2): 2335-2344, 2022 02.
Article in English | MEDLINE | ID: mdl-34253471

ABSTRACT

Agree-disagree (AD) or Likert questions (e.g., "I am extremely satisfied: strongly agree … strongly disagree") are among the most frequently used response formats to measure attitudes and opinions in the social and medical sciences. This review and research synthesis focuses on the measurement properties and potential limitations of AD questions. The research leads us to advocate for an alternative questioning strategy in which items are written to directly ask about their underlying response dimensions using response categories tailored to match the response dimension, which we refer to as item-specific (IS) (e.g., "How satisfied are you: not at all … extremely"). In this review we: 1) synthesize past research comparing data quality for AD and IS questions; 2) present conceptual models of and review research supporting respondents' cognitive processing of AD and IS questions; and 3) provide an overview of question characteristics that frequently differ between AD and IS questions and may affect respondents' cognitive processing and data quality. Although experimental studies directly comparing AD and IS questions yield some mixed results, more studies find IS questions are associated with desirable data quality outcomes (e.g., validity and reliability) and AD questions are associated with undesirable outcomes (e.g., acquiescence and response effects). Based on available research, models of cognitive processing, and a review of question characteristics, we recommend IS questions over AD questions for most purposes. For researchers considering the reuse of previously administered AD questions and instruments, we discuss the challenges of translating questions from AD to IS response formats.


Subject(s)
Attitude , Humans , Reproducibility of Results
4.
Int J Soc Res Methodol ; 24(2): 181-202, 2021.
Article in English | MEDLINE | ID: mdl-34744481

ABSTRACT

Conversation analysts have described the formation of actions and the sequential organization of talk in a wide variety of contexts, and conversation analysis (CA) offers resources that can be used to study interaction in the interview. Conversational practices are relevant before and during the survey interview: First, the process of recruiting sample members to become respondents provides a site in which an analysis of actions can be applied with a measurable outcome, the effects on participation. Second, once the interview and the task of measurement begin, conversational practices intersect with, and sometimes disrupt, the "paradigmatic" question-answer sequence, complicating our notions of what it means to "measure" a construct in interaction. CA provides insights into interaction in both of these tasks. This paper selectively reviews work that uses conversation analysis to understand the survey interview and offers some new applications (e.g., examining answers to yes-no questions).

5.
Field methods ; 32(3): 253-273, 2020 Aug 01.
Article in English | MEDLINE | ID: mdl-34290568

ABSTRACT

This study describes a method for collecting data from non-literate, non-English-speaking populations. Our audio computer-assisted self-interview (ACASI) instrument with color-labeled response categories was designed for use with the assistance of a helper. The study included 30 dyads of non-literate older Hmong respondents and family helpers answering questions about health. Analysis of video recordings identified respondents' problems and helpers' strategies for addressing these problems. Seven dyads displayed the paradigmatic question-answer sequence for all items, while 23 departed from the paradigmatic sequence at least once. Reports and pauses were the most common signs of problems displayed by respondents. Paraphrasing questions or response categories and providing examples were the most common helper strategies. Future research could assess the impact of helpers' strategies on data quality.

6.
J Gerontol B Psychol Sci Soc Sci ; 74(7): 1213-1221, 2019 09 15.
Article in English | MEDLINE | ID: mdl-29220523

ABSTRACT

OBJECTIVES: Recent research indicates that survey interviewers' ratings of respondents' health (IRH) may provide supplementary health information about respondents in surveys of older adults. Although IRH is a potentially promising measure of health to include in surveys, our understanding of the factors contributing to IRH remains incomplete. METHODS: We use data from the 2011 face-to-face wave of the Wisconsin Longitudinal Study, a study of older adults from the Wisconsin high school class of 1957 and their selected siblings. We first examine whether a range of factors predict IRH: respondents' characteristics that interviewers learn about and observe as respondents answer survey questions, interviewers' evaluations of some of what they observe, and interviewers' characteristics. We then examine the role of IRH, respondents' self-rated health (SRH), and associated factors in predicting mortality over a 3-year follow-up. RESULTS: As in prior studies, we find that IRH is associated with respondents' characteristics. In addition, this study is the first to document how IRH is associated with both interviewers' evaluations of respondents and interviewers' characteristics. Furthermore, we find that the association between IRH and the strong criterion of mortality remains after controlling for respondents' characteristics and interviewers' evaluations of respondents. DISCUSSION: We propose that researchers incorporate IRH in surveys of older adults as a cost-effective, easily implemented, supplementary measure of health.
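A minimal sketch of the kind of mortality analysis described, i.e., testing whether IRH predicts death over follow-up net of SRH, is shown below on simulated data. The variable names, coding, and effect sizes are hypothetical, and the actual WLS models include additional controls.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "irh": rng.integers(1, 6, n),  # interviewer-rated health, 1-5
    "srh": rng.integers(1, 6, n),  # self-rated health, 1-5
})
# Simulate mortality risk that rises as either health rating falls.
logit = -1.5 - 0.4 * (df["irh"] - 3) - 0.2 * (df["srh"] - 3)
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["irh", "srh"]])
print(sm.Logit(df["died"], X).fit(disp=False).summary())
```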


Subject(s)
Diagnostic Self Evaluation , Health Status , Health Surveys/statistics & numerical data , Mortality , Observation , Female , Humans , Longitudinal Studies , Male , Middle Aged , Wisconsin/epidemiology
7.
J Surv Stat Methodol ; 6(1): 122-148, 2018 Mar.
Article in English | MEDLINE | ID: mdl-31032373

ABSTRACT

Although researchers have used phone surveys for decades, the lack of an accurate picture of the call opening reduces our ability to train interviewers to succeed. Sample members decide quickly whether to participate. We predict participation using the earliest moments of the call; to do this, we analyze matched pairs of acceptances and declinations from the Wisconsin Longitudinal Study using a case-control design and conditional logistic regression. We focus on two components of the first speaking turns: acoustic-prosodic features and the interviewer's actions. The sample member's "hello" is external to the causal processes within the call and may carry information about the propensity to respond. As predicted by Pillet-Shore (2012), we find that when the pitch span of the sample member's "hello" is greater, the odds of participation are higher; but in contradiction to her prediction, the (less reliably measured) pitch pattern of the greeting does not predict participation. The structure of actions in the interviewer's first turn has a large impact. The large majority of calls in our analysis begin with either an "efficient" or a "canonical" turn. In an efficient first turn, the interviewer delays identifying themselves (and thereby suggesting the purpose of the call) until they are sure they are speaking to the sample member, with the resulting efficiency of introducing themselves only once. In a canonical turn, the interviewer introduces themselves and asks to speak to the sample member, but risks having to introduce themselves twice if the answerer is not the sample member. The odds of participation are substantially and significantly lower for an efficient turn than for a canonical turn. It appears that how interviewers handle identification in their first turn has consequences for participation; an analysis of actions could facilitate experiments to design first interviewer turns for different target populations, study designs, and calling technologies.
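For readers unfamiliar with the design: conditional logistic regression conditions out a fixed effect for each matched pair, so only within-pair contrasts identify the coefficients. Below is a minimal sketch on simulated acceptance/declination pairs; the predictors and effect sizes are hypothetical stand-ins for the paper's acoustic-prosodic and action measures.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
rows = []
for pair in range(100):        # one acceptance + one declination per pair
    for accepted in (1, 0):
        rows.append({
            "pair": pair,
            "accepted": accepted,
            # simulate a wider "hello" pitch span among acceptances
            "pitch_span": rng.normal(1.0 + 0.5 * accepted, 0.5),
            "efficient_turn": int(rng.integers(0, 2)),
        })
df = pd.DataFrame(rows)

# The pair identifier is conditioned out, so estimates come only from
# within-pair differences between acceptances and declinations.
model = ConditionalLogit(
    df["accepted"],
    df[["pitch_span", "efficient_turn"]],
    groups=df["pair"],
)
print(model.fit().summary())
```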

8.
J Gerontol B Psychol Sci Soc Sci ; 73(1): 124-133, 2017 12 15.
Article in English | MEDLINE | ID: mdl-28444239

ABSTRACT

Objective: To test the feasibility of collecting and integrating data on the gut microbiome into one of the most comprehensive longitudinal studies of aging and health, the Wisconsin Longitudinal Study (WLS). The long-term goal of this integration is to clarify the contribution of social conditions in shaping the composition of the gut microbiota late in life. Research on the microbiome, which is considered as important to human health as the human genome, has been hindered by human studies with nonrandomly selected samples and with limited data on social conditions over the life course. Methods: No existing population-based longitudinal study had collected fecal specimens. Consequently, we created an in-person protocol to collect stool specimens from a subgroup of WLS participants. Results: We collected 429 stool specimens, yielding a 74% response rate and one of the largest human samples to date. Discussion: The addition of data on the gut microbiome to the WLS, and to other population-based longitudinal studies of aging, is feasible under the right conditions and can generate innovative research on the relationship between social conditions and the gut microbiome.


Subject(s)
Gastrointestinal Microbiome , Social Conditions , Adolescent , Adult , Aged , Aging , Feces/microbiology , Female , Humans , Longitudinal Studies , Male , Middle Aged , Pilot Projects , Research Design , Wisconsin , Young Adult
9.
Sociol Methodol ; 46(1): 1-38, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27867231

ABSTRACT

"Rapport" has been used to refer to a range of positive psychological features of an interaction -- including a situated sense of connection or affiliation between interactional partners, comfort, willingness to disclose or share sensitive information, motivation to please, or empathy. Rapport could potentially benefit survey participation and response quality by increasing respondents' motivation to participate, disclose, or provide accurate information. Rapport could also harm data quality if motivation to ingratiate or affiliate caused respondents to suppress undesirable information. Some previous research suggests that motives elicited when rapport is high conflict with the goals of standardized interviewing. We examine rapport as an interactional phenomenon, attending to both the content and structure of talk. Using questions about end-of-life planning in the 2003-2005 wave of the Wisconsin Longitudinal Study, we observe that rapport consists of behaviors that can be characterized as dimensions of responsiveness by interviewers and engagement by respondents. We identify and describe types of responsiveness and engagement in selected question-answer sequences and then devise a coding scheme to examine their analytic potential with respect to the criterion of future study participation. Our analysis suggests that responsive and engaged behaviors vary with respect to the goals of standardization-some conflict with these goals, while others complement them.

10.
Qual Life Res ; 25(8): 2117-21, 2016 08.
Article in English | MEDLINE | ID: mdl-26911155

ABSTRACT

PURPOSE: Following calls for replication of research studies, this study documents the results of two studies that experimentally examine the impact of response option order on self-rated health (SRH). METHODS: Two studies from an online panel survey examined how the order of response options (positive to negative versus negative to positive) influences the distribution of SRH answers. RESULTS: The results of both studies indicate that the distribution of SRH varies across the experimental treatments, and mean SRH is lower (worse) when the response options start with "poor" rather than "excellent." In addition, there are differences across the two studies in the distribution of SRH and mean SRH when the response options begin with "excellent," but not when the response options begin with "poor." CONCLUSION: The similarities in the general findings across the two studies strengthen the claim that SRH will be lower (worse) when the response options are ordered beginning with "poor" rather than "excellent" in online self-administered questionnaires, with implications for the validity of SRH. The slight differences in the administration of the seemingly identical studies further strengthen the claim and also serve as a reminder of the inherent variability of a single permutation of any given study.
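A difference in the distribution of a categorical SRH item across the two response-option orders can be tested with a chi-square test of independence. The counts below are hypothetical, chosen only to mirror the direction of the reported effect.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts per SRH category (excellent ... poor) by ordering.
excellent_first = [120, 210, 160, 70, 40]
poor_first = [90, 180, 170, 95, 65]

chi2, p, dof, expected = chi2_contingency([excellent_first, poor_first])
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```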


Subject(s)
Health Status , Adolescent , Adult , Female , Humans , Male , Middle Aged , Quality of Life , Surveys and Questionnaires , Young Adult
11.
Surv Pract ; 9(2)2016.
Article in English | MEDLINE | ID: mdl-31467801

ABSTRACT

Many surveys contain sets of questions (e.g., batteries) in which the same phrase, such as a reference period or a set of response categories, applies across the set. When formatting questions for interviewer administration, question writers often enclose these repeated phrases in parentheses to signal that interviewers have the option of reading the phrase. Little research, however, examines what impact this practice has on data quality. We explore whether the presence and use of parenthetical statements is associated with indicators of processing problems for both interviewers and respondents, including the interviewer's ability to read the question exactly as worded and the respondent's ability to answer the question without displaying problems answering (e.g., expressing uncertainty). Data are from questions about physical and mental health from 355 digitally recorded, transcribed, and interaction-coded telephone interviews. We implement a mixed-effects model with crossed random effects and nested and crossed fixed effects. The models also control for some respondent and interviewer characteristics. Findings indicate that respondents are less likely to exhibit a problem when parentheticals are read, but reading the parentheticals increases the odds (a marginally significant effect) that interviewers will make a reading error.
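A mixed-effects model with crossed random effects can be approximated in Python by treating the whole dataset as a single group and specifying interviewer and respondent variance components. The sketch below simplifies the binary outcome to a continuous problem score and uses invented data; it illustrates the model structure, not the paper's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "interviewer": rng.integers(0, 20, n),
    "respondent": rng.integers(0, 60, n),
    "parenthetical_read": rng.integers(0, 2, n),
})
int_eff = rng.normal(0, 0.1, 20)    # interviewer random effects
resp_eff = rng.normal(0, 0.1, 60)   # respondent random effects
df["problem_score"] = (
    0.3 - 0.1 * df["parenthetical_read"]
    + int_eff[df["interviewer"]]
    + resp_eff[df["respondent"]]
    + rng.normal(0, 0.3, n)
)

# Crossed random effects: one all-encompassing group plus variance
# components for interviewers and respondents.
df["all"] = 1
model = smf.mixedlm(
    "problem_score ~ parenthetical_read",
    data=df,
    groups="all",
    re_formula="0",
    vc_formula={"interviewer": "0 + C(interviewer)",
                "respondent": "0 + C(respondent)"},
)
print(model.fit().summary())
```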

12.
Qual Life Res ; 24(6): 1443-53, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25409654

ABSTRACT

OBJECTIVES: This study aims to assess the impact of response option order and question order on the distribution of responses to the self-rated health (SRH) question and the relationship between SRH and other health-related measures. METHODS: In an online panel survey, we implement a 2-by-2 between-subjects factorial experiment, manipulating two factors: (1) the order of response options ("excellent" to "poor" versus "poor" to "excellent") and (2) the position of the SRH item (either preceding or following the administration of domain-specific health items). We use chi-square difference tests, polychoric correlations, and differences in means and proportions to evaluate the effect of the experimental treatments on SRH responses and the relationship between SRH and other health measures. RESULTS: Mean SRH is higher (better health), and the proportion in "fair" or "poor" health lower, when response options are ordered from "excellent" to "poor" and SRH is presented first, compared to the other experimental treatments. Presenting SRH after domain-specific health items increases its correlation with these items, particularly when response options are ordered "excellent" to "poor." Among participants with the highest level of current health risks, SRH is worse when it is presented last rather than first. CONCLUSION: While more research on the presentation of SRH is needed across a range of surveys, we suggest that ordering response options from "poor" to "excellent" might reduce positive clustering. Given the question order effects found here, we also suggest presenting SRH before domain-specific health items to increase inter-survey comparability, as domain-specific health items will vary across surveys.
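To make the 2-by-2 design concrete, the sketch below simulates random assignment to the four cells and summarizes mean SRH by condition. The direction of the simulated effects mirrors the results described above, but all numbers are invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "option_order": rng.choice(["excellent_first", "poor_first"], n),
    "srh_position": rng.choice(["first", "last"], n),
})
# Higher (better) simulated SRH with excellent-first options and
# SRH asked first, mirroring the direction reported above.
df["srh"] = (
    3.3
    + 0.15 * (df["option_order"] == "excellent_first")
    + 0.10 * (df["srh_position"] == "first")
    + rng.normal(0, 1, n)
).round().clip(1, 5)

print(df.groupby(["option_order", "srh_position"])["srh"].mean())
```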


Subject(s)
Diagnostic Self Evaluation , Health Status , Surveys and Questionnaires , Adolescent , Adult , Female , Humans , Internet , Male , Middle Aged , Quality of Life , Risk , Self Report , United States , Young Adult
13.
Public Opin Q ; 77(1): 323-351, 2013.
Article in English | MEDLINE | ID: mdl-24976648

ABSTRACT

Previous research has proposed that the actions of sample members may provide encouraging, discouraging, or ambiguous interactional environments for interviewers soliciting participation in surveys. In our interactional model of the recruitment call that brings together the actions of interviewers and sample members, we examine features of actions that may contribute to an encouraging or discouraging environment in the opening moments of the call. Using audio recordings from the 2004 wave of the Wisconsin Longitudinal Study and an innovative design that controls for sample members' estimated propensity to participate in the survey, we analyze an extensive set of interviewers' and sample members' actions, the characteristics of those actions, and their sequential location in the interaction. We also analyze whether a sample member's subsequent actions (e.g., a question about the length of the interview or a "wh-type" question) constitute an encouraging, discouraging, or ambiguous environment within which the interviewer must produce her next action. Our case-control design allows us to analyze the consequences of actions for the outcome of the call.

14.
Public Opin Q ; 76(2): 311-325, 2012 Jul.
Article in English | MEDLINE | ID: mdl-24991062

ABSTRACT

Although previous research indicates that audio computer-assisted self-interviewing (ACASI) yields higher reports of threatening behaviors than interviewer-administered interviews, very few studies have examined the potential effect of the gender of the ACASI voice on survey reports. Because the voice in ACASI necessarily has a gender, it is important to understand whether using a voice that is perceived as male or female might further enhance the validity associated with ACASI. This study examines gender-of-voice effects for a set of questions about sensitive behaviors administered via ACASI to a sample of young adults at high risk for engaging in the behaviors. Results showed higher levels of engagement in the behaviors and more consistent reporting among males when responding to a female voice, indicating that males were potentially more accurate when reporting to the female voice. Reports by females were not influenced by the voice's gender. Our analysis adds to research on gender-of-voice effects in surveys, with important findings on measuring sensitive behaviors among young adults.

15.
Soc Sci Res ; 40(4): 1025-1036, 2011 Jul 01.
Article in English | MEDLINE | ID: mdl-21927518

ABSTRACT

The self-reported health question summarizes information about health status across several domains of health and is widely used to measure health because it predicts mortality well. We examine whether interactional behaviors produced by respondents and interviewers during the self-reported health question-answer sequence reflect complexities in the respondent's health history. We observed more problematic interactional behaviors during question-answer sequences in which respondents reported worse health. Furthermore, these behaviors were more likely to occur when there were inconsistencies in the respondent's health history, even after controlling for the respondent's answer to the self-reported health question, cognitive ability, and sociodemographic characteristics. We also found that among respondents who reported "excellent" health, and to a lesser extent among those who reported their health was "very good," problematic interactional behaviors were associated with health inconsistencies. Overall, we find evidence that the interactional behaviors exhibited during the question-answer sequence are associated with respondents' health status.

16.
Public Opin Q ; 75(5): 909-961, 2011 Dec.
Article in English | MEDLINE | ID: mdl-24970951

ABSTRACT

We begin with a look back at the field to identify themes of recent research that we expect to continue to occupy researchers in the future. As part of this overview, we characterize the themes and topics examined in research about measurement and survey questions published in Public Opinion Quarterly in the past decade. We then characterize the field more broadly by highlighting topics that we expect to continue or to grow in importance, including the relationship between survey questions and the total survey error perspective, cognitive versus interactional approaches, interviewing practices, mode and technology, visual aspects of question design, and culture. Considering avenues for future research, we advocate for a decision-oriented framework for thinking about survey questions and their characteristics. The approach we propose distinguishes among various aspects of question characteristics, including question topic, question type and response dimension, conceptualization and operationalization of the target object, question structure, question form, response categories, question implementation, and question wording. Thinking about question characteristics more systematically would allow study designs to take into account relationships among these characteristics and identify gaps in current knowledge.

17.
Am Sociol Rev ; 75(5): 791-8, 2010 Oct.
Article in English | MEDLINE | ID: mdl-21691562

ABSTRACT

We draw on conversation analytic methods and research to explicate the interactional phenomenon of requesting in general and the specific case of requesting participation in survey interviews. Recent work on survey participation has given much attention to leverage-saliency theory, but has not engaged how the key concepts of this theory are exhibited in the actual unfolding interaction of interviewers and potential respondents. We do so using digitally recorded and transcribed calls to recruit participation in the 2004 Wisconsin Longitudinal Study. We describe how potential respondents present interactional environments that are relatively discouraging or encouraging, and how, in response, interviewers may be relatively cautious or presumptive in their requesting actions. We consider how the ability of interviewers to tailor their behavior to their interactional environment can affect whether the introduction reaches the point at which a request to participate is made, the form that this request takes, and the sample person's response. Our analysis contributes to understanding how we might use insights from the analysis of interaction to increase cooperation with requests to participate in surveys.
