Results 1 - 20 of 22
1.
Public Opin Q ; 87(Suppl 1): 480-506, 2023.
Article in English | MEDLINE | ID: mdl-37705920

ABSTRACT

Interviewers' postinterview evaluations of respondents' performance (IEPs) are paradata used to describe the quality of the data obtained from respondents. IEPs are driven by a combination of factors, including respondents' and interviewers' sociodemographic characteristics and what actually transpires during the interview. However, relatively few studies examine how IEPs are associated with features of the response process, including facets of the interviewer-respondent interaction and patterns of responding that index data quality. We examine whether features of the response process (various respondents' behaviors and response quality indicators) are associated with IEPs in a survey with a diverse set of respondents focused on barriers and facilitators to participating in medical research. We also examine whether there are differences in IEPs across respondents' and interviewers' sociodemographic characteristics. Our results show that both respondents' behaviors and response quality indicators predict IEPs, indicating that IEPs reflect what transpires in the interview. In addition, interviewers appear to approach the task of evaluating respondents with differing frameworks, as evidenced by the variation in IEPs attributable to interviewers and associations between IEPs and interviewers' gender. Further, IEPs were associated with respondents' education and ethnoracial identity, net of respondents' behaviors, response quality indicators, and sociodemographic characteristics of respondents and interviewers. Future research should continue to build on studies that examine the correlates of IEPs to better inform whether, when, and how to use IEPs as paradata about the quality of the data obtained.

2.
Article in English | MEDLINE | ID: mdl-36429884

ABSTRACT

Medical research literacy (MRL) is a facet of health literacy that measures a person's understanding of informed consent and other aspects of participation in medical research. While existing research on MRL is limited, there are reasons to believe MRL may be associated with a willingness to participate in medical research. We use data from a racially balanced sample of survey respondents (n = 410): (1) to analyze how MRL scores vary by respondents' socio-demographic characteristics; (2) to examine how MRL relates to respondents' expressed likelihood to participate in a clinical trial; and (3) to provide considerations on the measurement of MRL. The results indicate no differences in MRL scores by race or gender; younger (p < 0.05) and more educated (p < 0.001) individuals have significantly higher MRL scores. Further, higher MRL scores are associated with significantly lower levels of expressed likelihood to participate in a clinical trial. Additionally, the MRL scale included both true and false statements, and analyses demonstrate significant differences in how these relate to outcomes. Altogether, the results signal that further research is needed to understand MRL, how it relates to socio-demographic characteristics associated with research participation, and how it can be measured effectively.


Subject(s)
Biomedical Research , Health Literacy , Humans , Informed Consent , Surveys and Questionnaires , Clinical Trials as Topic
3.
PLoS One ; 17(8): e0272306, 2022.
Article in English | MEDLINE | ID: mdl-35939500

ABSTRACT

Acceptance of animal research by the public depends on several characteristics of the specific experimental study. In particular, acceptance decreases as potential animal pain or distress increases. Our objective in this study was to quantify the magnitude of pain/distress that university undergraduate students and faculty would find to be justifiable in animal research, and to see how that justifiability varied according to the purpose of the research or the species to which the animal belonged. We also evaluated how demographic characteristics of respondents may be associated with their opinions about justifiability. To accomplish this goal, we developed and administered a survey to students and faculty at the University of Wisconsin-Madison. Our survey employed Likert-style questions that asked them to designate the level of animal pain or distress that they felt was justifiable for each of six purposes: animal disease, human disease, basic research, human medicine, chemical testing, or cosmetic testing. These questions were asked about five groups of species: monkeys, dogs/cats, pigs/sheep, rats/mice, and small fish. We used the data to establish a purpose-specific pain/distress scale, a species-specific pain/distress scale, and a composite pain/distress scale that, for each respondent, averaged the extent of justifiable pain/distress across all purposes and species. For purpose, students were more likely to choose higher levels of pain for animal disease research, followed by human disease, basic research, human medicine, chemical testing, and cosmetic testing. Faculty were more likely to choose the same level of pain for the first four purposes, followed by lower levels of pain for chemical and cosmetic testing. For species, students were more likely to choose higher levels of pain for small fish and rats/mice (tied) and for pigs/sheep and monkeys (tied) than for dogs/cats.
For faculty, the order from most to least justifiable pain/distress was small fish, rats/mice, pigs/sheep, then dogs/cats and monkeys (the latter two tied). Interestingly, exploratory factor analysis of the pain/distress scales indicated that when it comes to justifying higher levels of pain and distress, respondents identified two distinct categories of purposes, chemical and cosmetic testing, for which respondents were less likely to justify higher levels of pain or distress as compared to other purposes; and two distinct categories of species, small fish and rats/mice, for which respondents were more likely to justify higher levels of pain/distress than other species. We found that the spread of acceptance of animal research was much smaller when survey questions included pain/distress compared to when only purpose or species were part of the question. Demographically, women, vegetarians/vegans, and respondents with no experience in animal research justified less animal pain/distress than their counterparts. Not surprisingly, a lower level of support for animal research in general was correlated with lower justifiability of pain/distress. Based on these findings, we discuss the role of animal pain/distress in regulatory considerations underlying decisions about whether to approve specific animal uses, and suggest ways to strengthen the ethical review and public acceptance of animal research.
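The scale construction this abstract describes, averaging each respondent's justifiable pain/distress ratings across purposes and species, can be sketched in a few lines. The 1-5 Likert coding, the function name, and the handling of the ratings below are illustrative assumptions, not the authors' exact procedure.

```python
from statistics import mean

PURPOSES = ["animal disease", "human disease", "basic research",
            "human medicine", "chemical testing", "cosmetic testing"]
SPECIES = ["monkeys", "dogs/cats", "pigs/sheep", "rats/mice", "small fish"]

def pain_distress_scales(ratings):
    """ratings: dict mapping (purpose, species) -> Likert rating (assumed 1-5)
    of the highest level of pain/distress the respondent finds justifiable.
    Returns the purpose-specific scale, the species-specific scale, and the
    composite scale (the mean over all purpose-by-species ratings)."""
    purpose_scale = {p: mean(ratings[(p, s)] for s in SPECIES) for p in PURPOSES}
    species_scale = {s: mean(ratings[(p, s)] for p in PURPOSES) for s in SPECIES}
    composite = mean(ratings.values())
    return purpose_scale, species_scale, composite
```

Each respondent would get one composite score plus one score per purpose and per species, which is what allows the purpose and species orderings reported above to be compared across groups.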


Subject(s)
Animal Experimentation , Animals , Dogs , Faculty , Female , Humans , Judgment , Mice , Pain/veterinary , Rats , Sheep , Students , Surveys and Questionnaires , Swine , Universities
4.
Res Social Adm Pharm ; 18(2): 2335-2344, 2022 02.
Article in English | MEDLINE | ID: mdl-34253471

ABSTRACT

Agree-disagree (AD) or Likert questions (e.g., "I am extremely satisfied: strongly agree … strongly disagree") are among the most frequently used response formats to measure attitudes and opinions in the social and medical sciences. This review and research synthesis focuses on the measurement properties and potential limitations of AD questions. The research leads us to advocate for an alternative questioning strategy in which items are written to directly ask about their underlying response dimensions using response categories tailored to match the response dimension, which we refer to as item-specific (IS) (e.g., "How satisfied are you: not at all … extremely"). In this review we: 1) synthesize past research comparing data quality for AD and IS questions; 2) present conceptual models of and review research supporting respondents' cognitive processing of AD and IS questions; and 3) provide an overview of question characteristics that frequently differ between AD and IS questions and may affect respondents' cognitive processing and data quality. Although experimental studies directly comparing AD and IS questions yield some mixed results, more studies find IS questions are associated with desirable data quality outcomes (e.g., validity and reliability) and AD questions are associated with undesirable outcomes (e.g., acquiescence, response effects, etc.). Based on available research, models of cognitive processing, and a review of question characteristics, we recommend IS questions over AD questions for most purposes. For researchers considering the use of previously administered AD questions and instruments, we discuss the challenges of translating questions from AD to IS response formats.


Subject(s)
Attitude , Humans , Reproducibility of Results
5.
Eval Health Prof ; 44(3): 235-244, 2021 09.
Article in English | MEDLINE | ID: mdl-32924566

ABSTRACT

While collecting high quality data from physicians is critical, response rates for physician surveys are frequently low. A proven method for increasing response in mail surveys is to provide a small, prepaid monetary incentive in the initial mailing. More recently, researchers have begun experimenting with adding a second cash incentive in a follow-up contact in order to increase participation among more reluctant respondents. To assess the effects of sequential incentives on response rates, data quality, sample representativeness, and costs, physicians (N = 1,500) were randomly assigned to treatments that crossed the amount of a first ($5 or $10) and second ($0, $5, or $10) incentive to form the following groups: Group $5/$5; Group $5/$10; Group $10/$0; Group $10/$5; and Group $10/$10. Overall, second incentives were associated with higher response rates and lower costs per completed survey, and while they had no effect on item nonresponse, they increased sample representativeness.
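For readers weighing similar two-stage incentive designs, the cost-per-complete comparison described above can be sketched as simple arithmetic. Everything below (the response rates, the per-piece mailing cost, and the function itself) is a hypothetical illustration, not figures from the study.

```python
def cost_per_complete(n, first, second, rate1, rate2_among_rest, mail_cost=2.0):
    """Cost per completed survey for a two-stage prepaid-incentive design.
    n: physicians mailed; first/second: incentive amounts in dollars;
    rate1: response rate to the initial mailing;
    rate2_among_rest: response rate to the follow-up among nonrespondents.
    Assumes every initial nonrespondent receives a follow-up mailing
    (with the second incentive, possibly $0), a simplification."""
    completes1 = n * rate1
    nonrespondents = n - completes1
    completes2 = nonrespondents * rate2_among_rest
    # Prepaid incentives are spent on everyone mailed, respondent or not.
    total_cost = n * (first + mail_cost) + nonrespondents * (second + mail_cost)
    return total_cost / (completes1 + completes2)

# Example: compare a $10/$0 design to a $5/$5 design under assumed rates.
single = cost_per_complete(1000, 10, 0, 0.5, 0.1)
sequential = cost_per_complete(1000, 5, 5, 0.4, 0.2)
```

Under assumptions like these, a second incentive can lower cost per complete even though it adds mailing and incentive spending, which is the pattern the abstract reports.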


Subject(s)
Motivation , Physicians , Data Accuracy , Humans , Postal Service , Surveys and Questionnaires
6.
PLoS One ; 15(5): e0233204, 2020.
Article in English | MEDLINE | ID: mdl-32470025

ABSTRACT

As members of a university community that sponsors animal research, we developed a survey to improve our knowledge about factors underlying the perceived justifiability of animal research among faculty and undergraduate students. To accomplish this objective, we gathered quantitative data about their general views on animal use by humans, their specific views about the use of different species to address different categories of scientific questions, and their confidence in the translatability of animal research to humans. Students and faculty did not differ in their reported levels of concern for the human use of animals, but women reported significantly higher levels of concern than men. Among students, experience with animal research was positively correlated with less concern with animal use, and having practiced vegetarianism or veganism was associated with more concern. Gender, experience with animal research, and dietary preferences were similarly correlated with the extent of justifiability of animal use across all research purposes and species. Faculty responses resembled those for students, with the exception that justifiability varied significantly based on academic discipline: biological sciences faculty were least concerned about human use of animals and most supportive of animal research regardless of purpose or species. For both students and faculty, justifiability varied depending on research purpose or animal species. Research purposes, ranked in order of justifiability from high to low, were: animal disease, human disease, basic research, human medicine, animal production, chemical testing, and cosmetics. Justifiability by purpose was slightly lower for students than for faculty. Species justifiability for students, from high to low, was small fish, rats or mice, pigs or sheep, monkeys, and dogs or cats. Faculty order was the same except that monkeys and dogs or cats were reversed.
Finally, confidence in the translatability of animal research to our understanding of human biology and medicine was not different between students and faculty or between genders, but among faculty it was highest in biological sciences followed by physical sciences, social sciences, and then arts and humanities. Those with experience in animal research displayed the most confidence, and vegetarians/vegans displayed the least. These findings demonstrate that, although the range of views in any subcategory is large, views about animal research justifiability can vary significantly among respondent subpopulations in predictable ways. In particular, research purpose and choice of animal species are important variables for many people. This supports the claim that ensuring purpose and species are robustly integrated into research proposal reviews and approvals should be considered a best practice. We suggest that strengthening this integration beyond what is described in current regulations would better meet the justifiability criteria expressed by members of our campus community.


Subject(s)
Animal Experimentation , Attitude , Biomedical Research , Education, Medical, Undergraduate , Faculty , Students , Adult , Animals , Cats , Dogs , Female , Humans , Male , Mice , Rats , Sheep , Swine
7.
PLoS One ; 14(10): e0223375, 2019.
Article in English | MEDLINE | ID: mdl-31647851

ABSTRACT

Research using animals is controversial. To develop sound public outreach and policy about this issue, we need information about both the underlying science and people's attitudes and knowledge. To identify attitudes toward this subject at the University of Wisconsin-Madison, we developed and administered a survey to undergraduate students and faculty. The survey asked respondents about the importance of, their confidence in their knowledge about, and who they trusted to provide information on animal research. Findings indicated attitudes varied by academic discipline, especially among faculty. Faculty in the biological sciences, particularly those who had participated in an animal research project, reported the issue to be most important, and they reported greater confidence in their knowledge about pro and con arguments. Among students, being female, a vegetarian/vegan, or participating in animal research were associated with higher ratings of importance. Confidence in knowledge about regulation and its adequacy was very low across all groups except biological science faculty. Both students and faculty identified university courses and spokespersons to be the most trusted sources of information about animal research. UW-Madison has a long history of openness about animal research, which correlates with the high level of trust by students and faculty. Nevertheless, confidence in knowledge about animal research and its regulation remains limited, and both students and faculty indicated their desire to receive more information from courses and spokespersons. Based on these findings, we argue that providing robust university-wide outreach and course-based content about animal research should be considered an organizational best practice, in particular for colleges and universities.


Subject(s)
Faculty , Research , Students , Universities , Animals , Health Knowledge, Attitudes, Practice , Humans , Surveys and Questionnaires , Trust
8.
J Gerontol B Psychol Sci Soc Sci ; 74(7): 1213-1221, 2019 09 15.
Article in English | MEDLINE | ID: mdl-29220523

ABSTRACT

OBJECTIVES: Recent research indicates that survey interviewers' ratings of respondents' health (IRH) may provide supplementary health information about respondents in surveys of older adults. Although IRH is a potentially promising measure of health to include in surveys, our understanding of the factors contributing to IRH remains incomplete. METHODS: We use data from the 2011 face-to-face wave of the Wisconsin Longitudinal Study, a longitudinal study of older adults from the Wisconsin high school class of 1957 and their selected siblings. We first examine whether a range of factors predict IRH: respondents' characteristics that interviewers learn about and observe as respondents answer survey questions, interviewers' evaluations of some of what they observe, and interviewers' characteristics. We then examine the role of IRH, respondents' self-rated health (SRH), and associated factors in predicting mortality over a 3-year follow-up. RESULTS: As in prior studies, we find that IRH is associated with respondents' characteristics. In addition, this study is the first to document how IRH is associated with both interviewers' evaluations of respondents and interviewers' characteristics. Furthermore, we find that the association between IRH and the strong criterion of mortality remains after controlling for respondents' characteristics and interviewers' evaluations of respondents. DISCUSSION: We propose that researchers incorporate IRH in surveys of older adults as a cost-effective, easily implemented, and supplementary measure of health.


Subject(s)
Diagnostic Self Evaluation , Health Status , Health Surveys/statistics & numerical data , Mortality , Observation , Female , Humans , Longitudinal Studies , Male , Middle Aged , Wisconsin/epidemiology
9.
J Off Stat ; 35(2): 353-386, 2019 Jun.
Article in English | MEDLINE | ID: mdl-33542588

ABSTRACT

While scales measuring subjective constructs historically rely on agree-disagree (AD) questions, recent research demonstrates that construct-specific (CS) questions clarify underlying response dimensions that AD questions leave implicit and CS questions often yield higher measures of data quality. Given acknowledged issues with AD questions and certain established advantages of CS items, the evidence for the superiority of CS questions is more mixed than one might expect. We build on previous investigations by using cognitive interviewing to deepen understanding of AD and CS response processing and potential sources of measurement error. We randomized 64 participants to receive an AD or CS version of a scale measuring trust in medical researchers. We examine several indicators of data quality and cognitive response processing including: reliability, concurrent validity, recency, response latencies, and indicators of response processing difficulties (e.g., uncodable answers). Overall, results indicate reliability is higher for the AD scale, neither scale is more valid, and the CS scale is more susceptible to recency effects for certain questions. Results for response latencies and behavioral indicators provide evidence that the CS questions promote deeper processing. Qualitative analysis reveals five sources of difficulties with response processing that shed light on under-examined reasons why AD and CS questions can produce different results, with CS not always yielding higher measures of data quality than AD.
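The reliability comparison mentioned above is commonly operationalized with Cronbach's alpha for multi-item scales. The abstract does not name its reliability estimator, so the following is a generic illustration under that assumption, using only the standard library.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: list of lists, one inner list per scale item, with
    positions aligned by respondent. Returns Cronbach's alpha, a standard
    internal-consistency reliability estimate:
        alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    Shown for illustration; the study's actual estimator is unspecified."""
    k = len(item_scores)
    item_var = sum(pvariance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent totals
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```

Computed separately for the AD and CS versions of the trust-in-medical-researchers scale, a comparison like this is what supports the finding that reliability was higher for the AD scale.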

10.
J Surv Stat Methodol ; 6(1): 122-148, 2018 Mar.
Article in English | MEDLINE | ID: mdl-31032373

ABSTRACT

Although researchers have used phone surveys for decades, the lack of an accurate picture of the call opening reduces our ability to train interviewers to succeed. Sample members decide about participation quickly. We predict participation using the earliest moments of the call; to do this, we analyze matched pairs of acceptances and declinations from the Wisconsin Longitudinal Study using a case-control design and conditional logistic regression. We focus on components of the first speaking turns: acoustic-prosodic components and interviewer's actions. The sample member's "hello" is external to the causal processes within the call and may carry information about the propensity to respond. As predicted by Pillet-Shore (2012), we find that when the pitch span of the sample member's "hello" is greater the odds of participation are higher, but in contradiction to her prediction, the (less reliably measured) pitch pattern of the greeting does not predict participation. The structure of actions in the interviewer's first turn has a large impact. The large majority of calls in our analysis begin with either an "efficient" or "canonical" turn. In an efficient first turn, the interviewer delays identifying themselves (and thereby suggesting the purpose of the call) until they are sure they are speaking to the sample member, with the resulting efficiency that they introduce themselves only once. In a canonical turn, the interviewer introduces themselves and asks to speak to the sample member, but risks having to introduce themselves twice if the answerer is not the sample member. The odds of participation are substantially and significantly lower for an efficient turn compared to a canonical turn. It appears that how interviewers handle identification in their first turn has consequences for participation; an analysis of actions could facilitate experiments to design first interviewer turns for different target populations, study designs, and calling technologies.
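For 1:1 matched pairs like these acceptance/declination pairs, the conditional logistic likelihood reduces to a logit on within-pair covariate differences with no intercept: each pair contributes sigmoid(beta * (x_accept - x_decline)). The toy fitter below illustrates that idea with a single predictor; the data, learning rate, and gradient-ascent routine are illustrative assumptions, not the study's actual estimation procedure.

```python
import math

def fit_matched_pair_logit(diffs, steps=2000, lr=0.1):
    """Conditional logistic regression for 1:1 matched case-control pairs
    with one covariate. diffs holds x_case - x_control for each pair; the
    conditional likelihood of a pair is sigmoid(beta * diff). Fits beta by
    gradient ascent on the conditional log-likelihood (toy fitter only)."""
    beta = 0.0
    for _ in range(steps):
        # d/dbeta of sum(log sigmoid(beta*d)) = sum(d * (1 - sigmoid(beta*d)))
        grad = sum(d * (1 - 1 / (1 + math.exp(-beta * d))) for d in diffs)
        beta += lr * grad / len(diffs)
    return beta

# Hypothetical pitch-span differences (accepting minus declining "hello"):
# mostly positive differences should yield a positive beta, mirroring the
# finding that a wider pitch span predicts participation.
```

A positive fitted beta corresponds to higher odds of participation when the covariate (here, pitch span of the "hello") is larger in the accepting call of the pair.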

11.
BMC Public Health ; 17(1): 771, 2017 10 04.
Article in English | MEDLINE | ID: mdl-28978325

ABSTRACT

BACKGROUND: Self-rated health (SRH) is widely used to measure subjective health. Yet it is unclear what underlies health ratings, with implications for understanding the validity of SRH overall and across sociodemographic characteristics. We analyze participants' explanations of how they formulated their SRH answer in addition to which health factors they considered and examine group differences in these processes. METHODS: Cognitive interviews were conducted with 64 participants in a convenience quota sample crossing dimensions of race/ethnicity (white, Latino, black, American Indian), gender, age, and education. Participants rated their health then described their thoughts when answering SRH. We coded participants' answers in an inductive, iterative, and systematic process from interview transcripts, developing analytic categories (i.e., themes) and subdimensions within. We examined whether the presence of each dimension of an analytic category varied across sociodemographic groups. RESULTS: Our qualitative analysis led to the identification and classification of various subdimensions of the following analytic categories: types of health factors mentioned, valence of health factors, temporality of health factors, conditional health statements, and descriptions and definitions of health. We found differences across groups in some types of health factors mentioned (corresponding, conflicting, or novel with respect to prior research). We also documented various processes through which respondents integrate seemingly disparate health factors to formulate an answer through valence and conditional health statements. Finally, we found some evidence of sociodemographic group differences with respect to types of health factors mentioned, valence of health factors, and conditional health statements, highlighting avenues for future research.
CONCLUSION: This study provides a description of how participants rate their general health status and highlights potential differences in these processes across sociodemographic groups, helping to provide a more comprehensive understanding of how SRH functions as a measure of health.


Subject(s)
Black or African American/psychology , Diagnostic Self Evaluation , Hispanic or Latino/psychology , Indians, North American/psychology , White People/psychology , Adult , Black or African American/statistics & numerical data , Age Factors , Educational Status , Female , Hispanic or Latino/statistics & numerical data , Humans , Indians, North American/statistics & numerical data , Interviews as Topic , Male , Middle Aged , Sex Factors , United States , White People/statistics & numerical data
12.
Sociol Methodol ; 46(1): 1-38, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27867231

ABSTRACT

"Rapport" has been used to refer to a range of positive psychological features of an interaction -- including a situated sense of connection or affiliation between interactional partners, comfort, willingness to disclose or share sensitive information, motivation to please, or empathy. Rapport could potentially benefit survey participation and response quality by increasing respondents' motivation to participate, disclose, or provide accurate information. Rapport could also harm data quality if motivation to ingratiate or affiliate caused respondents to suppress undesirable information. Some previous research suggests that motives elicited when rapport is high conflict with the goals of standardized interviewing. We examine rapport as an interactional phenomenon, attending to both the content and structure of talk. Using questions about end-of-life planning in the 2003-2005 wave of the Wisconsin Longitudinal Study, we observe that rapport consists of behaviors that can be characterized as dimensions of responsiveness by interviewers and engagement by respondents. We identify and describe types of responsiveness and engagement in selected question-answer sequences and then devise a coding scheme to examine their analytic potential with respect to the criterion of future study participation. Our analysis suggests that responsive and engaged behaviors vary with respect to the goals of standardization-some conflict with these goals, while others complement them.

13.
Qual Life Res ; 25(8): 2117-21, 2016 08.
Article in English | MEDLINE | ID: mdl-26911155

ABSTRACT

PURPOSE: Following calls for replication of research studies, this study documents the results of two studies that experimentally examine the impact of response option order on self-rated health (SRH). METHODS: Two studies from an online panel survey examined how the order of response options (positive to negative versus negative to positive) influences the distribution of SRH answers. RESULTS: The results of both studies indicate that the distribution of SRH varies across the experimental treatments, and mean SRH is lower (worse) when the response options start with "poor" rather than "excellent." In addition, there are differences across the two studies in the distribution of SRH and mean SRH when the response options begin with "excellent," but not when the response options begin with "poor." CONCLUSION: The similarities in the general findings across the two studies strengthen the claim that SRH will be lower (worse) when the response options are ordered beginning with "poor" rather than "excellent" in online self-administered questionnaires, with implications for the validity of SRH. The slight differences in the administration of the seemingly identical studies further strengthen the claim and also serve as a reminder of the inherent variability of a single permutation of any given study.


Subject(s)
Health Status , Adolescent , Adult , Female , Humans , Male , Middle Aged , Quality of Life , Surveys and Questionnaires , Young Adult
14.
Surv Pract ; 9(2)2016.
Article in English | MEDLINE | ID: mdl-31467801

ABSTRACT

Many surveys contain sets of questions (e.g., batteries), in which the same phrase, such as a reference period or a set of response categories, applies across the set. When formatting questions for interviewer administration, question writers often enclose these repeated phrases in parentheses to signal that interviewers have the option of reading the phrase. Little research, however, examines what impact this practice has on data quality. We explore whether the presence and use of parenthetical statements is associated with indicators of processing problems for both interviewers and respondents, including the interviewer's ability to read the question exactly as worded, and the respondent's ability to answer the question without displaying problems answering (e.g., expressing uncertainty). Data are from questions about physical and mental health from 355 digitally recorded, transcribed, and interaction-coded telephone interviews. We implement a mixed-effects model with crossed random effects and nested and crossed fixed effects. The models also control for some respondent and interviewer characteristics. Findings indicate respondents are less likely to exhibit a problem when parentheticals are read, but reading the parentheticals increases the odds (a marginally significant effect) that interviewers will make a reading error.

15.
Qual Life Res ; 24(6): 1443-53, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25409654

ABSTRACT

OBJECTIVES: This study aims to assess the impact of response option order and question order on the distribution of responses to the self-rated health (SRH) question and the relationship between SRH and other health-related measures. METHODS: In an online panel survey, we implement a 2-by-2 between-subjects factorial experiment, manipulating the following levels of each factor: (1) order of response options ("excellent" to "poor" versus "poor" to "excellent") and (2) order of SRH item (either preceding or following the administration of domain-specific health items). We use Chi-square difference tests, polychoric correlations, and differences in means and proportions to evaluate the effect of the experimental treatments on SRH responses and the relationship between SRH and other health measures. RESULTS: Mean SRH is higher (better health) and proportion in "fair" or "poor" health lower when response options are ordered from "excellent" to "poor" and SRH is presented first compared to other experimental treatments. Presenting SRH after domain-specific health items increases its correlation with these items, particularly when response options are ordered "excellent" to "poor." Among participants with the highest level of current health risks, SRH is worse when it is presented last versus first. CONCLUSION: While more research on the presentation of SRH is needed across a range of surveys, we suggest that ordering response options from "poor" to "excellent" might reduce positive clustering. Given the question order effects found here, we suggest presenting SRH before domain-specific health items in order to increase inter-survey comparability, as domain-specific health items will vary across surveys.
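The two outcome summaries compared across treatments (mean SRH and the proportion reporting "fair" or "poor" health) can be computed as sketched below. The 1-5 numeric coding of the response labels is a common convention assumed for illustration, not necessarily the authors' coding.

```python
from statistics import mean

# Conventional 1-5 coding of SRH labels (assumed; higher = better health).
SRH_CODES = {"poor": 1, "fair": 2, "good": 3, "very good": 4, "excellent": 5}

def summarize_srh(responses):
    """responses: list of SRH answers (labels) from one experimental treatment.
    Returns (mean SRH, proportion answering 'fair' or 'poor'), the two
    summaries compared across response-option-order and question-order arms."""
    scores = [SRH_CODES[r] for r in responses]
    prop_fair_poor = sum(r in ("fair", "poor") for r in responses) / len(responses)
    return mean(scores), prop_fair_poor
```

Computing these summaries separately for each of the four cells of the 2-by-2 design is what underlies the comparisons reported in the RESULTS section above.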


Subject(s)
Diagnostic Self Evaluation , Health Status , Surveys and Questionnaires , Adolescent , Adult , Female , Humans , Internet , Male , Middle Aged , Quality of Life , Risk , Self Report , United States , Young Adult
16.
Eval Health Prof ; 36(3): 352-81, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23975760

ABSTRACT

The versatility, speed, and reduced costs with which web surveys can be conducted with clinicians are often offset by low response rates. Drawing on best practices and general recommendations in the literature, we provide an evidence-based overview of methods for conducting online surveys with providers. We highlight important advantages and disadvantages of conducting provider surveys online and include a review of differences in response rates between web and mail surveys of clinicians. When surveys are administered online, design features affect rates of survey participation and data quality. We examine features likely to have an impact including sample frames, incentives, contacts (type, timing, and content), mixed-mode approaches, and questionnaire length. We make several recommendations regarding optimal web-based designs, but more empirical research is needed, particularly with regard to identifying which combinations of incentive and contact approaches yield the highest response rates and are the most cost-effective.


Subject(s)
Health Care Surveys/methods , Health Personnel , Internet , Medical Staff , Research Design , Cost-Benefit Analysis , Costs and Cost Analysis , Efficiency, Organizational , Evidence-Based Practice , Health Care Surveys/economics , Health Care Surveys/standards , Humans , Motivation , Surveys and Questionnaires
17.
Vet Surg ; 42(6): 635-42, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23808834

ABSTRACT

OBJECTIVE: To report the current state of minimally invasive surgery (MIS) in veterinary surgical practice in 2010. STUDY DESIGN: Electronic questionnaire. SAMPLE POPULATION: Diplomates and residents of the American College of Veterinary Surgeons (ACVS). METHODS: A survey (38 questions for Diplomates, 23 questions for residents) was sent electronically to 1216 Diplomates and 300 residents. Questions were organized into 5 categories to investigate: (1) caseload and distribution of MIS cases; (2) MIS training; (3) MIS benefits, morbidity, limitations, and motivating factors; (4) ACVS role; and (5) demographics of the study population. RESULTS: Eighty-six percent of small animal (SA) Diplomates, 99% of large animal (LA) Diplomates, and 98% of residents had performed MIS. Median LA caseload (30 cases/year; range, 1-600) was significantly higher than SA caseload (20 cases/year; range, 1-350). Descending order of case distribution was: arthroscopy > laparoscopy > endoscopic upper airway > thoracoscopy. Sixty percent of Diplomates and 98% of residents received MIS training during their residency. Residents' perceived MIS proficiency was positively correlated with caseload. Ninety-five percent of all respondents felt postoperative morbidity was less with MIS, and were motivated by patient benefits, maintaining a high standard of care, and personal interests. Fifty-eight percent of Diplomates and 89% of residents felt ACVS should be involved in developing MIS training. CONCLUSIONS: MIS is widely used by ACVS Diplomates and residents in clinical practice; however, important differences exist between SA and LA surgeons and practice types. MIS training in partnership with the ACVS is needed for continued development in veterinary surgery.


Subject(s)
Livestock/surgery , Minimally Invasive Surgical Procedures/veterinary , Pets/surgery , Societies, Scientific/organization & administration , Veterinary Medicine/organization & administration , Animals , Data Collection , Internship and Residency , Laparoscopy/education , United States
18.
Matern Child Health J ; 16(4): 785-91, 2012 May.
Article in English | MEDLINE | ID: mdl-21509432

ABSTRACT

From 2009 to 2010, an experiment was conducted to increase response rates among African American mothers in the Wisconsin Pregnancy Risk Assessment Monitoring System (PRAMS). Sample members were randomly assigned to groups that received a prepaid, cash incentive of $5 (n = 219); a coupon for diapers valued at $6 (n = 210); or no incentive (n = 209). Incentives were included with the questionnaire, which was mailed to respondents. We examined the effects of the incentives on several outcomes, including response rates, cost effectiveness, survey response distributions, and item nonresponse. Response rates were significantly higher for the cash group than for the coupon (42.5 vs. 32.4%, P < .05) or no incentive group (42.5 vs. 30.1%, P < .01); the coupon and no incentive groups performed similarly. While absolute costs were the highest for the cash group, the cost per completed survey was the lowest. The incentives had limited effects on response distributions for specific survey questions. Although respondents completing the survey by mail in the cash and coupon groups exhibited a trend toward being less likely to have missing data, the effect was not significant. Compared to a coupon or no incentive, a small cash incentive significantly improved response rates and was cost effective among African American respondents in Wisconsin PRAMS. Incentives had only limited effects, however, on survey response distributions, and no significant effects on item nonresponse.
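Using the response rates and group sizes reported in this abstract (cash: 42.5% of n = 219; no incentive: 30.1% of n = 209), the chi-square comparison can be approximately reproduced. The completed-survey counts below are rounded back from the reported percentages, so this is a sketch of the comparison, not the authors' exact analysis; the cost figure covers incentive outlay only, whereas the study's cost-effectiveness result also reflects total survey costs.

```python
from scipy.stats import chi2_contingency

# Group sizes and response rates as reported in the abstract.
cash_n, cash_rate = 219, 0.425  # prepaid $5 cash incentive
none_n, none_rate = 209, 0.301  # no incentive

# Completed-survey counts, rounded from the reported percentages.
cash_complete = round(cash_n * cash_rate)
none_complete = round(none_n * none_rate)

# 2x2 table: rows = groups, columns = (completed, not completed).
table = [
    [cash_complete, cash_n - cash_complete],
    [none_complete, none_n - none_complete],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Incentive-only cost per completed survey for the cash group;
# the study's figure additionally includes mailing and follow-up costs.
cost_per_complete = (cash_n * 5) / cash_complete
print(f"incentive cost per complete: ${cost_per_complete:.2f}")
```

Even with the rounding, the difference between the cash and no-incentive groups remains statistically significant at conventional levels, consistent with the reported P < .01.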


Subject(s)
Black or African American/psychology , Health Surveys/methods , Motivation , Risk Assessment , Surveys and Questionnaires/economics , Black or African American/statistics & numerical data , Cost-Benefit Analysis , Female , Health Surveys/economics , Humans , Postal Service , Pregnancy , Pregnancy, High-Risk/ethnology , Risk Assessment/methods , Telephone , Wisconsin
19.
Public Opin Q ; 76(2): 311-325, 2012 Jul.
Article in English | MEDLINE | ID: mdl-24991062

ABSTRACT

Although previous research indicates that audio computer-assisted self-interviewing (ACASI) yields higher reports of threatening behaviors than interviewer-administered interviews, very few studies have examined the potential effect of the gender of the ACASI voice on survey reports. Because the voice in ACASI necessarily has a gender, it is important to understand whether using a voice that is perceived as male or female might further enhance the validity associated with ACASI. This study examines gender-of-voice effects for a set of questions about sensitive behaviors administered via ACASI to a sample of young adults at high risk for engaging in the behaviors. Results showed higher levels of engagement in the behaviors and more consistent reporting among males when responding to a female voice, indicating that males were potentially more accurate when reporting to the female voice. Reports by females were not influenced by the voice's gender. Our analysis adds to research on gender-of-voice effects in surveys, with important findings on measuring sensitive behaviors among young adults.

20.
Soc Sci Res ; 40(4): 1025-1036, 2011 Jul 01.
Article in English | MEDLINE | ID: mdl-21927518

ABSTRACT

The self-reported health question summarizes information about health status across several domains of health and is widely used to measure health because it predicts mortality well. We examine whether interactional behaviors produced by respondents and interviewers during the self-reported health question-answer sequence reflect complexities in the respondent's health history. We observed more problematic interactional behaviors during question-answer sequences in which respondents reported worse health. Furthermore, these behaviors were more likely to occur when there were inconsistencies in the respondent's health history, even after controlling for the respondent's answer to the self-reported health question, cognitive ability, and sociodemographic characteristics. We also found that among respondents who reported "excellent" health, and to a lesser extent among those who reported their health was "very good," problematic interactional behaviors were associated with health inconsistencies. Overall, we find evidence that the interactional behaviors exhibited during the question-answer sequence are associated with respondents' health status.
