Results 1 - 11 of 11
1.
Methoden Daten Anal ; 17(2): 135-170, 2023.
Article in English | MEDLINE | ID: mdl-37724168

ABSTRACT

This study investigates the extent to which video technologies - now ubiquitous - might be useful for survey measurement. We compare respondents' performance and experience (n = 1,067) in live video-mediated interviews, a web survey in which prerecorded interviewers read questions, and a conventional (textual) web survey. Compared to web survey respondents, those interviewed via live video were less likely to select the same response for all statements in a battery (non-differentiation) and reported higher satisfaction with their experience, but provided more rounded numerical (presumably less thoughtful) answers and selected answers that were less sensitive (more socially desirable). This suggests that the presence of a live interviewer, even if mediated, can keep respondents motivated and conscientious but may introduce time pressure - a likely reason for increased rounding - and social presence - a likely reason for more socially desirable responding. Respondents "interviewed" by a prerecorded interviewer rounded fewer numerical answers and responded more candidly than did those in the other modes, but engaged in non-differentiation more than did live video respondents, suggesting there are advantages and disadvantages to both video modes. Both live and prerecorded video seem potentially viable for use in production surveys and may be especially valuable when in-person interviews are not feasible.

2.
J Surv Stat Methodol ; 10(2): 317-336, 2022 Apr.
Article in English | MEDLINE | ID: mdl-37406077

ABSTRACT

Live video (LV) communication tools (e.g., Zoom) have the potential to provide survey researchers with many of the benefits of in-person interviewing, while also greatly reducing data collection costs, given that interviewers do not need to travel and make in-person visits to sampled households. The COVID-19 pandemic has exposed the vulnerability of in-person data collection to public health crises, forcing survey researchers to explore remote data collection modes-such as LV interviewing-that seem likely to yield high-quality data without in-person interaction. Given the potential benefits of these technologies, the operational and methodological aspects of video interviewing have started to receive research attention from survey methodologists. Although it is remote, video interviewing still involves respondent-interviewer interaction that introduces the possibility of interviewer effects. No research to date has evaluated this potential threat to the quality of the data collected in video interviews. This research note presents an evaluation of interviewer effects in a recent experimental study of alternative approaches to video interviewing including both LV interviewing and the use of prerecorded videos of the same interviewers asking questions embedded in a web survey ("prerecorded video" interviewing). We find little evidence of significant interviewer effects when using these two approaches, which is a promising result. We also find that when interviewer effects were present, they tended to be slightly larger in the LV approach as would be expected in light of its being an interactive approach. We conclude with a discussion of the implications of these findings for future research using video interviewing.

3.
Int J Soc Res Methodol ; 24(2): 249-364, 2021.
Article in English | MEDLINE | ID: mdl-33732090

ABSTRACT

To explore socially desirable responding in telephone surveys, this study examines response latencies in answers to 27 questions in 319 audio-recorded iPhone interviews from Schober et al. (2015). Response latencies were compared when respondents (a) answered questions on sensitive vs. nonsensitive topics (as classified by online raters); (b) produced more vs. less socially desirable answers; and (c) were interviewed by a professional interviewer or an automated system. Respondents answered questions on sensitive topics more quickly than on nonsensitive topics, though patterns varied by question format (categorical, numerical, ordinal). Independent of question sensitivity, respondents gave less socially desirable answers more quickly when answering categorical and ordinal questions but more slowly when answering numeric questions. Respondents were especially quick to answer sensitive questions when asked by interviewers rather than by the automated system. The findings demonstrate that response times can be (differently) revealing about question and response sensitivity in a telephone survey.

4.
Top Cogn Sci ; 10(2): 452-484, 2018 04.
Article in English | MEDLINE | ID: mdl-29630774

ABSTRACT

This paper examines when conceptual misalignments in dialog lead to consequential miscommunication. Two studies explore misunderstanding in survey interviews of the sort conducted by governments and social scientists, where mismeasurement can have real social costs. In 131 interviews about tobacco use, misalignment between respondents' and researchers' conceptions of ordinary expressions like "smoking" and "every day" was quantified by probing respondents' interpretations of survey terms and re-administering the survey questionnaire with standard definitions after the interview. Respondents' interpretations were surprisingly variable, and in many cases they did not match the conceptions that researchers intended them to use. More often than one might expect, this conceptual variability was consequential, leading to answers (and, in principle, to estimates of the prevalence of smoking and related attributes in the population) that would have been different had conceptualizations been aligned; for example, fully 12% of respondents gave a different answer about having smoked 100 cigarettes in their entire life when later given a standard definition. In other cases misaligned interpretations did not lead to miscommunication, in that the differences would not have led to different survey responses. Although clarification of survey terms during the interview sometimes improved conceptual alignment, this was not guaranteed; in this corpus some needed attempts at clarification were never made, some attempts did not succeed, and some seemed to make understanding worse. The findings suggest that conceptual misalignments may be more frequent in ordinary conversation than interlocutors know, and that attempts to detect and clarify them may not always work. They also suggest that at least some unresolved misunderstandings do not matter in the sense that they do not change the outcome of the communication-in this case, the survey estimates.


Subject(s)
Communication , Comprehension , Interpersonal Relations , Smoking , Surveys and Questionnaires , Adult , Female , Humans , Male
5.
Front Psychol ; 8: 966, 2017.
Article in English | MEDLINE | ID: mdl-28694785

ABSTRACT

When musicians improvise freely together-not following any sort of script, predetermined harmonic structure, or "referent"-to what extent do they understand what they are doing in the same way as each other? And to what extent is their understanding privileged relative to outside listeners with similar levels of performing experience in free improvisation? In this exploratory case study, a saxophonist and a pianist of international renown who knew each other's work but who had never performed together before were recorded while improvising freely for 40 min. Immediately afterwards the performers were interviewed separately about the just-completed improvisation, first from memory and then while listening to two 5 min excerpts of the recording in order to prompt specific and detailed commentary. Two commenting listeners from the same performance community (a saxophonist and drummer) listened to, and were interviewed about, these excerpts. Some months later, all four participants rated the extent to which they endorsed 302 statements that had been extracted from the four interviews and anonymized. The findings demonstrate that these free jazz improvisers characterized the improvisation quite differently, selecting different moments to comment about and with little overlap in the content of their characterizations. The performers were not more likely to endorse statements by their performing partner than by a commenting listener from the same performance community, and their patterns of agreement with each other (endorsing or dissenting with statements) across multiple ratings-their interrater reliability as measured with Cohen's kappa-was only moderate, and not consistently higher than their agreement with the commenting listeners. These performers were more likely to endorse statements about performers' thoughts and actions than statements about the music itself, and more likely to endorse evaluatively positive than negative statements. But these kinds of statements were polarizing; the performers were more likely to agree with each other in their ratings of statements about the music itself and negative statements. As in Schober and Spiro (2014), the evidence supports a view that fully shared understanding is not needed for joint improvisation by professional musicians in this genre and that performing partners can agree with an outside listener more than with each other.
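The interrater reliability statistic named above, Cohen's kappa, corrects observed agreement between two raters for the agreement expected by chance given each rater's marginal frequencies. A minimal sketch, using hypothetical endorse/dissent ratings rather than any data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgments
    (e.g., endorse = 1 vs. dissent = 0 on interview statements)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items where the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement implied by each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of eight statements by two performers
a = [1, 1, 0, 1, 0, 1, 1, 0]
b = [1, 0, 0, 1, 0, 1, 0, 1]
print(cohens_kappa(a, b))  # 0.25 - "fair" agreement, well below 1.0
```

Kappa of 1.0 means perfect agreement and 0 means agreement no better than chance; values in the 0.41-0.60 band are conventionally read as "moderate," the level the abstract reports.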

6.
Front Psychol ; 7: 1629, 2016.
Article in English | MEDLINE | ID: mdl-27853438

ABSTRACT

This study explores the extent to which a large set of musically experienced listeners share understanding with a performing saxophone-piano duo, and with each other, of what happened in three improvisations on a jazz standard. In an online survey, 239 participants listened to audio recordings of three improvisations and rated their agreement with 24 specific statements that the performers and a jazz-expert commenting listener had made about them. Listeners endorsed statements that the performers had agreed upon significantly more than they endorsed statements that the performers had disagreed upon, even though the statements gave no indication of performers' levels of agreement. The findings show some support for a more-experienced-listeners-understand-more-like-performers hypothesis: Listeners with more jazz experience and with experience playing the performers' instruments endorsed the performers' statements more than did listeners with less jazz experience and experience on different instruments. The findings also strongly support a listeners-as-outsiders hypothesis: Listeners' ratings of the 24 statements were far more likely to cluster with the commenting listener's ratings than with either performer's. But the pattern was not universal; particular listeners even with similar musical backgrounds could interpret the same improvisations radically differently. The evidence demonstrates that it is possible for performers' interpretations to be shared with very few listeners, and that listeners' interpretations about what happened in a musical performance can be far more different from performers' interpretations than performers or other listeners might assume.

7.
Public Opin Q ; 80(1): 180-211, 2016.
Article in English | MEDLINE | ID: mdl-27257310

ABSTRACT

Demonstrations that analyses of social media content can align with measurement from sample surveys have raised the question of whether survey research can be supplemented or even replaced with less costly and burdensome data mining of already-existing or "found" social media content. But just how trustworthy such measurement can be-say, to replace official statistics-is unknown. Survey researchers and data scientists approach key questions from starting assumptions and analytic traditions that differ on, for example, the need for representative samples drawn from frames that fully cover the population. New conversations between these scholarly communities are needed to understand the potential points of alignment and non-alignment. Across these approaches, there are major differences in (a) how participants (survey respondents and social media posters) understand the activity they are engaged in; (b) the nature of the data produced by survey responses and social media posts, and the inferences that are legitimate given the data; and (c) practical and ethical considerations surrounding the use of the data. Estimates are likely to align to differing degrees depending on the research topic and the populations under consideration, the particular features of the surveys and social media sites involved, and the analytic techniques for extracting opinions and experiences from social media. Traditional population coverage may not be required for social media content to effectively predict social phenomena to the extent that social media content distills or summarizes broader conversations that are also measured by surveys.

8.
Front Psychol ; 6: 1578, 2015.
Article in English | MEDLINE | ID: mdl-26539138

ABSTRACT

This study investigates how an onscreen virtual agent's dialog capability and facial animation affect survey respondents' comprehension and engagement in "face-to-face" interviews, using questions from US government surveys whose results have far-reaching impact on national policies. In the study, 73 laboratory participants were randomly assigned to respond in one of four interviewing conditions, in which the virtual agent had either high or low dialog capability (implemented through Wizard of Oz) and high or low facial animation, based on motion capture from a human interviewer. Respondents, whose faces were visible to the Wizard (and videorecorded) during the interviews, answered 12 questions about housing, employment, and purchases on the basis of fictional scenarios designed to allow measurement of comprehension accuracy, defined as the fit between responses and US government definitions. Respondents answered more accurately with the high-dialog-capability agents, requesting clarification more often particularly for ambiguous scenarios; and they generally treated the high-dialog-capability interviewers more socially, looking at the interviewer more and judging high-dialog-capability agents as more personal and less distant. Greater interviewer facial animation did not affect response accuracy, but it led to more displays of engagement-acknowledgments (verbal and visual) and smiles-and to the virtual interviewer's being rated as less natural. The pattern of results suggests that a virtual agent's dialog capability and facial animation differently affect survey respondents' experience of interviews, behavioral displays, and comprehension, and thus the accuracy of their responses. The pattern of results also suggests design considerations for building survey interviewing agents, which may differ depending on the kinds of survey questions (sensitive or not) that are asked.

9.
PLoS One ; 10(6): e0128337, 2015.
Article in English | MEDLINE | ID: mdl-26060991

ABSTRACT

As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. In the study reported here, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. Ten interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data - fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information - than voice interviews, both with human and automated interviewers. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey.


Subject(s)
Interviews as Topic , Smartphone , Adult , Aged , Aged, 80 and over , Disclosure , Female , Humans , Male , Middle Aged , Surveys and Questionnaires , Text Messaging
10.
Front Psychol ; 5: 808, 2014.
Article in English | MEDLINE | ID: mdl-25152740

ABSTRACT

To what extent and in what arenas do collaborating musicians need to understand what they are doing in the same way? Two experienced jazz musicians who had never previously played together played three improvisations on a jazz standard ("It Could Happen to You") on either side of a visual barrier. They were then immediately interviewed separately about the performances, their musical intentions, and their judgments of their partner's musical intentions, both from memory and prompted with the audiorecordings of the performances. Statements from both (audiorecorded) interviews as well as statements from an expert listener were extracted and anonymized. Two months later, the performers listened to the recordings and rated the extent to which they endorsed each statement. Performers endorsed statements they themselves had generated more often than statements by their performing partner and the expert listener; their overall level of agreement with each other was greater than chance but moderate to low, with disagreements about the quality of one of the performances and about who was responsible for it. The quality of the performances combined with the disparities in agreement suggest that, at least in this case study, fully shared understanding of what happened is not essential for successful improvisation. The fact that the performers endorsed an expert listener's statements more than their partner's argues against a simple notion that performers' interpretations are always privileged relative to an outsider's.

11.
Behav Brain Sci ; 27(2): 209-210, 2004 Apr.
Article in English | MEDLINE | ID: mdl-18241487

ABSTRACT

Conversational partners' representations may be less aligned than they appear even when interlocutors believe they have successfully understood each other, as data from a series of experiments on surveys about facts and behaviors suggest. Although the goal of a mechanistic psychology of dialogue is laudable, the ultimate model is likely to require far greater specification of individual and contextual variability.
