Several staff from the UWSC will present at the 2015 annual conference of the American Association for Public Opinion Research (AAPOR) in Hollywood, Florida, on May 14-17. As in previous years, UWSC staff will demonstrate their contributions to survey research methodology by presenting current research (staff names appear in bold):
Kerryann Diloreto, Jennifer Dykema, Karen Jaques, and Nadia Assad, “Effects of ACASI Voice Choice and Voice Persona on Reports to Questions about Sensitive Behaviors among Young Adults”
- This study analyzes results from an experiment embedded in the ACASI section of the California Youth Transitions to Adulthood Study in which respondents were randomly assigned either (1) to hear one of three types of prerecorded voices or (2) to be presented with the option of selecting from among the three voices. Voices were selected to represent different personas, including an empathetic-sounding voice, a professional-sounding voice, and a synthetic (text-to-speech) voice. The analysis examines the effects of voice choice and the type of voice listened to on levels of reporting about sensitive behaviors, item nonresponse, and the proportion of audio listened to, as well as the effect of literacy on respondents’ propensity to turn the audio off.
Nora Cate Schaeffer, Dana Garbarski, Jennifer Dykema, Douglas W. Maynard, Bo Hee Min, and Ellen Dinsmore, “Refusal Conversions across Calls: Interviewer’s Actions in Initial Calls and Their Consequences”
- This study uses recordings from the 2004 wave of the Wisconsin Longitudinal Study for an analytic sample drawn from a case-control study of paired declinations and acceptances matched on the sample members’ propensity to participate in the survey. The analysis identifies and codes actions in initial calls after which the refusing sample member was subsequently converted in a later call and compares these interactions to those in which the refusing sample member was not converted.
Dana Garbarski, Jennifer Dykema, Kenneth D. Croes, Tara Piché, and Dorothy F. Edwards, “How Respondents Report Their Health Status: Cognitive Interviews of Self-Rated Health Across Race, Ethnicity, Gender, Age, and Socioeconomic Status”
- This study extends previous research that explores factors respondents take into account when reporting their self-rated health (“Would you say your health in general is excellent, very good, good, fair, or poor?”). The analysis uses data from cognitive interviews to examine in detail: (1) which health factors respondents take into account; and (2) how respondents take these health factors into account. The targeted sample of respondents is structured to examine differences in health ratings across social groups, including race/ethnicity, gender, age, and education. By focusing on respondents’ explanations of how they formulated their response in addition to which health factors they considered, the authors are able to describe a more complete model of how respondents rate their health, with particular attention to variations across social groups.
Dana Garbarski, Nora Cate Schaeffer, and Jennifer Dykema, “Examining Interviewers’ Ratings of Respondents’ Health: Associations with Health Correlates, Respondents’ Self-Rated Health, and Mortality”
- This study uses data from a 2011 face-to-face survey in the Wisconsin Longitudinal Study to examine which respondent and interviewer characteristics are associated with (1) interviewers’ ratings of respondents’ health and (2) discordance between interviewers’ ratings and respondents’ self-ratings of their health. The presentation also explores whether interviewers’ ratings of respondents’ health predict subsequent mortality.