Several staff from the UWSC will present at the 2017 annual conference of the American Association for Public Opinion Research (AAPOR) in New Orleans, Louisiana, May 18-21. As in previous years, UWSC staff will demonstrate their contributions to survey research methodology by delivering presentations of current research (staff names appear in bold). In addition, Jennifer Dykema (Senior Scientist and Survey Methodologist) is this year’s Conference Chair.
Isabel Anadon, Jennifer Dykema, Dana Garbarski, Nora Cate Schaeffer, Ian Wall, and Dorothy Farrah Edwards, “Do Interviewer and Respondent Behaviors Predict Measurement Equivalence? Comparing Measurement of Trust across Racial/Ethnic Groups”
- Although lack of measurement equivalence is a common problem in survey questions, no studies have examined patterns of interaction between the interviewer and respondent in order to determine whether items with more measurement error are associated with more indicators of interactional problems. In the current study we (1) code for behaviors that have been demonstrated to be associated with measurement error (e.g., respondents displaying comprehension or mapping difficulties); (2) test for differences among subgroups in the frequency with which behaviors are displayed; and (3) describe whether the behaviors vary across the questions associated with measurement invariance.
Dana Garbarski, Nora Cate Schaeffer, Jennifer Dykema, “Examining the Validity of Interviewers’ Ratings of Respondents’ Health”
- This study seeks to establish the criterion validity of interviewers’ ratings of respondents’ general health status (IRH), examining associations between IRH and health- and interviewer-relevant measures of interest using data from Wave 8 of the UK Innovation Panel Study.
John Stevenson, Jennifer Dykema, Chad Kniss, Nadia Assad and Cathy Taylor, “Effects of Sequential Prepaid Incentives to Increase Participation and Data Quality in a Mail Survey”
- Several studies find that the inclusion of a single, small, prepaid monetary incentive is effective in increasing response rates among physicians. Little research, however, examines what effect variation in the distribution and administration of these incentives has on response rates, and recent recommendations regarding incentives advise researchers to include a second cash incentive in a follow-up contact in order to increase the likelihood that “later communications will be read, and hopefully acted upon” (Dillman et al. 2014, 424). We embedded an experiment in a mail survey to examine the effects of different incentive combinations on participation.