UWSC staff to participate in the 71st annual AAPOR (American Association for Public Opinion Research) Conference

Several UWSC staff will present at the 2016 annual conference of the American Association for Public Opinion Research (AAPOR) in Austin, Texas, May 12-15. As in previous years, UWSC staff will demonstrate their contributions to survey research methodology by presenting current research (staff names appear in bold):

Dana Garbarski, Nora Cate Schaeffer, Jennifer Dykema, Ellen Dinsmore, and Bo Hee Min

  • Using questions about end-of-life planning in the 2003-2005 wave of the Wisconsin Longitudinal Study, this study examines rapport as an interactional phenomenon, attending to both the content and structure of talk, and observes that rapport consists of behaviors that can be characterized as dimensions of responsiveness by interviewers and engagement by respondents. We examine the analytic potential of these dimensions of responsiveness and engagement using a case-control design with future study participation as the criterion, since rapport developed during the interview may affect respondents’ decisions to participate in subsequent data collection efforts.

Nora Cate Schaeffer, Jennifer Dykema, and Dana Garbarski, “Conversational Practices and Standardization: How Respondents Answer Survey Questions”

  • Our qualitative study draws on thousands of transcribed question-answer sequences from several sources (e.g., the Wisconsin Longitudinal Study, a national omnibus survey, a state poll, and several federal surveys). We describe the components with which answers are constructed for different question forms. Our preliminary analysis finds, for example, particularly complex answers to yes-no questions. Even when a question clearly projects “yes” or “no” as an answer, a respondent may add components like “probably” that make the answer harder to code. Understanding how question form influences answers and the components with which respondents construct answers is important for designing and improving traditional standardized interviewing, for training interviewers, and for designing any interviewing method used for measurement.

Ian F. Wall, Jennifer Dykema, and Dorothy Farrar Edwards, “Measuring Trust in Medical Researchers: Comparing Agree-disagree and Construct-specific Items”

  • This study combines an experimental design with a mixed-methods analysis to compare response tendencies resulting from agree-disagree (AD) and construct-specific (CS) survey items, in this case items designed to assess respondents’ trust in medical researchers. The experiment is embedded within a cognitive interview study that tests questions about barriers and facilitators to participation in biomedical research with a quota sample of African Americans, American Indians, Caucasians, and Latinos. While several studies have demonstrated that CS items tend to yield responses with higher reliability and validity, this analysis provides insight into the response process so that we may better understand why response tendencies differ between AD and CS items, allowing for more specific item design recommendations. The analysis compares response latencies, indicators of response processing difficulties (e.g., requests for clarification), and indicators of reliability and concurrent validity for each question type. In addition, answers to questions and probes are analyzed qualitatively. Overall, the authors find that CS questions do not necessarily improve all aspects of the response process within this context.

Visit the American Association for Public Opinion Research (AAPOR) website for more information about the conference.