{"id":19071,"date":"2024-03-18T08:26:18","date_gmt":"2024-03-18T07:26:18","guid":{"rendered":"https:\/\/surveyinsights.org\/?p=19071"},"modified":"2024-12-05T17:29:23","modified_gmt":"2024-12-05T16:29:23","slug":"do-question-topic-and-placement-shape-survey-breakoff-rates","status":"publish","type":"post","link":"https:\/\/surveyinsights.org\/?p=19071","title":{"rendered":"Do Question Topic and Placement Shape Survey Breakoff Rates?"},"content":{"rendered":"<h1>Introduction<\/h1>\n<p>Surveys have become a dominant method of research in many social scientific fields (Rossi et al. 2013) and are a key mechanism for inserting public opinion into democratic governance (Berinsky 2017). Unfortunately, many surveys suffer from high rates of breakoff \u2013 that is, instances in which respondents prematurely and deliberately stop their participation. Breakoffs can be permanent (i.e., the respondent never returns to the interview; Peytchev 2009) or temporary (i.e., the respondent leaves but returns to complete the questionnaire or reenters the panel at a later time; McGonagle 2013). Regardless of type, breakoff rates are often non-trivial in magnitude. A meta-analysis of web surveys found breakoff rates ranging from 0.4% to 30.9% (Mavletova and Couper 2015). A major telephone survey (the Panel Study of Income Dynamics) reported a temporary breakoff rate of 23% (McGonagle 2013). Others report a terminal breakoff rate for web surveys at 34% (Manfreda, Batagelj, Vehovar 2002).<\/p>\n<p>We focus on terminal breakoffs in which individuals abandon the survey after it begins. Survey researchers aim to minimize these types of breakoffs for two main reasons: efficiency and quality. With respect to efficiency, canceling and replacing incomplete surveys incurs project costs (Keeter et al. 2016). With respect to quality, these breakoffs can produce biases if they do not occur randomly in the sample (Ro\u00dfmann, Steinbrecher, and Blumenstiel 2015). 
While methodological studies of unit nonresponse abound, scholarship has been less interested in the issue of breakoff, so much so that breakoff rates are not usually reported (Peytchev 2009; Schaeffer and Dykema 2011). Further, the limited literature on breakoffs focuses primarily on web surveys rather than telephone or face-to-face interviewing (but see McGonagle 2013).<\/p>\n<p>Some research has shown that survey topic and respondent interest influence response rates, breakoff rates, and other types of engagement (Galesic 2006; Groves, Presser, and Dipko 2004; Krosnick and Presser 2010; McGonagle 2013; Shropshire, Hawdon, and Witte 2009). Extending work in this domain, we test how question topic and placement shape engagement with the survey. Survey professionals often advise placing the most interesting and relevant questions at the beginning of a survey (see recommendations on question order from Pew Research 2023 and Qualtrics 2023). Yet to our knowledge there has been little to no experimental research that has systematically considered how question topic and question location in the instrument jointly shape breakoff behavior. We theorize that topic-induced motivation could stem from any of three non-rival mechanisms. First, interesting questions pique attentiveness to the survey. Second, questions about salient issues make the survey seem worthwhile. Third, questions on \u201cimportant\u201d topics foster bonding between interviewer and interviewee, boosting the latter\u2019s cooperation with the survey.<\/p>\n<p>A challenge to assessing how breakoff rates are shaped by question topic and placement is finding topics that are relevant to broad swathes of the population. The onset of the COVID-19 pandemic in 2020 provided one such opportunity. 
We included a small set of questions about a highly salient and nationally important issue \u2013 the COVID-19 pandemic \u2013 on a phone survey conducted April-June 2020 about democratic governance in Haiti. Our core test consists of an experiment embedded within the survey: individuals were randomly assigned to answer questions related to the pandemic at the start or toward the end of the survey. We then measure the breakoff rate.<\/p>\n<p>We find, in this case, some limited evidence that placing salient topics at the start of the survey can minimize breakoff behavior. As expected, the effect size is correlated with levels of interest in the topic: those who were more concerned about COVID were less likely to break off after hearing questions about it. This study contributes to literature on question topic and respondent engagement by offering an experimental test and a quantifiable measurement of the efficiency gained by starting the survey on a salient issue of the day. Adding to scholarship about topic-induced motivation, the results suggest that topic may be influential in the decision to discontinue the survey just as it is for the decision to participate in the first place (e.g., Groves, Presser, and Dipko 2004), though perhaps to a lesser degree. Based on the modest yet noticeable findings, we argue that, ceteris paribus, survey researchers should begin questionnaires with more salient topics in order to reduce breakoffs; yet, given only marginal gains, it is not essential to do so if it comes at the cost of interrupting the flow of the questionnaire or if no issue is particularly and broadly salient.<\/p>\n<h1>Topic-Induced Motivation and Survey Breakoff<\/h1>\n<p>Worldwide fixation on the COVID-19 pandemic accelerated in early 2020, reaching a first peak around April and remaining elevated for the duration of the year (Alshaabi et al. 2020). 
As the virus spread worldwide, some survey practitioners reported that individuals were more willing to participate in interviews when the survey topic was related to the pandemic (e.g., Ambitho et al. 2020).<\/p>\n<p>These anecdotes align with conventional wisdom among practitioners that questionnaires ought to begin with interesting questions so as to maintain respondent attention. In their guide to writing survey questions, Pew Research states that \u201cit is often helpful to begin the survey with simple questions that respondents will find interesting and engaging\u201d (Pew Research Center 2023). The textbook <em>Marketing Research and Information Systems<\/em>, prepared by the Food and Agriculture Organization of the United Nations, mentions that \u201cOpening questions that are easy to answer and which are not perceived as being \u2018threatening\u2019, and\/or are perceived as being interesting, can greatly assist in gaining the respondent&#8217;s involvement in the survey and help to establish a rapport\u201d (Crawford 1997). Some even suggest adding \u201cringer\u201d or \u201cthrowaway\u201d questions about the hot topics of the day to boost interest in the survey (Qualtrics 2023).<\/p>\n<p>Existing scholarship offers theoretical backing for the idea that the salience of the pandemic would shape willingness to engage in surveys. Previous work has found that the extent to which the topic of the survey is personally relevant and interesting influences response rates (Holland and Christian 2009; Keusch 2013; Krosnick and Presser 2010; Martin 1994; Van Kenhove, Wijnen and De Wulf 2002). The leverage-saliency theory (Groves, Singer, and Corning 2000; Groves, Presser, and Dipko 2004) posits that the outcome of each survey request is influenced by multiple attributes, including, among other factors, the survey topic.<\/p>\n<p>We extend this scholarship by examining the role that topic plays in <em>breakoffs<\/em>, not unit response rates. 
The decision to participate in a survey and the decision to terminate it early are theoretically and analytically distinct. The latter decision is conditional on the former, so the populations of interest are different. Furthermore, breakoffs may be influenced by a whole host of variables that are unobserved prior to the beginning of the questionnaire, such as question wordings, cognitive load, order effects, interviewer dynamics, and so on. Finally, much of the literature linking survey topic and response rates assumes that the survey has a singular, unified topic that can be succinctly described in the introductory text; for many surveys (e.g., omnibus questionnaires), the themes are often not revealed until after the respondent agrees to participate.<\/p>\n<p>We add to research on topic interest by considering how question topic and placement (location within the survey), jointly, can shape dynamics related to survey breakoff rates. To the degree that placing interesting and relevant questions at the start of the survey generates greater engagement, we posit this may occur through any of three non-rival mechanisms. First, respondents may experience a bump in interest when they engage with a personally relevant topic. Relatable questions should create an initial spike in interest, boosting their willingness to continue. Second, beginning with questions that are highly relevant could convince respondents that the broader research effort is worthwhile. Other researchers have found that belief in the importance of scientific studies predicts a lower breakoff rate (Ro\u00dfmann, Blumenstiel, and Steinbrecher 2015). Third, beginning a questionnaire by acknowledging a highly salient issue may help establish a rapport between interviewer and respondent. Past research has linked rapport to survey engagement, non-response, and willingness to disclose sensitive attitudes (Garbarski, Schaeffer, and Dykema 2016; Sun, Conrad, and Kreuter 2020; Tu and Liao 2007). 
Questions that acknowledge important local issues could foster interviewer-interviewee bonding, which ought to boost respondent cooperation. Conversely, it may come across as insensitive or untimely to discuss seemingly irrelevant matters during times of crisis.<\/p>\n<p>Prior studies on breakoffs and respondent interest are instructive though incomplete. For example, researchers have shown that some features of survey instrument design that affect respondent interest are associated with lower likelihood of breakoff (Galesic 2006; McGonagle 2013), though these studies focus not on question content or saliency but rather structural factors like types of questions, questionnaire length, and module introductions. Peytchev (2009) suggests that older respondents are less likely to break off in surveys that are mainly about health-related issues due to the topic\u2019s greater relevance for that age cohort, though this mechanism is not explored empirically. Two other studies suggest that interest in the survey topic (e.g., politics, ecological conservation) predicts lower breakoff tendencies in web surveys about those issues (Ro\u00dfmann, Blumenstiel, and Steinbrecher 2015; Shropshire, Hawdon, and Witte 2009). This is consistent with the proposed theory, but we extend their work by examining multi-topic surveys and by experimentally testing whether question topic and placement combine to elicit lower breakoff rates.<\/p>\n<p>Our study design leveraged the salience of the COVID-19 pandemic, a situation that was of interest, relevance, and importance to individuals around the world in 2020. 
Based upon observations from survey practice and research, we hypothesized the following about the relationship between questions about the pandemic and respondent engagement:<\/p>\n<p><em>\u2022 <strong>H1<\/strong>: Respondents who receive questions about the coronavirus first are less likely to break off than those who first receive questions about other topics.<\/em><\/p>\n<p>We also hypothesize that concern about the pandemic issue moderates the treatment effect. Those who are unconcerned about COVID-19 should have no reaction to questions about it, or may even be turned off by the mention of the issue, making them more likely to break off. To more precisely test the core of the argument about topic-induced motivation, we assess H1 not only for the full sample, but also for a subset of the sample that excludes those who express little concern about the coronavirus problem. We consider the more definitive test of the theoretical framework to be captured by this hypothesis:<\/p>\n<p><em>\u2022 <strong>H2<\/strong>: Respondents who believe the pandemic is a serious problem are less likely to break off when asked about the coronavirus first (versus being asked about other questions first).<\/em><\/p>\n<h1>Data and Methods<\/h1>\n<h2>Survey Information and Questionnaire Design<\/h2>\n<p>Our core test is based on a national cellphone survey of adults (ages 18+) that was fielded in Haiti from April to June of 2020. The probabilistic sample was drawn from random digit dialing of active cell phone numbers, supplemented by frequency matching to realize census-derived targets on region, gender, and age cohorts. The survey touched on a variety of issues related to democratic governance (though the study information script only mentioned \u201cthe situation in Haiti\u201d).<\/p>\n<p>The selection of Haiti as a case was determined by survey objectives unrelated to this study. 
However, we consider the use of Haiti as a laboratory to be an additional novelty of this study, as methodological research rarely gathers data from developing countries, where best practices for survey research may differ from those in the United States and Western Europe due to differing cultural norms, languages, political institutions, and\/or experiences with survey research. What is more relevant is the timing of the survey vis-\u00e0-vis the pandemic. When the survey began, in April, the COVID-19 pandemic was just beginning to take root in Haiti, and it spread quickly in May and June (Rouzier, Liautaud, and Deschamps 2020). This unfortunate situation allows for a useful test of the theory, as the virus became an overarching political and personal concern for Haitians over time, though it was not universally seen as the most important issue (thus allowing for variation in level of concern, necessary for our test of H2).<\/p>\n<p>Our experimental design consisted of random assignment to one of two conditions. In the first, the COVID-First Condition, respondents were asked a set of 10 questions related to the pandemic at the start of the questionnaire. In the second, the COVID-Late Condition, respondents were asked those 10 questions toward the end of the survey. Lupu and Zechmeister (2021) have shown that this experiment has substantive effects (priming individuals to think about the pandemic influences certain democratic attitudes), which affirms that individuals can be affected by the module. The present study aims to detect the effect of the treatment on respondent behavior, namely, whether the COVID-First design elevates interest and motivation relative to other topics (the COVID-Late group began with questions about the economy).<\/p>\n<p>Figure 1 displays the structure of the questionnaire. 
After answering seven eligibility questions (e.g., age, citizenship) and agreeing to participate, respondents either receive the COVID module (10 questions) followed by about 35 substantive survey items (the exact number depends on branching), which we call the \u201ccore\u201d, or vice versa. The core consists of modules on the following topics: economic situation, government services, interpersonal trust, democracy, crime, television, trust in institutions, voting, corruption, health \/ medical services, political interest and knowledge, and welfare. All respondents then answer an end block of questions concerning demographic characteristics (e.g., level of education), sampling information (e.g., number of cell phones used by respondent), and a battery of items about water access and related issues. Our analyses consider only data within the COVID module, the core, and the end block. An alternative approach would be to consider only data gathered prior to the end block because after that point, both groups have been treated; we also tried this approach and found no meaningful differences, so we do not show these results.<\/p>\n<p style=\"text-align: center;\"><a href=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Figure_1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-19560\" src=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Figure_1-1024x611.png\" alt=\"\" width=\"450\" height=\"268\" srcset=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Figure_1-1024x611.png 1024w, https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Figure_1-300x179.png 300w, https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Figure_1-768x458.png 768w, https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Figure_1.png 1174w\" sizes=\"auto, (max-width: 450px) 100vw, 450px\" \/><\/a><br \/>\nFigure 1: Questionnaire Structure<\/p>\n<h2>Measurement<\/h2>\n<p>We do not use survey weights in the analyses, 
for four reasons. First, as a practical matter, weighting is difficult for a study of breakoffs precisely because respondents may drop out before they provide demographic information at the end of the survey. Second, as a study of breakoffs, our main population of interest is survey takers rather than the population of Haiti. Third, treatments are assigned at random, and we have no a priori theoretical reason to expect that the proposed treatment effects would be heterogeneous according to any particular sampling or demographic variable(s). Finally, research finds that, for survey experiments, the benefits of using weights (decreased bias) are relatively small while the costs (loss of statistical power) are substantial (Miratrix et al. 2018).<\/p>\n<p>Per our theoretical framework, concern about COVID should act as a moderator for the relationship between treatment and respondent engagement. For this test, we remove those who responded to a 5-category question about the coronavirus outbreak in their country by reporting that the pandemic is \u201cnot so serious\u201d or \u201cnot serious at all\u201d or they \u201chave not thought much about\u201d the issue (other options: \u201cvery serious\u201d and \u201csomewhat serious\u201d). A total of 439 individuals said that COVID was less than serious, compared to 1,407 who said the outbreak was \u201cvery\u201d or \u201csomewhat\u201d serious. There were a total of 28 \u201cdon\u2019t know\u201d responses, 246 non-response\/refusals, and 109 N\/As (i.e., respondents who broke off before being asked the question).<\/p>\n<p>We include the nonresponses in the analyses because eliminating them would unfairly bias the results toward our hypotheses. The N\/As disproportionately come from the COVID-Late group (since the people who break off are most likely to do so near the beginning of the survey), so eliminating them would show an artificially low breakoff rate for the COVID-Late group. 
For this same reason, common methods for testing moderator effects, like an interaction or two-way ANOVA, are inappropriate when using breakoffs as a dependent variable. The N\/As cannot be dropped from the analysis, nor can they be grouped with either the \u201cCOVID serious\u201d or the \u201cCOVID unserious\u201d condition, as doing so would arbitrarily place nearly all breakoff cases into one group or the other. For a similar reason, we do not drop the \u201cdon\u2019t knows\u201d and refusals. The rate of item nonresponse is much higher for questions at the beginning of the survey (since disinterested respondents give nonresponses and then drop out). Thus, eliminating the \u201cdon\u2019t knows\u201d and refusers would eliminate respondents who are both a) more likely to come from the COVID-First group and b) more likely to break off.<\/p>\n<h2>Data Analysis<\/h2>\n<p>Since this design is experimental and treatment assignment is random, our primary analysis consists of a two-sample z-test to detect significant differences between the COVID-First and COVID-Late conditions on the proportion of breakoffs (H1). We then repeat the analysis after eliminating the \u201cCOVID not serious\u201d group (H2).<\/p>\n<p>As a further test to eliminate possible confounding variables, we run multivariate logistic regressions for both the full and limited samples. The models include fixed effects for interviewer and also control for respondent age, gender, region, and urban\/rural residence. Other sociodemographic variables were asked only at the end of the survey and are thus not available for analysis (many had broken off before answering these questions). We considered a multilevel model as well, with respondents nested within interviewers. However, a Hausman test revealed that a fixed effect model is preferred (i.e., we reject the null hypothesis that individual-specific effects are not correlated with the covariates). 
Further, we prefer a fixed effects model because of the relatively small number of interviewers (11, compared to the rule of thumb of 30; Kreft and Bokhee 1994) and because we are not theoretically interested in making inferences about the interviewers, merely in controlling for their effect. Regardless, the treatment effect changes very little whether we use a random or fixed effect model.<\/p>\n<p>Before proceeding, we acknowledge the limitations of this analysis. Though we believe that the COVID-19 pandemic offered a treatment in the form of a broadly salient issue in which most people would be interested, the assumption that respondents are eager to discuss the topic may not hold true for all (even if they think it is an important or serious issue). The pandemic was also a one-of-a-kind situation, and the extent to which these findings apply to other newsworthy events or topics that social scientific researchers may ask about in public opinion surveys (e.g., an election, a war, or the economy) is uncertain. Further, the present data analysis assesses only an average treatment effect and does not probe further into mechanisms that may underlie any potential treatment effects. We recommend that researchers and practitioners take the results and adapt them to the specific circumstances of their own surveys and domains. Context-specific knowledge will inform the applicability of the results to other studies.<\/p>\n<h2>Descriptive Statistics and Sample Characteristics<\/h2>\n<p>The target number of complete interviews on the survey was 2,000. In our data, a total of 2,390 interviews passed the \u201cScreeners &amp; Consent\u201d block and thus were assigned to a treatment group by the survey software. For this survey experiment, we do not believe that response rates are necessary or even meaningful. Measures like AAPOR\u2019s codes RR1-4 assess the percentage of completed (and sometimes partial) interviews out of all attempts. 
This percentage has no real relevance to the present study because we are not interested only in the percentage of complete (or partial) interviews, but rather we are focused on breakoffs among those who participate. Further, as a study of cooperation among survey participants, we are not interested in generalizing to the overall Haitian population but rather only to the portion of the population that is willing and able to take surveys. Therefore, any imbalances between the sample and the broader population resulting from noncontact, ineligibility, unknown eligibility, or refusals are irrelevant to the present study because the excluded individuals are not part of the survey-taking population.<\/p>\n<p>We operationalize breakoffs as deliberate hangups or, in other words, interviews in which the respondent says they do not wish to continue the survey or they ask to be called back at another time but never answer the callback. For the breakoff rate analysis, then, we drop 61 cases in which an interview ended early for inadvertent reasons (e.g., dropped call, poor connection). Though this strategy may exclude some real breakoff cases (if, for example, someone abruptly hangs up without saying anything), it is impossible to distinguish these from cases in which there was a real technical malfunction and the respondent would have otherwise completed the interview. Thus, we use the narrower definition to test the first two hypotheses. In analysis not shown, we did assess the data using a more inclusive definition of breakoff, and the results do not meaningfully change. We also omit 60 cases that were terminated for \u201cother\u201d reasons (usually, because a quota is overfilled) and 40 interviews that were completed but rejected by quality control supervisors. This leaves 2,229 observations.<\/p>\n<p>In total, there are 211 breakoffs and 2,018 complete interviews, meaning the overall breakoff rate, i.e., breakoffs divided by [breakoffs plus completes], is 9.5%. 
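<\/p>
<p>As a rough illustration of the quantities just reported, the sketch below computes the overall breakoff rate and a pooled two-sample difference-in-proportions z-test of the kind described in the Data Analysis section. The overall counts (211 breakoffs, 2,018 completes) come from the text; the per-condition splits are hypothetical stand-ins, since the raw counts by condition are not reported here.</p>

```python
import math

def breakoff_rate(breakoffs, completes):
    """Breakoff rate = breakoffs / (breakoffs + completes)."""
    return breakoffs / (breakoffs + completes)

def two_prop_ztest(x1, n1, x2, n2):
    """Pooled two-sample difference-in-proportions z-test.

    Returns (z, one-tailed p-value for H1: p1 < p2).
    """
    p1, p2 = x1 / n1, x2 / n2
    pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 0.5 * (1 + math.erf(z / math.sqrt(2)))  # P(Z <= z)

# Overall figures reported in the text: 211 breakoffs, 2,018 completes.
print(round(breakoff_rate(211, 2018), 3))  # 0.095

# HYPOTHETICAL per-condition splits, chosen only to mirror the reported
# 1.7-point gap between COVID-First and COVID-Late (not actual data).
z, p = two_prop_ztest(96, 1114, 115, 1114)
print(round(p, 2))  # one-tailed p, in the neighborhood of the reported 0.09
```

<p>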
The interviewer team consisted of 11 staff, each conducting between 183 and 256 interviews. Breakoff rates between interviewers ranged from 2.7% to 17.2%. Because of the wide range of breakoff rates, we add fixed effects for interviewer in the logistic regression to control for interviewer effects.<\/p>\n<p>The data show that the two treatment groups are balanced on observable demographic characteristics including gender, age, region, urban\/rural residence, education, and wealth (table in appendix). In the multivariate analysis, we control for these variables save education and wealth, which are asked at the end of the questionnaire (at which point many have broken off). A simple analysis of the limited set of demographic variables asked in the eligibility block suggests that women were about 30% (3.9 percentage points) more likely to drop out (p &lt; 0.01) than men. Age, urban\/rural residence, region, and time of interview (day or night, weekend or weekday) are not significantly associated with breakoff rate in bivariate analyses.<\/p>\n<h1>Results<\/h1>\n<p>Table 1 shows the breakoff rate in each experiment condition. Among the full sample, the breakoff rate for the COVID-Late group is 1.7 percentage points higher than it is for the COVID-First group. This means that for a 2000-person survey, 34 fewer interview attempts need to be made to reach the target sample size when topic-induced motivation is elevated. This difference, however, is not statistically significant (one-tailed p-value = 0.09). 
To test H2, we use the measure of concern about the seriousness of the pandemic to filter out respondents who believe the COVID outbreak is less than \u201csomewhat serious.\u201d As the second data column of Table 1 (\u201cLimited Sample\u201d) shows, the treatment effect nearly doubles to 3.1 percentage points (p &lt; 0.025), in line with our expectation from H2.<\/p>\n<p style=\"text-align: center;\"><a href=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-19561 aligncenter\" src=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_1-1024x397.png\" alt=\"\" width=\"450\" height=\"174\" srcset=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_1-1024x397.png 1024w, https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_1-300x116.png 300w, https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_1-768x298.png 768w, https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_1.png 1032w\" sizes=\"auto, (max-width: 450px) 100vw, 450px\" \/><\/a><\/p>\n<p style=\"text-align: center;\"><em>Note: Standard errors in parentheses. Standard errors for \u201cNet difference\u201d are calculated from a two-sample difference-in-proportions z-test.<\/em><\/p>\n<p>This difference in the treatment effect between the two samples is consistent with the topic-induced motivation framework: those who care more about the coronavirus should be more affected by the placement of questions related to it. Examining Table 1 more closely reveals that the growth in the treatment effect after limiting the sample is attributable mostly to the COVID-Late group; the breakoff rate for the COVID-First group does not change much between the samples. This lends additional credence to the topic-induced motivation argument. 
According to the theory, those who are more preoccupied with COVID (the limited sample) should be particularly sensitive to not being asked about the issue. Similarly, we would not expect the difference between the full and limited sample to be large for the COVID-First respondents since they are asked about the topic right away.<\/p>\n<p style=\"text-align: left;\">To further isolate the effect of the treatment, we estimate logistic regressions using the full and limited sample, while controlling for respondent age, gender, region, and urban\/rural residence, along with adding interviewer fixed effects. The results are shown in Table 2. The results show a similar pattern to Table 1. The treatment has an effect in the expected direction in both cases, but it is only statistically significant in the sample in which those who care little about the coronavirus are filtered out. Men are less likely to break off, and, as described in the Descriptive Statistics section, there is significant variation in breakoff rate by interviewer. There is no significant variation by region of residence.<\/p>\n<p style=\"text-align: center;\"><a href=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-19562 aligncenter\" src=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_2-1024x828.png\" alt=\"\" width=\"450\" height=\"364\" srcset=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_2-1024x828.png 1024w, https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_2-300x243.png 300w, https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_2-768x621.png 768w, https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_2.png 1026w\" sizes=\"auto, (max-width: 450px) 100vw, 450px\" \/><\/a><br \/>\n<em>Note: Standard errors in parentheses. We drop 31 cases which are missing \u201curban\u201d. 
The baseline category for \u201cmale\u201d (gender of respondent) includes women and two cases in which \u201cother\u201d was chosen. p-values are one-tailed for the treatment and two-tailed for the other variables. * p &lt; 0.05 ** p &lt; 0.025.<\/em><\/p>\n<p>As an aside, the breakoff data seem to coincide with the timing of the pandemic. Data collection took place between April and June, when the pandemic was relatively new. Concern gradually grew over the data collection period (mean of 4.11 on the 5-point concern about the seriousness of the pandemic scale in April, compared to 4.43 in June). Notably, this coincided with a drop in the overall breakoff rate (14.09% in April, 4.12% in June). Those who took the survey later on (late May and into June 2020) were significantly less likely to break off (p &lt; 0.01). Though not a scientifically rigorous test, this result is in line with the motivating hypothesis that people became more interested in surveys about COVID as the issue became omnipresent. We also tested for heterogeneous treatment effects and found no significant interaction between treatment and date of interview, age, gender, or urban\/rural residence. Those living in the North region seemed to be comparatively more affected by the treatment, though the differences between it and the other regions are only marginally significant, and likely due to chance.<\/p>\n<h1>Discussion<\/h1>\n<p>Existing literature has documented how the stated topic of a survey influences response rates. Prior research has also shown that many facets of questionnaire design, like cognitive load, mode, or structure (e.g., number of questions on a page), are linked to engagement with and motivation to complete surveys. 
Little research, however, has systematically and experimentally tested how the placement and topic of survey questions can affect engagement once respondents begin the survey.<\/p>\n<p>We investigate the relationship between question placement and topic, on the one hand, and breakoff behavior in surveys, on the other hand. In doing so, we posit and test the idea that initiating surveys with interesting and relevant questions increases participants\u2019 engagement and, thus, reduces breakoffs. We theorize that two factors \u2013 topic (capacity of the survey to produce interest) and question placement (location of a module within the survey) \u2013 are jointly important in motivating engagement. We offer a theoretical framework that permits any one of three non-rival micro-mechanisms to undergird this dynamic: salient questions pique respondent interest, relevant questions convince respondents the survey is worthwhile, and\/or questions on important topics induce bonding between the interviewer and interviewee. We test the argument by leveraging one particularly salient topic in 2020: the COVID-19 pandemic. Whereas most research studies this topic with web surveys in developed countries, we focus on behavior during a phone study in a less developed democracy.<\/p>\n<p>In an original experiment carried out in a national phone survey in Haiti, we find a pattern of results that is overall supportive of the notion that question placement and question topic jointly matter. Frontloading the survey with questions about the COVID-19 pandemic led to marginally fewer breakoffs, though the result is not statistically significant. We do find support for the topic-induced motivation argument in a more precise test: the treatment effect widens when those who are unconcerned with the pandemic are removed from the analysis.<\/p>\n<p>We had the fortunate opportunity to repeat this experiment in another probabilistic national phone survey in Ecuador. 
Fieldwork for this survey took place between December 2020 and January 2021. The sample design, questionnaire, and training protocols were broadly the same as those in Haiti, but the target sample size in Ecuador was only 800 adults. Though breakoffs were much rarer in Ecuador (1.6%), we found that the breakoff rate was over three times higher for those assigned to the COVID-Late Condition compared to the COVID-First Condition, in line with our expectations. The estimated treatment effect was 1.7 percentage points (one-tailed p-value &lt; 0.05), coincidentally the same effect size found in Haiti. However, in this case, the treatment effect barely moved when we removed 60 respondents who said the coronavirus issue was less than serious, unlike what we showed in Haiti, possibly because the number of breakoffs was already so low. Nevertheless, the conclusion was the same as in Haiti: frontloading the questionnaire with COVID items has a marginal but noticeable effect.<\/p>\n<p>Though the effects we find are not immense in magnitude and are on the borderline of statistical significance, based on the consistency of the treatment effect between the two countries, we conclude that surveys will be slightly more efficient if participants are asked questions related to the top issue of the day at the beginning of the questionnaire. Even marginal gains can be beneficial for survey practitioners. Due to a lower breakoff rate, fewer respondents will need to be interviewed to reach the target sample size. In practical terms, for a 2000-person survey, our results would suggest that placing questions about the most salient issue first would turn about 34 breakoffs into complete interviews (62 if the issue were universally seen as serious), which saves a day or two of fieldwork for a survey like ours. 
However, we consider the COVID outbreak to be a \u201cmost likely\u201d case (Gerring 2007) for the topic-induced motivation theory, and, finding only mild effects, we would not necessarily recommend adding \u201cthrowaway\u201d questions or adjusting the questionnaire in a way that compromises other survey objectives for this purpose alone. But all else equal, we consider it wise to begin surveys with the most important issues of the day.<\/p>\n<p>Our study contributes to understanding respondent behavior during surveys. We show that the decision to complete a survey once it begins is influenced jointly by topic and question order. This has implications for assessing substantive survey results: because respondents may break off depending on question order (H1), and this behavior is tied to level of interest in the topic (H2), the distribution of valid responses to a question could change depending on where questions are placed in the instrument. For example, in this case, researchers who ask questions about COVID at the end of a questionnaire will observe artificially lower levels of concern about the pandemic, because those who are highly preoccupied with the issue already broke off.<\/p>\n<p>In future work, we recommend that additional experiment-based studies consider the extent to which similar dynamics can be found across other types of surveys and in different contexts (including but not restricted to different types of crises). While we believe that we would observe similar results in a context in which there is another type of highly salient issue or event (say, the assassination of Haiti\u2019s president), it is not clear if question topic and placement would induce engagement for a moderately important issue (say, an upcoming election). Furthermore, we note that our study does not permit us to assess the micro-mechanisms (piqued interest, belief it is time well-spent, or interviewer-interviewee bonding) that may produce topic-induced motivation. 
Therefore, we further recommend that future research work to assess these varying paths through which question placement and topic may affect engagement. Additional explorations could also more carefully measure how \u201cinteresting\u201d respondents find different topics, since \u201cseriousness\u201d does not necessarily equate to eagerness to talk about an issue. Finally, in this study, we also found substantial differences in breakoff rates by country, interviewer, and gender. As scholarship on breakoff behaviors continues to expand, additional avenues of research ought to include work that considers what factors drive variation across these three variables, and how this relationship might be affected by questionnaire characteristics.<\/p>\n<h1><strong>Appendix<\/strong><\/h1>\n<p style=\"text-align: center;\"><a href=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_A.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-19563\" src=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_A-1024x841.png\" alt=\"\" width=\"450\" height=\"370\" srcset=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_A-1024x841.png 1024w, https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_A-300x246.png 300w, https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_A-768x631.png 768w, https:\/\/surveyinsights.org\/wp-content\/uploads\/2024\/01\/Table_A.png 1342w\" sizes=\"auto, (max-width: 450px) 100vw, 450px\" \/><\/a><br \/>\n<em>Notes: Category \u201cRegion: South\u201d omitted for redundancy. \u201cWealth\u201d is based on principal components analysis scores, which are in turn based on reported possession of various household items. 
Gender, age, region, and urban were assessed near the beginning of the questionnaire, and education and wealth near the end (and thus the latter have much higher missingness\/standard errors).<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction Surveys have become a dominant method of research in many social scientific fields (Rossi et al. 2013) and are a key mechanism for inserting public opinion into democratic governance (Berinsky 2017). Unfortunately, many surveys suffer from high rates of breakoff \u2013 that is, instances in which respondents prematurely and deliberately stop their participation. Breakoffs [&hellip;]<\/p>\n","protected":false},"author":4974,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[41,49],"tags":[1003,778,294,1005,726,1004],"class_list":["post-19071","post","type-post","status-publish","format-standard","hentry","category-questionnaire_design","category-survey-design-2","tag-breakoffs","tag-covid-19","tag-experiment","tag-haiti","tag-non-response","tag-survey-topic"],"acf":[],"_links":{"self":[{"href":"https:\/\/surveyinsights.org\/index.php?rest_route=\/wp\/v2\/posts\/19071","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/surveyinsights.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/surveyinsights.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/surveyinsights.org\/index.php?rest_route=\/wp\/v2\/users\/4974"}],"replies":[{"embeddable":true,"href":"https:\/\/surveyinsights.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=19071"}],"version-history":[{"count":44,"href":"https:\/\/surveyinsights.org\/index.php?rest_route=\/wp\/v2\/posts\/19071\/revisions"}],"predecessor-version":[{"id":20335,"href":"https:\/\/surveyinsights.org\/index.php?rest_route=\/wp\/v2\/posts\/19071\/revisions\/20335"}],"wp:at
tachment":[{"href":"https:\/\/surveyinsights.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=19071"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/surveyinsights.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=19071"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/surveyinsights.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=19071"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}