{"id":1731,"date":"2013-05-08T15:16:10","date_gmt":"2013-05-08T14:16:10","guid":{"rendered":"http:\/\/surveyinsights.org\/?p=1731"},"modified":"2024-12-05T17:31:02","modified_gmt":"2024-12-05T16:31:02","slug":"research-note-reducing-the-threat-of-sensitive-questions-in-online-surveys","status":"publish","type":"post","link":"https:\/\/surveyinsights.org\/?p=1731","title":{"rendered":"Research Note: Reducing the Threat of Sensitive Questions in Online Surveys?"},"content":{"rendered":"<h1><strong>Introduction<\/strong><\/h1>\n<p>This research note explores the effect of offering an open-ended comment field on the responses to a series of questions about attitudes toward immigrants in an online survey. The motivation for this research was a puzzling finding in an earlier study on race of interviewer effects. In that study (Krysan and Couper, 2003), it was found that \u2013 contrary to expectation \u2013 white respondents expressed more positive (less prejudicial) responses to minorities when interviewed using a virtual interviewer \u2013 a video of an interviewer in a computer-assisted self-interviewing (CASI) survey \u2013 than when interviewed by a live interviewer. This runs counter to the evidence that self-administered methods yield more candid responses to socially undesirable questions than interviewer-administered methods (see Tourangeau and Yan, 2007).<\/p>\n<p>One post-hoc explanation for that surprising finding emerged from debriefings of respondents following the interview (Krysan and Couper, 2002). 
For example, one respondent offered the following comment: \u201cSome questions were kind of broad and could use some clarification that may be possible with a live interview.\u201d Another respondent expressed the following view: \u201cI would have liked to add comments, while I did in the live interview, and that made it more comfortable.\u201d In fact, 13% of white respondents spontaneously offered a comment of this type, while none of the African American respondents did so (Krysan and Couper, 2002).<\/p>\n<p>This led Krysan and Couper to speculate that respondents may have been \u201creluctant in the virtual interviewer to give an answer that might appear racist because they could not explain themselves.\u201d They went on to note that \u201ceven in mail surveys it is quite easy for respondents to make notations and marks on the margins of the questionnaire to explain or provide qualifications to their answer. When respondents are restricted in the virtual interviewer [CASI] to typing a single letter or number for their answer, they are not allowed such flexibility \u2013 though presumably the design of such an instrument could make this capability an option.\u201d In a subsequent online study, Krysan and Couper (2006) did not explore this intriguing finding, and we know of no other studies that have investigated this possibility.<\/p>\n<p>Given this, we designed a study to test such a speculation \u2013 that offering respondents the opportunity to clarify or explain their responses may provide greater comfort in expressing potentially negative stereotypes that are typically subject to social desirability effects. We thus expected that those who were offered the comment field would have higher rates of expression of stereotypical or prejudicial attitudes. 
We also expected that use of the comment field (i.e., whether a comment was entered) and the length of the comments entered would be associated with higher prejudice scores.<\/p>\n<h1><strong>Design and Methods<\/strong><\/h1>\n<p>We tested the effect of offering the opportunity to comment in two different surveys administered to members of CentERdata\u2019s LISS panel, a probability-based online panel of adults age 16 and older in the Netherlands (Scherpenzeel and Das, 2011; see <cite><a href=\"http:\/\/www.lissdata.nl\/\">www.lissdata.nl<\/a><\/cite>). Our experiments were restricted to panel members of Dutch ancestry (i.e., immigrants were excluded), and we used a standard battery of 10 items on attitudes towards immigrants. The items are reproduced in <a href=\"#_tab_1\">Table 1<\/a>. These comprise a short form of the support-for-multiculturalism scale developed by Breugelmans and van de Vijver (2004; see also Breugelmans, van de Vijver, and Schalk-Soekar, 2009; Schalk-Soekar, Breugelmans, and van de Vijver, 2008). We reversed the meaning of the scale so that higher scores indicate greater opposition to multiculturalism or negative attitudes toward immigrants. All items were measured on a 5-point fully-labeled scale (1=disagree entirely, 5=agree entirely). 
Items marked (R) in <a href=\"#_tab_1\">Table 1<\/a> are reverse-scored, so that a high score indicates more negative attitudes.<\/p>\n<p><em id=\"_tab_1\">Table 1.\u00a0 Immigrant Attitude Items<\/em><\/p>\n<table border=\"1\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td valign=\"top\" width=\"597\">\n<ol>\n<li>I believe that it is good for the Netherlands to have several groups living here, each with their own cultural background (R).<\/li>\n<li>I don\u2019t like to be on a bus or train with a lot of non-native Dutch people.<\/li>\n<li>I believe that the unity of the Netherlands is weakened by the immigrant population.<\/li>\n<li>I believe that neighborhoods where lots of immigrants live are less safe.<\/li>\n<li>I believe that there are too many immigrants living in the Netherlands.<\/li>\n<li>I believe that it is best for the Netherlands if immigrants keep their own culture and customs (R).<\/li>\n<li>I feel at ease when I\u2019m in a neighborhood where lots of immigrants live (R).<\/li>\n<li>I believe that most immigrants know enough about Dutch culture and customs (R).<\/li>\n<li>I don\u2019t feel at ease when immigrants talk among each other in a language I cannot understand.<\/li>\n<li>I believe that immigrants try hard enough to get a job (R).<\/li>\n<\/ol>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>The first experiment was fielded in August 2009. A total of 8,026 panel members were selected for the survey, and 4,639 completed it, for a completion rate of 57.8%.\u00a0 Roughly one-third of eligible panel members were randomly assigned to the control condition, with the remaining two-thirds assigned to the experimental condition. After removing ineligibles and a small number of breakoffs (9), we are left with 4,363 observations for analysis. 
In the control condition, the items were presented one at a time on separate screens or Web pages (i.e., 10 pages), while in the experimental condition, each item was followed on the next page with an open text field offering the respondent the opportunity to elaborate on their answer (i.e., 20 pages). In this condition, the introduction to the series included the statement: \u201cAfter each response, you can explain your response on the next screen.\u201d The wording on the follow-up screen read: \u201cYour response to this statement was [xxx]. Can you explain this answer?\u201d Answers were required for both the closed-ended and the follow-up open-ended questions. See <a href=\"#_fig_1\">Figure 1<\/a> and <a href=\"#_fig_2\">Figure 2<\/a> for screenshots from Experiment 1.<\/p>\n<p><em id=\"_fig_1\"><span class=\"breakBefore\">Figure 1: Screenshot of Closed Question, Experiment 1<\/span><\/em><\/p>\n<p><a href=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2013\/05\/Figure-1.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-2179\" title=\"Figure 1\" src=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2013\/05\/Figure-1.jpg\" alt=\"\" width=\"668\" height=\"374\" srcset=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2013\/05\/Figure-1.jpg 954w, https:\/\/surveyinsights.org\/wp-content\/uploads\/2013\/05\/Figure-1-300x167.jpg 300w\" sizes=\"auto, (max-width: 668px) 100vw, 668px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p><em id=\"_fig_2\">Figure 2: Screenshot of Open Question, Experiment 1<\/em><\/p>\n<p><a href=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2013\/05\/Figure-22.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-2178\" title=\"Figure 2\" src=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2013\/05\/Figure-22.jpg\" alt=\"\" width=\"668\" height=\"374\" srcset=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2013\/05\/Figure-22.jpg 954w, 
https:\/\/surveyinsights.org\/wp-content\/uploads\/2013\/05\/Figure-22-300x167.jpg 300w\" sizes=\"auto, (max-width: 668px) 100vw, 668px\" \/><\/a><\/p>\n<p>Experiment 1 was not implemented exactly as intended, because of limitations of the survey software at the time.\u00a0 First, the open field was meant to appear below the relevant closed-ended question (not on a separate page). Second, the open field was intended to be optional, not required. For these reasons, we repeated the study following an update to the survey software.<\/p>\n<p>The second experiment was fielded in December 2010. A total of 7,328 panel members were selected for the survey, with 5,328 completing it (a further 8 panelists broke off), for a completion rate of 72.7%. Half of the sample was randomly assigned to each of the experimental and control conditions. In the second experiment, the open comment field appeared below the closed-ended question on the same page, and answers to the open question were not required. <a href=\"#_fig_3\">Figure 3<\/a> shows one example item from Experiment 2.<\/p>\n<p><em id=\"_fig_3\">Figure 3: Screenshot of Closed and Open Question, Experiment 2<\/em><\/p>\n<p><a href=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2013\/05\/Figure_3.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-2163\" title=\"Figure_3\" src=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2013\/05\/Figure_3.jpg\" alt=\"\" width=\"672\" height=\"398\" srcset=\"https:\/\/surveyinsights.org\/wp-content\/uploads\/2013\/05\/Figure_3.jpg 960w, https:\/\/surveyinsights.org\/wp-content\/uploads\/2013\/05\/Figure_3-300x177.jpg 300w\" sizes=\"auto, (max-width: 672px) 100vw, 672px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<h1><strong>Analysis and Results <\/strong><\/h1>\n<p>We discuss the results of the two experiments in turn. For Experiment 1, Cronbach\u2019s alpha for the 10-item scale is high (0.876), so we examine both the combined measure and the individual items. 
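As a minimal sketch of the scoring just described (our own illustration, not the authors' code): responses coded 1\u20135 are reverse-scored for the items marked (R) in Table 1, averaged into a per-respondent prejudice score, and the scale's internal consistency is summarized by Cronbach's alpha.

```python
# Sketch (assumed coding, not the authors' code): score the 10-item
# multiculturalism scale. Items marked (R) in Table 1 -- items 1, 6, 7,
# 8, and 10, i.e. zero-based indices 0, 5, 6, 7, 9 -- are reverse-scored
# so that higher scores mean more negative attitudes toward immigrants.
import numpy as np

REVERSED = [0, 5, 6, 7, 9]  # items 1, 6, 7, 8, 10 in Table 1

def prejudice_scores(responses):
    """responses: (n_respondents, 10) array of answers coded 1-5."""
    r = np.asarray(responses, dtype=float).copy()
    r[:, REVERSED] = 6 - r[:, REVERSED]  # reverse-score: 1<->5, 2<->4
    return r.mean(axis=1)                # per-respondent scale score

def cronbach_alpha(responses):
    """Internal-consistency estimate for the k-item scale."""
    r = np.asarray(responses, dtype=float).copy()
    r[:, REVERSED] = 6 - r[:, REVERSED]
    k = r.shape[1]
    item_vars = r.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = r.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)
```

The function and variable names here are ours; only the coding rules (5-point scale, reverse-scored items, higher = more prejudice) come from the text.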
Contrary to expectation, we find significantly <span style=\"text-decoration: underline;\">lower<\/span> prejudice scores for those who got the open question (F[1, 4352]=25.6, p&lt;.0001), although the effect size is trivial (Cohen\u2019s <em>d<\/em> \u2248 0.16) per Cohen\u2019s (1988) guidelines. The results are shown in <a href=\"#_tab_2\">Table 2<\/a>.<\/p>\n<p><em id=\"_tab_2\">Table 2. Means of Prejudice Scores by Experimental Condition, Experiments 1 and 2<\/em><\/p>\n<table border=\"1\" width=\"654\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td rowspan=\"2\" valign=\"top\" width=\"258\">Prejudice Scores<\/td>\n<td colspan=\"2\" valign=\"top\" width=\"198\">\n<p align=\"center\">No comment field<\/p>\n<\/td>\n<td colspan=\"2\" valign=\"top\" width=\"198\">\n<p align=\"center\">Comment field<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td valign=\"top\" width=\"96\">\n<p align=\"center\">Mean<\/p>\n<\/td>\n<td valign=\"top\" width=\"103\">\n<p align=\"center\">(s.e.)<\/p>\n<\/td>\n<td valign=\"top\" width=\"87\">\n<p align=\"center\">Mean<\/p>\n<\/td>\n<td valign=\"top\" width=\"111\">\n<p align=\"center\">(s.e.)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td width=\"258\">Experiment 1<\/td>\n<td width=\"96\">3.31<\/td>\n<td width=\"103\">(0.013)<\/td>\n<td width=\"87\">3.20<\/td>\n<td width=\"111\">(0.016)<\/td>\n<\/tr>\n<tr>\n<td width=\"258\">Experiment 2<\/td>\n<td width=\"96\">3.31<\/td>\n<td width=\"103\">(0.013)<\/td>\n<td width=\"87\">3.25<\/td>\n<td width=\"111\">(0.014)<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Looking at the individual items (not shown in <a href=\"#_tab_2\">Table 2<\/a>), all ten items show the same negative trend (lower prejudice scores in the group with the comment field than in the group without the comment field), with seven of the differences reaching statistical significance (p&lt;.05).\u00a0 The first item, which respondents answered before their first exposure to the experimental manipulation, did not show a significant difference (p=.47), suggesting that the 
randomization to experimental conditions was effective.\u00a0 A multivariate analysis of variance (MANOVA) of the ten items yields a similar result (Wilks\u2019 Lambda=0.973, F[10, 4343]=11.98, p&lt;.0001).\u00a0 Also contrary to expectation, among those who got the comment field, we find a small but significant <span style=\"text-decoration: underline;\">negative<\/span> correlation (r=\u22120.08, p&lt;.01) between the number of words entered and the prejudice score. That is, those with higher prejudice scores wrote fewer words, on average.<\/p>\n<p>Given the surprising finding, we wondered if the implementation of the experiment might have produced the unexpected result. So we replicated the experiment, as noted above.\u00a0Cronbach\u2019s alpha for the ten-item measure was similarly high (0.894) in the second implementation. The mean differences in prejudice scores again show significant effects (F[1, 5326]=7.1, p=0.0077), again with a small effect size (Cohen\u2019s <em>d <\/em>\u2248 0.073). However, the direction of the effect is again opposite to what we hypothesized, with lower prejudice scores for those who got the optional comment field (see <a href=\"#_tab_2\">Table 2<\/a>). All ten individual items show the same negative trend (with seven significant at p&lt;.05), and a MANOVA yielded similar results (Wilks\u2019 Lambda=0.973, F[10, 4343]=11.98, p&lt;.0001).<\/p>\n<p>Given that the comment field did not require an answer in Experiment 2, we can look at this in a little more detail. Among those in the experimental group, 37% entered at least one comment, while only 4.1% entered comments in all ten fields. Again, among those who got the comment field, we find a significant (p&lt;.0001) negative correlation (r=\u22120.20) between the number of words entered in the comment field and the prejudice score. 
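For readers who want to check the effect-size figures reported above, the two quantities involved, Cohen's d for the group difference and the Pearson correlation between comment length and prejudice score, follow their standard definitions. A minimal sketch (our own illustration, not the authors' analysis code):

```python
# Sketch of the standard definitions of the two effect-size statistics
# used in the text; function names are ours.
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference with a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

def pearson_r(x, y):
    """Pearson product-moment correlation between paired observations."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```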
In this group, the mean prejudice score was significantly (F[1, 2629]=8.85, p=0.0030) lower for those who entered any comments (mean=3.20, s.e.=0.023) than for those who entered no comments (mean=3.29, s.e.=0.017), again contrary to expectation.<\/p>\n<p>As a check on the coding of the dependent variable, we regressed the prejudice scores on a series of background variables. As expected, education has a significant (p&lt;.0001) negative association with prejudice towards immigrants, as does urbanicity (p&lt;.001). Gender is also significantly (p&lt;.0001) associated with prejudice, with women having lower prejudice scores than men, while age has a curvilinear association (p=0.012), with those age 45-54 reporting lower levels of prejudice than those in younger or older age groups. We also tested interactions of the experimental manipulation with each of these demographic variables, and find only one significant interaction, with the difference between the two experimental groups being larger for men than for women. In other words, the unexpected findings do not appear to be due to errors in the coding of the responses to the scale items.<\/p>\n<p>Finally, could the negative correlation between the number of words entered and prejudice be explained by education? Given that education is negatively correlated with prejudice, is it that better-educated respondents are making more use of the comment fields? We find no evidence of this. There is no significant association between education and whether any comments were entered, and only a curvilinear relationship between education and the length of comments, with those in the middle education groups entering longer comments than those with lower or higher education. 
In a multivariate model, we find no interaction of education and number of words entered on prejudice scores, among those who got the comment field.<\/p>\n<h1><strong><span class=\"breakBefore\">Discussion<\/span><\/strong><\/h1>\n<p>Across two experiments conducted in the same population, we find no support for the hypothesis that offering the opportunity to clarify, justify, or explain a response would lead to higher reports of socially undesirable attitudes \u2013 in this case, negative attitudes towards immigrants. Contrary to expectation, we find significant effects in the opposite direction, that is, those offered the opportunity to clarify their closed-ended responses, and those who availed themselves of the opportunity, expressed significantly more positive attitudes towards immigrants in the Netherlands. We also find that those who entered longer comments had lower levels of prejudice. What may account for these unexpected results?<\/p>\n<p>First, the panel nature of the sample may have affected the results. LISS panel members have been asked questions of this type at various points before, and may be comfortable revealing their views on immigrants. In other words, would the same results be found in a cross-sectional sample, where respondents may have less trust in the survey organization or less comfort answering questions such as these?<\/p>\n<p>Second, the questions themselves might not be particularly threatening or susceptible to social desirability biases. In general, the more threatening a question is, the more susceptible it is likely to be to social desirability bias, and hence the more likely it might be affected by the experimental manipulation. 
The questions used by Krysan and Couper (2003) ask more directly about overt racism, and hence may be more subject to social desirability bias than the items used here.<\/p>\n<p>Third, the mode of data collection may be a factor.\u00a0The need to explain one\u2019s answers may be smaller in an online self-administered environment than in one where an interviewer is present.\u00a0There is evidence that people are more willing to disclose sensitive information in Web surveys than in other survey modes (e.g., Kreuter, Presser, and Tourangeau, 2008).<\/p>\n<p>However, all three of these possible explanations may account for a lack of effect of the experimental manipulation.\u00a0They do not explain why we find significant \u2013 albeit small \u2013 effects in both studies.\u00a0Finally, the initial hypothesis may of course be wrong.\u00a0The results \u2013 replicated in two similar but not identical experiments, in the same population \u2013 suggest that the addition of the comment field has some effect on the level of prejudice reported \u2013 just not in the direction expected.<\/p>\n<p>One alternative explanation may be that having people reflect on their answers (by asking them to explain their choices) may increase self-reflection and editing of responses, thereby decreasing the reporting of negative or stereotyped attitudes. Some support for this comes from the experimental literature on prejudice (e.g., Devine, 1989; Devine and Sharp, 2009; Dovidio, Kawakami, and Gaertner, 2002; Monteith and Mark, 2005; Powell and Fazio, 1984; Wittenbrink, Judd, and Park, 1997) which suggests that stereotypes may be activated automatically, and that the expression of less-prejudiced views takes conscious effort.\u00a0In other words, faster responses may simply be more prejudiced responses.\u00a0 We find little evidence for this hypothesis using indirect indicators in our data. In an analysis of timing data from the second experiment, we find no correlation (r=-0.018, n.s.) 
between the time taken to answer these 10 questions and the level of prejudice, among those who did not get the comment fields.<\/p>\n<p>Without knowing what the respondents\u2019 \u201ctrue\u201d attitudes are, we can only surmise about the underlying mechanisms and the direction of the shift (whether the comment fields produce more socially desirable responding or more closely reflect underlying beliefs). Disentangling these effects requires further research.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction This research note explores the effect of offering an open-ended comment field on the responses to a series of questions about attitudes toward immigrants in an online survey. The motivation for this research was a puzzling finding in an earlier study on race of interviewer effects. In that study (Krysan and Couper, 2003), it [&hellip;]<\/p>\n","protected":false},"author":48,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[84,82,83,81],"class_list":["post-1731","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-attitudes-toward-immigrants","tag-comment-fields","tag-social-desirability","tag-web-surveys"],"acf":[],"_links":{"self":[{"href":"https:\/\/surveyinsights.org\/index.php?rest_route=\/wp\/v2\/posts\/1731","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/surveyinsights.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/surveyinsights.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/surveyinsights.org\/index.php?rest_route=\/wp\/v2\/users\/48"}],"replies":[{"embeddable":true,"href":"https:\/\/surveyinsights.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1731"}],"version-history":[{"count":52,"href":"https:\/\/surveyinsights.org\/index.php?rest_route=\/wp\/v2\/posts\/173
1\/revisions"}],"predecessor-version":[{"id":20337,"href":"https:\/\/surveyinsights.org\/index.php?rest_route=\/wp\/v2\/posts\/1731\/revisions\/20337"}],"wp:attachment":[{"href":"https:\/\/surveyinsights.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1731"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/surveyinsights.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1731"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/surveyinsights.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1731"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}