Altering the Survey-taking Climate: The Case of the 2010 U.S. Census
Yan T. & Datta A.R. (2015). Altering the Survey-taking Climate: The Case of the 2010 U.S. Census, Survey Methods: Insights from the Field. Retrieved from https://surveyinsights.org/?p=7324
Abstract
Response rates to household surveys have declined over the past several decades, and survey researchers and practitioners have been looking for ways to change the survey-taking climate to combat this decline. As part of the 2010 Decennial Census, the U.S. Census Bureau launched the 2010 Integrated Communications Campaign (2010 ICC), a multi-faceted effort to improve public awareness of, attitudes towards, and knowledge about the Census in order to increase Census participation. A communications program of this type is a unique case of an attempt to alter the external survey-taking climate and thus potentially affect survey participation. This paper empirically examines the extent to which exposure to the 2010 ICC affected knowledge and attitudes about the Census in the months leading up to Census Day. We then explore the relationship between different levels of attitudes and knowledge and subsequent Census participation. Our results suggest that the external survey-taking climate was altered to foster positive receptivity to the survey request, and that favorable receptivity, in turn, led to a higher likelihood of participating in the survey request (here, the 2010 Decennial Census). Implications for survey researchers and organizations are also discussed.
Keywords
survey nonresponse, survey-taking climate, the 2010 Integrated Communications Campaign
Acknowledgement
This work was done partially under contract from the U.S. Census Bureau to NORC at the University of Chicago. We thank the Census Bureau for its support. The views and opinions expressed are those of the authors and do not represent those of the Census Bureau.
Copyright
© the authors 2015. This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0)
1. Introduction
Response rates to household surveys have been declining in the past several decades (Curtin, Singer, and Presser 2005; Yan and Curtin 2010). The steady decline in response rates poses methodological and analytical challenges to survey organizations and researchers. Indeed, the low response rates experienced by survey organizations cast doubt on, and provoke heated debate about, the validity and practicality of continuing the tradition of probability-sample surveys.
Survey organizations and survey researchers have expended great effort to identify factors affecting survey participation, to uncover theories or mechanisms accounting for non-participation, and to change survey protocols and procedures to facilitate cooperation from potential survey respondents. According to an influential conceptual model of survey participation (Groves and Couper 1998), decisions to comply with a survey request depend on the social environment, respondent (or household) characteristics, survey design features, interviewer characteristics, and interviewer-respondent interactions. Among these factors, the social environment is considered stable, fixed, and beyond the control of survey researchers (Groves and Couper 1998). Yet even though survey researchers cannot alter the economic, social, and demographic characteristics of the environment in which sample households reside, survey researchers and organizations have changed their views on the malleability of one element of the social environment: the survey-taking climate.
The term “survey-taking climate” or “survey climate” was first referenced in 1991 in a paper about nonresponse research conducted at Statistics Sweden to combat declining response rates (Lyberg and Lyberg 1991). Although the term is not clearly defined there, the paper discussed the damage to response rates attributed to a hostile survey-taking climate and described the use of a “nonresponse barometer” to monitor the survey-taking climate in Sweden. Twenty-two years later, Lorenc and his co-authors continued this line of research by advocating increased efforts on the part of national statistical agencies to understand, track, and positively influence the survey-taking climate (Lorenc, Loosveldt, Mulry, and Wright 2013). In this paper, we adopt a working definition of survey climate as population-level attitudes and beliefs about survey research outside of an immediate request to participate in a survey. Admittedly, this is a rather specific definition. A broader definition could also encompass dimensions at the general societal level, such as declining response rates and increasing awareness of privacy and confidentiality rules (e.g., the Do-Not-Call registry in the United States).
Empirical research on how the survey-taking climate can be monitored and manipulated, and on the impact of such manipulation on survey participation, is sparse. The survey-taking climate is sometimes measured through a “survey on surveys” approach, and several instruments have been constructed to measure it (Goyder 1986; Hox, de Leeuw, and Vorst 1995; Rogelberg 1997; Singer, Van Hoewyk, and Maher 1998; Stocke 2001; 2006; Loosveldt and Storms 2008).
In terms of ways to alter the survey-taking climate, Lorenc and his co-authors (2013) introduced the concept of social marketing campaigns as a method to improve the survey-taking climate. However, few instances can be found in the literature of using social marketing campaigns to influence the survey-taking climate, and research demonstrating the ability of such campaigns to actually alter the survey-taking climate is even more limited.
Social marketing campaigns have been used in other contexts, however, to promote health and wellbeing, to encourage energy conservation, to reduce poverty, to increase financial literacy, and to increase civic participation (Evans, Davis, and Farrelly 2008; Bertrand, Mullainathan, and Shafir 2006; World Bank 2012). Recent evidence reviews indicate that social marketing campaigns using mass media (television, radio, outdoor and print advertising, and the Internet) can be effective in changing, on a population level, behaviors such as smoking, physical activity, condom use, and financial practices, as well as behavioral mediators such as knowledge, attitudes, and beliefs related to these behaviors (Farrelly, Davis, Haviland, Messeri, and Healton 2005; Hornik 2002; World Bank 2012).
Datta and colleagues specify a conceptual model through which a media campaign alters a survey-taking climate and the altered climate in turn affects individuals’ survey participation decisions (Datta, Yan, Evans, Pedlow, Spencer, and Bautista 2012; Evans, Datta, and Yan 2014; Evans, Yan, and Datta 2012). As hypothesized by the model, the media campaign changes the survey-taking environment by altering the knowledge, attitudes, and beliefs people have about taking surveys, which can ultimately affect participation behavior in response to a specific survey request.
This paper discusses an example of implementing a media campaign to alter the survey-taking climate: the 2010 Integrated Communications Campaign (2010 ICC) launched by the U.S. Census Bureau. Using the 2010 ICC as an example, we address two empirical research questions that focus on changes to the survey-taking climate around the time of the 2010 Decennial Census and the impact of those changes on actual participation in the Census:
- To what extent was the survey-taking climate altered around the time of the 2010 Census?
- Did these changes result in increased Census participation?
2. 2010 Integrated Communications Campaign (2010 ICC)
The U.S. Census Bureau implemented the 2010 Integrated Communications Campaign to encourage participation in the 2010 Decennial Census. The campaign included paid advertising; earned media such as news stories and message placement within telenovela story lines; local organizational actions including union newsletters and street fairs; person-to-person communications such as workplace and church-based communications; a Census in Schools program for outreach to students in elementary and secondary schools; and extensive online social media messaging. The 2010 ICC was indeed integrated, with a high degree of coordination in materials, images, and themes across the different components of the campaign. By integrating these elements, the campaign aimed to more effectively ensure that everyone was reached, especially the hard-to-enumerate. The campaign was also pervasive: the paid media campaign was one of the nation's largest television advertising efforts in the early part of 2010, and the Census in Schools component targeted children in elementary and secondary schools by delivering educational materials to every eligible school in the country.
3. The 2010 Census Integrated Communications Program Evaluation (CICPE)
The principal data source for this paper is the 2010 Census Integrated Communications Program Evaluation (CICPE) conducted by NORC at the University of Chicago under contract from the Census Bureau to: 1) track the evolution of knowledge of and attitudes toward the Census prior to and during the 2010 Census; 2) evaluate the effect of the 2010 ICC on mail return and cooperation with enumerators; 3) increase understanding of the mechanisms through which a communications campaign can affect census participation; and 4) emphasize the perspectives of hard-to-count groups in achieving these analytical objectives (Census Bureau, 2012).
Survey data collection for the 2010 CICPE took place at three time points. Wave 1 was conducted from mid-September 2009 through mid-January 2010, during early partnership activity, to assess baseline levels of all measures of public attention and intentions. Wave 2 took place from mid-January through mid-March 2010, during the peak of the paid media campaign and partnership activities, but before households received their census forms. Finally, Wave 3 was conducted during the nonresponse follow-up (NRFU) period, from mid-April through mid-July 2010, when people had made their decisions about participating in the mailback phase and had been exposed to the full course of the paid media and partnership campaigns.
The same multi-mode address-based sampling approach was adopted across all three waves of data collection. Sampled addresses that were matched with a telephone number were first worked in a telephone center via Computer-Assisted Telephone Interviewing (CATI). Sampled addresses that were not matched with a telephone number or cases that showed no progress in the telephone center after a designated period of time were sent to field interviewers for Computer-Assisted Personal Interviewing (CAPI). Response rates varied by sample group and by wave. We refer readers to the study’s final report, which is available online (https://www.census.gov/2010census/pdf/2010_Census_ICP_Evaluation.pdf), for in-depth descriptions of the study design, data collection methodology (including response rates), analyses, detailed data tables, and interpretation of findings.
Survey samples included approximately equal numbers of individuals from five hard-to-count groups (Hispanic, non-Hispanic Black, American Indian, Asian, and Native Hawaiian) and one comparison group (non-Hispanic White), which in fact includes all non-Black, non-Hispanic individuals. Together with the Hispanic and non-Hispanic Black categories, this ‘non-Hispanic White’ category is representative of the entire U.S. population residing in households. The remaining three groups (Asians, Native Hawaiians, and American Indians) are independent supplemental samples of three less prevalent hard-to-enumerate subgroups. Sample sizes are shown in Table 1 by wave and race/ethnicity; response rates are displayed in Table 2 by wave, race/ethnicity, and type of sample.
Table 1. Completed Interviews by Race/Ethnicity Group in National Sample
Race/Ethnicity | Wave 1 (Early Stage of Campaign) | Wave 2 (Peak Stage of Campaign) | Wave 3 (NRFU Stage of Campaign) | Total Number of Completes |
Hispanic | 461 | 369 | 539 | 1,369 |
Non-Hispanic Black | 377 | 384 | 526 | 1,287 |
Non-Hispanic White | 404 | 358 | 472 | 1,234 |
Total National Sample | 1,242 | 1,111 | 1,537 | 3,890 |
Supplemental: American Indian | 457 | 392 | 529 | 1,378 |
Supplemental: Asian | 542 | 410 | 548 | 1,500 |
Supplemental: Native Hawaiian | 430 | 350 | 494 | 1,274 |
Total (All Sample Groups) | 2,671 | 2,263 | 3,108 | 8,042 |
Table 2. Response Rates by Wave, Race/Ethnicity Group, and Type of Sample
Race/Ethnicity | Wave 1 | Wave 2 (Cross-sectional Sample) | Wave 2 (Panel Sample) | Wave 3 (Cross-sectional Sample) | Wave 3 (Panel Sample) |
Hispanic | 461 | 118 | 251 (78.2%) | 285 | 254 (71.7%) |
Non-Hispanic Black | 377 | 111 | 273 (74.8%) | 268 | 258 (68.4%) |
Non-Hispanic White | 404 | 99 | 259 (71.8%) | 221 | 251 (70.0%) |
Total National Sample | 1,242 (60.5%) | 328 (60.9%) | 783 | 774 (63.1%) | 763 |
Supplemental: American Indian | 457 (56.5%) | 107 (43.2%) | 285 (77.5%) | 235 (37.9%) | 294 (71.0%) |
Supplemental: Asian | 542 (50.7%) | 114 (64.2%) | 296 (62.1%) | 264 (73.8%) | 284 (58.5%) |
Supplemental: Native Hawaiian | 430 (30.6%) | 119 (46.1%) | 231 (67.2%) | 267 (53.3%) | 227 (68.9%) |
Note: Cells show numbers of completed interviews, with response rates in parentheses where available.
4. Results
4.1. To What Extent Was the Survey-taking Climate Altered Around the Time of the 2010 Census?
To address the first research question, we examine the trends in reported awareness of the Census, knowledge about the Census, and positive and negative attitudes towards the Census across waves as the media campaign unfolded and became more intensive.
Awareness of the Census is measured via two survey items (specific question wordings are displayed in Appendix I). Individuals are coded as “aware of the Census” if they indicate that they have heard of the Census, either with or without an accompanying definition. Table 3 displays the weighted percentage of respondents coded as aware of the Census over time. (Unless noted otherwise, all analyses are weighted and take into account the effects of clustering and weighting. The weights adjust for unequal selection probabilities, unequal subsampling rates, and differential nonresponse, and are poststratified to 2000 U.S. Decennial Census control totals.)
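For concreteness, the weighted percentages reported below can be computed along the following lines. This is a minimal sketch with hypothetical column names (`aware`, `weight`, `wave`); it omits the design-based, clustering-adjusted standard errors, which would require a dedicated survey-analysis package.

```python
# Minimal sketch of a weighted percentage (hypothetical column names).
# Design-based standard errors (clustering, strata) are not computed here.
import pandas as pd

def weighted_pct(df: pd.DataFrame, flag: str, weight: str = "weight") -> float:
    """Weighted percent of respondents with flag == 1."""
    return 100.0 * (df[flag] * df[weight]).sum() / df[weight].sum()

# Toy example: two aware respondents, one not, with unequal weights
toy = pd.DataFrame({"aware": [1, 1, 0], "weight": [1200.0, 800.0, 500.0]})
print(round(weighted_pct(toy, "aware"), 1))  # -> 80.0

# By wave: df.groupby("wave").apply(weighted_pct, flag="aware")
```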
As shown in Table 3, awareness of the Census was very high at all three waves, exceeding 90 percent for the national estimate even at Wave 1. Moreover, awareness increased over time for all reported subgroups, with statistically significant increases for the nation as a whole and for three hard-to-count population groups (American Indians, Asians, and Native Hawaiians) at both Wave 2 and Wave 3 relative to Wave 1. The Wave 1 to Wave 3 increase in awareness was also statistically significant for Hispanics. Since the campaign continued to disseminate information between Waves 2 and 3, Wave 2 can be seen as a partial-dose measure. This pattern of increasing awareness suggests that the campaign successfully raised people's awareness of the Census over its course.
Table 3. Awareness of the Census Across Wave, by Race/Ethnicity Group
Sample Type | W1 % (s.e.) | W2 % (s.e.) | W3 % (s.e.) |
Hispanic | 87.3 (4.3) | 93.8 (3.4) | 99.2* (0.5) |
Non-Hispanic Black | 86.9 (6.9) | 92.6 (3.8) | 98.2 (1.4) |
Non-Hispanic White | 95.9 (1.9) | 98.8 (0.6) | 99.3 (0.6) |
National Estimate | 93.9 (1.6) | 97.5* (0.7) | 99.2* (0.5) |
American Indian | 88.3 (3.1) | 97.1* (0.7) | 99.6* (0.2) |
Asian | 73.1 (3.9) | 89.0* (3.5) | 91.4* (4.2) |
Native Hawaiian | 80.9 (6.6) | 94.8* (1.8) | 97.8* (1.4) |
Note: * Indicates p<.05 relative to Wave 1 for the same group.
In order to evaluate whether the ICC improved the general public’s knowledge about the Census, the questionnaires for all waves included a series of items designed to measure respondents’ understanding of the uses of the Census. Respondents were first asked whether they were required by law to participate in the Census. Then they were asked a series of items (some true, some false) about possible uses of Census data. For each respondent, we computed a knowledge score by counting correct responses to these eight knowledge questions (a scoring sketch follows Table 4). Table 4 displays weighted mean knowledge scores across waves by race/ethnicity. Like awareness, knowledge about the Census increased over time; increases were statistically significant for all sample groups at Wave 3 and for three of the five hard-to-count population subgroups at Wave 2 (American Indians, Asians, and Native Hawaiians). Like Table 3, Table 4 provides another piece of evidence that the survey-taking climate was altered over the course of the media campaign, resulting in greater knowledge about the Census among the targeted population.
Table 4. Mean Knowledge Scores (out of a Possible 8.0) Across Wave, by Race/Ethnicity Group
Sample Type | W1 (s.e.) | W2 (s.e.) | W3 (s.e.) |
Hispanic | 3.8 (0.2) | 4.5 (0.3) | 5.3* (0.2) |
Non-Hispanic Black | 3.2 (0.4) | 3.9 (0.3) | 4.4* (0.2) |
Non-Hispanic White | 4.6 (0.2) | 4.9 (0.2) | 5.4* (0.1) |
National Estimate | 4.4 (0.2) | 4.7* (0.1) | 5.3* (0.1) |
American Indian | 3.6 (0.1) | 4.3* (0.2) | 4.7* (0.3) |
Asian | 3.1 (0.2) | 4.2* (0.2) | 4.5* (0.3) |
Native Hawaiian | 3.2 (0.3) | 4.2* (0.1) | 4.7* (0.2) |
Note: * Indicates p<.05 relative to Wave 1 for the same group.
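The knowledge score underlying Table 4 is a simple count of correct answers across the eight items. Below is a minimal sketch of such a scoring routine; the item names and answer key are hypothetical stand-ins patterned on the uses listed in Appendix I, not the study's actual variables.

```python
# Hypothetical answer key for the eight knowledge items:
# 1 means "yes" is the correct answer, 0 means "no" is correct.
import pandas as pd

ANSWER_KEY = {
    "required_by_law": 1,     # answering the Census is required by law
    "community_funding": 1,   # used to decide community funding
    "apportionment": 1,       # used to apportion seats in Congress
    "count_noncitizens": 1,   # counts both citizens and non-citizens
    "property_taxes": 0,      # NOT used to determine property taxes
    "police_fbi": 0,          # NOT used by police/FBI to track lawbreakers
    "planning": 1,            # helps businesses and governments plan
    "locate_undocumented": 0, # NOT used to locate undocumented residents
}

def knowledge_score(row: pd.Series) -> int:
    """Count of the eight items answered correctly (0-8)."""
    return sum(int(row[item] == correct) for item, correct in ANSWER_KEY.items())

# Toy respondent who answers "yes" (1) to every item:
row = pd.Series({item: 1 for item in ANSWER_KEY})
print(knowledge_score(row))  # -> 5 (the five items where "yes" is correct)
```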
To capture attitudes toward the Census and how they might have changed over time, the questionnaire included questions designed to measure respondents’ feelings and opinions about the Census. First, we asked respondents to report their general feelings about the Census. Then respondents were presented with a list of eleven statements about the Census and asked to rate their level of agreement with each. To simplify presentation, the ‘Strongly agree’ and ‘Agree’ categories are combined into a single category, ‘agree.’ We present two summary measures for each sample group in Table 5, distinguishing between positive and negative attitudes: the average number of “agree” responses to the five statements reflecting positive attitudes and beliefs about the Census, and the average number of “agree” responses to the six statements reflecting negative attitudes and beliefs about the Census.
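For illustration, the two attitude summaries can be computed by dichotomizing agreement and counting, roughly as follows; the column names are hypothetical placeholders for the five positive and six negative statements.

```python
# Collapse "strongly agree"/"agree" to 1, everything else to 0,
# then count agreements separately over positive and negative items.
import pandas as pd

POSITIVE_ITEMS = ["pos1", "pos2", "pos3", "pos4", "pos5"]          # e.g., "community needs"
NEGATIVE_ITEMS = ["neg1", "neg2", "neg3", "neg4", "neg5", "neg6"]  # e.g., "invasion of privacy"

def count_agree(row: pd.Series, items: list) -> int:
    """Number of statements rated 'strongly agree' or 'agree'."""
    return sum(int(row[i] in ("strongly agree", "agree")) for i in items)

# Toy respondent agreeing with two of the five positive statements:
row = pd.Series({"pos1": "agree", "pos2": "disagree", "pos3": "strongly agree",
                 "pos4": "disagree", "pos5": "strongly disagree"})
print(count_agree(row, POSITIVE_ITEMS))  # -> 2

# On a full data frame:
# df["positive"] = df.apply(count_agree, axis=1, items=POSITIVE_ITEMS)
# df["negative"] = df.apply(count_agree, axis=1, items=NEGATIVE_ITEMS)
```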
As shown in Table 5, positive attitudes toward the Census increased and negative attitudes declined over time as the campaign progressed and intensified. Over-time changes in positive attitudes (from Wave 1 to Wave 3) are statistically significant for all sample groups, including all five hard-to-count population groups and the nation as a whole.
Table 5. Positive and Negative Attitudes toward the Census by Wave and Race/Ethnicity Group
Sample Type | Positive, W1 (s.e.) | Positive, W2 (s.e.) | Positive, W3 (s.e.) | Negative, W1 (s.e.) | Negative, W2 (s.e.) | Negative, W3 (s.e.) |
Hispanic | 3.4 (0.3) | 3.9 (0.2) | 4.4* (0.0) | 1.1 (0.2) | 1.1 (0.2) | 0.8 (0.1) |
Non-Hispanic Black | 3.0 (0.3) | 3.5 (0.3) | 3.8* (0.1) | 1.1 (0.2) | 0.8 (0.2) | 0.9 (0.1) |
Non-Hispanic White | 3.4 (0.1) | 4.0* (0.1) | 4.0* (0.1) | 0.9 (0.1) | 0.6* (0.1) | 0.5* (0.1) |
National Estimate | 3.4 (0.1) | 3.9* (0.1) | 4.0* (0.1) | 1.0 (0.1) | 0.7* (0.1) | 0.6* (0.1) |
American Indian | 3.1 (0.1) | 3.8* (0.2) | 3.9* (0.3) | 0.9 (0.1) | 0.8 (0.1) | 0.8 (0.1) |
Asian | 2.5 (0.2) | 3.5* (0.2) | 3.5* (0.2) | 1.1 (0.1) | 1.1 (0.2) | 0.8* (0.1) |
Native Hawaiian | 2.9 (0.3) | 3.9* (0.1) | 3.8* (0.2) | 0.9 (0.1) | 1.0 (0.1) | 0.9 (0.2) |
Note: * Indicates p<.05 relative to Wave 1 for the same group.
Overall, the campaign effectively altered the survey-taking environment by improving respondents' awareness of and knowledge about the Census, by increasing their positive attitudes towards the Census, and by reducing their negative attitudes.
4.2 Did Changes in the Survey-Taking Climate Result in Increased Census Participation?
To answer this research question, we matched our sample to Census Bureau operations data and constructed a measure of Census form return: 1 if the household returned its Census form by April 18th and 0 if it did not. (April 18th was the last date of the mailback phase of the census; households that had not returned a form by then were designated as eligible for the NRFU operation.) To estimate the association of knowledge and attitudes with Census form return status, we fitted a logistic regression model for each of the six sample groups; the dependent variable is the measure of Census form return, and the independent variables are knowledge about the Census and positive and negative attitudes towards the Census. For this analysis, we excluded sampled addresses that were not eligible for mail return.
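As a sketch of how such models can be fit, the snippet below runs the per-group logistic regression with statsmodels and reports odds ratios as in Table 6. The column names are hypothetical, and the published models also reflect the survey weights and complex sample design, which are omitted here.

```python
# Per-group logistic regression of Census form return on knowledge
# and attitude scores; returns odds ratios (exponentiated coefficients).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_return_model(group_df: pd.DataFrame) -> pd.Series:
    """Regress returned_by_apr18 (0/1) on the three scores; return odds ratios."""
    X = sm.add_constant(group_df[["knowledge", "positive", "negative"]])
    y = group_df["returned_by_apr18"]
    fit = sm.Logit(y, X).fit(disp=0)  # disp=0 suppresses convergence output
    return np.exp(fit.params)

# One model per sample group, echoing the layout of Table 6:
# for name, grp in df.groupby("sample_group"):
#     print(name, fit_return_model(grp).round(2).to_dict())
```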
Table 6 shows a statistically significant positive relationship between knowledge and Census form return, indicating that the higher knowledge scores fostered by the media campaign were associated with a higher likelihood of returning the Census form. More importantly, this relationship holds for five of the six sample groups. There is also some indication that holding positive attitudes about the Census was positively associated with returning the Census form for some groups. There is no association between negative attitudes and Census form return.
Table 6. Predicting Census Form Return Using Knowledge and Attitudes (Odds Ratios)
Predictor | Hispanic, OR (p-value) | Non-Hispanic Black, OR (p-value) | Non-Hispanic White, OR (p-value) | American Indian, OR (p-value) | Asian, OR (p-value) | Native Hawaiian, OR (p-value) |
Knowledge Scores | 0.87 (0.43) | 1.19* (0.04) | 1.21* (0.04) | 1.12* (<0.01) | 1.26* (0.01) | 1.28* (0.05) |
Positive Attitudes | 0.99 (0.94) | 1.23 (0.17) | 1.31* (0.02) | 1.61* (<0.01) | 1.10 (0.38) | 0.83* (<0.01) |
Negative Attitudes | 1.36 (0.46) | 1.05 (0.82) | 1.36 (0.15) | 0.94 (0.93) | 0.68 (0.19) | 0.70 (0.11) |
Pseudo R-square | 0.01 | 0.04 | 0.31 | 0.01 | 0.02 | <0.01 |
Max-rescaled R-square | 0.02 | 0.07 | 0.31 | 0.14 | 0.08 | 0.04 |
Note: * Indicates statistical significance at the .05 level.
5. Discussion and Implications
This paper describes an example of implementing a media campaign, outside of the survey effort itself, in an attempt to change the survey-taking climate in support of survey participation. Our analyses provide empirical evidence that it is possible to change the survey-taking climate for the better and that the improved climate can have a positive impact on decisions to participate in surveys. In the case of the 2010 ICC, we found increasing awareness of and knowledge about the Census, along with increases in positive attitudes and decreases in negative attitudes towards it. We further show that higher levels of knowledge and more positive attitudes are associated with a greater likelihood of actual Census form return (i.e., Census participation).
The 2010 ICC is a specific case of successfully manipulating the survey-taking environment to improve Census participation among potential survey respondents. Moreover, there is evidence that media campaigns aimed at improving Census participation increased participation in other surveys during the same period. For instance, Groves and Couper (1998) noted that response rates to other surveys conducted by the Census Bureau were higher during the advertising campaign for the Decennial Census. Similar trends have been observed for the American Community Survey, whose mail check-in rates were higher in months when the media campaign was launched or ongoing (Bates and Mulry, 2011). This is encouraging because it shows that other surveys can benefit from a large-scale media campaign even when the campaign does not target them directly. We recognize that most surveys will probably never have the opportunity to address the survey-taking climate through media campaigns of this magnitude. Nevertheless, we believe the survey industry collectively could make use of media campaigns to improve the survey-taking climate, and we make a bold call to national statistical agencies and private survey organizations to jointly invest resources in media campaigns aimed at positively altering it.
What other lessons can be learned from the 2010 ICC about future efforts to affect the survey-taking environment? First, the messages promoted by the 2010 ICC media campaign draw on the same themes often used in advance letters, survey brochures, refusal conversion letters, interviewer scripts, and endorsement letters to encourage survey cooperation. Even in the absence of media campaigns affecting the broad survey-taking environment, we believe that efforts to broadcast these themes, addressing respondents' concerns, could favorably affect the survey-taking environment local to individual sample members. Second, our findings highlight the benefits of increasing potential respondents' knowledge about the survey content, the survey sponsor, and the survey outcome, and of improving their attitudes towards the survey and its sponsor. Media campaigns are not the only means to this end; even changes to survey design protocols that increase people's knowledge about, and positive attitudes towards, the survey or its sponsor should have a positive impact on respondents' decision-making. Third, media campaigns or interventions of smaller scale or scope may be feasible in certain settings. For instance, employee surveys within an institution or organization could benefit from interventions of this type, and surveys of schools could use a similar intervention run through school districts.
This study has several limitations. First, our procedure for capturing attitudes towards the Census, and changes in those attitudes, is rather elementary; a more refined approach is desirable. Second, the logistic regression models are weak in explanatory power. Finally, the 2010 ICC is only one successful instance; more empirical research is needed to demonstrate the feasibility and utility of such media campaigns for sample surveys.
Appendix I. Wordings of Survey Items Asked in the 2010 CICPE
Awareness of Census
– Have you ever heard of the census?
– The Census is the count of all the people who live in the United States. Have you ever heard of that before?

Knowledge about Census
– So far as you know, does the law require you to answer the Census questions?
– People have different ideas about what the Census is used for. I am going to read some of them to you. As I read each one, please tell me by indicating yes or no whether you think that the Census is used for that purpose. Is the Census used…
a. to decide how much money communities will get from the government?
b. to decide how many representatives each state will have in Congress?
c. to count both citizens and non-citizens?
d. to determine property taxes?
e. to help the police and FBI keep track of people who break the law?
f. to help businesses and governments plan for the future?
g. to locate people living in the country illegally?

Positive Attitudes towards Census
– Overall, how would you describe your general feelings about the Census? Do you feel… highly favorable, moderately favorable, neutral, not too favorable, rather unfavorable?
– Next, I’m going to read some opinions about the Census. As I read each one, tell me if you strongly agree, agree, disagree, or strongly disagree with each of the statements:
– Filling out the Census will let the government know what my community needs.
– The Census Bureau’s promise of confidentiality can be trusted.
– Taking part in the Census shows I am proud of who I am.
– Answering and sending back the Census matters for my family and community.

Negative Attitudes towards Census
– Next, I’m going to read some opinions about the Census. As I read each one, tell me if you strongly agree, agree, disagree, or strongly disagree with each of the statements:
– The Census is an invasion of privacy.
– I am concerned that the information I provide will be misused.
– My answers to the Census could be used against me.
– The government already has my personal information, like my tax returns, so I don’t need to fill out a Census form.
– I just don’t see that it matters much if I personally fill out the Census form or not.
– It takes too long to fill out the Census information, I don’t have time.
References
- Bates, N., and Mulry, M. (2011). Using a Geographic Segmentation to Understand, Predict, and Plan for Census and Survey Mail Nonresponse. Journal of Official Statistics, 27, 601-618.
- Bates, N., and Mulry, M. (2012). Did the 2010 Census Social Marketing Campaign Shift Public Mindsets? In Proceedings of SRMS, JSM, 5257-5751.
- Bertrand, M., Mullainathan, S., and Shafir, E. (2006). Behavioral Economics and Marketing in Aid of Decision Making Among the Poor. Journal of Public Policy & Marketing, 25, 8-23.
- Bruner, G.C. (1998). Standardization and Justification: Do Ad Scales Measure Up? Journal of Current Issues and Research in Advertising, 20, 1-18.
- Curtin, R., Singer, E., and Presser, S. (2005). Changes in Telephone Survey Nonresponse Over the Past Quarter Century. Public Opinion Quarterly, 69, 87–98.
- Datta, A. R., Yan, T., Evans, D., Pedlow, S., Spencer, B., and Bautista, R. (2012). 2010 Census Integrated Communications Program Evaluation: Final Report. Prepared for the U.S. Bureau of the Census, U.S. Department of Commerce. NORC at the University of Chicago: Chicago. Accessed at https://www.census.gov/2010census/pdf/2010_Census_ICP_Evaluation.pdf
- Davis, K.C., Nonnemaker, J., Farrelly, M.C., and Niederdeppe, J. (2010). Exploring Differences in Smokers’ Perceptions of the Effectiveness of Cessation Media Messages. Tobacco Control, doi: 10.1136/tc.2009.035568.
- Dillard, J.P., Shen, L., and Vail, R.G. (2007). Does Perceived Message Effectiveness Cause Persuasion or Vice Versa? 17 Consistent Answers. Human Communication Research, 33, 467-488.
- Evans, W.D., Datta, A.R., and Yan, T. (2014). Use of Paid Media to Encourage 2010 Census Participation Among the Hard-to-Count. In Roger Tourangeau, Brad Edwards, Timothy P. Johnson, Kirk M. Wolter, and Nancy Bates (Eds.), Hard-to-Survey Populations, pp. 519-540. Cambridge, UK: Cambridge University Press.
- Evans, W.D., Yan, T., and Datta, A.R. (2012). Receptivity to 2010 Census Messages Among the General Public and Hard-to-enumerate Populations. Journal of Mass Communication and Journalism, 2:126. doi:10.4172/2165-7912.1000126.
- Evans, W.D., Davis, K.C., and Farrelly, M.C. (2008). Planning for a Media Evaluation. In A Practical Guide to Program Evaluation Planning, D. Holden and M. Zimmerman (Eds.), Thousand Oaks, CA: Sage Publications, Inc.
- Evans, W.D., Davis, K.C., Umanzor, C., Patel, K., Khan, M. (2011). Evaluation of Sexual Communication Messages. BMC Reproductive Health, 8:15 doi:10.1186/1742-4755-8-15.
- Evans, W.D., Uhrig, J., Davis, K., and McCormack, L. (2009). Efficacy Methods to Evaluate Health Communication and Marketing Campaigns. Journal of Health Communication, 14(3):244-254.
- Farrelly, M.C., Davis, K.C., Haviland, M.L., Messeri, P., and Healton, C.G. (2005). Evidence of a Dose-Response Relationship between ‘truth’ Antismoking Ads and Youth Smoking. American Journal of Public Health, 95(3), 425-431.
- Groves, R., and Couper, M. (1998). Nonresponse in Household Surveys. New York: John Wiley and Sons.
- Hornik, R.C. (2002). Public Health Communication: Evidence for Behavior Change. Mahwah, NJ: Erlbaum.
- Loosveldt, G., and Storms, V. (2008). Measuring Public Opinions about Surveys. International Journal of Public Opinion Research, 20(1), 74-89.
- Lorenc, B., Loosveldt, G., Mulry, M. H., and Wright, D. (2013). Understanding and Improving the External Survey Environment of Official Statistics. Survey Methods: Insights from the Field. Retrieved from https://surveyinsights.org/?p=161
- World Bank. (2012). Africa Regional Dialogue on Financial Literacy and Capability: Final Report. Prepared for the World Bank, Washington, DC.
- Yan, T., and Curtin, R. (2010). The Relation Between Unit Nonresponse and Item Nonresponse: A Response Continuum Perspective. International Journal of Public Opinion Research, 22, 535-551.