Understanding and Improving the External Survey Environment of Official Statistics
Lorenc, B., Loosveldt, G., Mulry, M. H., & Wright, D. (2013). Understanding and Improving the External Survey Environment of Official Statistics. Survey Methods: Insights from the Field. Retrieved from https://surveyinsights.org/?p=161
© the authors 2013. This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0)
Abstract
We argue for renewed efforts to improve the external survey environment for official statistics. We introduce the concept of social marketing as one novel way of achieving this. We also propose measuring the survey-taking climate and the related changes on the societal level using a 'survey climate barometer'. Finally, by presenting current and potential initiatives planned by Statistics Canada, we illustrate activities that national statistical institutes could implement with the goal of positively influencing their external survey environment.
Keywords
nonresponse bias, nonresponse rates, social marketing, survey climate barometer, survey-taking climate
Any views expressed are those of the authors and not necessarily those of the Catholic University of Leuven, Statistics Canada, Statistics Sweden, or the U.S. Census Bureau.
Introduction
Current literature contains little work on the efforts of national statistical institutes (NSIs) to improve their broader public perception. Our impressions of the NSIs we are familiar with indicate that, in most of them, there is little concerted activity in this regard (with exceptions that we will mention). The goal of this article is to argue for increased efforts by NSIs toward positively influencing the external environment in which they operate.
Proponents of the status quo may argue that additional efforts toward strengthening an NSI’s standing within the society it serves are not necessary, as NSIs produce statistics for the benefit of society and their ‘brand’ is established through the statistics they produce. As such, anything else could be considered a waste of resources. However, we believe that this view is too simplistic, posing a threat to the long-term sustainability of many survey programs that are the basis of modern official statistics. We find support for our belief in two general trends.
First, there is increasing understanding that NSIs – as well as other producers of official statistics – need to refocus their primary goal from producing statistics to one centred on the use of statistics by the society that they serve: in evidence-based decision-making by governments, businesses, and others. Remaining in the ‘data production’ realm threatens to distance NSIs from their users, to the detriment of the relevance of official statistics.
Second, the possibilities for voicing one’s opinion were scarce when the theoretical basis for modern sampling methodology was established (in the 1940s and 1950s). Now, however, there are vast opportunities for publicly voicing opinions. Added to this is the proliferation of marketing firms and private polling companies. Even if the evidence is circumstantial, it is plausible to assume that this growth has contributed to the observed decline of response rates in household surveys conducted by NSIs (‘over-surveying’, Groves & Couper, 1998, p. 31). In a saturated opinion-voicing environment, NSIs will benefit from a public that differentiates between responding to surveys that produce official statistics and other forms of opinion voicing. Failure on the part of NSIs to educate the public on this distinction threatens high-quality data collection and, ultimately, the accuracy of official statistics.
In this paper, we:
- discuss whether NSIs should devote resources to strengthening their external survey environments,
- propose some potentially effective means to improve the external survey environment of official statistics,
- present a framework with which to approach the measurement of the state of an NSI’s external survey environment.
In Section Nonresponse as a quality risk, we take an affirmative stance on the need for NSIs to devote resources to strengthening their external survey environments, arguing that decreasing survey response rates cause considerable quality risks and increased costs. In Section Societal aspects of statistics production, we discuss the place and relevance of official statistics in society, with a particular focus on the uses and users of official statistics. These users may be seen as stakeholders or partners in the production of official statistics. Social marketing, which we introduce in Section Social marketing for official statistics purposes, stresses the importance of building partnerships for improving the external survey environment of an NSI, inducing higher response rates and improved data quality. The 2010 U.S. Decennial Census provides an illustration of actions carried out by an NSI using a social marketing framework. In Section Evaluation of survey climate, we consider measurement of the external survey environment by suggesting a ‘survey climate barometer’ and discuss the evaluation of activities carried out to improve that environment. Finally, in Section Example of an NSI’s activities toward improving external survey environment, we review the issues typically encountered by the data collection operations of an NSI and illustrate some activities geared toward improving the external survey environment by presenting Statistics Canada’s current and proposed initiatives. The closing section offers some remarks and a call to NSIs to increase their efforts to positively influence the external survey environment.
Nonresponse as a quality risk
A dilemma facing survey methodologists is how much emphasis should be placed on survey response rates as an indicator of data quality. Based on empirical results, a perspective emerged in the preceding decade that the value of response rates as an indicator of data quality (its accuracy dimension) is low. Evidence gained through meta-analyses of nonresponse bias studies indicates that there is little association between response rate and nonresponse bias (Groves, 2006). Therefore, it was argued that nonresponse bias is a preferable indicator of data quality while response rate is not (Groves & Peytcheva, 2008; Peytcheva & Groves, 2009).
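This weak association becomes intuitive through the standard deterministic decomposition of the nonresponse bias of an unadjusted respondent mean (a textbook identity; see, e.g., Bethlehem, Cobben, & Schouten, 2011). For a sample of size $n$ with $n_{NR}$ nonrespondents,

$$\operatorname{bias}(\bar{y}_R) \;=\; \bar{y}_R - \bar{y} \;=\; \frac{n_{NR}}{n}\,\bigl(\bar{y}_R - \bar{y}_{NR}\bigr),$$

where $\bar{y}_R$ and $\bar{y}_{NR}$ are the means among respondents and nonrespondents, respectively. Because the bias is the product of the nonresponse rate and the respondent–nonrespondent difference, a high response rate by itself guarantees little: a survey with moderate nonresponse but similar respondents and nonrespondents can be less biased than a survey with low nonresponse and very different ones.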
It is, however, well known that obtaining high-quality estimates of nonresponse bias commonly involves either strong assumptions or considerable additional resources. The methods available for estimating nonresponse bias include simulations that rely on model assumptions, which, in principle, ought to be validated against real data. Other methods use data from administrative records as proxies for true values and thereby invite the question of how well the administrative data represent the variables that the survey measures. While there are valuable research contributions on estimating nonresponse bias (e.g. Peytchev, Peytcheva, & Groves, 2010), much methodological development remains to be accomplished.
Further, nonresponse bias is a phenomenon associated with each particular study variable rather than with a survey as a whole (e.g. Bethlehem, Cobben, & Schouten, 2011). To the extent that assessing nonresponse bias is feasible at all (given the uncertainties outlined in the next paragraph), a thorough nonresponse bias analysis for a whole survey becomes prohibitively costly, owing to the sheer volume of resources and effort required. In the case of NSIs, the cost and resources required to perform bias analysis for the entire statistical production may be very high.
In addition, when a survey is recurring, the estimation of nonresponse bias needs to be repeated at intervals. Because the nature of nonresponse tends to change through time, an estimate of nonresponse bias runs the risk of becoming obsolete. The updating of the estimate adds to the costs and the amount of the methodological work needed.
On the other hand, respectable response rates are still pursued as ‘the best defence against potential non-response bias’ (Curtin, Singer, & Presser, 2007, p. 102). Furthermore, there is a realization that response rates are the data quality indicator that the majority of survey sponsors and users of statistics understand and appreciate. The public opinion surveying industry itself has focused on this aspect (AAPOR, 2005), and the response rate is one of the few quality indicators explicitly mentioned in the recent international standard ISO 20252 (ISO, 2006). High response rates are still pursued, for example, by the European Social Survey, which is strongly dedicated to achieving a response rate of at least 70% in all of the participating countries (Stoop, Billiet, Koch, & Fitzgerald, 2010, p. 59).
A framework for approaching survey participation in household surveys was provided by Groves and Couper (1998). Taking the perspective of an academic researcher conducting a survey, they categorize correlates of participation in a survey into two sets, one considered to be out of the researcher’s control and another considered to be under the researcher’s control. The external survey environment and the survey-taking climate are out of a researcher’s control in that framework. While this may be true for a researcher in a small- or medium-sized research-oriented organization, we propose that an NSI is both a major contributor to the survey-taking climate in its society and in a position to actively influence that climate.
Thus, we embrace a broader perspective and an approach that balances different types of efforts to achieve survey quality. A corollary of this approach is that survey methodologists should not reduce their attentiveness towards response rates; at the same time, however, they should refrain from the ‘single-minded pursuit of high response rates’ cautioned against by Peytcheva and Groves (2009, p. 193).
Societal aspects of statistics production
Building an understanding of how statistics are used may be an effective method of motivating participation in surveys. Specifically, potential respondents’ understanding of the uses of official statistics may lead to a higher appreciation of them and thereby a higher predisposition to take part in official surveys. NSIs face the challenge of making explicit the connection between the statistics they provide to society and the surveys they conduct.
What is common to all kinds of surveys – official and otherwise – is that their results are used in making decisions. However, in contrast to sampling, measurement, editing, estimation, and so on, survey methodologists have only recently begun focusing on the first and the last steps of any survey endeavour, namely the need for survey results and the use made of them; these steps were assumed not to merit attention, and the methodological work was carried out conditional on them. We will refer to their inclusion as a ‘broader perspective’ that one can adopt toward the production of statistics.
Such a broader perspective is not new. Its first mention in published materials dates from the beginning of modern sampling theory, in a contribution by Deming from 1944 (cf. the historical review in Groves & Lyberg, 2010). Importantly, Deming mentions the usefulness of statistics, implying a user perspective, and then reviews factors that affect it.
In the decades that followed, however, mentions of a user perspective were limited to a few instances. Only in the past ten years or so has this started to change. Through models such as the quality framework for the production of official statistics (Eurostat, 2003), the ISO 20252 standard (ISO, 2006), the EFQM framework (Hakes, 2007), and the Generic Statistical Business Process Model (UNECE, 2009), data users are now included in a number of formal models or requirements for the production of statistics.
The mandates of many NSIs include the requirement to provide high-quality data for evidence-based decision-making by policy makers, businesses, and others. This goal is put at risk by the increasing trend of nonresponse in official statistics. The benefits to the NSI of higher response rates were discussed in the preceding section. Beyond these, if the gradual downward trend of response rates in surveys for official statistics continues, there is a broader risk of economic loss and sub-optimal decision-making due to biased, inaccurate, or otherwise unsatisfactory statistics. This is linked directly to the broader issue of trust in official statistics.
Based on known sponsors and stakeholders of official statistics, affected users may include entities such as:
- governmental ministries (national accounts, employment, urbanisation, culture, etc.),
- central banks,
- trade organizations,
- enterprises,
- research and academic institutions,
- media,
- citizens.
Many of the groups mentioned in the above list can be viewed as partners of an NSI with respect to working toward improving the external survey environment, as we suggest in the following section.
Social marketing for official statistics purposes
Social marketing
Although NSIs are not able to exercise full control over the environment in which they administer their surveys, they may be able to have a positive influence on it. The field of social marketing provides guidance for influencing the survey data collection environment. Social marketing uses methods similar to those of commercial marketing but focuses on behaviours that benefit society (e.g. stopping smoking, reducing teen pregnancies, and fastening seat belts) instead of on goods or services.
Andreasen (1994) defines social marketing as the ‘application of concepts and tools from the commercial world to influence the voluntary behaviour of target audiences to improve their lives and/or the society of which they are part’. A social marketing campaign might employ several forms of media and marketing that commercial marketing also uses, including television, print ads, billboards and merchandise. A component of social marketing campaigns not found in commercial marketing campaigns is partnerships with private companies and non-profit organizations to promote the campaign’s message. Partnerships provide resources and infrastructure that organizations either do not have or cannot afford to build on their own. In contrast, the sponsor of a commercial marketing campaign may form alliances with other companies with a goal of increased revenue for both.
National statistical agencies want to encourage respondents to answer a survey or census questionnaire. However, the approach may need to differ between the census and surveys. Censuses are usually conducted every five or ten years, depending on the country, and these require a response from the entire population. Therefore, the infrequency, large scale, and relatively short time frame for a census may be seen as justifying a large expenditure for a comprehensive combination of marketing techniques such as advertising, media, promotions and partnerships.
In contrast, NSIs typically conduct a number of smaller on-going surveys that request the participation of a small sample of the population, so the promotions to encourage response may need to be on a continuing basis and possibly focus on hard-to-reach sub-populations. On-going surveys may have to cope with smaller promotional budgets (if any at all) and rely heavily on partnerships. The choice of partnerships will depend on the nature of the surveys that are selected for social marketing activities.
A census application
The U.S. 2010 Census Integrated Communications Program (ICP) illustrates social marketing used to increase census awareness and participation. The goals of the campaign were to increase the mail response rate, improve the overall accuracy and reduce the differential undercount, and increase cooperation with Census enumerators during the follow-up of those who did not respond by mail. Since the objective was to count the entire resident population, the U.S. 2010 Census employed a multi-mode response model: the first phase being a mailout/mailback and the second being a personal visit for nonresponse follow-up. Maximizing the mail response rate is not only cheaper; studies have also shown that the data collected on mail returns are of higher quality than data collected during in-person follow-up visits (Hillygus, Nie, Prewitt, & Pals, 2006; U.S. Census Bureau, 2008). The U.S. Census Bureau has estimated that a single percentage point increase in mail returns translates to roughly 85 million dollars saved in nonresponse follow-up costs (U.S. Census Bureau, 2010). The campaign consisted of paid media, public relations, a partnership program with national, state and local community organizations, a Census in Schools program, and campaigns using social and digital media. The U.S. Census Bureau developed partnerships with more than 256,000 local, regional and national organizations to promote the 2010 Census (Olson, 2010). Partnerships have been used to encourage response during past censuses as well.
Statistical analyses are useful in supporting the design and implementation of public information campaigns. The 2010 Census campaign employed two types of segmentation, one geographic and one attitudinal. The geographic segmentation identified segments of the population based on their predisposition to mail back a form. The goal of the segmentation was to understand why some areas are more or less predisposed to census and survey participation. With this information, the campaign could effectively target, plan and monitor its efforts.
Cluster analysis produced a comprehensive geographically based population segmentation defined by unique demographic, housing, and socio-economic variables (Bates & Mulry, 2008; 2011). The source of the data was the U.S. Census Bureau 2000 Planning Database (PDB), which is populated with the Census 2000 long form data. The PDB is a census tract-level database that is publicly available and contains a range of housing, demographic, and socio-economic variables correlated with mail response (Bruce & Robinson, 2006; Robinson, Johanson, & Bruce, 2007). The analysis revealed eight distinct segments varying across the entire spectrum of mail-back propensities from high to low response, with several segments closely aligned to three different hard-to-count profiles: Economically Disadvantaged, Ethnic Enclaves, and Young Mobile Single.
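To make the segmentation step concrete, the sketch below shows a minimal cluster analysis of tract-level data in the spirit of the approach described above. It is not the Census Bureau’s actual procedure: the input file, the feature names, and the mail_response_rate column are hypothetical stand-ins for the Planning Database variables.

```python
# A minimal sketch of geographic segmentation via cluster analysis.
# File and variable names are hypothetical, not the actual PDB layout.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

tracts = pd.read_csv("planning_database_tracts.csv")  # one row per tract
features = ["pct_renter_occupied", "pct_below_poverty", "pct_age_18_24",
            "pct_recent_movers", "pct_single_person_hh"]

# Standardize so that no single variable dominates the distance metric.
X = StandardScaler().fit_transform(tracts[features])

# Eight clusters, matching the number of segments reported above.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
tracts["segment"] = kmeans.fit_predict(X)

# Profile each segment: mean feature values and observed mail-back rate.
print(tracts.groupby("segment")[features + ["mail_response_rate"]].mean())
```

Profiling the clusters against an observed mail-back rate, as in the last line, is what allows segments to be ordered from high to low response propensity.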
Even though the construction of the segments used data collected in the 2000 Census, the 2010 Census national mail participation rate patterns by cluster were found to be identical to the patterns of ten years earlier. These results lend validation to the use of the geographic segmentation in describing the variation in mail response and to its use by the 2010 campaign (Bates & Mulry, 2008; 2011).
The Census Bureau also relied on an attitudinal segmentation based on data collected in the Census Barriers, Attitudes, and Motivators Survey (CBAMS). The goal of the survey was to obtain an in-depth understanding of the public’s opinion about the 2010 Census for use in preparing advertising and other materials (MACRO, 2009; Bates et al., 2009). This survey asked questions about Census awareness; intent to participate in the 2010 Census; potential barriers to participation; attitudes; motivators toward responding to the 2010 Census; and media preferences. Discriminant analysis using the CBAMS data yielded five distinct attitudinal segments or messaging ‘mindsets’. These were labelled as follows: the insulated, the unacquainted, the head-nodders, the leading edge, and the cynical fifth. The information collected for each mindset provided insights, strategies, information sources, tactics and messages that would persuade its members to participate in the Census. Other statistical analyses with CBAMS data explored which among these mindsets were more relevant to local grass-roots partnerships and identified the sources that the different mindsets within the race/Hispanic ethnicity groups depend on for information (Mulry & Olson, 2010; 2011).
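A minimal sketch of the discriminant-analysis step follows, under the assumption that an initial set of respondents has already been assigned mindset labels and the remaining respondents are to be classified. The data file, item names, and labelling scheme are hypothetical illustrations, not the CBAMS instrument itself.

```python
# A minimal sketch of assigning respondents to attitudinal mindsets with
# linear discriminant analysis. Names and labels are hypothetical.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

cbams = pd.read_csv("cbams_respondents.csv")
items = ["census_awareness", "intent_to_participate", "privacy_concern",
         "perceived_benefit", "media_trust"]

# Fit on respondents whose mindset has already been determined.
labelled = cbams.dropna(subset=["mindset"])
lda = LinearDiscriminantAnalysis()
lda.fit(labelled[items], labelled["mindset"])

# Classify the remaining respondents into one of the five mindsets.
unlabelled = cbams["mindset"].isna()
cbams.loc[unlabelled, "mindset"] = lda.predict(cbams.loc[unlabelled, items])
print(cbams["mindset"].value_counts())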
Social marketing programs may be able to aid NSIs in better understanding the communication needs and, in the long run, improving the response rates to their surveys and censuses. Although implementing an extensive campaign such as the ICP on a continuing basis requires considerable resources, conducting some of the less expensive components, such as the partnerships and informational programs in schools, may be an effective method of influencing the survey environment. Along these lines, the U.S. Census Bureau is exploring whether the mindset approach can be adapted to aid interviewers and communications regarding on-going surveys (ICF Macro, 2011).
Evaluation of survey climate
In this section, we consider measurement of the survey-taking climate as a way for NSIs to monitor changes in their external survey environment, and we suggest a method for evaluating their activities toward influencing that environment.
The survey-taking climate as part of the external survey environment
The first reference to the concept of a ‘survey-taking climate’ appeared in a paper about nonresponse research at Statistics Sweden (Lyberg & Lyberg, 1991). This paper contained the idea of producing a nonresponse barometer to monitor the survey climate. The barometer presented a time series of nonresponse rates for a particular period. The concept of survey-taking climate was also integrated into Groves and Couper’s conceptual framework for survey cooperation (Groves & Couper, 1998; see also Section Nonresponse as a quality risk). In this framework, the climate is considered a characteristic of the social environment, which is out of the researcher’s control. It refers to the number of surveys in a society, the perceived legitimacy of each survey, trends in survey participation, and discussions in newspapers about the NSI, the Census and the results of various surveys.
Although the survey climate is only vaguely defined, it is considered a relevant element of the external survey environment for understanding the respondent’s decision to participate in an interview. As such, general societal characteristics are used to explain this decision at the individual respondent level. To link both levels (societal and individual), one can specify a simple mediation model in which the subjective experience of the survey climate mediates between the general survey climate and the respondent’s decision to participate. This subjective experience manifests itself in the individual’s opinions about surveys and willingness to participate. We therefore consider these opinions to be the expression, at the individual level, of the general survey-taking climate.
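To fix ideas, the mediation model can be written in a simple linear form; this is our illustrative specification, not an estimated model. Let $C$ denote the general survey climate, $S$ the individual’s subjective experience of it, and $P^{*}$ the latent propensity to participate:

$$S = \alpha_0 + \alpha_1 C + \varepsilon_S, \qquad P^{*} = \beta_0 + \beta_1 S + \beta_2 C + \varepsilon_P.$$

The indirect, mediated effect of the climate on participation is the product $\alpha_1 \beta_1$, while $\beta_2$ captures any remaining direct effect; full mediation corresponds to $\beta_2 = 0$.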
Measurement on the individual level
The simple mediation model clarifies that the measurement of the survey climate implies indicators at the general societal level and related measurements of opinions at the individual level. The basic principle for the selection and measurement of indicators rests in a general definition of the survey-taking climate: “The public willingness to co-operate and the extent to which people consider survey research and survey interviews to be useful and legitimate” (Loosveldt & Storms, 2008). Starting from this definition, Loosveldt and Storms (2008) construct a measurement instrument for opinions about surveys with five different dimensions of survey climate (a scoring sketch follows the list):
- the survey value dimension, which is the expression of the value ascribed to surveys;
- the cost dimension, which refers to the investment of time and cognitive effort;
- the survey enjoyment dimension, which reflects the assumption that respondents like to participate in a survey interview;
- the dimension concerning perception of survey reliability; and
- the dimension related to the sensitivity to privacy concerns in surveys.
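As a simple illustration of how such an instrument might be scored, the sketch below averages Likert-type items (coded 1–5) into one score per dimension. The item-to-dimension mapping and file name are hypothetical; the instrument of Loosveldt and Storms (2008) defines its own items.

```python
# A minimal sketch of scoring the five survey-climate dimensions from
# Likert-type items (1-5). The item names are hypothetical.
import pandas as pd

dimensions = {
    "survey_value":       ["value_1", "value_2", "value_3"],
    "survey_cost":        ["cost_1", "cost_2"],
    "survey_enjoyment":   ["enjoy_1", "enjoy_2"],
    "survey_reliability": ["reliable_1", "reliable_2"],
    "privacy_concern":    ["privacy_1", "privacy_2"],
}

responses = pd.read_csv("climate_survey_responses.csv")

# One score per dimension per respondent: the mean of that dimension's items.
scores = pd.DataFrame({dim: responses[items].mean(axis=1)
                       for dim, items in dimensions.items()})
print(scores.describe())  # distribution of each dimension score
```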
Given that even a survey about the survey-taking climate will be subject to nonresponse, we mention several methodological issues to consider when conducting this survey:
- To avoid influence on the measurement, the NSI’s data collection organisation should preferably not be used for this survey.
- While a multi-mode approach ought to be feasible (to help attain high response rates), care should be taken to understand the mode effects.
- Likewise, the effect of nonresponse on the measurement ought to be estimated, through nonresponse follow-up studies, and reflected in the final estimates.
In a related endeavour, the OECD Committee on Statistics investigated methodology for surveys to measure trust in official statistics (Fellegi, 2010). The report prepared by the committee contained a framework for measuring trust and a proposed questionnaire that countries could use as a basis for developing their own questionnaires. Recently, the U.S. Census Bureau issued a contract for a survey to track trust in official statistics (U.S. Census Bureau, 2011).
Measurement on the societal level
Related to each dimension of the opinion measurement instrument, one can collect relevant information at the societal level through the following:
- The way polls are presented in the media (such as the content, frequency and discussion in news media about results of polls) can influence the perception of the value and reliability of surveys.
- Cost-related issues – such as the number of surveys (over-surveying), the mean interview length of surveys and the use of incentives for respondents – can be systematically monitored.
- In surveys that involve interviewers, one can systematically collect interviewers’ reports about nonrespondents’ motivation, their experience with previous surveys, and their intention to participate in future surveys.
- Information about privacy legislation and general concerns about privacy (e.g. the number of private telephone numbers) is relevant for the sensitivity to privacy matters in surveys.
Examination of the survey opinion measurement instrument and related indicators at the societal level strongly suggests changing the original nonresponse barometer into a survey climate barometer. The survey climate barometer integrates information about nonresponse rates, opinions about surveys and general societal indicators relevant for the survey practice at the national or regional level. It is clear that the survey climate barometer produces relevant information that is useful for the development and evaluation of an NSI’s social marketing strategy.
Measurement on the societal level thus aims to cover those covariates of the survey-taking climate that are not directly influenced by the NSI’s activities toward improving the external survey environment, but that can influence the survey-taking climate.
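One conceivable way to combine such indicators into a single barometer series, sketched below, is to standardize each indicator over its observed history and take a weighted average. This is our illustration only: the indicator names, weights, and data file are hypothetical, and an NSI would choose its own components and weighting.

```python
# A minimal sketch of a survey climate barometer: z-score each societal
# indicator over time and combine into one index. Names are hypothetical.
import pandas as pd

# One row per period (e.g. quarter); columns are climate indicators.
data = pd.read_csv("climate_indicators.csv", index_col="period")

z = (data - data.mean()) / data.std()  # standardize each indicator
z["refusal_rate"] *= -1                # higher refusals = worse climate

weights = {"response_rate": 0.4, "mean_survey_opinion": 0.3,
           "polls_in_media": 0.1, "refusal_rate": 0.2}
barometer = sum(w * z[col] for col, w in weights.items())

print(barometer.tail())  # the most recent barometer readings
```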
Example of an NSI’s activities toward improving external survey environment
The challenge for an NSI
For many countries, the data collection environment is such that respondents are inundated with requests for information, sales solicitations, and other requests that infringe on their privacy and personal time (including inquiries by telephone, conventional mail, email, text messaging and door-to-door solicitations). As survey takers, NSIs are often seen as just another organization vying for the respondent’s time, which can have an adverse effect on the level of cooperation received and on the NSI’s ability to achieve desired response rates.
Influencing the external survey environment is perhaps among the most difficult challenges for statistical agencies. It is, for instance, virtually impossible to convince sampled household members who choose to avoid contact to participate in surveys, be they mandatory or voluntary in nature.
The challenge for an NSI is to find a way to stand out from the crowd, to ensure that all potential respondents are aware of the importance of participating in official statistics surveys, and to put mechanisms in place to build buy-in. Potential respondents need to be made aware of the value proposition in responding to surveys; this includes not only the national importance of surveys but also what responding means to them locally. How do we influence the environment so that NSIs stand out as national institutions that are important to the public they serve? How do NSIs improve their ‘brand’?
NSIs face issues, in addition to the ‘over-surveying’ effect (Groves & Couper, 1998), that reduce response rates and increase collection costs. Growing concerns about confidentiality, leaked personal information and identity theft have contributed to decreased respondent participation. In several countries there is a growing discomfort with government contact, including scepticism toward the need for intrusive surveys, regardless of mode. NSIs cannot eliminate all respondent concerns in this regard, but they need to develop methods to address these concerns and strengthen trust in official statistics.
Furthermore, we cannot accurately predict when certain hard-to-reach sub-populations will be at home, especially in large urban areas, and we will continue to have difficulty gaining access to high-security apartment buildings and gated communities. Changing household composition, busier lifestyles and irregular work hours all contribute to the variability in residents’ at-home patterns. Certain population sub-groups (e.g. young males) are exceptionally difficult to find at home for either telephone or face-to-face field surveys. This is especially true in urban core areas.
The growing number of apartment buildings in large urban centres creates two distinct issues for face-to-face field interviewers. First, an increase in high-security buildings makes it difficult for interviewers to gain access to conduct surveys. Second, determining the occupancy status of apartments is much more difficult, given that there are limited methods to verify whether a unit is occupied or whether a respondent is at home.
For telephone surveys, respondent avoidance is a key concern. Improved telephone technology allows respondents to screen calls. The increased use of cell phones, cell-phone-only households and Voice over Internet Protocol (VoIP) creates a new set of challenges by limiting the ability to reach some respondents, to conduct traditional respondent tracing activities and to make accurate linkages between individuals and households.
With many NSIs experiencing a gradual downward trend in response rates and increasing collection costs, there is a desire to develop strategies to strengthen survey response rates and reduce nonresponse bias while increasing the cost-effectiveness of data collection.
Statistics Canada’s activities
A key element of Statistics Canada’s strategy for improving its external survey environment is to further strengthen its ‘brand’. Respondents need to understand the importance of the information being gathered, the authority on which the organisation’s mandate is based, and, perhaps most importantly, ‘what’s in it’ for them. Current and future respondents need to understand why Statistics Canada (or any other NSI, for that matter) is different from the other organizations contacting them. An expensive national advertising campaign is not within the current budgetary scope; thus, Statistics Canada is examining the following broad-based approach toward influencing the external survey environment and strengthening respondent participation. These strategies could be used by other organizations to achieve the same end. It should be noted that some of these approaches have been implemented, while others are still under consideration.
Statistics Canada’s initiatives can be placed into four groups: general communication strategies to strengthen the organizational brand, interactions with the media, direct interactions with survey respondents, and other partnership initiatives.
General communication strategies to strengthen the organizational brand
These include:
- Producing short videos targeting Canadian respondents, accessible through Statistics Canada’s website. Partnerships with other organizations (public and private) are also being developed, creating links to various websites.
- Using social networking sites such as Twitter to try to increase awareness of Statistics Canada, its surveys, data holdings and new data releases.
- Embedding more prominent and consistent thank-you messages in all of Statistics Canada’s communication vehicles, to both respondents and Canadians in general, to boost goodwill.
- Leveraging information and permissions gathered during the Census to facilitate gaining access to high-security apartments and gated communities. This includes contact information for building landlords, management companies and superintendents.
Interactions with the media
These include:
- Looking at the opportunities to get more ‘traction’ in the media: to increase recognition of Statistics Canada, the work it does, the importance of responding to its surveys and the usefulness of the resulting information.
- Having more prominent citation of survey names in The Daily (Statistics Canada’s official release vehicle) to increase their use by the media and in turn help promote respondent recognition of the official survey names.
Direct interactions with survey respondents
These include:
- Enhancing outreach and respondent relations efforts with hard-to-reach populations: (a) community associations and ethnic media; (b) aboriginal peoples; (c) educational organizations and schools (future respondents); (d) community outreach.
- Changing the approach to survey introductory letters to include highlights from the previous collection period focusing on community-based benefits derived from the use of the survey results (rather than bureaucratic reasons for data collection).
- Expanding the use of new survey modes such as web-based electronic questionnaires. This may help improve response among certain population sub-groups (e.g. young males) and will respond to the demands of others for varied response options.
- Reducing respondent burden by reducing survey lengths, making more extensive use of administrative data, and enhancing coordination between surveys, thus enhancing the perceived relevance of the work of NSIs.
Other partnerships
These include:
- Using existing key data users and stakeholders (e.g. various levels of government) as spokespersons for the importance of official statistics thus appealing to a respondent’s sense of civic responsibility. There is still work required to determine mechanisms to help to deal with declines in social and civic connectedness among the younger generation.
- Developing partnerships with other non-governmental organizations to capitalize on opportunities for broader communications (e.g. website links, information in association magazines).
Although the use of respondent incentives is not the norm at Statistics Canada, other NSIs could examine their increased use as a means of influencing survey participation. Decisions to use incentives should consider the distinction between intrinsic and extrinsic motivation (Ryan & Deci, 2000) and the possible impacts on respondents’ attitudes and participation decisions, including the potential to increase bias in the resulting estimates.
Summary and looking ahead
In this paper, we have presented reasons and potential means for NSIs to improve their external survey environment and, by promoting their brand, to stand out from other surveying and marketing organizations. The potential benefits of doing so include increased visibility and relevance, as well as improved data collection through reduced nonresponse rates, reduced costs and higher data quality. Efforts to influence the external survey environment have been carried out by some NSIs. However, to the best of our knowledge, this has until recently taken place only on an ad hoc basis, and perhaps only in reaction to certain observed issues.
We have also proposed an approach that includes measuring and following up on the activities to influence the external environment, which may be used for quantitative evaluation and as an indicator of the need for further effort in this regard. With future advances in practical tools for implementing a total survey error (TSE) framework, we envision that these tools could include methodology for identifying an optimal level of effort directed at influencing the external survey environment. Although the TSE concept has existed for some time, the practical implementation of a TSE approach is only now starting to be used by organizations that produce statistics (Biemer, 2010; Groves & Lyberg, 2010). As the methodology for implementing TSE evolves over the coming years, NSIs should see the advantages of incorporating strategies to influence their external environment into their day-to-day planning and operations.
We also encourage NSIs to introduce mechanisms to empirically evaluate the impact of activities put in place to influence the external survey environment. The survey climate barometer, introduced in Section Evaluation of survey climate, is one such measure that can be taken at regular intervals. Further, testing and experimental approaches may be needed to provide more precise guidance on the selection and implementation of approaches proposed in the paper. We also hope that the NSIs will pro-actively share their experiences in this regard with other statistical organizations and the survey research methodology community.
References
1. Andreasen, A. R. (1994). Social Marketing: Definition and Domain. Journal of Public Policy and Marketing, 13(1), 108-114.
2. Bates, N., & Mulry, M. H. (2008). Segmenting the Population for the 2010 Census Integrated Communication Campaign. C2PO 2010 Census Integrated Communications Research Memoranda Series, No. 1. October 24, 2008. Washington, D.C.: U.S. Census Bureau. Retrieved from http://2010.census.gov/partners/pdf/C2POMemoNo_1_10-24-08.pdf
3. Bates, N., & Mulry, M. H. (2011). Using a Geographic Segmentation to Understand, Predict, and Plan for Census and Survey Mail Nonresponse. Journal of Official Statistics, 27(4), 601-619.
4. Bates, N., Conrey, F., Zuwallack, R., Billia, D., Harris, V., Jacobsen, L., & White, T. (2009). Messaging to America: Results from the Census Barriers, Attitudes, and Motivators Survey (CBAMS). C2PO 2010 Census Integrated Communications Research Memoranda Series. No.10. May 12, 2009. Washington, D.C.: U.S. Census Bureau. Retrieved from http://2010.census.gov/partners/pdf/C2POMemoNo10.pdf
5. Bethlehem, J., Cobben, F., & Schouten, B. (2011). Handbook of Nonresponse in Household Surveys. New York: Wiley.
6. Biemer, P. P. (2010). Total Survey Error: Design, Implementation, and Evaluation. Public Opinion Quarterly, 74(5), 817-848.
7. Bruce, A., & Robinson, J. G. (2006). Tract-Level Planning Database with Census 2000 Data. Washington, D.C.: U.S. Census Bureau. Retrieved from http://www.census.gov/procur/www/2010communications/library.html
8. Curtin, R., Singer, E., & Presser, S. (2007). Incentives in Random Digit Dial Telephone Surveys: A Replication and Extension. Journal of Official Statistics, 23(1), 91-105.
9. Eurostat (2003). Methodological Documents – Definition of Quality in Statistics. Retrieved from http://epp.eurostat.ec.europa.eu/portal/page/portal/quality/documents/ess%20quality%20definition.pdf
10. Fellegi, I. (2010). Report of the Electronic Working Group on Measuring Trust in Official Statistics. STD/CSTAT/BUR(2010)2. Committee on Statistics, Statistics Directorate, Organization for Economic Cooperation and Development. Paris, France.
11. Groves, R. M. (2006). Research Synthesis: Nonresponse Rates and Nonresponse Error in Household Surveys. Public Opinion Quarterly, 70(5), 646-675.
12. Groves, R. M., & Couper, M. P. (1998). Nonresponse in Household Interview Surveys. New York: Wiley.
13. Groves, R. M., Dillman, D. A., Eltinge, J. L., & Little, R. J. A. (2002). Survey Nonresponse. New York: Wiley.
14. Groves, R. M., & Lyberg, L. (2010). Total Survey Error: Past, Present, and Future. Public Opinion Quarterly, 74(5), 849-879.
15. Groves, R. M., & Peytcheva, E. (2008). The Impact of Nonresponse Rates on Nonresponse Bias. Public Opinion Quarterly, 72(2), 167-189.
16. Hakes, C. (2007). The EFQM Excellence Model for Assessing Organizational Performance. Zaltbommel, Netherlands: Van Haren Publishing.
17. Hansen, M. H., & Hurwitz, W. N. (1946). The Problem of Nonresponse in Sample Surveys. Journal of the American Statistical Association, 41, 517-529.
18. Hillygus, D. S., Nie, N., Prewitt, K., & Pals, H. (2006). The Hard Count: The Political and Social Challenges of Census Mobilization. New York: Russell Sage Foundation.
19. ICF Macro (2011). Census Barriers, Attitudes, and Motivators Survey II. Final Report. U.S. Census Bureau. Washington, DC. Retrieved from http://www.2010.census.gov/partners/pdf/CBAMS_II_Final_Report.pdf
20. ISO (2006). ISO 20252:2006 Market, opinion and social research – Vocabulary and service requirements. Geneva: International Organization for Standardization.
21. Loosveldt, G., & Storms, V. (2008). Measuring Public Opinions about Surveys. International Journal of Public Opinion Research, 20(1), 74-89.
22. Lyberg, I., & Lyberg, L. (1991). Nonresponse Research at Statistics Sweden. 1991 Proceedings of the Survey Research Methods Section. American Statistical Association. Alexandria, VA. Retrieved from http://www.amstat.org/sections/SRMS/Proceedings/papers/1991_012.pdf
23. MACRO (2009). Census Barriers, Attitudes, and Motivators Survey Analytic Report. May 9, 2009. Washington, D.C.: U.S. Census Bureau. Retrieved from http://2010.census.gov/partners/pdf/C2POMemoNo11.pdf.
24. Madow, W. G., & Olkin, I. (Eds.) (1983). Incomplete Data in Sample Surveys, Volume 3: Proceedings of the Symposium. New York: Academic Press.
25. Mulry, M. H., & Olson, T. P. (2010). Analyses for Partnerships from the Census Barriers, Attitudes and Motivators Survey. Statistical Research Division Research Report Series #RRS2010-07. Washington, D.C.: U.S. Census Bureau. Retrieved from http://www.census.gov/srd/papers/pdf/rrs2010-07.pdf
26. Mulry, M. H., & Olson T. P. (2011). Analyses for the U.S. 2010 Census Partnership Program. Social Marketing Quarterly, 17(1), 27-55.
27. Olson, T. (2010). 2010 Census Partnership Program: One Big Component of the Integrated Communication Campaign. Presented at the 2010 Census Advisory Committee meeting, October 21-22, 2010, in Suitland, MD. Washington, D.C.: U.S. Census Bureau.
28. Peytchev, A., Peytcheva, E., & Groves, R. M. (2010). Measurement Error, Unit Nonresponse, and Self-Reports of Abortion Experiences. Public Opinion Quarterly, 74(2), 319-327.
29. Peytcheva, E., & Groves, R. M. (2009). Using Variation in Response Rates of Demographic Subgroups as Evidence of Nonresponse Bias in Survey Estimates. Journal of Official Statistics, 25(2), 193-201.
30. Robinson, J. G., Johanson, C., & Bruce, A. (2007). The Planning Database: Decennial Data for Historical, Real-time, and Prospective Analysis. Joint Statistical Meetings, Salt Lake City, Utah.
31. Ryan, R. M., & Deci, E. L. (2000). Intrinsic and Extrinsic Motivations: Classic Definitions and New Directions. Contemporary Educational Psychology, 25(1), 54-67.
32. Singer, E. (2011). Toward a Benefit-Cost Theory of Survey Participation: Evidence, Further Tests and Implications. Journal of Official Statistics, 27(2), 379-392.
33. Stoop, I., Billiet, J., Koch, A., & Fitzgerald, R. (2010). Improving Survey Response: Lessons Learned from the European Social Survey. New York: Wiley.
34. UNECE Secretariat (2009). Generic Statistical Business Process Model. Version 4.0 – April 2009. Retrieved from http://www1.unece.org/stat/platform/download/attachments/8683538/GSBPM+Final.pdf?version=1
35. U.S. Census Bureau (2010). Leaders of National Government Organization Call on Members to Take On the 2010 Census ‘Take 10’ Challenge. Press Release CB10-CN.27, March 10, 2010. Public Information Office, U.S. Census Bureau, Washington, DC. Retrieved from http://2010.census.gov/news/releases/media-advisories/take-10-challenge.html