Measuring Interviewer Characteristics Pertinent to Social Surveys: A Conceptual Framework

Annelies G. Blom, University of Mannheim
Julie M. Korbmacher, Max-Planck-Institute for Social Law and Social Policy

23.01.2013
How to cite this article:

Blom, A., & Korbmacher, J. (2013). Measuring Interviewer Characteristics Pertinent to Social Surveys: A Conceptual Framework. Survey Methods: Insights from the Field. Retrieved from https://surveyinsights.org/?p=817

Copyright:

© the authors 2013. This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0)


Abstract

Interviewer effects are found in all types of interviewer-mediated surveys, across disciplines and countries. While studies describing interviewer effects are manifold, identifying characteristics that explain these effects has proven difficult due to a lack of data on the interviewers. This paper proposes a conceptual framework of interviewer characteristics for explaining interviewer effects and describes its operationalization in an interviewer questionnaire. The framework encompasses four dimensions of interviewer characteristics: interviewer attitudes, interviewers’ own behaviour, interviewers’ experience with measurements, and interviewers’ expectations. Our analyses of the data collected from interviewers working on the fourth wave of SHARE Germany show that the above measures distinguish well between interviewers.



Acknowledgement

The authors would like to thank Ulrich Krieger, Elisa Leonhardt, Ute Hoffstätter, and Martina Brandt at SHARE for comments, suggestions and data keying. We thank Mick Couper, Frauke Kreuter, Mark Trappmann, Joseph Sakshaug, Stephanie Eckman, Jennifer Sinibaldi, and Gabriele Durrant for their comments and ideas concerning the development of the questionnaire, as well as the editors and anonymous reviewers for final suggestions and comments. Finally, the authors thank the German survey agency infas for their cooperation in administrating the interviewer survey during their SHARE interviewer training and all the interviewers who participated in the SHARE interviewer survey.




Introduction

In all interviewer-mediated surveys interviewers play a crucial role during the entire data collection process. They make contact with and gain cooperation from the sample unit, ask survey questions, conduct measurements, record answers and measures, and maintain respondents’ motivation throughout the interview (Schaeffer, Dykema, & Maynard, 2010). As such, the job of an interviewer encompasses a diversity of roles and requires a variety of skills. Especially with the rise of computer-assisted interviewing, which permits the collection of even more complex data, a well-trained staff of interviewers has become indispensable.

When examining survey data we frequently find interviewer effects on all of these tasks, indicating that there is variation in how interviewers handle their various responsibilities. Yet researchers are often far removed from the interviewers and the actual survey operations (Koch, Blom, Stoop, & Kappelhof, 2009) and have little or no information about what determines these interviewer effects. In fact, the majority of survey data collection is contracted out, so researchers have no influence on which interviewers work on their study or on how they are trained.

The literature describing interviewer effects on various aspects of the survey process is substantial (for an overview see Schaeffer et al., 2010). However, only a few studies have succeeded in explaining the interviewer effects found (cf. Jäckle, Lynn, Sinibaldi, & Tipping, 2013). One possible reason for this research gap is the lack of information at the interviewer level, which is necessary for identifying determinants of interviewer effects. In the past years, paradata (Couper & Lyberg, 2005) have been increasingly used to explain interviewer effects. Another potentially powerful source of auxiliary data is interviewer characteristics collected through an interviewer survey.

This paper presents the conceptual framework of a new international interviewer questionnaire to explain interviewer effects. We specifically focus on interviewer effects other than interviewer falsification, since we believe that the latter cannot be explained by means of interviewer surveys. Furthermore, we developed an interviewer questionnaire for researchers who contract out fieldwork. Survey agencies aiming to identify suitable interviewers through an assessment might find a different questionnaire more appropriate. The questionnaire was developed in cooperation with researchers across various survey projects and will thus be relevant to survey projects across countries and disciplines.

This paper consists of three parts. First, a theoretical background and literature review outlines the main aspects of the data collection process affected by interviewer effects. The subsequent conceptual framework constitutes the core of the paper, where the motivation for surveying various interviewer characteristics is laid out. Finally, the last section presents findings on the variation in interviewer characteristics collected with the new interviewer questionnaire, implemented for the Survey of Health, Ageing and Retirement in Europe (SHARE) in Germany in 2011. These results show that the survey discriminates well between interviewers, which is a prerequisite for explaining interviewer effects in survey data.

Theoretical background and literature

Exposing interviewer effects implies that outcomes of sample units assigned to the same interviewer are more similar than would be expected if variation were random.[1] Three main types of interviewer effects can be distinguished: interviewer effects on the unit nonresponse process, on item nonresponse and on the actual measurement (Figure 1).

Figure 1: Types of interviewer effects in surveys

Interviewer effects on unit nonresponse

When considering the unit nonresponse process we find that interviewers are differentially successful at recruiting sample units, leading to differential unit response rates. A growing literature has examined the role of the interviewer in the nonresponse process, and attention has been paid to interviewer attributes, such as experience (Durbin & Stuart, 1951; Couper & Groves, 1992; Singer, Frankel, & Glassman, 1983; Snijkers, Hox, & de Leeuw, 1999; Olson & Peytchev, 2007; Lipps & Pollien, 2011), interviewer skills (Morton-Williams, 1993; Campanelli, Sturgis, & Purdon, 1997), interviewer-respondent interaction (Groves & Couper, 1998), as well as survey design characteristics, such as interviewer burden (Japec, 2008) and interviewer payment (de Heer, 1999; Durrant, Groves, Staetsky, & Steele, 2010).

To explain differential response rates across interviewers, survey methodologists have examined interviewer attitudes and motivation (Campanelli et al., 1997; Groves & Couper, 1998; Hox & de Leeuw, 2002; Durrant et al., 2010; Blom, de Leeuw, & Hox, 2011). This strand of research was inspired by the work of Lehtonen (1996), who developed a short interviewer attitudes scale and showed that attitudes correlate with attained response rates. Another line of studies focuses on interviewer behaviour and interviewer-respondent interaction (Couper & Groves, 1992; Campanelli et al., 1997; Groves & Couper, 1998; Snijkers et al., 1999). This started with the pioneering work of Morton-Williams (1993), who analysed tape recordings of survey introductions and identified successful interviewer strategies, such as using professional and social skills and adapting these to the doorstep situation.

Interviewer effects on item nonresponse

In addition, interviewers have an influence on item nonresponse, i.e. on the respondents’ willingness to answer each question in the survey and on their consent to providing additional information. Requests for consent to collect additional information are diverse; typical examples are consent to record linkage (Lessof, 2009; Calderwood & Lessof, 2009; Sakshaug, Couper, Ofstedal, & Weir, 2012; Sala, Burton, & Knies, 2012; Korbmacher & Schröder, forthcoming) and consent to the collection of biomarkers in health surveys (Sakshaug, Couper, & Ofstedal, 2009).

Traditionally, the literature on interviewer effects on item response rates describes a clustering of item nonresponse within interviewers and tries to model these interviewer effects by demographic characteristics of the interviewer (Singer et al., 1983). Another strand of research collects additional information about the interviewers, for example on their expectations, by means of interviewer questionnaires (Singer & Kohnke-Aguirre, 1979; Singer et al., 1983).

Interviewer effects on measurement

Finally, interviewers can influence the measurement itself, i.e. which answer a respondent provides, through their observable characteristics and their actions. Theory related to this third type of interviewer effect typically stems from the literature on respondents’ cognitive processes when answering survey questions (Tourangeau, Rips, & Rasinski, 2000). This process is complex and iterates through various stages, each of which may be influenced by the interviewers (Cannell, Miller, & Oksenberg, 1981; Tourangeau et al., 2000). Since survey questions differ widely in content and structure and since interviewer effects are estimate-specific, these effects can differ across questions and topics (Schaeffer et al., 2010) and cannot be generalized to all measurements within a survey. Covering all of these different types of interviewer effects on measurement goes beyond the scope of the conceptual framework developed in this paper. Instead we focus on identifying interviewer characteristics potentially associated with interviewer effects on unit and item nonresponse.

As described, there have been several previous attempts at explaining interviewer effects in survey data by means of interviewer surveys. However, these studies found that the variables collected in the interviewer questionnaires had low predictive power and explained only part of the observed variance (e.g. Hox & de Leeuw, 2002; Durrant et al., 2010; Blom et al., 2011). The conceptual framework of the interviewer questionnaire presented in this paper builds on this previous work with an important extension. Instead of focusing on interviewer demographics, which seldom prove significant in explaining interviewer effects (cf. Singer et al., 1983), and self-reported doorstep behaviour, the questionnaire covers four dimensions of interviewer characteristics: interviewers’ attitudes towards the survey process, their own behaviour regarding data collection requests, their experience with conducting certain types of surveys and measurements, and their expectations regarding the survey outcome.

Conceptual framework

The goal of the new questionnaire is to implement an instrument measuring a wide range of interviewer characteristics that have been shown to be relevant in previous studies (see the literature review in Theoretical background and literature). In particular, we aim to find correlates of interviewer effects on various types of unit and item nonresponse.

The questionnaire covers all four dimensions of interviewer characteristics: interviewer attitudes towards the survey process, interviewers’ own behaviour regarding data collection requests, interviewers’ experience with measurements, and interviewers’ expectations regarding the survey outcome in terms of response rates. It consists of two parts. The first part is a battery of general items assumed to be associated with unit and/or item nonresponse across a variety of social surveys. The second part consists of blocks of questions aimed at explaining interviewer effects specific to the fourth wave of SHARE Germany; these blocks may or may not apply to other surveys with different designs and research questions. The full questionnaire collects information on interviewer characteristics to explain five groups of interviewer effects: on unit nonresponse in general, on income nonresponse (as an example of item nonresponse), on unit nonresponse across the different incentive groups of an experiment, on consent to the collection of four types of biomarkers, and on consent to record linkage. Some items in the questionnaire obviously focus on the SHARE survey, but the instrument is not restricted to it. Segmenting the questionnaire along the five groups of interviewer effects also allows other surveys to implement it by adopting the relevant elements.

The conceptual framework draws on our own experiences at interviewer trainings for a diversity of studies, on findings from previous analyses of interviewer effects, and on consultations with survey methodologists on various European and US surveys. When aiming to explain interviewer effects by means of characteristics collected in an interviewer survey, the underlying assumption is that interviewers differentially impact the data collection process, that this differential impact is related to their – conscious and subconscious – appearance and actions, and that these can be explained by characteristics collected in an interviewer survey.
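To make this assumption concrete, the sketch below shows the type of analysis that linking such interviewer characteristics to fieldwork outcomes enables: a random-intercept regression of unit response on questionnaire-based interviewer scores. This is a minimal illustration only; the file and column names are hypothetical, and the linear specification is merely an approximation for a binary outcome (a multilevel logit would be the standard choice).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level file: one row per sampled unit, with a 0/1
# response indicator and interviewer-questionnaire scores merged in.
df = pd.read_csv("contacts_with_interviewer_scores.csv")

# A random intercept per interviewer captures unexplained interviewer
# variance; the fixed effects test whether questionnaire-based
# characteristics account for part of it.
model = smf.mixedlm(
    "unit_response ~ attitude_score + own_behaviour_score + experience_years",
    data=df,
    groups="interviewer_id",
)
result = model.fit()
print(result.summary())
```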

Table 1 displays the four dimensions measured in the interviewer questionnaire (rows) and the interviewer effects they aim to explain (columns). We expect the first three dimensions – attitudes towards the survey process, own behaviour with regard to data collection requests, and experience with relevant types of measurements – to impact independently on the survey outcomes. The fourth dimension – interviewers’ expectations regarding the survey outcome – is expected to be influenced by attitudes, behaviours, and experiences.

The concepts covered by these four dimensions are described in the following. In addition, the interviewer survey collects general interviewer demographics and measures of interviewing experience. The question numbers cited in the following refer to the questions in the SHARE interviewer questionnaire (see Appendix A and Appendix B).

Table 1: Conceptual framework of the interviewer questionnaire

 

| | General: Unit nonresponse | General: Item nonresponse (income) | SHARE-DE: Unit nonresponse (incentives) | SHARE-DE: Consent to biomarker collection | SHARE-DE: Consent to record linkage |
|---|---|---|---|---|---|
| Attitudes | Q3: reasons for being an interviewer; Q4: how to conduct standardized interviews; Q5: how to achieve response; Q6, Q11, Q12: trust, data protection concerns | Q4: how to conduct standardized interviews; Q6, Q11, Q12: trust, data protection concerns | – | Q6, Q11, Q12: trust, data protection concerns | Q6, Q11, Q12: trust, data protection concerns |
| Own behaviour | Q8, Q9: own survey participation; Q27, Q28: use of internet social networks / online banking | Q27: use of internet social networks; Q34: income response | Q10: incentives received | Q22: consent to biomarkers, hypothetical; Q24: blood donation | Q13: data disclosure, hypothetical; Q14, Q16: data linkage, hypothetical; Q17: “pension records cleared”; Q27, Q28: use of internet social networks / online banking |
| Experience with measurements | Q1, Q2: experience working as an interviewer; Q18: SHARE experience | Q1, Q2: experience working as an interviewer; Q18: SHARE experience | Q1, Q2: experience working as an interviewer; Q18: SHARE experience | Q23: experience with collecting bloodspots | – |
| Expectations | Q19: effect of incentives on unit response | Q20: income response | Q19: effect of incentives on unit response | Q21: consent to biomarkers | Q15: consent to data linkage |

Note: The question numbering refers to the questions in the SHARE Germany interviewer survey (see Appendix A and Appendix B). Questions on the interviewers’ demographic background are not displayed in the framework.
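To illustrate the modularity of the framework, Table 1 can also be written down in machine-readable form, so that another survey adopts only the cells relevant to its design. The Python rendering below is purely illustrative; the key names are our own shorthand, while the question numbers are those of the SHARE Germany questionnaire.

```python
# Dimensions (rows of Table 1) mapped to the interviewer effects they aim
# to explain (columns of Table 1) and the corresponding question numbers.
FRAMEWORK = {
    "attitudes": {
        "unit_nonresponse": ["Q3", "Q4", "Q5", "Q6", "Q11", "Q12"],
        "item_nonresponse_income": ["Q4", "Q6", "Q11", "Q12"],
        "consent_biomarkers": ["Q6", "Q11", "Q12"],
        "consent_record_linkage": ["Q6", "Q11", "Q12"],
    },
    "own_behaviour": {
        "unit_nonresponse": ["Q8", "Q9", "Q27", "Q28"],
        "item_nonresponse_income": ["Q27", "Q34"],
        "unit_nonresponse_incentives": ["Q10"],
        "consent_biomarkers": ["Q22", "Q24"],
        "consent_record_linkage": ["Q13", "Q14", "Q16", "Q17", "Q27", "Q28"],
    },
    "experience": {
        "unit_nonresponse": ["Q1", "Q2", "Q18"],
        "item_nonresponse_income": ["Q1", "Q2", "Q18"],
        "unit_nonresponse_incentives": ["Q1", "Q2", "Q18"],
        "consent_biomarkers": ["Q23"],
    },
    "expectations": {
        "unit_nonresponse": ["Q19"],
        "item_nonresponse_income": ["Q20"],
        "unit_nonresponse_incentives": ["Q19"],
        "consent_biomarkers": ["Q21"],
        "consent_record_linkage": ["Q15"],
    },
}

# A survey without biomarkers, for example, would simply drop the
# "consent_biomarkers" entries and field the remaining questions.
```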

Interviewer attitudes towards the survey process

Interviewers who are good at making contact and gaining cooperation from sample units are usually good at tailoring their approach to the situation they find at the visited address (Morton-Williams, 1993). However, tailoring takes more effort and skill than repeating the same routine with each sample unit. The extent to which interviewers make the effort of tailoring their approach might be related to their general attitudes towards their job as interviewers and towards life in general. In addition, interviewers’ own concerns about data protection and their trust in other people might shape the way they approach sample units and ask their respondents for sensitive information.

This first dimension of general interviewer attitudes in the conceptual framework covers these aspects. Some of the attitudes collected in the interviewer questionnaire are related to questions on previous interviewer questionnaires (e.g. de Leeuw & Hox, 2009). However, in addition to questions on the contacting and cooperation process, i.e. unit nonresponse, the SHARE interviewer questionnaire also collects information that might be related to item nonresponse and non-consent. The attitudes addressed are reasons for being an interviewer (Q3), attitudes towards the circumstances under which it is legitimate to deviate from the standard interviewing protocols (Q4), views on how best to achieve unit response (Q5), and general questions regarding trust and data protection concerns (Q6, Q11 and Q12) that might be particularly effective in explaining non-consent and item nonresponse on income.

Interviewers’ own behaviour regarding data collection requests

The maxim ‘do as you would be done by’ runs as a common theme through many cultures. It is therefore not difficult to imagine that survey requests that interviewers themselves would not comply with are difficult for them to sell to respondents. The second dimension of the conceptual framework thus assumes that the way interviewers behave, or would behave if faced with a situation similar to the respondent’s, influences the way they interact with the respondent. If interviewers participate in surveys themselves and supply all of the information asked of them, they are likely to be better at eliciting such information from their respondents.

A series of questions in the interviewer questionnaire covers interviewers’ own behaviour. These questions cover, for example, whether interviewers have taken part in surveys themselves and, if so, what kind of surveys these were and whether they received any incentives (Q8, Q9 and Q10). Along a more general line, we examine how easily interviewers divulge information about themselves in their daily lives by asking about their membership in social networks like Facebook, Myspace or Twitter and their use of online banking (Q27, Q28). The questionnaire also asks about their income (Q34), to see whether item nonresponse on income in the interviewer questionnaire is correlated with item nonresponse among respondents to the SHARE survey. For measures of consent to the collection of biomarkers and consent to record linkage we inspect interviewers’ actions in similar situations. The questionnaire asks whether interviewers donate blood (Q24) and whether they have cleared their pension records (“Kontenklärung”), a process German citizens are asked to go through to ensure that the pension records the state holds are correct (Q17). Finally, the questionnaire contains hypothetical questions on whether interviewers would disclose sensitive information (Q13), consent to record linkage (Q14 and Q16), and consent to the collection of biomarkers (Q22) if asked in an interview situation.

Interviewers’ experience with measurements

Interviewers’ familiarity with different types of surveys and measurements may influence their confidence in conducting them. This, in turn, may shape the professionalism with which they interact with respondents. Interviewer training levels out some of the differences in experience with measurements, but only to a certain degree. If interviewers have previously worked on SHARE, for example, they have more background knowledge about the content of the study, which they may employ in their introduction. Likewise, if interviewers have experience with pricking a small needle into someone’s finger to collect blood spots for blood sugar tests, they are likely to feel more confident about collecting dried blood spots for biomarkers and to portray this confidence during the interview. The SHARE interviewers are diverse in the experiences they have gathered on the job and in their lives in general. Some wave 4 SHARE interviewers have worked on all of the previous SHARE waves and are well used to the type of sample and the instrument. Others have conducted surveys that cover aspects similar to those of SHARE.

The third dimension of the interviewer questionnaire, therefore, investigates interviewers’ experiences with working as an interviewer (Q1 and Q2), with SHARE (Q18), and with conducting blood sugar tests for diabetics (Q23).

Interviewers’ expectations regarding survey outcome

Anecdotal evidence from interviewer trainings suggests that interviewers’ perceptions of the viability of a survey are related to fieldwork outcomes. While implying a causal effect of interviewers’ expectations on fieldwork outcomes would be far-fetched, for explaining interviewer effects it is informative to test empirically whether interviewers who are confident about the success of a survey are also more likely to reach high response rates.

The final dimension in the conceptual framework covers interviewers’ expectations of unit nonresponse rates, consent rates and item nonresponse rates. The survey asks interviewers what response and consent rates they expect for the different incentive groups (Q19), for the various biomarker measurements (Q21), for consent to record linkage (Q15), and for the survey questions on income (Q20).

Alternative conceptualization

When developing the interviewer questionnaire we opted for a general conceptualization of just four dimensions. We believe that dimensions one to three influence both the expectations interviewers hold about their performance and their actual performance. As depicted in Table 1, we expect certain items within each dimension to be correlated with only one of the survey outcomes – unit nonresponse, item nonresponse or biomarker / record-linkage non-consent – while others are expected to be associated with all types of nonresponse.

Our framework for explaining interviewer effects is just one of many possible conceptualizations. One interesting recent conceptualization, though not directly comparable to our approach, can be found in Jäckle et al. (2013). In their complex framework they model interviewers’ influence on a sample person’s likelihood of cooperating in a survey as the interplay of the household’s psychological predisposition, the interviewer’s observable attributes, and the interviewer’s behaviour. All of these are in turn influenced by a complex system of personality traits, interpersonal skills, expectations, experience, and socio-demographic characteristics.

An alternative conceptualization of our framework might also go into more detail on the interrelatedness of interviewers’ demographic characteristics, psychological predispositions, social environment, survey design, and the dimensions measured in the interviewer questionnaire. However, unlike previous research with interviewer questionnaires, we consider various types of interviewer effects together, and with the complexity of a more detailed conceptual framework one might not see the wood for the trees. Nonetheless, when analysing the processes leading to unit nonresponse, item nonresponse, non-consent to biomarkers or non-consent to record linkage and considering interviewer effects thereupon, we recommend developing a specific and detailed conceptual framework for each process.

Variation across interviewers: Results from the 2011 SHARE interviewer survey

In early 2011 an interviewer questionnaire based on the conceptual framework described in this paper was implemented at the end of the interviewer training sessions for SHARE Germany. In total, 197 interviewers were trained. Participation in the interviewer survey was voluntary and interviewers did not receive any incentive for participating. 163 interviewers completed the questionnaire, yielding an 83% response rate. Item nonresponse and uncodeable answers were negligible.

In addition, other large-scale social surveys have implemented this interviewer questionnaire. After we presented and further developed the conceptual framework at the 2010 International Workshop on Household Survey Nonresponse in Nuremberg, Germany, several other studies showed interest, fostering cooperation with survey methodologists across surveys and countries. At the end of 2010 the German PASS study (Panel Arbeitsmarkt und soziale Sicherung) at the Institute for Employment Research (IAB)[2] implemented the questionnaire online, with a 10 Euro conditional incentive and well before its interviewer trainings. By 2012 the core of this interviewer survey had been implemented in at least three further large data collections: (1) a survey at the IAB measuring the methodological effect of filter questions, (2) the German part of the Programme for the International Assessment of Adult Competencies (PIAAC) at GESIS[3], and (3) the recruitment interview of the German Internet Panel (GIP), a longitudinal internet survey based on a face-to-face recruited probability sample of the general population conducted by the University of Mannheim[4]. Several other studies have also shown an interest in implementing interviewer questionnaires based on the conceptual framework in this paper.

Variation in the interviewer data is a prerequisite for explaining interviewer effects in survey data. The following four subsections show that there is considerable variation in key variables in the 2011 SHARE Germany interviewer survey. We focus on variables from the core of our conceptual framework, i.e. those related to item and unit nonresponse, rather than those applicable to the collection of biomarkers and consent to record linkage.
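As a minimal sketch of how such variation can be screened item by item, assuming the keyed interviewer answers sit in a data frame with one row per interviewer (the file and column names below are hypothetical):

```python
import pandas as pd

# Hypothetical file with one row per interviewer and one column per item.
iv = pd.read_csv("share_de_interviewer_survey_2011.csv")

# Answer distributions per item: a distribution concentrated in a single
# category signals an item that cannot discriminate between interviewers.
for item in ["q3_socialize", "q11_data_protection", "q27_social_networks"]:
    print(item)
    print(iv[item].value_counts(normalize=True).round(2), "\n")
```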

Variation in attitudes towards the survey process

The first dimension of the conceptual framework concerns the attitudes that interviewers hold regarding survey interviews. In question 3 interviewers were asked for their reasons for working as an interviewer. Figure 2 shows that while many interviewers attached importance to most of the reasons presented, there was considerable variation. For example, about 45% of interviewers gave importance scores of six or seven to the opportunity to interact with people (socialize) and to gain insight into other people’s social circumstances, whereas about 80% rated the possibility to determine their own working hours and the interesting nature of the work as this important.

Figure 2: Attitudes – reasons for working as an interviewer

“There are different reasons for working as an interviewer. How important are the following aspects to you?” (Q3)

The survey also contains an item battery inquiring about interviewers’ attitudes towards sticking to the prescribed interviewing protocols. Since interviewers are regularly trained and know what they are supposed to do, we were concerned that their answers would only reflect their training. Therefore, all items were phrased such that it would be legitimate for interviewers to admit that they deviate from the protocols. As Figure 3 portrays, there is large variation across items and interviewers. For example, interviewers differed widely in their answers to the statement “If the respondent doesn’t understand a question, I explain what is actually meant by the question.” Approximately 30% of interviewers answered that this statement does not apply to them at all, while almost 40% said that it applied to them perfectly. Similarly, there is great variation across interviewers as to whether they speak faster if they notice that the respondent is in a hurry. On other statements interviewers answered more homogeneously. Almost all interviewers stated that they “always exactly stick to the interviewer instructions, even if [they] don’t consider them sensible” and all agreed that if they “notice that the respondent has difficulties understanding the question, [they] speak more slowly”.

Figure 3: Attitudes – following the standardized interview protocols

“Below follows a series of statements about difficult respondents and contact attempts. We would like to know from you, how you react in the following situations. The statement applies to me …” (Q4)

We also measured interviewers’ data protection concerns, asking them how concerned they are about the safety of their personal data. As described above, we assume that this might indicate how much trust in data protection they can instill in the respondent during the interview. Again, the results demonstrate variation in data protection concerns across interviewers (Figure 4), with between 17% and 40% of answers in each of the four categories.

Figure 4: Attitudes – data protection concerns

“How concerned are you about the safety of your personal data?” (Q11)

Variation in interviewer behaviour regarding data collection requests

The second dimension of the conceptual framework measures interviewers’ own behaviour in survey situations or similar contexts. The items displayed in Figure 5 look indirectly at whether interviewers are concerned about their private data: we asked them whether they use social networks and online banking. The figure illustrates that interviewers are by no means a homogeneous group when it comes to their behaviour on the internet. While about 35% of interviewers use social networks, 63% have sufficient trust in the safety of the internet to use it for online banking.

Figure 5: Own behaviour – social networks and online banking

“Do you use social networks in the internet like Facebook, Myspace or Twitter?” (Q27)
“Do you use the internet for online banking?” (Q28)

Variation in experience

The interviewer survey contains several items measuring interviewers’ experience with various measurements, including their experience working as an interviewer, working on previous waves of SHARE, and collecting bloodspots. Figure 6 displays their general experience working as an interviewer. The results show that the interviewers working on the fourth SHARE wave varied in their experience: while 23% had less than one year of experience, 27% had been doing this work for more than 10 years.

Figure 6: Experience with measurements – working as an interviewer

“How long in total have you been working as an interviewer?” (Q1)

Variation in expectations

In 2011 the refresher sample of the SHARE survey in Germany was allocated to an incentives experiment with four treatment groups of unconditional incentives (€0, €10, €20, €40). In addition, all respondents were promised €10 for the completion of the interview. In the interviewer survey we asked about interviewers’ expectations regarding their unit response rate under each of the experimental conditions. The results show that interviewers differed substantially in their confidence in achieving high response rates (Figure 7). When no unconditional incentive is sent with the advance letter, the SHARE interviewers on average expected a unit response rate of 43%. However, as the boxplots in Figure 7 illustrate, the variation around the median is great. Furthermore, interviewers were confident that the higher the value of the incentive, the more successful they would be in recruiting respondents. According to the interviewers’ expectations, the €40 unconditional household incentive paired with the €10 conditional individual incentive would on average yield a 23% increase in the unit response rate compared to a setting where no unconditional incentive is sent.

Figure 7: Expectations – response rates at different incentives levels

“Studies vary as to whether they reward respondents for their survey participation and how much respondents receive. Please imagine that your respondents receive the following incentives. What do you expect, which percentage of your sample persons will agree to the interview, if…” (Q19)
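A sketch of how a Figure 7-style display can be produced from the interviewer data, assuming each interviewer’s expected response rates (in percent) under the four incentive conditions are stored in hypothetical columns of the survey file:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and columns: one row per interviewer, one column per
# incentive condition of Q19, values are expected response rates in %.
iv = pd.read_csv("share_de_interviewer_survey_2011.csv")
conditions = ["q19_exp_0eur", "q19_exp_10eur", "q19_exp_20eur", "q19_exp_40eur"]

# One box per incentive condition shows both the median expectation and
# the spread of expectations across interviewers.
fig, ax = plt.subplots()
ax.boxplot([iv[c].dropna() for c in conditions])
ax.set_xticklabels(["EUR 0", "EUR 10", "EUR 20", "EUR 40"])
ax.set_xlabel("Unconditional incentive")
ax.set_ylabel("Expected unit response rate (%)")
plt.show()
```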

Discussion and conclusion

In this paper we propose a conceptual framework for measuring interviewer characteristics to explain interviewer effects on unit and item response, including consent to the collection of biomarkers, consent to record linkage, and item response on income measures. The conceptual framework encompasses four dimensions of interviewer characteristics:

 

  • Interviewer attitudes towards the survey process that might shape the way interviewers approach sample units and ask their respondents for sensitive information, such as attitudes towards their job as interviewers, concerns about data protection and trust in other people.
  • Interviewers’ own behaviour regarding data collection requests and hypothetical behaviour when faced with survey requests or similar measurements.
  • Interviewers’ experience with measurements, for example, experience with conducting specific surveys or the collection of specific measurements like biomarkers or consent to record linkage.
  • Interviewers’ expectations about the unit and item response rates they will achieve on a given survey.

This conceptual framework formed the basis of an interviewer questionnaire implemented during the interviewer trainings for the fourth wave of SHARE Germany in early 2011. Exploratory analyses show that the survey distinguishes well between interviewers on the measures implemented along these four dimensions. This is a prerequisite for explaining interviewer effects.

The theory, conceptual framework, and findings presented in this paper are merely a starting point for analyses of interviewer effects. Once the data cleaning process is completed, the interviewer data can be linked with paradata and survey data, allowing a multitude of analyses of interviewer effects in SHARE Germany. Furthermore, parts of the interviewer questionnaire were also implemented in other surveys. Cross-survey analyses will allow investigating whether findings are survey-specific or hold generally across large-scale social surveys.

This paper aims to contribute to the literature on interviewer effects by stimulating the development, collection, and analysis of new measures of interviewer characteristics to explain and ultimately adjust for interviewer effects in survey data. We make our conceptual framework and the interviewer questionnaire available to the public to encourage the continuous development of both and to conduct analyses of interviewer effects across surveys and countries. We hope to thereby foster research and insights in the area of interviewer effects in interviewer-mediated data collections.


[1] An interviewer effect is typically estimated by an intraclass correlation coefficient (ICC), i.e. the ratio of the interviewer variance to the sum of all variances in the model (e.g. Anderson & Aitkin, 1985, Groves & Magilavy, 1986). The ICC allows us to estimate to which extent the variation across respondents in the survey estimate is clustered within the interviewers conducting the survey.
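In symbols, this verbal definition corresponds to the two-level variance decomposition, with \(\sigma^2_{\mathrm{int}}\) the between-interviewer variance and \(\sigma^2_{\mathrm{res}}\) the residual variance of the model:

\[
\rho_{\mathrm{int}} = \frac{\sigma^2_{\mathrm{int}}}{\sigma^2_{\mathrm{int}} + \sigma^2_{\mathrm{res}}}
\]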

[2] http://www.iab.de/780/section.aspx

[3] http://www.gesis.org/en/piaac/piaac-home/

[4] http://reforms.uni-mannheim.de/english/internet_panel/home/index.html

References

1. Anderson, D. A., & Aitkin, M. (1985). Variance Component Models with Binary Response: Interviewer Variability. Journal of the Royal Statistical Society: Series B, 47(2), 203-210.

2. Blom, A. G., de Leeuw, E. D., & Hox, J. (2011). Interviewer Effects on Nonresponse in the European Social Survey. Journal of Official Statistics, 27(2), 359-377.

3. Calderwood, L., & Lessof, C. (2009). Enhancing Longitudinal Surveys by Linking to Administrative Data. In P. Lynn (Ed.), Methodology of Longitudinal Surveys (pp. 55-72). Chichester, UK: John Wiley & Sons.

4. Campanelli, P., Sturgis, P., & Purdon, S. (1997). Can You Hear Me Knocking? An Investigation into the Impact of Interviewers on Survey Response Rates. London: Social and Community Planning Research.

5. Cannell, C. F., Miller, P. V., & Oksenberg, L. (1981). Research on Interviewing Techniques. In S. Leinhardt (Ed.), Sociological Methodology (pp. 389-437). San Francisco: Jossey-Bass.

6. Couper, M. P., & Groves, R. M. (1992). The Role of the Interviewer in Survey Participation. Survey Methodology, 18(2), 263-277.

7. Couper, M. P., & Lyberg, L. (2005). The Use of Paradata in Survey Research [CD-ROM]. Proceedings of the 55th Session of the International Statistical Institute.

8. De Leeuw, E. D., & Hox, J. (2009). International Interviewer Questionnaire (IQUEST): Development and Scale Properties. Working Paper. Utrecht, The Netherlands: Department of Methodology and Statistics, Utrecht University.

9. De Heer, W. (1999). International Response Trends: Results of an International Survey. Journal of Official Statistics, 15(2), 129-142.

10. Durbin, J., & Stuart, A. (1951). Differences in Response Rates of Experienced and Inexperienced Interviewers. Journal of the Royal Statistical Society: Series A, 114, 163-206.

11. Durrant, G. B., Groves, R. M., Staetsky, L., & Steele, F. (2010). Effects of Interviewer Attitudes and Behaviors on Refusal in Household Surveys. Public Opinion Quarterly, 74(1), 1-36.

12. Groves, R. M., & Magilavy, L. J. (1986). Measuring and Explaining Interviewer Effects in Centralized Telephone Surveys. Public Opinion Quarterly, 50, 251-266.

13. Groves, R. M., & Couper, M. P. (1998). Nonresponse in Household Interview Surveys. New York: Wiley.

14. Hox, J. J., & de Leeuw, E. D. (2002). The Influence of Interviewers’ Attitude and Behaviour on Household Survey Nonresponse: An International Comparison. In R. M. Groves, D. A. Dillman, J. L. Eltinge & R. J. Little (Eds.), Survey Nonresponse (pp. 103-118). New York: Wiley.

15. Jäckle, A., Lynn, P., Sinibaldi, J., & Tipping, S. (2013). The Effect of Interviewer Experience, Attitudes, Personality and Skills on Respondent Co-operation with Face-to-Face Surveys. Survey Research Methods, 7(1), 1-15.

16. Japec, L. (2008). Interviewer Error and Interviewer Burden. In J. M. Lepkowski, C. Tucker, J. M. Brick, E. D. de Leeuw, L. Japec, P. J. Lavrakas, M.W. Link & R. L. Sangster (Eds.), Advances in Telephone Survey Methodology (pp. 187-211). Hoboken: Wiley.

17. Koch, A., Blom, A. G., Stoop, I., & Kappelhof, J. (2009). Data Collection Quality Assurance in Cross-National Surveys at the Example of the ESS. Methoden, Daten, Analysen – Zeitschrift für Empirische Sozialforschung, 3(2), 219-247.

18. Korbmacher, J., & Schröder, M. (forthcoming). Consent when Linking Survey Data with Administrative Records: The Role of the Interviewer. Survey Research Methods.

19. Lehtonen, R. (1996). Interviewer Attitudes and Unit Nonresponse in Two Different Interview Schemes. In S. Laaksonen (Ed.), International Perspectives on Nonresponse: Proceedings of the Sixth International Workshop on Household Survey Nonresponse. Helsinki: Statistics Finland.

20. Lessof, C. (2009). Ethical Issues in Longitudinal Surveys. In P. Lynn (Ed.), Methodology of Longitudinal Surveys (pp. 35-54). Chichester, UK: John Wiley & Sons.

21. Lipps, O., & Pollien, A. (2011). Effects of Interviewer Experience on Components of Nonresponse in the European Social Survey. Field Methods, 23(2), 156-172.

22. Morton-Williams, J. (1993). Interviewer Approaches. Aldershot: Dartmouth.

23. Olson, K., & Peytchev, A. (2007). Effect of Interviewer Experience on Interview Pace and Interviewer Attitudes. Public Opinion Quarterly, 71(2), 273-286.

24. Schaeffer, N. C., Dykema, J., & Maynard, D. W. (2010). Interviewers and Interviewing. In P. V. Marsden & J. D. Wright (Eds.), Handbook of Survey Research (pp. 437-470). Bingley, UK: Emerald.

25. Sakshaug, J. W., Couper, M. P., & Ofstedal, M. B. (2009). Characteristics of Physical Measurement Consent in a Population-Based Survey of Older Adults. Medical Care, 47(12), 64-71.

26. Sakshaug, J. W., Couper, M. P., Ofstedal, M. B., & Weir, D. (2012). Linking Survey and Administrative Records: Mechanisms of Consent. Sociological Methods & Research, 41(4), 535-569.

27. Sala, E., Burton, J., & Knies, G. (2012). Correlates of Obtaining Informed Consent to Data Linkage: Respondent, Interview, and Interviewer Characteristics. Sociological Methods & Research, 41(3), 414-439.

28. Singer, E., & Kohnke-Aguirre, L. (1979). Interviewer Expectation Effects: A Replication and Extension. Public Opinion Quarterly, 43(2), 245-260.

29. Singer, E., Frankel, M. R., & Glassman, M. B. (1983). The Effect of Interviewer Characteristics and Expectations on Response. Public Opinion Quarterly, 47(1), 84-95.

30. Snijkers, G., Hox, J. J., & de Leeuw, E. D. (1999). Interviewers’ Tactics for Fighting Survey Nonresponse. Journal of Official Statistics, 15(2), 185-198. Reprinted in: D. de Vaus (2002). Social Surveys, Part Eleven, Nonresponse Error. London: Sage, Benchmarks in Social Research Methods Series.

31. Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The Psychology of Survey Response. Cambridge: Cambridge University Press.


