Questionnaire design in the FReDA panel recruitment: Challenges in transitioning from a face-to-face to a self-administered mixed-mode design

Field report

Lisa Schmid, GESIS – Leibniz-Institute for the Social Sciences, Mannheim, Germany
Tanja Kunz, GESIS – Leibniz-Institute for the Social Sciences, Mannheim, Germany
Elias Naumann, GESIS – Leibniz-Institute for the Social Sciences, Mannheim, Germany

04.04.2023
How to cite this article:

Schmid, L., Kunz, T. & Naumann, E. (2023). Questionnaire design in the FReDA panel recruitment: Challenges in transitioning from a face-to-face to a self-administered mixed-mode design. Survey Methods: Insights from the Field. Retrieved from https://surveyinsights.org/?p=17948

Copyright:

© the authors 2023. This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).


Abstract

In this field report, we present and discuss methodological issues and challenges in questionnaire design in the context of ‘FReDA – The German Family Demography Panel Study,’ a family demographic mixed-mode panel study. We illustrate the transition from a questionnaire initially designed for the face-to-face mode to a self-administered mixed-mode survey design using web- and paper-based questionnaires. Aspects of questionnaire splitting and mode adaptations for web- and paper-based questionnaires are discussed with examples (e.g., 'no opinion' responses, item batteries, loop questions, and soft prompts). We compare the recruitment rate, panel consent rate, participation rate in the web-based mode, and the share of smartphone respondents in FReDA with those of other studies and show that the FReDA panel recruitment was comparably successful.



Acknowledgement

This work was funded by the German Federal Ministry of Education and Research (BMBF) as part of FReDA (grant number 01UW2001B).



Introduction

In social science research, there is a continuing trend to switch from interviewer-administered to self-administered or mixed-mode surveys using web- or paper-based questionnaires. This trend is primarily driven by declining response rates, rising survey costs (e.g., Olson et al., 2021; Wolf, Christmann, Gummer, Schnaudt, & Verhoeven, 2021), and increasing internet penetration rates (Eurostat, 2022). Although some studies had already switched from single- to mixed-mode designs before (e.g., Jäckle, Lynn, & Burton, 2015; Luijkx et al., 2021; Martin, 2011; Wolf et al., 2021), the trend was accelerated by the COVID-19 pandemic, when many renowned surveys moved from face-to-face interviews to self-administered mixed-mode or web surveys (e.g., Burton, Lynn, & Benzeval, 2020; Kohler, 2020).

However, moving from a face-to-face to a self-administered mixed-mode survey design presents challenges, particularly when recruiting a panel study. In this field report, the main methodological challenges are illustrated using the recruitment of the FReDA panel as an example. We summarize the research design and questionnaire content of FReDA and describe how we adapted the questionnaire to the self-administered web- and paper-based modes. We discuss whether the adaptations can serve as good practice for population surveys.

Background

‘FReDA – The German Family Demography Panel Study’ addresses issues related to family life and development, fertility, and family well-being (Schneider et al., 2021). FReDA is designed as a multi-actor panel study surveying target respondents and their partners. FReDA panel recruitment was initially scheduled for 2020 based on the 60-minute Generations and Gender Survey (GGS) questionnaire designed for face-to-face interviews (Gauthier et al., 2020). Due to the COVID-19 pandemic and related restrictions on public and private life beginning in early 2020, several adjustments were made to the FReDA survey design, including the postponement of the field start to early 2021 and the switch to a self-administered mixed-mode design using web- and paper-based questionnaires (Gummer et al., 2020). Different mode choice designs (i.e., concurrent, web-first, and web-only) were implemented (Christmann et al., 2022).

Mixed-mode survey designs may have several advantages, such as reducing survey costs, improving coverage by accounting for respondents’ preferences, and reducing nonresponse. Yet, they also have certain difficulties, especially regarding questionnaire design and differential measurement across modes (de Leeuw, 2005, 2018). Particularly when adapting a face-to-face interview to the self-administered mode, the length and complexity of the questionnaire are critical factors in determining respondent burden and willingness to participate (Dillman, Smyth, & Christian, 2014, p. 32ff). With no interviewer present to assist and motivate respondents as needed, a clear questionnaire structure and simple question layout are essential for complete and accurate survey data (Ganassali, 2008). For mixed-mode designs, a unified stimulus for all modes is generally recommended, including the structure, wording, and visual presentation of the questions (Dillman & Edwards, 2016). However, de Leeuw (2018) points out that “the purpose of equivalent questionnaire design is to maximize data quality in a specific mode and minimize differences in data across modes” (p. 81), so questions may differ between modes if there are good reasons to do so. Especially in the context of self-administered mixed-mode panel studies, the length and complexity of the questionnaire are of particular importance for recruitment rates, panel consent rates, and panel attrition in follow-up waves (Gummer & Daikeler, 2020).

In the FReDA panel recruitment, two measures were taken to adapt the face-to-face interview to a self-administered mixed-mode survey design: (1) splitting up the questionnaire into three shorter questionnaires and (2) adapting the question wording and layout to the web- and paper-based mode.

It should be noted that FReDA is a panel survey, so some of the methodological decisions presented in the following might be different for cross-sectional surveys.

Questionnaire splitting

We split the German version of the 60-minute GGS questionnaire into three parts: a short recruitment wave (W1R) and two further subwaves (W1A three months after W1R and W1B four months after W1A). Each subwave was intended to be close to the ideal length of a (web) survey of 10 to 15 minutes and below the maximum length of 30 minutes (Revilla & Höhne, 2020). There are a few studies on within-respondent modularization of questionnaires by splitting them into shorter parts and offering them to the respondents at several points in time. Findings showed that participants are more willing to respond to shorter questionnaires than to a long undivided questionnaire, but that they are also more likely to drop out, meaning that they do not complete all modularized parts. Overall, this leads to cumulative response rates for the modularized parts that are similar to (Toepoel & Lugtig, 2022) or lower than (Andreadis & Kartsounidou, 2020) the response rate of a long undivided questionnaire. Similarly, some recent studies suggest that a self-administered survey longer than 30 minutes may not be as problematic as expected. For example, although the Norwegian GGP 2021 survey had a breakoff rate of 29%, it still achieved a response rate of 33.5% (Dommermuth & Lappegård, 2021; for similar findings for the GGP countries Germany, Croatia, and Portugal, see Emery et al., 2022, and for ISSP countries, see Sapin, Joye, Nisple, Reveilhac, and Steinmetz, 2022). However, these data refer to cross-sectional studies. As there is little empirical evidence yet on how the length of the first wave of a panel study affects panel consent and participation in subsequent waves, FReDA opted for a shorter recruitment survey, accepting the potentially higher risk of dropout between waves.

W1R is a 10-minute questionnaire designed to arouse respondents’ interest in the survey topic and create an enjoyable survey experience, ultimately achieving high recruitment and panel consent rates. W1R is therefore short and contains questions that all respondents have something to say about (e.g., life satisfaction) and that are easy to answer (e.g., no retrospective questions). In addition, W1R includes basic socio-demographic questions relevant for nonresponse analyses (e.g., gender, date of birth, education).

W1A and W1B are each based on a questionnaire of 25-30 minutes and include several modules (see Table 1). Guiding principles for the questionnaire split were:

(1)       maintaining the question modules so that questionnaires remain meaningful to respondents in terms of content,

(2)       maintaining the order of questions within each module to ensure international comparability with the GGS questionnaire.

 

Table 1: Overview of GGS questionnaire modules in FReDA W1A and W1B

 

Questionnaire modules (GGS) W1A W1B
DEM Demographics (X)1
LHI Life Histories X
FER Fertility X
HHD Household Decisions X
GEN Generations X
WEL Well-Being (X)2
WRK Work (X)3
INC Income X
ATT Attitudes X

Notes: X – Module is completely included in the respective subwave; (X) – Module is almost completely included in the respective subwave. Exceptions are:
1 Questions included in W1R: Country of birth, Place of birth, Date of immigration, Citizenship German/Country, Highest school leaving certificate, Date school leaving certificate reached, Highest vocational education, Date vocational education reached, Education: Type of academic institution, Internet connection, Internet use, Language at home.
2 Questions included in W1A: 5-item battery on depression, question on subjective health.
3 Questions included in W1A: 5-item battery on work balance.

 

Some questions are asked repeatedly in two or all three subwaves, including time-varying information with high thematic relevance (e.g., relationship satisfaction, number of children), information to identify respondents (e.g., gender, date of birth), and information to filter follow-up questions (e.g., employment status).

Mode adaptations

In general, the main differences between survey modes are (a) the presence or absence of an interviewer, (b) aural versus visual communication, and (c) computerization (Dillman et al., 2014, p. 99ff). Primarily due to the absence of an interviewer and the visual stimulus presentation in web- and paper-based modes, the design of self-administered questionnaires differs from face-to-face questionnaires (e.g., grid versus item-by-item format). Because of computerization, complex question filtering, dynamic question formats (e.g., drop-downs), and other interactive design features (e.g., dynamic loops) can be used in web-based mode but not in paper-based mode (Couper & Bosnjak, 2010).

In the following, we illustrate some examples of key differences in questionnaire design between the face-to-face and self-administered web- and paper-based modes in the FReDA panel recruitment.

‘No opinion’ category

In interviewer-administered surveys, an explicit ‘don’t know’ (DK) or ‘refuse to answer’ category is usually omitted, as interviewers record such responses with a pre-coded but unread response option. In self-administered surveys, researchers must decide in advance whether to offer a ‘no opinion’ category. Because selecting DK is also considered a satisficing response strategy and a simple way to reduce cognitive effort, an explicit DK category is often omitted in self-administered surveys (DeRouvray & Couper, 2002; Krosnick et al., 2002). For example, de Leeuw, Hox, and Boevé (2016) showed that omitting the DK category leads to the lowest amount of missing information and the highest reliability. Similarly, Kmetty and Stefkovics (2022) consider a skipping-allowed design (i.e., no explicit DK category is offered, but respondents are allowed to skip a question) to be the best choice, as it reduces missing information without harming data quality (e.g., reliability, midpoint responding). Looking at established studies that use self-administered web- or paper-based questionnaires, it also appears to be common practice to offer a DK category only in exceptional cases (e.g., the German Internet Panel (GIP), the German ESS 2020).

In FReDA, there is no explicit DK category, except for a few questions where it can be assumed that respondents really do not know the answer. Exceptions are factual questions that presuppose respondents’ knowledge about themselves or others (e.g., date of immigration), questions that ask for proxy information about others (e.g., partner’s reasons for living apart), or prospective questions (e.g., expected number of children). There is also no explicit ‘refuse to answer’ category, as skipping questions is allowed in the web-based questionnaire.

Item batteries

In interviewer-administered surveys, item batteries comprising several items using the same answer categories are usually presented in an item-by-item format (i.e., items are asked one-by-one). In self-administered web- and paper-based surveys, item batteries are commonly presented in a grid format because they are time- and space-saving (Couper, Traugott, & Lamias, 2001; de Leeuw, 2018; Toepoel, Das, & van Soest, 2009). However, on devices with small screens such as smartphones, a mobile optimized item-by-item format is preferred (Mavletova, Couper, & Lebedev, 2018; Revilla & Couper, 2018; Revilla, Toninelli, & Ochoa, 2017).

Deciding how many items to display on the screen at once is a tradeoff. Displaying multiple items on a single screen shortens the survey duration but may also increase item nonresponse and respondent dissatisfaction (Couper, Tourangeau, Conrad, & Zhang, 2013). Toepoel et al. (2009) recommend “placing four to ten items on a single screen, avoiding the necessity to scroll” (p. 210). Similarly, Hofstein Grady, Greenspan, and Liu (2019) concluded that “having around five rows or potentially fewer per page, and around five columns for answer options, gives the optimal survey experience, with equal or better data quality, when using matrix-style questions in an online survey” (p. 435). Liu and Cernat (2018) recommend an item-by-item format rather than a grid format for nine or more answer categories.

In FReDA, item batteries in the web-based questionnaire are generally presented in a grid format, except on devices with small screens, where an item-by-item format is used. A maximum of four items is shown per screen; longer batteries are split across multiple screens. In the paper-based questionnaire, item batteries are usually presented in a grid format regardless of the number of items.
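
As an illustration of this layout rule, the following Python sketch shows one way a survey system could choose between a grid and a mobile-optimized item-by-item format and split long batteries across screens. The function name, the screen-width threshold, and the data structures are hypothetical; only the four-items-per-screen limit and the small-screen exception are taken from the FReDA design described above.

```python
from typing import Dict, List

MAX_ITEMS_PER_SCREEN = 4       # FReDA shows at most four battery items per screen
SMALL_SCREEN_WIDTH_PX = 600    # hypothetical threshold for 'small screen' devices


def layout_item_battery(items: List[str], screen_width_px: int) -> List[Dict]:
    """Return a list of screens describing how an item battery is rendered.

    Small screens get a mobile-optimized item-by-item format (one item per
    screen); larger screens get a grid format, with long batteries split
    across several screens of at most MAX_ITEMS_PER_SCREEN items each.
    """
    if screen_width_px < SMALL_SCREEN_WIDTH_PX:
        return [{"format": "item-by-item", "items": [item]} for item in items]

    screens = []
    for start in range(0, len(items), MAX_ITEMS_PER_SCREEN):
        screens.append({"format": "grid",
                        "items": items[start:start + MAX_ITEMS_PER_SCREEN]})
    return screens


# Example: a 5-item battery is split into two grid screens on a desktop
# device, but into five single-item screens on a smartphone.
battery = [f"item_{i}" for i in range(1, 6)]
print(layout_item_battery(battery, screen_width_px=1280))
print(layout_item_battery(battery, screen_width_px=390))
```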

Loop questions

Loop questions allow dynamic iteration of a series of follow-up questions based on responses to a preceding multiple-choice or frequency question (Eckman & Kreuter, 2018). They are used to request identical information for several people or events. For example, the first question, ‘How many children do you have?’ is followed by the same (set of) questions multiple times for each child. Loop questions allow a lot of detailed information about several persons or events to be collected one after the other without overwhelming the respondents with too much information at once. In a paper-based questionnaire, however, it is not feasible to present sets of identical questions one after the other due to space constraints. Instead, loop questions are presented in a table format.

In FReDA, we use loop questions for previous partnerships, children, and household members in the web-based questionnaire.
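
As a simplified illustration of how such a dynamic loop might work in the web-based mode, the following sketch expands a fixed set of follow-up questions once per reported child. The variable names and question texts are hypothetical and not taken from the FReDA instrument; in the paper-based mode, the same information would instead be collected in a fixed table.

```python
from typing import Dict, List


def expand_child_loop(number_of_children: int) -> List[Dict[str, str]]:
    """Generate one set of identical follow-up questions per reported child.

    The loop is driven by the answer to the preceding frequency question
    ('How many children do you have?').
    """
    follow_up_templates = [
        ("child_{i}_year_of_birth", "In which year was your {ordinal} child born?"),
        ("child_{i}_residence", "Where does your {ordinal} child currently live?"),
    ]
    ordinals = {1: "first", 2: "second", 3: "third"}

    questions = []
    for i in range(1, number_of_children + 1):
        ordinal = ordinals.get(i, f"{i}th")
        for variable, text in follow_up_templates:
            questions.append({"variable": variable.format(i=i),
                              "text": text.format(ordinal=ordinal)})
    return questions


# A respondent reporting two children receives two identical sets of follow-ups.
for question in expand_child_loop(2):
    print(question["variable"], "-", question["text"])
```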

Soft prompts

Web-based questionnaires enable the use of soft prompts, which means “not offering DK, but allowing respondents to skip questions, followed by a polite probe when skips occurred” (de Leeuw et al., 2016, p. 116). Soft prompts may successfully reduce the amount of missing information without causing adverse reactance effects such as dropout (e.g., de Leeuw et al., 2016).

In FReDA, we use soft prompts in the web-based questionnaire primarily for ‘core’ filter questions that determine whether a subsequent block of multiple follow-up questions is asked (e.g., respondent has a partner, paid work last week), or for filter questions that prevent inappropriate follow-up questions (e.g., questions about child’s residence in the case of deceased child).
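
The following sketch illustrates the general logic of such a soft prompt for a filter question: if the respondent skips the question, a single polite probe is shown, and the question may still be left unanswered afterwards. The prompt text and function are hypothetical, not taken from the FReDA instrument; they are meant only to show how a soft prompt differs from a hard (forced-answer) prompt.

```python
from typing import Callable, Optional

SOFT_PROMPT_TEXT = (
    "Your answer to this question determines which questions follow. "
    "Would you like to answer it before continuing?"
)


def ask_with_soft_prompt(ask: Callable[[], Optional[str]]) -> Optional[str]:
    """Ask a filter question; if it is skipped, probe politely exactly once.

    Unlike a hard prompt, the respondent may still skip the question after
    the probe, so no explicit 'don't know' category needs to be offered.
    """
    answer = ask()
    if answer is None:           # respondent skipped the question
        print(SOFT_PROMPT_TEXT)  # show the polite probe once
        answer = ask()           # re-ask; skipping again is accepted
    return answer


# Example with a stand-in for the survey front end: the first call simulates
# a skip, the second call simulates an answer given after the probe.
responses = iter([None, "yes"])
print(ask_with_soft_prompt(lambda: next(responses)))
```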

Results

In general, it is assumed that well-designed questionnaires reduce the response burden and motivate respondents to participate in the survey. In the context of recruiting a panel study, this is of particular importance, as a high panel consent rate after the first wave and high re-participation rates in the follow-up waves are crucial for the success of a panel study. To this end, we look at established response indicators (see Table 2).

Table 2: Response indicators for FReDA W1R, W1A, and W1B (in %)

 

Subwave W1R1 W1A2 W1B2
Field period start 07 Apr 2021 07 Jul 2021 04 Nov 2021
Field period end 30 Jun 2021 22 Sep 2021 31 Jan 2022
Recruitment rate 34.9  –  –
Panel consent rate 71.7  –  –
Response rate  – 81.8 75.7
Cumulative response rate  – 28.5 21.6
Participation in web-based mode 79.9 85.4 85.5
Smartphone participation in web-based mode 51.4 52.9 54.3

Notes: Recruitment and response rates are calculated according to AAPOR RR2, where the number of complete and partial interviews is divided by the sum of all interviews, non-interviews, and all cases of unknown eligibility (AAPOR, 2016).
1 Results for W1R are based on FReDA panel data from the release v.1.0.0 (DOI: 10.4232/1.13745), Bujard et al. (2022).
2 Results for W1A and W1B are based on preliminary prerelease data.
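
For reference, the AAPOR (2016) RR2 formula referred to above can be written as follows, where $I$ denotes complete interviews, $P$ partial interviews, $R$ refusals and break-offs, $NC$ non-contacts, $O$ other non-interviews, and $UH$ and $UO$ cases of unknown eligibility:

$$RR2 = \frac{I + P}{(I + P) + (R + NC + O) + (UH + UO)}$$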

 

W1R achieved a recruitment rate of 34.9% (RR2; AAPOR, 2016), which is comparable to other surveys in Germany. For example, the face-to-face cross-sectional ALLBUS 2018 had a response rate of 32% (Wolf et al., 2021) and the self-administered mixed-mode recruitment survey of the German Internet Panel (GIP) 2018 had a recruitment rate of 38% (Cornesse, Felderer, Fikel, Krieger, & Blom, 2022).

In W1R, 71.7% of the respondents, including complete and partial cases, consented to be reinterviewed. Of particular note is that the panel consent rate was considerably higher among respondents who completed the web-based questionnaire (74.8%) than among those who completed the paper-based questionnaire (58.7%). The FReDA panel consent rate is comparable to that of other studies. For example, the DeZIM.panel, an online access panel recruited in 2021 using self-administered modes, had a panel consent rate of 73.3%, which was likewise higher in the web-based mode (84.1%) than in the paper-based mode (63.5%) (Dollmann et al., 2022).

In W1R, most respondents (79.9%) participated via the web-based questionnaire, and 51.4% of these reported completing the questionnaire on their smartphone. Both the proportion of respondents who completed the web-based questionnaire and, of these, the proportion of smartphone respondents increased further in W1A and W1B. In comparison, participation via the web-based questionnaire was lower, at 65%, in the GESIS Panel recruitment in 2014 (Pforr & Dannwolf, 2017). Smartphone participation in FReDA is also comparatively high; it was not reached in any age group of the GESIS Panel in 2020 (around 5% among respondents aged 65 and above, 45% among those aged 18-25) (Weiß, Silber, Struminskaya, & Durrant, 2022).

Response rates in W1A and W1B were 81.8% and 75.7%, respectively. Considering non-consenters and dropouts between waves, the cumulative response rates were 28.5% and 21.6%, respectively.
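
The cumulative rates reported in Table 2 are consistent with chaining the recruitment rate and the conditional response rates of the subsequent subwaves; this is a plausible reading of the figures, not a formula taken from the study documentation:

$$0.349 \times 0.818 \approx 0.285 \;(\text{W1A}), \qquad 0.285 \times 0.757 \approx 0.216 \;(\text{W1B})$$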

Conclusion

In response to the COVID-19 pandemic, the survey design of the FReDA panel recruitment was changed from a 60-minute face-to-face interview to three shorter, self-administered web- and paper-based questionnaires in 2021. In this field report, we have summarized the key challenges associated with the transition to self-administered mixed-mode survey designs and described how FReDA responded to these challenges.

While we do not have experimental data to evaluate how well our measures of questionnaire splitting and mode adaptation worked, common response indicators suggest that the FReDA panel recruitment was successful. Our recruitment rate of 34.9% and panel consent rate of 71.7% in W1R are satisfactory and comparable to those of other self-administered panel studies. Re-participation in the follow-up subwaves W1A and W1B, with response rates of 81.8% and 75.7%, respectively, can also be considered good.

The high proportion of respondents participating via the web-based questionnaire shows that the web-based mode is clearly preferred over the paper-based mode in FReDA. In addition, the high proportion of smartphone respondents underscores the importance of a mobile optimized, responsive questionnaire design.

In future research, nonresponse analyses and further data quality analyses will have to show whether the quality of respondents’ answers differs by survey mode. In addition, the high participation rate in the web-based mode and the large number of respondents completing the web-based questionnaire on their smartphone highlight the need to evaluate device-specific response behavior and, if smartphone participation continues to rise, to further optimize the responsive design of the web-based questionnaire.

References

  1. AAPOR (2016). Standard Definitions. Final Dispositions of Case Codes and Outcome Rates for Surveys. Retrieved from https://www.aapor.org/AAPOR_Main/media/publications/Standard-Definitions20169theditionfinal.pdf
  2. Andreadis, I., & Kartsounidou, E. (2020). The Impact of Splitting a Long Online Questionnaire on Data Quality. Survey Research Methods, 14(1). doi:10.18148/srm/2020.v14i1.7294
  3. Bujard, M., Gummer, T., Hank, K., Neyer, F. J., Pollak, R., Schneider, N. F., Spieß, C. K., Wolf, C., …, Weih, U. (2022). FReDA – The German Family Demography Panel Study. GESIS, Cologne. ZA7777 Data File Version 1.0.0. http://dx.doi.org/10.4232/1.13745
  4. Burton, J., Lynn, P., & Benzeval, M. (2020). How Understanding Society: The Uk Household Longitudinal Study Adapted to the COVID-19 Pandemic. Survey Research Methods, 14(2), 235-238. doi:10.18148/srm/2020.v14i2.7746
  5. Christmann, P., Gummer, T., Kunz, T., Oehrlein, A., & Schmid, L. (2022). Concurrent, Sequential or Web-Only? Evidence from a Mixed-Mode Recruitment Experiment in FReDA. CLOSER Conference, online, 20.01.2022. https://www.closer.ac.uk/wp-content/uploads/CLOSER-conference-Pablo-Christmann.pdf.
  6. Cornesse, C., Felderer, B., Fikel, M., Krieger, U., & Blom, A. G. (2022). Recruiting a Probability-Based Online Panel Via Postal Mail: Experimental Evidence. Social Science Computer Review, 40(5), 1259-1284. doi:10.1177/08944393211006059
  7. Couper, M. P., & Bosnjak, M. (2010). Internet Surveys. In P. V. Marsden & J. D. Wright (Eds.), Handbook of Survey Research (pp. 527-550): Emerald.
  8. Couper, M. P., Tourangeau, R., Conrad, F. G., & Zhang, C. (2013). The Design of Grids in Web Surveys. Social Science Computer Review, 3(3), 322-345. doi:10.1177/0894439312469865
  9. Couper, M. P., Traugott, M. W., & Lamias, M. J. (2001). Web Survey Design and Administration. Public Opinion Quarterly, 65(2), 230-253. doi:10.1086/322199
  10. de Leeuw, E. D. (2005). To Mix or Not to Mix Data Collection Modes in Surveys. Journal of Official Statistics, 21(2), 233-255.
  11. de Leeuw, E. D. (2018). Mixed-Mode: Past, Present, and Future. Survey Research Methods, 12(2). doi:10.18148/srm/2018.v12i2.7402
  12. de Leeuw, E. D., Hox, J. J., & Boevé, A. (2016). Handling Do-Not-Know Answers: Exploring New Approaches in Online and Mixed-Mode Surveys. Social Science Computer Review, 34(1), 116-132. doi:10.1177/0894439315573744
  13. DeRouvray, C., & Couper, M. P. (2002). Designing a Strategy for Reducing “No Opinion” Responses in Web-Based Surveys. Social Science Computer Review, 20(1), 3-9. doi:10.1177/089443930202000101
  14. Dillman, D. A., & Edwards, M. L. (2016). Designing a Mixed-Mode Survey. In C. Wolf, D. Joye, T. W. Smith, & Y.-c. Fu (Eds.), The Sage Handbook of Survey Methodology (pp. 254-267). London: SAGE Publications Ltd.
  15. Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, Phone, Mail and Mixed-Mode Surveys. The Tailored Design Method. Hoboken, NJ: Wiley.
  16. Dollmann, J., Mayer, S. J., Lietz, A., Siegel, M., & Köhler, J. (2022). Setting Up an Offline Recruited Online Access Panel with an Oversampling of Immigrants and their Descendants: The German DeZIM.panel. doi: 10.31235/osf.io/mdpnx
  17. Dommermuth, L. & Lappegård, T. (2021). The Norwegian Generations and Gender Survey, Round 2 – Wave 1 (2020). Documentation of the data collection process. Technical Working Paper. The Hague, Netherlands Interdisciplinary Demographic Institute.
  18. Eckman, S., & Kreuter, F. (2018). Misreporting to Looping Questions in Surveys: Recall, Motivation and Burden. Survey Research Methods, 12(1), 59-74. doi:10.18148/srm/2018.v12i1.7168
  19. Emery, T., Cabaço, S. L. F., Fadel, L., Lugtig, P., Toepoel, V., Schumann, A., … Bujard, M. (2022). Breakoffs in an Hour Long Online Survey. SocArXiv. doi: 10.31235/osf.io/ja8k4
  20. Eurostat (2022). Households – Level of Internet Access. Retrieved from https://ec.europa.eu/eurostat/databrowser/bookmark/6b56d5a2-1164-4b27-9ccd-40b31be032c2?lang=en.
  21. Ganassali, S. (2008). The Influence of the Design of Web Survey Questionnaires on the Quality of Responses. Survey Research Methods, 2(1), 21-32. doi:10.18148/srm/2008.v2i1.598
  22. Gauthier, A. H., Liefbroer, A., Ajzen, I., Aassve, A., Beets, G., Billari, F., . . . Vikat, A. (2020). Generations and Gender Survey Baseline Questionnaire 3.0.1. Netherlands Interdisciplinary Demographic Institute. Retrieved from https://www.ggp-i.org/wp-content/uploads/2021/04/GGS-Questionnaire-3_0_7.pdf
  23. Gummer, T., & Daikeler, J. (2020). A Note on How Prior Survey Experience with Self-Administered Panel Surveys Affects Attrition in Different Modes. Social Science Computer Review, 38(4), 490-498. doi:10.1177/0894439318816986
  24. Gummer, T., Schmiedeberg, C., Bujard, M., Christmann, P., Hank, K., Kunz, T., . . . Neyer, F. J. (2020). The Impact of COVID-19 on Fieldwork Efforts and Planning in Pairfam and FReDA-GGS. Survey Research Methods, 14(2), 223-227. doi:10.18148/srm/2020.v14i2.7740
  25. Hofstein Grady, R., Greenspan, R. L., & Liu, M. (2019). What Is the Best Size for Matrix-Style Questions in Online Surveys? Social Science Computer Review, 37(3), 435-445. doi:10.1177/0894439318773733
  26. Jäckle, A., Lynn, P., & Burton, J. (2015). Going Online with a Face-to-Face Household Panel: Effects of a Mixed Mode Design on Item and Unit Non-Response. Survey Research Methods, 9 (1): 57-70. doi: 10.18148/srm/2015.v9i1.5475
  27. Kmetty, Z., & Stefkovics, Á. (2022). Assessing the Effect of Questionnaire Design on Unit and Item-Nonresponse: Evidence from an Online Experiment. International Journal of Social Research Methodology, 25(5), 659-672. doi:10.1080/13645579.2021.1929714
  28. Kohler, U. (2020). Survey Research Methods during the COVID-19 Crisis. Survey Research Methods, 14 (2): 93-94. doi:10.18148/srm/2020.v14i2.7769
  29. Liu, M., & Cernat, A. (2018). Item-by-Item Versus Matrix Questions: A Web Survey Experiment. Social Science Computer Review, 36(6), 690-706. doi:10.1177/0894439316674459
  30. Luijkx, R., Jónsdóttir, G. A., Gummer, T., Ernst Stähli, M., Frederiksen, M., Ketola, K., . . . Wolf, C. (2021). The European Values Study 2017: On the Way to the Future Using Mixed-Modes. European Sociological Review, 37(2), 330-346. doi:10.1093/esr/jcaa049
  31. Martin, P. (2011). What Makes a Good Mix? Chances and Challenges of Mixed Mode Data Collection in the European Social Survey. Centre for Comparative Social Surveys, Working Paper Series, Paper 02. City University London: London, UK.
  32. Mavletova, A., Couper, M. P., & Lebedev, D. (2018). Grid and Item-by-Item Formats in PC and Mobile Web Surveys. Social Science Computer Review, 36(6), 647-668. doi:10.1177/0894439317735307
  33. Olson, K., Smyth, J. D., Horwitz, R., Keeter, S., Lesser, V., Marken, S., . . . Wagner, J. (2021). Transitions from Telephone Surveys to Self-Administered and Mixed-Mode Surveys: AAPOR Task Force Report. Journal of Survey Statistics and Methodology, 9(3), 381-411. doi:10.1093/jssam/smz062
  34. Pforr, K., & Dannwolf, T. (2017). What Do We Lose with Online-Only Surveys? Estimating the Bias in Selected Political Variables Due to Online Mode Restriction. Statistics, Politics and Policy, 8(1), 105-120. doi:10.1515/spp-2016-0004
  35. Revilla, M. A., & Couper, M. P. (2018). Comparing Grids with Vertical and Horizontal Item-by-Item Formats for PCs and Smartphones. Social Science Computer Review, 36(3), 349-368. doi:10.1177/0894439317715626
  36. Revilla, M. A., & Höhne, J. K. (2020). How Long Do Respondents Think Online Surveys Should Be? New Evidence from Two Online Panels in Germany. International Journal of Market Research, 62(5), 538–545. doi:10.1177/1470785320943049
  37. Revilla, M. A., Toninelli, D., & Ochoa, C. (2017). An Experiment Comparing Grids and Item-by-Item Formats in Web Surveys Completed through PCs and Smartphones. Telematics and Informatics, 34(1), 30-42. doi:10.1016/j.tele.2016.04.002
  38. Sapin, M., Joye, D., Nisple, K, Reveilhac, M., & Steinmetz, S. (2022). International Social Survey Programme ISSP 2019 – Social Inequality V. Study Monitoring Report. Retrieved from https://access.gesis.org/dbk/73902
  39. Schneider, N. F., Bujard, M., Wolf, C., Gummer, T., Hank, K., & Neyer, F. J. (2021). Family Research and Demographic Analysis (FReDA): Evolution, Framework, Objectives, and Design of “the German Family Demography Panel Study”. Comparative Population Studies, 46. doi:10.12765/CPoS-2021-06
  40. Toepoel, V., Das, M., & van Soest, A. (2009). Design of Web Questionnaires: The Effects of the Number of Items Per Screen. Field Methods, 21(2), 200-213. doi:10.1177/1525822X08330261
  41. Toepoel, V., & Lugtig, P. (2022). Modularization in an Era of Mobile Web: Investigating the Effects of Cutting a Survey into Smaller Pieces on Data Quality. Social Science Computer Review, 40(1), 150-164. doi:10.1177/0894439318784882
  42. Weiß, B., Silber, H., Struminskaya, B., & Durrant, G. (2022). Mobile Befragungen. In N. Baur & J. Blasius (Eds.), Handbuch Methoden Der Empirischen Sozialforschung (pp. 1067-1080). Wiesbaden: Springer Fachmedien Wiesbaden.
  43. Wolf, C., Christmann, P., Gummer, T., Schnaudt, C., & Verhoeven, S. (2021). Conducting General Social Surveys as Self-Administered Mixed-Mode Surveys. Public Opinion Quarterly, 85(2), 623-648. doi:10.1093/poq/nfab039


