Measuring Immigrant Populations: Subjective versus Objective Assessments

Sebastian Lundmark, Communication Department, Stanford University, USA
Andrej Kokkonen, Department of Political Science, Aarhus University, Denmark

Innumeracy among survey respondents in estimating a country’s immigrant population is a well-known problem in the social sciences. In general, individuals have been found to overestimate the immigrant population at the country level, and they have been found to be especially prone to overestimating it if they were already prejudiced against immigrants. If these findings generalize to lower levels of inquiry, such as neighbourhoods, then research using subjective assessments of immigrant populations in these contexts might be biased as well. By distributing a questionnaire among 142 small and mid-sized companies in the city of Gothenburg, Sweden, respondents’ subjective assessments of the …


Effects of call patterns on the likelihood of contact and of interview in mobile CATI surveys

Paula Vicente, Instituto Universitário de Lisboa (ISCTE-IUL), Business Research Unit (BRU-IUL), Lisboa, Portugal
Catarina Marques, Instituto Universitário de Lisboa (ISCTE-IUL), Business Research Unit (BRU-IUL), Lisboa, Portugal
Elizabeth Reis, Instituto Universitário de Lisboa (ISCTE-IUL), Business Research Unit (BRU-IUL), Lisboa, Portugal

Despite the acknowledged influence of call patterns on contact and response rates in telephone surveys, this relationship has scarcely been investigated in mobile CATI surveys. This paper evaluates the effect of call patterns on the likelihood of making contact and of obtaining an interview in a mobile CATI survey, thereby furthering the understanding of the potential of mobile phones as a survey mode. Findings reveal that the likelihood of making contact and of obtaining an interview is not uniform across days of the week or times of the day – Tuesdays and Wednesdays are the worst days to make contact …


Comparing Continuous and Dichotomous Scoring of Social Desirability Scales: Effects of Different Scoring Methods on the Reliability and Validity of the Winkler-Kroh-Spiess BIDR Short Scale

Patrick Schnapp, Center for Quality in Care, Berlin, Germany
Simon Eggert, Center for Quality in Care, Berlin, Germany
Ralf Suhr, Center for Quality in Care, Berlin, Germany

Survey researchers often include measures of social desirability in questionnaires. The Balanced Inventory of Desirable Responding (BIDR; Paulhus, 1991) is a widely used instrument that measures two components of socially desirable responding: self-deceptive enhancement (SDE) and impression management (IM). An open question is whether these scales should be scored dichotomously (counting only extreme values) or continuously (taking the mean of the answers). This paper compares the two methods with respect to test-retest reliability (stability) and internal consistency using a short German version of the BIDR (Winkler, Kroh, & Spiess, 2006). Tests of criterion validity are also presented. Data are taken …
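The two scoring methods described above can be sketched in a few lines. This is an illustrative sketch only: the 7-point response format, the cutoff of 6 for an "extreme" answer, and the assumption that items are already reverse-coded are assumptions for illustration, not details taken from the Winkler-Kroh-Spiess scale itself.

```python
def score_continuous(responses):
    """Continuous scoring: take the mean of the (reverse-coded) item answers."""
    return sum(responses) / len(responses)

def score_dichotomous(responses, cutoff=6):
    """Dichotomous scoring: count only extreme answers (here assumed to be
    >= 6 on a hypothetical 1-7 scale) as socially desirable responding."""
    return sum(1 for r in responses if r >= cutoff)

# Example: five (already reverse-coded) item answers from one respondent
answers = [7, 6, 4, 2, 7]
print(score_continuous(answers))   # 5.2
print(score_dichotomous(answers))  # 3
```

The contrast matters for reliability: the dichotomous score discards all information below the cutoff, which is exactly the trade-off the paper evaluates.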


Testing the Validity of the Crosswise Model: A Study on Attitudes Towards Muslims

David Johann, German Centre for Higher Education Research and Science Studies, Berlin
Kathrin Thomas, City, University of London

This paper investigates the concurrent validity of the Crosswise Model where “high incidence” behaviour is concerned, looking at respondents’ self-reported attitudes towards Muslims. We analyse concurrent validity by comparing the performance of the Crosswise Model with that of a Direct Question format. The Crosswise Model was designed to ensure anonymity and confidentiality in order to reduce Social Desirability Bias induced by the tendency of survey respondents to present themselves in a favourable light. The article suggests that measures obtained using either question format are fairly similar. However, when estimating models and comparing the impact of common predictors of negative attitudes …
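For readers unfamiliar with the technique: in the standard Crosswise design, respondents report only whether their answers to the sensitive item and to an innocuous item with known prevalence are the same or different, which protects anonymity. Prevalence of the sensitive trait is then recovered with the usual moment estimator. The sketch below uses that standard estimator; the specific numbers are made up for illustration and are not from the study.

```python
def crosswise_estimate(n_same, n_total, p_innocuous):
    """Estimate sensitive-trait prevalence pi_hat from the share of 'same'
    answers lam, given the known innocuous prevalence p (p must not be 0.5):

        pi_hat = (lam + p - 1) / (2p - 1)
    """
    lam = n_same / n_total
    return (lam + p_innocuous - 1) / (2 * p_innocuous - 1)

# Hypothetical example: 430 of 1000 respondents answer 'same',
# and the innocuous item (e.g. birthday in a known window) has prevalence 0.20
print(round(crosswise_estimate(430, 1000, 0.20), 3))  # 0.617
```

Because no individual ever reveals the sensitive answer directly, the design is expected to reduce socially desirable responding relative to a Direct Question.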


Nonsampling errors and their implication for estimates of current cancer treatment using the Medical Expenditure Panel Survey

Jeffrey M. Gonzalez, PhD, Office of Survey Methods Research, U.S. Bureau of Labor Statistics
Lisa B. Mirel, MS, Office of Analysis and Epidemiology, National Center for Health Statistics, Centers for Disease Control and Prevention
Nina Verevkina, PhD, Department of Health Policy & Administration, The Pennsylvania State University

Survey nonsampling errors refer to the components of total survey error (TSE) that result from failures in data collection and processing procedures. Evaluating nonsampling errors can lead to a better understanding of their sources, which, in turn, can inform survey inference and assist in the design of future surveys. Data collected via supplemental questionnaires can provide a means for evaluating nonsampling errors because they may offer additional information on survey nonrespondents and/or measurements of the same concept over repeated trials on the same sampling unit. We used a supplemental questionnaire administered to cancer survivors to explore potential nonsampling errors, focusing …


Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution 4.0 International License.