We ran an experiment on 2,516 U.S. participants. Participants were randomly assigned to either a “best practices method” that was computer-based and provided privacy and anonymity, or to a “veiled elicitation method” that further conceals individual responses. Answers under the veiled method preclude inference about any particular individual, but can be used to accurately estimate statistics about the population. Comparing the two methods shows that sexuality-related questions receive biased responses even under current best practices, and, for many questions, the bias is substantial. The veiled method increased self-reports of non-heterosexual identity by 65% (p<0.05) and same-sex sexual experiences by 59% (p<0.01). The veiled method also increased reported rates of anti-gay sentiment.
This NBER study piqued my curiosity, so I did some digging to find out what their 'veiled elicitation method' actually was.
It's hard to elicit a truthful response to a "sensitive" question when social stigma is attached to one of the possible answers. There are clever ways to pose the question that improve the likelihood of an honest response. Typically these methods combine the sensitive question with a more innocuous one, so that participants perceive their responses as anonymous and untraceable--there's no longer the fear of "OMG, this person knows that I'm gay."
For instance, you could pose the question this way: "Toss a fair coin. If it lands heads, then report YES. If it lands tails, then truthfully answer the Sensitive Question." (I've seen this approach described in at least one stats textbook.) Since half the respondents answer YES regardless, the probability of a YES is 1/2 + p/2, where p is the true proportion, so p can be estimated as twice the observed YES-fraction minus one. In the case of the NBER study, the 'veiled elicitation method' involved two flavors of survey questions. One version contained a list of statements; each respondent was asked to report the *number* of statements that were true for them. The second group of respondents was given the same list, but with a Sensitive Statement as an additional item. The difference in average counts between the two groups then estimates the proportion of respondents for whom the Sensitive Statement is true. Using either approach, it is possible to estimate the proportion of respondents who would answer YES to the Sensitive Question, without knowing any individual's answer.
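To make both estimators concrete, here's a minimal Python sketch (my own, not the study's code), simulating respondents with an assumed true proportion `p_true`:

```python
import random

def estimate_coin_method(responses):
    """Randomized-response (coin) design.
    Each respondent tosses a fair coin: heads -> report YES (1);
    tails -> answer the Sensitive Question truthfully.
    P(YES) = 1/2 + p/2, so p = 2 * P(YES) - 1."""
    frac_yes = sum(responses) / len(responses)
    return 2 * frac_yes - 1

def estimate_list_method(counts_control, counts_veiled):
    """Item-count (list) design.
    The veiled group's list contains one extra Sensitive Statement,
    so the difference in mean counts estimates the proportion
    for whom that statement is true."""
    mean_c = sum(counts_control) / len(counts_control)
    mean_v = sum(counts_veiled) / len(counts_veiled)
    return mean_v - mean_c

if __name__ == "__main__":
    random.seed(42)
    p_true, n = 0.10, 200_000  # assumed true proportion and sample size

    # Coin design: heads -> automatic YES; tails -> truthful answer.
    coin = [1 if random.random() < 0.5 else int(random.random() < p_true)
            for _ in range(n)]
    print(round(estimate_coin_method(coin), 3))

    # List design: 3 innocuous items, each true with probability 0.5;
    # the veiled group's count may include the sensitive item.
    control = [sum(random.random() < 0.5 for _ in range(3)) for _ in range(n)]
    veiled = [sum(random.random() < 0.5 for _ in range(3))
              + int(random.random() < p_true) for _ in range(n)]
    print(round(estimate_list_method(control, veiled), 3))
```

With a sample this large, both printed estimates land close to the assumed `p_true` of 0.10, even though no simulated individual's answer reveals their status.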
But the price we pay for these coy approaches to masking the question of real interest is wasted data: we expend the effort to recruit all these respondents, yet some of them never actually answer the Sensitive Question. In statistics jargon, the resulting estimator may be unbiased, but at the cost of larger variance. With the coin design, for example, half the respondents' answers carry no information at all, and for a rare trait the estimator's variance can be an order of magnitude larger than that of a direct question.
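The variance penalty is easy to quantify for the coin design. If y is the observed YES-fraction, the estimator is p-hat = 2y - 1, so its variance is four times the variance of y. A quick sketch (my arithmetic, not the study's):

```python
def direct_var(p, n):
    # Variance of the sample proportion when the Sensitive Question
    # is asked directly (and answered honestly).
    return p * (1 - p) / n

def coin_var(p, n):
    # Coin design: the YES-probability is lam = 1/2 + p/2, and the
    # estimator p_hat = 2*y - 1 has variance 4 * lam * (1 - lam) / n.
    lam = 0.5 + p / 2
    return 4 * lam * (1 - lam) / n

# For a trait held by 10% of the population and n = 1000 respondents:
print(coin_var(0.10, 1000) / direct_var(0.10, 1000))  # ratio of variances
```

The ratio works out to about 11, i.e., the coin design needs roughly eleven times as many respondents to match the precision of a direct (honestly answered) question about a 10%-prevalence trait.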
Speaking of unbiased estimators, all of this is predicated on the respondents being a truly random sample from the population of interest. In the case of the NBER study, the participants were solicited using Amazon's Mechanical Turk service, which is hardly a random sample of anything except Mechanical Turk users. It's valid to conclude that the 'veiled elicitation method' does increase affirmative responses to Sensitive Questions; I'm less sure that the size of the increase seen among the study participants generalizes to the population at large.
More details here