I’ve spent the past few weeks working on a survey for work, and it’s been interesting. The survey itself took a few hours to write; the rest of the time has gone into rewording the questions to make sure we’re actually asking what we want to ask, without influencing the respondents’ opinions or leading them toward any particular answers. We’re trying to get data no one’s ever gotten before, so we have no motivation to steer the questions anywhere in particular.
I was thinking about this when I saw this story today. Apparently the British Humanist Society is going after the Church of England for putting out a press release claiming that 81% of British adults believe in the power of prayer. The BHS is taking issue because this figure was taken from this survey, and the question that so many people answered in the affirmative was not actually “do you believe in the power of prayer?”, but rather “Irrespective of whether you currently pray or not, if you were to pray for something at the moment, what would it be for?”.
Now that seemed like a bit of a stretch, so I looked a little further.
It turns out that one of the options on the survey was “I would never pray for anything”. 15% of people chose that answer, and another 4% said they didn’t know, which leaves 81%. So the accurate statement would have been “81% of people didn’t say they would never pray for anything”, not “81% of people believe in the power of prayer”.
I thought this was an interesting story because I don’t often see survey-question nuances make the news. The Church of England did in fact twist the results, but the BHS left out the fact that there actually was an “I would never pray” option.
It’s not the question, it’s how you ask it.