Evangelical Support for Trump: A Review of the Numbers

This is not a particularly new question, but a few friends and readers have asked me over the past few months about the data behind the “Evangelicals support Trump” assertions. All of the people who asked are long-term Evangelicals who attend church regularly and typically vote Republican, but who did not vote for Trump. They doubted that Evangelical support for Trump was as high as reported, but of course weren’t sure whether that doubt was just selection bias on their part.

The first data set of interest is the exit polling from right after Election Day. This showed that Evangelical support had gone up from 78% for Romney to 81% for Trump. The full preliminary analysis is here, but I thought it would be interesting to see how all of the tracked religions had changed over the years, so I turned the table into a bar chart. This shows the percent of people who claimed affiliation with a particular religious group AND said they voted for the Republican candidate:

[Figure: Republican vote share by religious affiliation across recent elections]

Since some religions tend to show large disparities along racial lines (such as Catholicism), race is included. White evangelical Christian was added as its own affiliation after the 2000 election, when those voters were credited with putting Bush in office. Mormonism has not been consistently tracked, which is why the 2008 data is missing.
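For the curious, here’s roughly how a chart like that gets built. This is a minimal matplotlib sketch, not my actual plotting code: only the white Evangelical 78%/81% figures (and the 2016 white Catholic figure implied by the 21-point gap mentioned below) come from this post; every other number is a placeholder.

```python
# Grouped bar chart: Republican vote share by religious affiliation and year.
import numpy as np
import matplotlib.pyplot as plt

years = ["2004", "2008", "2012", "2016"]
groups = {
    # 2012/2016 white Evangelical values are from the post; the rest are placeholders
    "White evangelical": [78, 74, 78, 81],
    "White Catholic":    [56, 52, 59, 60],  # 2016 implied by the post's 21-point gap
}

x = np.arange(len(years))      # one cluster of bars per election
width = 0.8 / len(groups)      # split each cluster evenly between groups
for i, (name, shares) in enumerate(groups.items()):
    plt.bar(x + i * width, shares, width, label=name)

plt.xticks(x + width * (len(groups) - 1) / 2, years)
plt.ylabel("% voting Republican")
plt.legend()
plt.show()
```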

Anyway, I thought it was interesting to see that while support for Trump did increase over Romney’s support, it wasn’t a huge change. On the other hand, Mormons saw a fairly substantial drop in support for Trump as opposed to Romney or Bush. Hispanic Catholics and “other faiths” saw the biggest jump in support for Trump over Romney. However, white Evangelicals remained the most likely to vote for Trump at a full 21 points higher than the next closest group, white Catholics.

So with those kinds of numbers, why aren’t my friends hearing this in their churches? A few possible reasons:

We don’t actually know the true percentage of Evangelicals who voted for Trump

Even with a number like 81%, we still have to remember that about half of all eligible voters don’t vote at all. I couldn’t find data on how likely Evangelicals are to vote, but if they turn out at the same rate as other groups, then only about 40% of those sitting in the pews on Sunday morning actually cast a vote for Trump.
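If you want to see that arithmetic spelled out, here’s a two-line sketch (the 50% turnout rate is my assumption, since I couldn’t find Evangelical-specific turnout numbers):

```python
# If 81% of white Evangelical VOTERS chose Trump, but only about half of
# Evangelicals voted at all (an assumption, not a measured number), the
# share of all Evangelicals who cast a Trump vote is much smaller than 81%.
support_among_voters = 0.81  # exit poll figure
assumed_turnout = 0.50       # assumption: same turnout as the general public

print(f"{support_among_voters * assumed_turnout:.1%}")  # 40.5%
```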

Some who raised this objection also questioned whether those calling themselves “Evangelical” were actually sitting in the pews on Sunday morning, so Pew decided to look at this question specifically. At least as of April, Evangelicals who said they attended church at least once a month were actually the most likely to support Trump and the job he is doing, at 75%. Interestingly, that survey also found that relatively few people (20%) call themselves Evangelical but don’t attend church often.

The pulpit and the pews may have a difference of opinion

While exit polls capture the Evangelical vote broadly, some groups decided to poll Evangelical pastors specifically. A month before the election, only 30% of Evangelical pastors said they were planning to vote for Trump, and 44% were still undecided. While more of them may have ended up voting for him, that level of hesitancy suggests they were probably not publicly endorsing him on Sunday mornings. Indeed, that same poll found that only 3% of pastors had endorsed a candidate from the pulpit during this election.

People weren’t voting based on things you hear sermons about

After the data emerged about Evangelical voting, many pundits hypothesized that the Supreme Court nomination and abortion were the major drivers of the Evangelical vote. However, when Evangelicals were actually asked what their primary issues were, they told a different story. Asked to pick their main issues, they named “improving the economy” and “national security”, with the Supreme Court nominee ranking 4th (picked by 10%) and abortion ranking 7th (4%). Even when allowed to name multiple issues, respondents ranked the Supreme Court and abortion as less concerning than terrorism, the economy, immigration, foreign policy and gun policy.

This distinction may seem minor, but think about what people actually discuss in church on Sunday morning. Abortion or moral concerns are far more likely to come up in that context than terrorism. Basically, if Evangelicals voted for Trump based on beliefs about issues that aren’t traditionally discussed in church, you are not likely to hear about it on Sunday morning.

National breakdowns may not generalize to individual states

I couldn’t find an overall breakdown of the white Evangelical vote by state, but it was widely reported that in some key states like Florida, Evangelical voters broke for Trump at even higher rates than the national average (85%), which obviously means other states went lower. What might skew impressions even further, however, is the uneven distribution of Evangelicals themselves. The Pew Research data tells us that about 26% of the voting public nationally is white Evangelical, and Florida is close to that at 23%. The states my friends are from, however (New Hampshire and Massachusetts), are much lower at 13% and 9% respectively. This means small shifts in Evangelical voting in Florida could be the equivalent of huge shifts in New Hampshire.

As an example: according to the Election Project numbers, Florida had 9.5 million people cast votes and New Hampshire had 750,000. If Evangelicals were represented proportionally among voters, that means about 2.18 million Evangelicals cast a vote in Florida and about 97,500 cast one in NH: 22 times as many Evangelical voters in Florida as in NH. Roughly speaking, a 1% change in Florida would be about 20,000 people….almost 20% of the entire NH Evangelical voting population. Massachusetts Evangelicals are similarly outnumbered, at about 7 to 1 compared with Florida. If 0% of NH/MA Evangelical voters had gone for Trump but 85% of Florida Evangelicals had voted for him, that would still average out to 71% of Evangelicals voting for Trump across the three states. The New England states just don’t have the population to move the dial much; even wildly divergent voting patterns there wouldn’t budge the national average.
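Here’s that back-of-the-envelope math as a sketch. The Florida and New Hampshire vote totals are the Election Project numbers cited above; the Massachusetts total (~3.4 million) is my own fill-in, chosen to be consistent with the roughly 7-to-1 ratio:

```python
# Hypothetical extreme: 0% Evangelical support for Trump in NH and MA,
# 85% in FL. What does the combined three-state average look like?
states = {
    #      total votes cast, Evangelical share, Trump share among Evangelicals
    "FL": (9_500_000, 0.23, 0.85),
    "NH": (  750_000, 0.13, 0.00),
    "MA": (3_400_000, 0.09, 0.00),  # MA vote total is a fill-in, not from the post
}

ev_voters   = {s: votes * share for s, (votes, share, _) in states.items()}
trump_votes = sum(ev_voters[s] * states[s][2] for s in states)

print(f"FL has {ev_voters['FL'] / ev_voters['NH']:.0f}x the Evangelical voters of NH")
print(f"Combined Evangelical Trump share: {trump_votes / sum(ev_voters.values()):.0%}")
# prints ~22x and ~72%, lining up with the 22x and ~71% figures above
```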

Hopefully that sheds a bit of light on the numbers here, even if it is about 7 months too late to be a hot take.

Who Votes When? Untangling Non-Citizen Voting

Right after the election, most people in America saw or heard about this Tweet from then-President-elect Trump:

I had thought this was just random bluster (on Twitter????? Never!), but then someone sent me this article. Apparently that comment was based on an actual study, and the study author is now giving interviews. It turns out he’s pretty unhappy with everyone….not just with Trump, but also with Trump’s opponents who claim that no non-citizens voted. So what did his study actually say? Let’s take a look!

Some background: the paper this is based on is called “Do Non-Citizens Vote in US Elections?”, by Richman et al., and was published back in 2014. It took data from a YouGov survey and found that 6.4% of non-citizens voted in 2008 and 2.2% voted in 2010. Non-citizenship status was based on self-report, as was voting status, though the demographic data of participants was checked against that of their stated voting district to make sure the numbers at least made sense.

So what stood out here? A few things:

  1. The sample size. While the initial survey of voters was pretty large (over 80,000 respondents between the two years), the number identifying themselves as non-citizens was rather low: 339 and 489 for the two years. A total of 48 people stated both that they were not citizens and that they voted. As a reference, there are about 20 million non-citizens currently residing in the US.
  2. People didn’t necessarily know they were voting illegally. One of the interesting points made in the study was that some of this voting may be unintentional. If you are not a citizen, you are never allowed to vote in national elections, even if you are a permanent resident with a green card. The study authors wondered whether some people didn’t know this, so they analyzed the education levels of the non-citizens who voted. It turns out non-citizens with less than a high school degree were more likely to vote than those with more education. This is the opposite of the trend seen among citizens AND naturalized citizens, suggesting that some of those voters had no idea that what they were doing was illegal.
  3. Voter ID checks are less effective than you might think. If your first question upon reading #2 was “how could you just illegally vote and not know it?”, you may be presuming your local polling place puts more into screening people than it does. According to the participants in this study, not only were non-citizens allowed to register and cast a vote, but a decent number of them actually passed an ID check first. About a quarter of non-citizen voters said they were asked for ID prior to voting, and two thirds of those said they were then allowed to vote. I suspect the issue is that most polling places don’t have much to check this information against: researching citizenship status would take time and money that many places just don’t have. Another interesting twist is that social desirability bias may kick in for those who don’t know voting is illegal. Voting is one of those things more people say they do than actually do, so someone who didn’t know they couldn’t legally vote would be more likely to say they voted even if they didn’t. Trying to make ourselves look good is a universal quality.
  4. Most of the illegal voters were white. Non-citizen voters tracked pretty closely with the demographics of the non-citizen population as a whole: about 44% of them were white. The next most common demographic was Hispanic at 30%, then black, then Asian. In terms of proportion, the same percent of white non-citizens voted as Hispanic non-citizens.
  5. Non-citizens are unlikely to sway a national election, but could sway state-level elections. When Trump originally referenced this study, he was specifically using it to discuss the national popular vote. In the Wired article, they do the math and find that even if all of the numbers in the study bear out, non-citizen votes would not have swayed the national popular vote. However, the original study drilled down to the state level and found that individual states could have their results changed by non-citizen voters. North Carolina and Florida would both have been within the realm of mathematical possibility for the 2008 election, and for state-level races the math is there as well.

Now, how much confidence you place in this study is up to you. Given the small sample size, things like selection bias and non-response bias definitely come into play; that’s true any time you’re trying to extrapolate the behavior of 20 million people from the behavior of a few hundred. It is important to note that the study authors did a LOT of due diligence attempting to verify and reality-check the numbers they got, but it’s never possible to control for everything.
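To put a rough number on that uncertainty, here’s a minimal confidence-interval sketch for the 2008 estimate. It ignores the survey weights and verification steps the authors actually applied (so it understates the full uncertainty), but it shows the scale of the problem when you start from a few hundred respondents:

```python
# Wilson 95% CI for "6.4% of 339 self-identified non-citizens voted in 2008"
from statsmodels.stats.proportion import proportion_confint

n = 339                    # self-identified non-citizens in the 2008 sample
voters = round(0.064 * n)  # ~22 reported voting

low, high = proportion_confint(voters, n, alpha=0.05, method="wilson")
print(f"point estimate 6.4%, 95% CI roughly {low:.1%} to {high:.1%}")
# Scaled up to ~20 million non-citizens, an interval a few points wide
# moves the implied number of voters by around a million people.
```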

If you do take this study seriously, it’s interesting to note what the authors actually thought the most effective counter-measure against non-citizen voting would be: education. Since they found that low education levels were correlated with increased voting and that poll workers rarely turned people away, they came away from this study with the suggestion that simply doing a better job of notifying people of the voting rules might be just as effective as (and cheaper than!) attempting to verify citizenship. Ultimately, it appears that letting individual states decide on their own strategies would also be more effective than anything at the federal level, as different states face different challenges. Things to ponder.

Voter Turnout vs Closeness of Race

I’ve seen a lot of talk about the Electoral College this past week, and discussion about whether or not the system is fair. I’m not going to wade into that one, but I did get curious whether the closeness of the presidential race in a state influences overall voter turnout. Under the current system, it stands to reason that voters in states with large gaps between the two parties (who thus know ahead of time which way their state will go) would be less motivated to vote than those living in states with close races. While other races on the ballot could also drive turnout, we know that elections held in presidential years have significantly better turnout than midterm elections. The idea that being able to cast a vote for president is a big driver of turnout seems pretty solid.

What I wanted to know is whether the belief that you’re casting a potentially “meaningful” vote is an even further driver of turnout. With all the commentary about the popular vote vs the electoral college, and with some petitioning to retroactively change the way we count the votes, it seemed relevant to know whether the system we went into voting day with had a noticeable impact on who voted.

While not all the numbers are final yet, I found voter turnout by state here, and the state results here. I took the winning candidate’s percent of the vote and subtracted the second-place candidate’s percent to get a “% lead” number, and plotted that against voter turnout. Here’s the graph:

[Figure: voter turnout vs. % lead of the winning candidate, by state]

The r-squared is about 26.5%, for an r of about .5. I didn’t take into account any other races on the ballot, but I think it’s safe to at least theorize that believing your state is a lock in one direction or the other dampens voter turnout. Obviously this provides no comment on how other systems would change things from here, only how people behave under the system we have today.
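For anyone who wants to re-run this, here’s a sketch of the calculation with placeholder data; the real inputs are the turnout and results tables linked above:

```python
# Voter turnout (%) vs. winner's lead (percentage points), one point per state.
# These values are placeholders, not the real 2016 numbers.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

lead    = np.array([0.3, 1.2, 3.9, 8.1, 22.5, 30.0])
turnout = np.array([71.0, 69.5, 66.0, 63.0, 55.0, 56.5])

fit = stats.linregress(lead, turnout)
print(f"r = {fit.rvalue:.2f}, r-squared = {fit.rvalue ** 2:.1%}")
# The real data gave an r of about .5 (r-squared ~26.5%); these placeholder
# points just show the mechanics.

plt.scatter(lead, turnout)
plt.plot(lead, fit.intercept + fit.slope * lead)  # fitted line
plt.xlabel("% lead of winning candidate")
plt.ylabel("% voter turnout")
plt.show()
```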

For those curious, here’s an uglier version of the same graph with state names:

[Figure: the same scatterplot, labeled with state names]

It’s interesting to note that the Utah vote got split by McMullin, so the percent lead there is a bit skewed.

A few other fun facts:

  • The average turnout in states where the presidential race was close (<5% gap between the winner and second place) was 65%, vs 58% for all other states. A quick ANOVA (sketched after this list) tells me this is a statistically significant difference.
  • Once the gap between the winner and second place gets over 10%, things level off: states with a 10-20% gap average about 58% turnout, and those with an over-20% gap about 57%. Some of these numbers may still shift, since states with large gaps also seem to take their time counting their votes.
  • My state (Massachusetts) is one of the weird lopsided-but-high-turnout states, and we had some really contentious ballot questions: charter school expansion and recreational marijuana.
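Here’s what that “quick ANOVA” looks like in practice, with placeholder turnout numbers standing in for the two groups of states:

```python
# One-way ANOVA: turnout in close states (<5 point margin) vs. the rest.
# Placeholder turnout percentages; the real group means were 65% vs 58%.
from scipy import stats

close_states = [71.0, 69.5, 66.0, 68.2, 64.1]  # margin under 5 points
other_states = [55.0, 63.0, 56.5, 58.8, 57.4]  # margin of 5+ points

f_stat, p_value = stats.f_oneway(close_states, other_states)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# With only two groups this is equivalent to a two-sample t-test
# (the F statistic is the square of the t statistic).
```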

Again, none of this speaks to whether or not the process we have is a good one, but it’s important to remember that the rules in play at the time people make a decision tend to influence that decision.

I’ll update the post if these margins change significantly as more votes are counted.

5(ish) Posts About Elections, Bias, and Numbers in Politics

It’s Election Day here in the US, so I thought I’d do a roundup of my favorite posts from the past year about the political process and its various statistical pitfalls. Regular readers will recognize most of these, but I figured they were worth a repost before they stop being relevant for another few years. As always, these are meta/about-the-process posts, and no candidates or positions are endorsed. The rest of you seem to have that covered quite nicely.

  1. How Do They Call Elections So Early? My most popular post so far this year; I walk through the statistical methods used to call elections before all the votes are counted. No idea if this will come into play today, but if it does, you’ll be TOTALLY prepared to explain it at your next cocktail party or whatever it is the kids do these days.
  2. 5 Studies About Politics and Bias to Get You Through Election Season In this post I do a roundup of my favorite studies on, well, politics and bias. Helpful if you want to figure out what your opponents are doing wrong, but even MORE helpful if you use it to re-examine some of your own beliefs.
  3. Two gendered voting studies. People love to study the secret forces driving individual genders to vote certain ways, but are those studies valid? I examined one study that attempted to link women’s voting patterns and menstrual cycles here, and one that attempted to link threats to men’s masculinity and their voting patterns here. Spoiler alert: I was underwhelmed by both.
  4. Two new logical fallacies (that I just made up) Not specific to politics, but aimed in that direction. I invented the Tim Tebow Fallacy for situations where someone defends a majority opinion as though it were an oppressed minority view. The Forrest Gump Fallacy I made up for those times when someone believes their own personal life reflects a greater trend in America….when it doesn’t.
  5. My grandfather making fun of statistical illiteracy of political pundits 40 years ago. The original stats blogger in my family also got irritated by this stuff. Who would have thought.

As a final thought, if you’re in the US, go vote! No, it won’t make a statistically significant difference at the national level, but I think there’s a benefit to being part of the process.

Women, Ovulation and Voting in 2016

Welcome to “From the Archives,” where I revisit old posts to see where the science (or my thinking) has gone since I put them up originally.

Back in good old October of 2012, it was an election year and I was getting irritated[1]. First, I was being bombarded with Elizabeth Warren vs Scott Brown Senate ads, and then I was confronted with this study: The fluctuating female vote: politics, religion, and the ovulatory cycle (Durante et al.), which purported to show that women’s political and religious beliefs varied wildly over their monthly cycle, and in different ways depending on whether they were married or single. For single women, the claim was that being fertile made them more liberal and less religious, because it gave them more liberal attitudes toward sex. For married women, being fertile made them more conservative and religious, to compensate for their urge to cheat. The swing was wide too: about 20%. Of note, the study never actually observed any women changing their votes; it compared two groups of women to find the differences. The study got a lot of attention because CNN initially put it up, then took it back down when people complained. I wrote two posts about this: one irritated and ranty, and one pointing to some more technical issues I had.

With a new election coming around, I was thinking about this paper and wanted to take a look at where it had gone since then. I knew that Andrew Gelman had ultimately taken shots at the study for reporting an implausibly large effect[2] and for potentially collecting lots of data/comparisons while only publishing some of them, so I was curious how the study had subsequently fared.

Well, there are updates! First, in 2014, a different group tried to replicate the results in a paper called Women Can Keep the Vote: No Evidence That Hormonal Changes During the Menstrual Cycle Impact Political and Religious Beliefs, by Harris and Mickes. This paper recruited a different group of participants, but recreated much of the original paper’s analysis with one major addition: they conducted their survey both before AND after the 2012 election, to compare predicted voting behavior with actual voting behavior. A few findings:

  1. The first paper (Durante et al.) had found that women’s fiscal policy beliefs didn’t change around ovulation, but their social policy beliefs did. The second paper (Harris and Mickes) failed to replicate this finding, and also failed to detect any change in religious beliefs.
  2. In the second paper, married women stated a different preference for Obama depending on fertility (higher preference at low fertility, lower at high fertility), but that difference went away when you looked at how they actually voted. For single women it was actually the opposite: they reported the same preference level for Obama regardless of fertility, but voted differently based on the time of the month.
  3. The original Durante study had taken some heat for how it assessed fertility, with concerns that self-reported fertility was so likely to be inaccurate that it would render any conclusions void. I was interested to see Harris and Mickes clarify that the Durante paper hadn’t accurately described its own fertility assessment, and that both teams had ultimately used the same method. This was supposed to be in the supplementary material, but I couldn’t find a free copy online. It’s an interesting footnote.
  4. A reviewer asked them to combine the pre- and post-election data to see if they could find a fertility/relationship-status interaction effect. When the pre- and post-election data were kept separate, there was no effect; when they were combined, there was.

Point #4 is where things got a little interesting. Harris and Mickes said combining their data was not valid, but Durante et al. hit back and asked “why not?”. There’s an interesting piece of stat/research geekery about the dispute here, but the TL;DR version is that this could be considered either a partial replication or a failure to replicate, depending on your statistical strategy. Unfortunately, this is one of those areas where there’s legitimate concern that a person’s judgment calls are being shaded by their view of the outcome. Since we don’t know what either research team’s original plan was, we don’t know whether either modified their strategy based on the results. Additionally, the “is it valid to combine these data sets” question is a good one, and would be open for discussion even if we were discussing something totally innocuous. The political nature of the topic intensifies the debate, but it didn’t create it.
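To see why the pooling question matters, here’s a toy simulation. Everything below is simulated data; the only thing it shares with the real papers is the modeling choice of testing the fertility-by-relationship interaction per wave vs. on the pooled data. With a borderline effect, doubling n by pooling can be exactly what pushes the interaction below the significance cutoff:

```python
# Simulated illustration of the pooling dispute: the same small interaction
# effect, tested within each survey wave and then on the combined data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

def make_wave(n=250):
    fertility = rng.uniform(0, 1, n)  # conception-risk score
    married = rng.integers(0, 2, n)   # relationship status
    # small, noisy interaction effect baked in (entirely made up)
    pref = 0.5 + 0.1 * fertility * married + rng.normal(0, 0.5, n)
    return pd.DataFrame({"pref": pref, "fertility": fertility, "married": married})

pre, post = make_wave(), make_wave()
for label, df in [("pre", pre), ("post", post), ("pooled", pd.concat([pre, post]))]:
    fit = smf.ols("pref ~ fertility * married", data=df).fit()
    print(f"{label:>6}: interaction p = {fit.pvalues['fertility:married']:.3f}")
# Depending on the draw, the per-wave p-values can miss the cutoff while the
# pooled one clears it, purely because the sample size doubled.
```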

Fast forward now to 2015, when yet another study was published: Menstrual Cycle Phase Does Not Predict Political Conservatism. This study was done using data ALSO from the 2012 election cycle[3], but with a few further changes. The highlights:

  1. This study, by Scott and Pound, addressed some of the “how do you measure fertility when you can’t test” concerns by asking about medical conditions that might influence fertility, screening out women whose self-reporting might be less accurate. They also ranked fertility on a continuum, as opposed to the dichotomous “high” and “low”. This should have made their assessment more accurate.
  2. The other two studies both framed the voting question as Romney vs Obama. Scott and Pound were concerned that this might capture a change in personal preference about Obama and Romney as people rather than a political change. They measured both self-reported political leanings and a “moral foundations” test, combined them into an overall “conservatism” rank, and then tracked that against chances of conception.
  3. They controlled for age, number of children, and other sociological factors.

So overall, what did this show? Well, basically, political philosophy doesn’t vary much no matter where a woman is in her cycle.

The authors have a pretty interesting discussion at the end about the problems with Mechanical Turk (where all three studies recruited their participants in the same few months), the difference between measuring personal preference (Obama vs Romney) and political preference (Republican vs Democrat), and some statistical analysis problems.

So what do I think now?

First off, I’ve realized that getting all ranty when someone brings up women’s hormones affecting things may be counterproductive. Lesson learned.

More seriously though, I find the hypothesis that our preferences for individual candidates may change with hormonal shifts more compelling than the hypothesis that our overall philosophy of religion or government changes with our hormones; the first simply seems more plausible to me. In a tight presidential election, though, this may be hopelessly confounded by the candidates’ actual behavior. It’s pretty well known that single women voted overwhelmingly for Obama, and that Romney had a better chance of capturing the votes of married women. Candidates know this and can play to it, so if a candidate makes a statement aimed at their base, you may see shifts that have nothing to do with voters’ hormones and everything to do with a reaction to real-time statements. This may be a case where research into hypotheticals (i.e., made-up candidate A vs candidate B) could be helpful.

The discussions of fertility measures and statistical analysis were interesting, and a good insight into how much study conclusions can change based on how particular metrics are defined. I was happy to see that both follow-up papers hammered on clear and standard definitions of “fertility”. If that is one of the primary metrics you are assessing, then the utmost care must be taken to assess it accurately, or else the noise can swamp the signal.

Do I still think CNN should have taken the story down? Yes….but just as much as I believe that they should take most sensational new social/psych research stories down. If you follow the research for just two more papers, you see the conclusion go from broad (women change their social, political and religious views and votes based on fertility!) to much narrower (women may in some cases change their preference or voting patterns for particular candidates based on fertility, but their religious and political beliefs do not appear to change regardless). I’ll be interested to see if anyone tries to replicate this with the 2016 election, and if so what the conclusions are.

This concludes your trip down memory lane!
1. Gee, this is sounding familiar
2. This point was really interesting. He pointed out that around elections, pollsters are pretty obsessive about tracking things, and short of a major scandal breaking literally NOTHING causes a rapid 20 point swing. The idea that swings that large were happening regularly and everyone had missed it seemed implausible to him. Statistically of course, the authors were only testing that there was a difference at all, not what it was….but the large effect should possibly have given them pause. It would be like finding that ovulation made women spend twice as much on buying a house. People don’t change THAT dramatically, and if you find that they do you may want to rerun the numbers.
3. Okay, so I can’t be the only one noticing at this point that this means 3 different studies all recruited around 1000 American women not on birth control, not pregnant, not recently pregnant or breastfeeding but of child bearing age, interested in participating in a study on politics, all at the same time and all through Amazon’s Mechanical Turk. Has anyone asked the authors to compare how much of their sample was actually the same women? Does Mechanical Turk have any barriers for this? Do we care? Oh! Yes, turns out this is actually a bit of a problem.