Rewind: Politics and Polling in 1975

I’ve mentioned before on this blog that my grandfather was a statistician who ran his own company producing probability chart paper. For those of you under the age of 40 (50? 60?) who weren’t raised around such things, this was basically graphing software before there were computers. Probability chart paper manipulated the axes of charts and allowed you to graph fancy distributions without actually having to calculate every value by hand. Kind of like a slide rule, but for graphing. Not helping the under 40 crowd with that analogy, I’m sure.

ANYWAY, what I don’t think I’ve mentioned here is that my grandfather also happened to be a stats blogger before computers existed. From 1974 to 1985 he produced a quarterly newsletter teaching people how to use statistics more effectively. I found out a few months ago that my father had actually saved a copy of all of these newsletters, and I’ve made it my goal this summer to read and digitize every issue. While a lot of the newsletters are teaching people how to do hand calculations (shudder), I may be pulling out a few snippets here and there and posting them. Today I was reading the issue from Late Winter (January and February) of 1975, and stumbled across this gem I thought people would appreciate:


I still don’t know how he typed all those equations with a typewriter.

Gee, glad things have improved so much.

Fun possibly exaggerated family legend: my grandfather was a Democrat for most of his life, but he hated Ted Kennedy so much he maintained a Massachusetts address for almost a year after he moved to New Hampshire just so he could continue voting against him.

The Signal and the Noise: Chapter 2

This is a series of posts featuring anecdotes from the book The Signal and the Noise by Nate Silver.  Read the Chapter 1 post here.

Chapter 2 of The Signal and the Noise focuses on why political pundits are so often wrong. When TV channels select for those making crazy predictions, it turns out accuracy rates go way down. You can be bold, or you can be right, but very rarely can you be both.


Basically, networks don’t care about false positives….big predictions that don’t come true. What they do care about is false negatives….possibilities that don’t get raised. They consider the first just understandable bluster, but the second is unforgivable. So next time you wonder why there are so many stupid opinions on TV, remember that’s a feature, not a bug.

Read all The Signal and the Noise posts here, or go back to Chapter 1 here.

Materialism and Post-Materialism

I got an interesting question from the Assistant Village Idiot recently, pointing me to this blog post1 on materialism and post-materialism in various countries by year, wealth of nation, wealth of individual, age and education level of respondent.  It’s an interesting compilation of graphs and research that seem to show us, as a world, moving from a materialistic mindset to a post-materialistic mindset. So what does that mean and what’s my take?

First, some definitions.
Up front the definitions are given as follows:

Materialist: mostly concerned with material needs and physical and economic security
Post-materialist: strive for self-actualization, stress the aesthetic and the intellectual, and cherish belonging and esteem

What interested me is that if you go all the way to the end, you find that the question used to categorize people was actually a little more specific.  They asked people the following question:

“If you had to choose among the following things, which are the two that seem most desirable to you?”

  1. Maintaining order in the nation. (Materialist)
  2. Giving the people more say in important political decisions. (Post-materialist)
  3. Fighting rising prices. (Materialist)
  4. Protecting freedom of speech. (Post-materialist)

People then receive a score between 1 and 3.  If you pick both materialist options (#1 and #3), you get a score of 1. If you pick both post-materialist options (#2 and #4), you get a score of 3. If you pick one of each, you get a score of 2.
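The scoring is simple enough to sketch in a few lines of code. This is my own illustrative sketch (the function name is mine, not anything from the survey literature), but it captures the mapping exactly:

```python
def materialism_score(picks):
    """Score Inglehart's four-item battery.

    picks: the set of the two options a respondent chose, from {1, 2, 3, 4}.
    Options 1 and 3 are materialist; options 2 and 4 are post-materialist.
    Returns 1 (pure materialist), 2 (mixed), or 3 (pure post-materialist).
    """
    assert len(picks) == 2, "respondents pick exactly two options"
    post_materialist_picks = len(picks & {2, 4})  # 0, 1, or 2
    return 1 + post_materialist_picks

print(materialism_score({1, 3}))  # both materialist -> 1
print(materialism_score({1, 4}))  # one of each -> 2
print(materialism_score({2, 4}))  # both post-materialist -> 3
```

Averaging these individual scores across a country's respondents is what produces the country-level numbers discussed next.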

So what are we seeing?

Well, this (from this paper here):


Every country in the world scores (on average) between a 1.4 and a 2.2.  There were also graphs that showed that higher class people moved toward post-materialist mindset, and that the world as a whole has been moving towards it over the years.

I do think it’s worth noting that only about 8 countries score over a 2, with a few more on the line. On the whole, more countries skew materialist than post-materialist on this scale…..though the 8 that are higher are all fairly high on the development index.

So what does this mean?

Well, it seems to be a matter of focus. In my opinion, these questions seem to serve as a proxy for current concerns as much as actual preferences. For example, I did not rank “fighting rising prices” very high, but I also live in a country that has had only slow inflation for most of my life. Essentially, this appears to be a sort of political Maslow’s hierarchy of needs. It’s most likely not that people don’t care about safety or price stability, but rather that they don’t prioritize it if they already have it. Additionally, I would suspect that many people would argue that they like free speech because it maintains order in a country, as opposed to actually desiring free speech over order.

Most of the data comes from one particular researcher, Ronald Inglehart, who focuses on changing values and theorizing what impact that might have on society. Inglehart is not particularly hypothesizing that being post-materialist is bad, but rather that it represents a departure from the way most people have lived for thousands of years.  Because it appears our values slant is set earlier in life, he proposes that those of us growing up in relative safety and security will always bias towards a post-materialist focus. He researches what effect that may have on society.

While some of this may seem obvious, he brings up a couple related outcomes that were fairly subtle. For instance, he points out in this paper that we have seen a reduction in voting stratified by social class, and an increase in voting stratified around social issues.  This suggests that even a very basic level of security like the type provided by our welfare systems allows people more time to focus on their values and ideals.  It varied by country, but in the US there was almost NO difference in materialist/post-materialist values by education class.

This was an interesting point, because I think many people are troubled by how contentious some of our social issue debates (abortion, women’s rights, the environmental movement, etc.) have gotten. The idea that these issues are now more contentious because more people are devoting more thought to them is intriguing. Additionally, it seems that there would be less national agreement on those types of issues in comparison to safety and security issues. If your country is under attack, there is no debate about defending yourself. We may debate the method, but the outcome is widely agreed upon. With social issues that’s not true. What effect this will have on country level stability is unknown.

Interesting stuff to keep an eye on going forward, and keep in mind this election season.

1. Max Roser (2016) – ‘Materialism and Post-Materialism’. Published online at OurWorldInData.org. [Online Resource]

5 Studies About Politics and Bias to Get You Through Election Season

Last week, the Assistant Village Idiot posed a short but profound question on his blog:

Okay, let us consider the possibility that it really is the conservatives who are ignorant, aren’t listening, and reflexively reject other points of view.
How are we going to measure whether that is true?  Something that would stand up when presented to a man from Mars.

I liked this question because it calls for empirical evidence on a topic where both sides believe their superiority is breathtakingly obvious. I gave my answer in the comments there, but I wanted to take a few minutes here to review how I think you would measure this, and pull together some of my favorite studies on politically motivated bias as a general reference.

Before we start on that, I should mention that the first three parts of my answer to the original question covered how you would actually define your target demographic. Defining ahead of time who is a conservative and who is a liberal, and/or what types of conservatives or liberals you care about is critical. As we’ve seen in the primaries this year, both conservatives and liberals can struggle to establish who the “true” members of their parties are. With 42% of voters now refusing to identify with a particular political party, this is no small matter. Additionally, we would have to define what types of people we were looking at. Are we surveying your average Joe or Jane, or are we looking at elected leaders? Journalists? Academics? Activists? It’s entirely plausible that visible subgroups of either party could be less thoughtful/more ignorant/etc than the average party member.

One more thing: there’s a really interesting part in Jonathan Haidt’s book “The Righteous Mind” where he talks about how conservatives are better at explaining liberal arguments than liberals are at explaining conservative ones. As far as I can tell, he did not actually publish this study, so it’s not included here. If you want to read about it though, this is a good summary. Alright, with those caveats, let’s look at some studies!

  1. Overall Recognition of Bias: The Bias Blind Spot: Perceptions of Bias in Self Versus Others This one is not politically specific, but does speak to our overall perception of bias. This series of studies asked people (first college students, then random people at an airport) to rate how biased they were in comparison to others. They were also asked to rate themselves on other negative traits such as procrastination and poor planning. Most people were happy to admit they procrastinate even MORE than the average person, but when it came to bias almost everyone was convinced they were better than average. Even after being told bias would likely compel them to overrate themselves, people didn’t really change their opinion. That’s the problem with figuring out who is more biased. The first thing bias does is blind you to its existence. It would be rather interesting to see if political affiliation influenced these results though. In the meantime, try the Clearer Thinking political bias test to see where you score.
  2. Biased Interpretations of Objective Facts: Motivated Numeracy and Enlightened Self-Government  Okay, I bring this study up a lot. I wrote about it both here and for another site here.  In this study people were presented with one of four math problems, all containing the same numbers and all requiring the same calculations. The only thing that changed in each version of the problem was the words that set up the math. In two versions, it was a neutral question about whether or not a skin cream worked as advertised. In the other two versions, it was a question about gun control. The researchers then recorded whether or not your political beliefs influenced your ability to do math correctly if doing so would give you an answer you didn’t like. The answer was a strong YES. People who were otherwise great at math did terribly on this question if they didn’t like what the math was telling them, and the effect was actually worse the better at math you were. The effect size was equal (on average) for both parties.
  3. Dogmatism and Complex Thinking: Are Conservatives Really More Simple-Minded than Liberals? The Domain Specificity of Complex Thinking I posted about this one back in February when I did a sketchnote of the study structure. This study took a look at dogmatic beliefs and the complexity of the reasoning people used to justify their beliefs. The study was done because the typical “dogmatism scale” used to study political beliefs had almost always shown that conservatives were less thoughtful and more dogmatic about their beliefs than liberals were. The study authors suspected that finding was because the test was specifically designed to test conservatives on things they were, well, more dogmatic about. They ran several tests, and each showed that dogmatism and simplistic thinking were actually topic specific, not party specific. For example, conservatives tended to be dogmatic about religion, while liberals tended to be more dogmatic about the environment. This study actually looked at both everyday people AND transcripts from presidential debates for their rankings. The stronger the belief, the more dogmatic people were.
  4. Asking People Directly: Political Diversity in Social and Personality Psychology While we generally assume people won’t admit to bias, sometimes they actually view it as the rational choice. In this paper, two self-described liberal researchers asked other social psychologists what their political affiliation was and if they would discriminate on that basis. They found that social psychology was quite liberal, though most people within the field actually overestimated this. Additionally, many people reported that they would discriminate against a conservative in hiring practices, wouldn’t give them grants, and would reject their papers on the basis of political affiliation. I think this study is a good subset of the dogmatism one….depending on the topic, some groups may be more than happy to admit they don’t want to hear the other side. Not everyone considers dismissing those with opposing viewpoints a bad thing. I’m picking on liberals here, but given the dogmatism study above, I would be cautious about thinking this is a phenomenon only one party is capable of. Regardless, asking people directly how much they thought they should listen to the other side might yield some intriguing results.
  5. Voting Pattern Changes: Unequal Incomes, Ideology and Gridlock: How Rising Inequality Increases Political Polarization When confronted with the results of that last study, one social psychologist ended up stating that social psychology hadn’t gotten more liberal, but rather that conservatives had gotten more conservative. It’s an interesting charge, and one that should be examined a bit. The paper above took a look at this on the state level, and found that in many states the values of conservative and liberal elected leaders have changed. Basically, in states with high income inequality, liberal voters vote out moderate liberals and nominate more extreme liberals. Then, in the general election, the more moderate candidate tends to be Republican, so the unaffiliated voters go there. This means that fewer liberals get elected, but the ones who do get in are more extreme. The Republicans on the other hand now get a majority, meaning the legislatures as a whole skew more conservative. These conservatives are both ideologically farther apart from the remaining liberals AND less incentivized to work with them. So in this case, a liberal looking at their state government could accurately state “things have shifted to the right” and be completely correct. Likewise, a conservative could look at the liberal members of the legislature and say “they seem further to the left than the guys they replaced” and ALSO be correct. So everyone can be right and end up believing the best course is to double down.
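The arithmetic trap in the motivated numeracy study (#2 above) is worth seeing with actual numbers. Using counts in the spirit of the published skin cream problem (treat these as illustrative, not a quote of the paper): the group with the bigger raw number of improvements can still have the worse improvement rate, and skipping that ratio step is exactly what motivated reasoners did.

```python
def improvement_rate(got_better, got_worse):
    """Share of a group whose rash improved."""
    return got_better / (got_better + got_worse)

# Raw improvement counts favor the treated group (223 > 107)...
treated = improvement_rate(223, 75)    # used the skin cream
untreated = improvement_rate(107, 21)  # did not use it

# ...but the untreated group actually improved at a higher RATE.
print(round(treated, 2), round(untreated, 2))  # 0.75 0.84
```

Whether subjects bothered to compute the rates, rather than eyeballing the big numbers, is what varied with whether they liked the implied answer.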

Overall, I don’t know where this election is going or what the state of the political parties will be after it’s done. However, I do know that our biases probably aren’t helping.

SCOTUS Nomination Timing

After yesterday’s news of the death of Antonin Scalia, the conversation almost immediately turned to whether or not President Obama should or would nominate a new candidate.  There’s obviously a lot being said about this right now by better legal and political minds than mine, but I did start wondering what kind of timing there normally was between Supreme Court nominations and presidential elections.  Thanks to Wikipedia, I was able to find a list of all 160 Supreme Court nominations that have occurred since 1789. I combined this with a list of election dates, and calculated the difference between the day the person was submitted to the Senate and the next presidential election.  I graphed days vs election year, and color coded the dots with the outcome of the nomination.
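The day-count itself is a one-liner with Python's datetime; here's a minimal sketch (the dates below are just a sanity check, not the full dataset):

```python
from datetime import date

def days_to_election(submitted, election):
    """Days between a nominee's submission to the Senate and the next
    presidential election (negative if submitted after that election)."""
    return (election - submitted).days

# Sanity check on the ~310-day reference line: January 1 of an election
# year to that November's election day (Nov 2 in 1880).
print(days_to_election(date(1880, 1, 1), date(1880, 11, 2)))  # 306
```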

A few notes:

  1. I didn’t fully vet the Wikipedia data. If there’s an error in that data, it’s in this chart.
  2. All day calculations for years prior to the 1848 election are approximate. Prior to that, states had a 34 day window prior to the first Wednesday in December to hold their election. I gave them a default date of November 3rd for their year, which could be off in some cases.
  3. There were a few cases in which presidents attempted to nominate someone after the election but before the next inauguration. If they got re-elected, I counted that nomination from the election that would take place 4 years later. If they were leaving office, I gave them a negative number.
  4. 310 days is approximately the number of days between January 1st of a year and the general election, so I put a reference line there.
  5. These nominations include Chief Justice nominations….and those nominees may have been active justices when they were nominated.

With that out of the way, here you go:

Days to election

Rutherford B. Hayes sets the record for getting things in under the wire, as he nominated William Burnham Woods in late December of 1880. He actually also nominated Stanley Matthews the following January, but it didn’t go to a vote. Matthews was renominated and confirmed a few months later by Garfield.

Overall, only about 15% of nominations have ever come this close to the election, and the success rate of those nominations is a little less than half. To compare, nominees submitted before January 1st of the election year have about an 80% all-time success rate. Obviously we haven’t even dealt with this in a while, but it’s interesting to see that historically this was more common than it has been in recent years.

This could get interesting kids!

Are Conservatives Simple Minded?

Not too long ago, there was a bit of buzz going on about a study that suggested that liberals and conservatives can both be simple minded. In the past, most reporting had suggested that when it comes to politics conservatives as a group are less complex thinkers than liberals, so naturally it created a stir. The buzz and the study intrigued me, so I decided to do a bit of a deep dive and sketchnote out what the researchers did.

I got the original study “Are Conservatives Really More Simple-Minded than Liberals? The Domain Specificity of Complex Thinking” by Conway et al., and started to read.  One really important note up front: neither I nor the authors suggest that complex thinking is always a sign of correct thinking or even desirable thinking. If you were out with a new acquaintance who told you their views on, say, cannibalism, were complex, you would probably be squicked out. However, since previous research had suggested that conservatives were almost always less complex than liberals, the authors wanted to check that specifically.  Their basic hypothesis was simple: when it comes to complex thinking, topic matters. They conducted 4 studies to test this hypothesis, so the whole thing got a little crowded….but here’s the overview:

A couple thoughts/notes:

  1. One of the most interesting findings was that complexity dropped as intensity of feeling increased. This causation could go either way….people could feel strongly about things they believe are straightforward, or we could simplify when our feelings are strong. Or it could be both.
  2. It’s interesting to me that they rated both regular college kids and transcripts of the presidential debates.  That seemed like a nice balance.
  3. When they used regular college kids, they only used people who scored at the higher end for conservatism or liberalism. They did not include people who were in the middle.

Overall, I don’t think this result is particularly surprising. It makes sense that people are not entirely complex or entirely simple. Interesting study, and I look forward to more!

Women, Ovulation and Voting in 2016

Welcome to “From the Archives” where I revisit old posts  to see where the science (or my thinking) has gone since I put them up originally.

Back in good old October of 2012, it was an election year and I was getting irritated1. First, I was being bombarded with Elizabeth Warren vs Scott Brown for Senate ads, and then I was confronted with this study:The fluctuating female vote: politics, religion, and the ovulatory cycle (Durante, et al), which purported to show that women’s political and religious beliefs varied wildly around their monthly cycle, but in different ways if they were married or single. For single women they claimed that being fertile caused them to get more liberal and less religious, because they had more liberal attitudes toward sex. For married women, being fertile made them more conservative and religious so they could compensate for their urge to cheat.  The swing was wide too: about 20%.  Of note, the study never actually observed any women changing their vote, but compared two groups of women to find the differences. The study got a lot of attention because CNN initially put it up, then took it back down when people complained.  I wrote two posts about this, one irritated and ranty, and one pointing to some more technical issues I had.

With a new election coming around, I was thinking about this paper and wanted to take a look at where it had gone since then. I knew that Andrew Gelman had ultimately taken shots at the study for reporting an implausibly large effect2 and potentially collecting lots of data/comparisons and only publishing some of them, so I was curious how this study had subsequently fared.

Well, there are updates!  First, in 2014, a different group tried to replicate their results in a paper called  Women Can Keep the Vote: No Evidence That Hormonal Changes During the Menstrual Cycle Impact Political and Religious Beliefs by Harris and Mickes.  This paper recruited a different group, but essentially recreated much of the analysis of the original paper with one major addition. They conducted their survey prior to the 2012 election AND after, to see predicted voting behavior vs actual voting behavior.  A few findings:

  1. The first paper (Durante et al) had found that fiscal policy beliefs didn’t change for women, but social policy beliefs did change around ovulation. The second paper (Harris and Mickes) failed to replicate this finding, and also failed to detect any change in religious beliefs.
  2. In the second paper, married women had a different stated preference for Obama (higher when fertility was low, lower when it was high), but that difference went away when you looked at how they actually voted. For single women, it was actually the opposite. They reported the same preference level for Obama regardless of fertility, but voted differently based on the time of the month.
  3. The original Durante study had taken some heat for how they assessed fertility level in their work. There were concerns that self-reported fertility level was so likely to be inaccurate that it would render any conclusions void. I was interested to see that Harris and Mickes clarified that the Durante paper hadn’t accurately described its fertility assessment method, and that both papers ultimately used the same method. This was supposed to be in the supplementary material, but I couldn’t find a copy of that free online. It’s an interesting footnote.
  4. A reviewer asked them to combine the pre and post election data to see if they could find a fertility/relationship interaction effect. When pre and post election data were kept separate, there was no effect. When they were combined, there was.

Point #4 is where things got a little interesting. The authors of the Harris and Mickes study said combining their data was not valid, but Durante et al hit back and said “why not?”. There’s an interesting piece of stat/research geekery about the dispute here, but the TL;DR version is that this could be considered a partial replication or a failure to replicate, depending on your statistical strategy. Unfortunately this is one of those areas where you get some legitimate concern that a person’s judgment calls are being shaded by their view of the outcome. Since we don’t know what either researcher’s original plan was, we don’t know if either one modified their strategy based on results. Additionally, the “is it valid to combine these data sets” question is a good one, and would be open for discussion even if we were discussing something totally innocuous. The political nature of the discussion intensifies the debate, but it didn’t create it.
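To see why pooling the two waves can flip a verdict even with nothing shady going on: doubling the sample shrinks the standard error by a factor of √2, which can push the same effect across the significance line. A toy calculation with made-up numbers (not anything from the papers):

```python
from math import sqrt

def z_statistic(mean_diff, sd, n_per_group):
    """Two-sample z for a difference in means, equal n and sd per group."""
    standard_error = sd * sqrt(2 / n_per_group)
    return mean_diff / standard_error

# The same underlying effect, analyzed wave-by-wave vs pooled:
print(round(z_statistic(0.3, 1.0, 50), 2))   # one wave alone: 1.5, under 1.96
print(round(z_statistic(0.3, 1.0, 100), 2))  # waves pooled: 2.12, over 1.96
```

Which is exactly why knowing the original analysis plan matters: whether pooling was decided before or after seeing the results changes how much weight the significant pooled result deserves.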

Fast forward now to 2015, when yet another study was published: Menstrual Cycle Phase Does Not Predict Political Conservatism. This study was done using data ALSO from the 2012 election cycle3, but with a few further changes.  The highlights:

  1. This study, by Scott and Pound, addressed some of the “how do you measure fertility when you can’t test” concerns by asking about medical conditions that might influence fertility to screen out women whose self reporting might be less accurate. They also ranked fertility on a continuum as opposed to the dichotomous “high” and “low”. This should have made their assessment more accurate.
  2. The other two studies both asked for voting in terms of Romney vs Obama. Scott and Pound were concerned that this might capture a personal preference change that was more about Obama and Romney as people rather than a political change. They measured both self-reported political leanings and a “moral foundations” test and came up with an overall “conservatism” rank, then tracked that with chances of conception.
  3. They controlled for age, number of children, and other sociological factors.

So overall, what did this show? Well, basically, political philosophy doesn’t vary much no matter where a woman is in her cycle.

The authors have a pretty interesting discussion at the end about the problems with Mechanical Turk (where all three studies recruited their participants in the same few months), the differences of measuring person preference (Obama vs Romney) vs political preference (Republican vs Democrat), and some statistical analysis problems.

So what do I think now?

First off, I’ve realized that getting all ranty when someone brings up women’s hormones affecting things may be counterproductive. Lesson learned.

More seriously though, I find the hypothesis that our preferences for individuals may change with hormonal changes more compelling than the hypothesis that our overall philosophy of religion or government changes with our hormones. The first simply seems more plausible to me. In a tight presidential election though, this may be hopelessly confounded by the candidates’ actual behavior. It’s pretty well known that single women voted overwhelmingly for Obama, and that Romney had a better chance to capture the votes of married women. Candidates know this and can play to it, so if a candidate makes a statement playing to their base, you may see shifts that have nothing to do with the hormones of the voters but are an actual reaction to real time statements. This may be a case where research into the hypothetical (i.e. made-up candidate A vs candidate B) may be helpful.

The discussions on fertility measures and statistical analysis were interesting and a good insight into how much study conclusions can change based on how we define particular metrics.  I was happy to see that both follow-up papers hammered on clear and standard definitions for “fertility”. If that is one of the primary metrics you are assessing, then the utmost care must be taken to assess it accurately, or the noise can drown out any signal.

Do I still think CNN should have taken the story down? Yes….but just as much as I believe that they should take most sensational new social/psych research stories down. If you follow the research for just two more papers, you see the conclusion go from broad (women change their social, political and religious views and votes based on fertility!) to much narrower (women may in some cases change their preference or voting patterns for particular candidates based on fertility, but their religious and political beliefs do not appear to change regardless). I’ll be interested to see if anyone tries to replicate this with the 2016 election, and if so what the conclusions are.

This concludes your trip down memory lane!
1. Gee, this is sounding familiar
2. This point was really interesting. He pointed out that around elections, pollsters are pretty obsessive about tracking things, and short of a major scandal breaking literally NOTHING causes a rapid 20 point swing. The idea that swings that large were happening regularly and everyone had missed it seemed implausible to him. Statistically of course, the authors were only testing that there was a difference at all, not what it was….but the large effect should possibly have given them pause. It would be like finding that ovulation made women spend twice as much on buying a house. People don’t change THAT dramatically, and if you find that they do you may want to rerun the numbers.
3. Okay, so I can’t be the only one noticing at this point that this means 3 different studies all recruited around 1000 American women not on birth control, not pregnant, not recently pregnant or breastfeeding but of child bearing age, interested in participating in a study on politics, all at the same time and all through Amazon’s Mechanical Turk. Has anyone asked the authors to compare how much of their sample was actually the same women? Does Mechanical Turk have any barriers for this? Do we care? Oh! Yes, turns out this is actually a bit of a problem.

Immigration, Poverty and Gumballs

A long time reader (hi David!) forwarded this video and asked what I thought of it:

It’s pretty short, but if you don’t feel like watching it, essentially it’s a video put out by a group attempting to address whether or not immigration to the US can reduce global poverty.  He uses gumballs to represent the population of people in the world living in poverty (one gumball = one million people), and ultimately concludes that immigration will not solve global poverty.

Now, I’m not the most educated of people when it comes to immigration issues, but I was intrigued by his math based demonstration. At one point he even has gumballs fall all over the floor, which drives home exactly how screwed we are when it comes to fixing global poverty. But do I buy it? Are the underlying facts correct? Is this a good video? Well, let’s take a look:

First, some context: Context is frequently missing on Facebook, and it can be useful to know the background of what you’re seeing when there’s a video like this.  I did some digging, so here goes:  The man in the video is Roy Beck, who founded a group called Numbers USA, website here. Their tag line is “for lower immigration levels”, and unsurprisingly, that’s what they want.  The video, and presumably the numbers in it, are from 2010.  I thought the name NumbersUSA sounded ambitious, but I did find they have an “Accuracy Guarantee” on their FAQ page promising they would take down any inaccurate numbers or information. I don’t know if they do it (and they have not responded to my complaint yet), but that was cool to see.

Now, the argument:  To start the video, Mr Beck lays out his argument by quantifying the number of desperately poor people in the world. He clarifies that “desperately poor” is defined by the World Bank standard of “making less than two dollars a day”. He begins to name the number of desperately poor people in various regions of the world, and stacks gumballs to represent all of these regions. The number is heartbreakingly high and it worsens as he continues….but when his conclusion came to about half the globe (3 billion people or 8 larger containers of gumballs) living at that level, I was skeptical. I’ve done some reading on extreme poverty, and I didn’t think it was that high. Well, it turns out it isn’t. It’s actually about 12.7% or 890 million. That’s only about 30% of the number he presents….maybe about 3 containers of gumballs instead of 8.

Given that the video was older (and that extreme world poverty has been declining since the 1980s), I was trying to figure out what happened, so I went to this nifty visualization tool the World Bank provides. You can set the poverty level (less than $1.90/day or less than $3.10/day) and you can filter by country or region.  Not one of the numbers given is accurate. They haven’t even been accurate recently, as far as I can tell. For example, in 2010, China had 150 million people living on under $2/day.  In the video, he says 480 million, which is where China was around the year 2000.  For India, he uses 890 million, a number I can’t find ever published by the World Bank.  The highest number they list for India at all is 430 million. The best I can conclude is that the numbers he shows here are actually those living under the $3.10/day level, which come closer. Now $3.10/day is not rich by any means, but it’s not what he asserted either. He emphasizes the “less than 2 dollars a day” point multiple times.  At that point I figured I wasn’t going to check the rest of the numbers….if the baseline isn’t accurate, anything he adds to it won’t be either.

[Edit: It’s been pointed out to me that at the 2:04 mark he changes from using the $2/day standard to “poorer than Mexico”, so it’s possible the numbers after that timepoint actually work better than I thought they would. It’s hard to tell without him giving a firm number. For reference, it looks like in 2016 the average income in Mexico is $12,800/year.]

It was at this point I decided to email the accuracy check on his website to ask for clarification, and I will update if I hear back. I am truly interested in what happened here, because I did find a few websites that gave numbers similar to his….but they all cite the World Bank, and all the links are now broken. The World Bank itself does not appear to currently stand by those statistics.

So did this matter? Well, yes and no. His basic argument is that we have 5.6 billion poor people, a number that grows by 80 million each year. Subtract out 1 million immigrants to the US each year, and you’re not making a difference.  Even if those numbers are wildly different from what’s presented, the fundamental “1 million immigrants doesn’t make much of a dent in world poverty” point probably stands.
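Taking his framing at face value, the subtraction is easy to sketch. These are his numbers, which as noted above may themselves be off, but the conclusion is robust to even large errors in them:

```python
# The video's core subtraction, using its own (disputed) figures:
# annual growth in the poor population vs. annual US immigration.
annual_growth = 80_000_000       # claimed yearly growth in poor population
annual_us_immigration = 1_000_000

dent = annual_us_immigration / annual_growth
print(f"Immigration offsets {dent:.2%} of a single year's growth")
# Even if the growth figure were off by a factor of 10, immigration
# would still offset well under a fifth of one year's growth.
```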

But is that the question?

On the one hand, I’ll grant that it’s possible “some people say that mass immigration into the United States can help reduce world poverty”, as he says to open his video. I do not engage much in immigration debates, but I wasn’t entirely sure that “reduce world poverty” was the primary argument. NumbersUSA puts out quite a few videos on many different topics, so it’s interesting that this one appears to be their most viral.  It currently has almost 3 million views, and most of their other videos don’t have even 1% of that. Given that “solve world poverty” is not one of the stated goals or arguments of the immigration organizations I could find, why was this one so widely shared? I did find some evidence that people argue immigrants sending money back to their home countries helps poverty, but that is not really addressed in this video. So why did so many people want to debunk an argument that is not the primary one being made?

My guess is the pretty demonstration. I covered in this post on graphs and technical pictures how these sorts of visual additions seem to make us find an argument more persuasive than we otherwise would. In this case, a well-executed demonstration of magnitude and subtraction seems to be trumping most people’s realization that the video isn’t arguing a point that is commonly made.

Now if the numbers aren’t accurate, that’s even more irritating (his demonstration would not have looked quite as good with 3 containers at the start instead of 8), but I’m not sure that’s really the point. These videos work in two ways: by making an argument that will irritate people who disagree with you, and by convincing those who agree with you that you’ve answered the challenges you’ve gotten. It’s a classic example of a straw man…setting up an argument you can knock down easily. My suspicion is that when you do it with math and a colorful demonstration, it convinces people even more. That’s not so much the fault of the video maker as of the consumer.  While it’s possible Mr. Beck will reply to me and clarify his numbers with a better source, it looks unlikely. Caveat emptor.

Got a question/meme/thing you want explained or investigated? I’m on it! Submit them here.

Elections and small sample sizes

XKCD hits the nail on the head yet again with a great commentary on election year “no one has ever _____ and won the White House” musings.

These drive me nuts because obviously we have an incredibly small sample size.  Our country may have been around for quite some time now, but we’ve only had 44 presidents.  Think about how few people that really is.

Additionally, states change, demographics change, and the electoral college system is ridiculous.  This gives rise to all sorts of statistical “anomalies” that really are quite probable when you think of how few events we’re looking at.
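To put a rough number on how cheap these factoids are: with only 44 data points, almost any sufficiently rare trait will have a perfect “no one has ever” record by chance alone. Here’s a back-of-the-envelope calculation, where the 5% trait frequency and the 20 hypothetical traits are made-up illustrative numbers, not historical data:

```python
# Why "no one has ever ___ and won" claims are easy to manufacture:
# with a small sample, rare traits often have a spotless record.
n_presidents = 44
p_trait = 0.05   # assumed frequency of some arbitrary trait

# Chance that not a single one of 44 presidents has the trait
p_never = (1 - p_trait) ** n_presidents

# Scan 20 unrelated traits and you expect a couple of
# "no president has ever..." coincidences for free
expected_factoids = 20 * p_never

print(f"P(no president has a given 5% trait): {p_never:.1%}")
print(f"Expected factoids among 20 traits:    {expected_factoids:.1f}")
```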

The sports world does this too, baseball probably more than the rest of them.  While watching the postseason this year with my long-suffering Orioles fan husband, we got quite a kick out of pointing out how specific some of the stats they brought up were.  “He’s 1 for 3 when facing Sabathia during the post season over the last 3 years”.  Three at bats over three postseasons and we’re supposed to draw some sort of conclusion from this?  Sigh.
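A quick simulation makes the point: even a perfectly ordinary hitter will look either hitless or red-hot in most 3-at-bat samples. The .250 batting average and the 3 at-bats below are illustrative assumptions, not any real player’s numbers:

```python
import random

# Simulate a .250 hitter's results over just 3 at-bats, many times,
# and count how often the tiny sample looks like a dramatic story.
random.seed(0)
true_avg = 0.250
trials = 100_000

extreme = 0
for _ in range(trials):
    hits = sum(random.random() < true_avg for _ in range(3))
    # "hitless against him" or "batting .667 against him"
    if hits == 0 or hits >= 2:
        extreme += 1

print(f"{extreme / trials:.0%} of 3-at-bat samples look 'extreme'")
# The exact probability is 0.75^3 + P(2+ hits) = about 58%
```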

Anyway, here’s the comic.  Happy Thursday.