Presentation: How They Reel You In (Part 2)

Note: This is part 2 in a series  for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to part 1 here.

In Part 1, we covered things that fake you out by presenting themselves as real when they are just pretty much made up. While those are some of the most obnoxious issues on the internet, they are not really the most insidious issues. For the remaining parts of this series, we’re going to be covering things that attempt to twist, overstate or otherwise misrepresent themselves to sound more convincing than they really are. For those of us who may be skimming through Facebook/Twitter/whatever new thing the kids are using these days, it’s good to start at the beginning….so I’ve called this section:

Headlines and First Impressions

Okay, so what’s the problem here?

NPR probably put it best with this article from last year…Why Doesn’t America Read Anymore?

Did you check it out? Go ahead. I’ll wait.

Back yet?

Okay, so if you clicked, you see the issue.  If you’re too lazy to click, well, that’s part of my point. That was a fake article….but not in the “made up untrue” sense we covered earlier. See, the good folks over at NPR started getting the impression that people were just reading the headlines and then freaking out and commenting before they read further. They decided to test this (on April Fool’s Day, no less), and posted a story with the headline above paired with an article that said “Happy April Fools Day!” and explained they were performing a test to see how many people read even a few words of the article before reacting. The reaction is chronicled here, but basically they got thousands of comments about what people presumed the article said. Now clearly some portion of those people were trying to be funny, but some of the screenshots taken suggest many really were reacting to just the headline. Interestingly, the worst effect was not on the NPR website (where you’d have to scroll through the article to get to the comment section), but rather on my arch-nemesis Facebook.

Okay, so what kind of things should we be looking out for?

If nothing else, the above prank should convince you to always make sure there’s actually an article attached to whatever headline you want to comment on.  Once that’s out of the way, you should be looking for more subtle bias.  Slate Star Codex recently had a really good side-by-side of a few different headlines that all came out of the same study:

Same results, same press release, four different ways of framing the issue. Unsurprisingly, it’s these subtle issues that are actually bigger problems in real life. The New Yorker did a great piece on the power of headlines to frame perceptions, and the results were a little unnerving. They focused on a recent study that paired articles with different headlines to see how people’s memories and interpretations of information were affected. Some of what they found:

  • Inaccurate headlines skewed what people remembered the article said, but their inferences from the information stayed sound
  • More subtle framing bias changed both people’s memory and their interpretation of information
  •  People have trouble separating visuals from headlines. If the headline talked about a crime perpetrator but the picture was of the victim, people felt less sympathy for the victim later

Yikes.

So why do we fall for this stuff?

Well, because it was designed to make us fall for it. One of the more interesting articles on the topic is this one from Nieman Lab, and it had a quote I loved:

That new environment is, for instance, the main reason headlines have become so much more emotional and evocative. In a print newspaper, a headline is surrounded by lots of other contextual clues that tell you about the story: Is there a photo with it? Do I recognize the byline? Is it blazed across the top of Page 1 or buried on C22? Online, headlines often pop up alone and disembodied amid an endless stream of other content. They have a bigger job to do.

Basically, news sites that can get you to click will thrive, and those that can’t, won’t. More than ever, headlines are essentially mini commercials…and who better than the advertising industry to take advantage of all of our cognitive biases?

So what can we do about it?

When it comes to headlines, especially ones that make claims about science or data, I think it’s important to think of this as a group of concentric circles.  As you move outward, the claims get bolder, brasher, and all caveats get dropped:

Headlines

It’s also important to remind yourself that it’s frequently editors writing headlines, not journalists. If you view headlines as a commercial and not a piece of information, it may help you spot inconsistencies between the way the headline presents the information and the way the article actually reads. We haven’t progressed far enough in the research to know how much we can negate the impact of headlines by being more aware of them, but it seems reasonable that being a little paranoid couldn’t hurt.

For some specialized and frequently misrepresented fields, it’s also a good idea to read up on what frustrates the scientists who work in them.  I’ve never looked at headlines about brain function or neuroscience the same way after I watched Molly Crockett’s TED Talk:

On the plus side, the internet makes it easier than ever for people to complain about headlines and actually get them fixed. For example, last year a Washington Post op-ed got published with the headline “One Way to End Violence Against Women? Stop Taking Lovers and Get Married” and a sub-headline that read “the data shows that #yesallwomen would be safer married to their baby daddies”. People were upset, and many took umbrage at the headline for giving the impression that violence against women was women’s fault. Even the author of the piece jumped in and disavowed the headlines, saying they were disappointed in the tone. The paper ended up changing the headline after admitting it was causing a distraction.  Now whatever you think of this particular story, it’s a good sign that this is a type of bias you can actually do something about. It’s a really good example of a place where “see something, say something” might make a difference.

Interestingly, these headline changes are pretty easy to track by checking out the URL at the top of the page:

Headlinechange

This almost never gets changed, and sometimes shows some sneaky/unannounced updates. If you’re looking to make a real difference, headline activism may be a good place to start.
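Incidentally, that URL check is easy to automate. Here’s a minimal sketch: pull the slug off the end of the URL and see which of its words no longer appear in the headline currently shown. The URL and headline below are invented for illustration, not the Post’s real ones:

```python
import re
from urllib.parse import urlparse

def slug_words(url):
    """Words from the last path segment of a URL (the slug)."""
    slug = urlparse(url).path.rstrip("/").rsplit("/", 1)[-1]
    return {w for w in re.split(r"[-_]", slug.lower()) if w.isalpha()}

def headline_words(headline):
    """Lowercased words from a visible headline."""
    return set(re.findall(r"[a-z']+", headline.lower()))

# Hypothetical example: the slug preserves the original headline,
# while the headline shown on the page has since been changed.
url = "https://example.com/opinions/one-way-to-end-violence-against-women/"
shown = "The best way to end violence against women? Stop taking lovers and get married."

leftover = slug_words(url) - headline_words(shown)
print(leftover)  # words from the original headline that no longer appear
```

It’s a blunt instrument (slugs get truncated, stop words overlap), but leftover words are a decent hint that the headline was quietly rewritten.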

We’ll dive more into the pictures in Part 3.

Presentation: How they Reel You In (Part 1)

Note: This is part 1 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here.

When I first sat down to write a talk for high school science students about how to read science on the internet, one of my priorities was to immediately establish for the kids why this was different from anything else they were learning. The school I was going to had won awards for its science teaching, and these students were no slackers. I was concerned that given their (probably justifiable) confidence in their skills, many of these kids would assume that they would actually be great at interpreting random articles they stumbled across online. This actually made me think of a totally different study on a completely different topic: sexual harassment. In that study, college-aged women were asked what they would do if an interviewer sexually harassed them. All said they would confront/walk out/report.  However, when the experimenters actually put women in fake job interview scenarios and had an interviewer harass them, none of them did any of those things. The general conclusion is that we have the right answers when we know what we’re being tested on, but in real life we often don’t know what is even being asked of us.

I didn’t end up using the harassment study, but I did feel comfortable putting that framing on my introduction. When you read science in a classroom, the teacher is going to be clear what you’re supposed to get out of the lesson. An astute student can be relatively confident what they’ll be tested on. I’ve had plenty of tests where I changed or re-evaluated my answer because I suspected the teacher was doing something a bit different than the initial reading would suggest. On the internet however, no one gives you even the briefest of heads up as to what material is going to be covered. When you encounter an interesting science story or number, you are almost always going to be thinking about something that is not at all related. Because of this, the first impression of a story is important….it may be all you get.  Now with that in mind, we have to realize that information can be read and absorbed in a few seconds, so an instantaneous skepticism is key.  The first part of this is simple, but critical: make sure whatever you’re seeing is actually true. Like, at all.  That’s why Part 1 here is called

False Information, Deceptive Memes and Other Fake Stuff

Okay, so what’s the problem here?

The problem is, some stuff on the internet is fake. I know, total shocker. But seriously, it’s actually pretty stunning how often people take entirely made-up news stories, glance at them, and end up believing they’re real. There’s a whole website called Literally Unbelievable that catalogs people’s reactions to fake news stories from well known satire sites.   Like this one:

But the issues don’t end with just satire; sometimes people are making things up just for the heck of it, like the guy who spent a couple weeks putting up fake facts with pictures behind them to see who would call him out on it. This was my favorite:

Sometimes people make things up to push a political agenda, as Abraham Lincoln warned us:

Okay, so what kind of things should we be looking out for?

Well, anything with a picture on it designed to be catchy should be immediately suspect, and that goes double if it’s political or has an agenda. Also, anything that falls into an area you feel pretty confident about should also be scrutinized. It turns out people who feel confident in their expertise on a topic can be more likely to believe they know the definition of made-up terms.

Why do we fall for this stuff?

Well, a couple of reasons. Like I said in the intro, sometimes we just flat out don’t have our skepticism engaged. If you’re scrolling through Facebook thinking about your ex, or your friends, or the awkward political commentary your cousin is making, you might be less likely to even consciously register a meme about lightning and cell phones. You may find yourself believing it later because you really never thought it through to begin with.

Conversely, confirmation bias is a powerful force, and it frequently leads us to apply less scrutiny to things we’d like to believe.  That’s why political falsehoods are so easy to pass along….people believe that they have some “truthiness” to them (as Stephen Colbert would say) or that they were “fake but accurate” (as the New York Times would say).

Compounding both of these problems is our own perception of how smart we are. Earlier I linked to this study that showed that people who think they know a lot about a topic can be even more susceptible to accepting fake terms. And lest we think this is just for people who only think they’re smart, I would point you to the Neil deGrasse Tyson/George W. Bush quote controversy. Neil deGrasse Tyson is possibly the most famous scientist in the US today, and he was caught quoting George W. Bush inaccurately. It took some rather dogged determination by an opposing journalist to get him to admit that he got the quote and its context wrong. Now if Neil deGrasse Tyson can get tripped up by wrong information, who are we to claim to be better?

So what can we do about it?

 

It won’t help you every time, but a good first step is simply to Google the information. If you can’t verify it, don’t post it. Some items are disputed (we’ll get to that later), and their interpretation may be questioned, but completely fake stories should have a pretty good Google history to let you know that. For satirical websites, even taking a look at other stories they post can tip you off.  The site fakenewswatch.com has a good list to get you started. Some hoax sites are really trying to trick you….for example MSNBC.com is a real news site, and MSNBC.co is not. For general viral stories snopes.com can point you in the right direction.  Again, this won’t help much if the story is disputed, but it should be enough for completely made-up stuff.  In future posts, we’ll get into the nuances, but for now, remember that sometimes there is no nuance. Sometimes things are just fake.

 

Want more? Click here for Part 2.

5 Ways to Statistically Analyze Your Fantasy Football League

For the past few years I’ve been playing in a fantasy football league with a few folks I grew up with. One of the highlights of the league is the weekly recap/power rankings sent out by our league commissioner. Recently I had to fill in for him, and it got me thinking about how to use various statistical analysis methods to figure out who the best team was overall and who was doing better as the season progressed. I figured since I put the work in, I might as well put a post together going over what I did.  Also, I’m completely tanking this year, so this gives me something a little more fun to focus on1. Our league is a ten-team, head-to-head matchup, PPR league, for what it’s worth.

  1. Mean Comparison Using Tukey’s Method: The first and most obvious question I had when looking at the numbers was who was really better than whom, to a statistically significant level? ESPN provides a good running total of points scored by each team, but I was curious at what level those differences were statistically significant. The Tukey method lets you calculate a number that shows how far apart average scores have to be before the difference is significant. I had Minitab help me out and got that 36 points was the critical difference in our league at this point in the season. It also gave me this nifty table, with my score and feelings in red:
    FFtukey

    So really there are three distinct groups, each denoted with a different letter. Kyle is showing a bit of spunk here though and rising a bit above the rest, while Heidi is drifting towards the bottom.  I also did this analysis using the scores of each person’s opponents, and despite the perception that some people have gotten lucky, none of the means were significantly different when it came to opponent score.
  2. Box Plot or Box and Whisker Diagram: So if the mean comparison gives us a broad picture of the entire season’s performance, with many teams clumped together, how do we tease out some further detail? I decided to use a box plot, because using quartiles and medians rather than averages helps account for fluky games. As anyone who has ever played fantasy sports knows, even the worst team can have a player explode one week…or have normally good players tank completely. Showing the median performance is more informative of how the player is doing week to week, and how likely they are to outscore opponents. Since I did this at week 11, the box represents about 6 games, and each tail represents about 3.
    FFBoxplot

    The worst part about this graph is it called my best game an outlier.  Why you gotta be so negative there box plot? What did I ever do to you?

    This shows a few interesting things, namely that three players in our league (Ryan, David and JA) have nearly the same median but are having wildly different seasons. It also is one of the clearest ways of putting all the data on one graph. I tried a histogram, and boy did that get messy with 10 different people to keep track of.

  3. Regression Lines/Line of Best Fit: Okay, so now that we have a good picture of the season, let’s see some trends! Because of course fantasy football, like all sports, cares a lot more about where you end than where you start. Players get injured, people have weak benches, people come back from suspensions, etc., etc. By fitting a regression line we can see where everyone started and where they’re headed:
    FFregression

    Now this shows us some interesting patterns. I checked the significance levels on these, and 7 of them actually had significant patterns (my scores and David’s and Jonathan’s were not significant at the .05 level). This is how I ultimately determined the rankings I sent out. Amusingly, one of our most all-over-the-place players didn’t actually get a linear relationship as the best fitting model. I ignored that, but it made me laugh.
  4. Games over League Median (GOLM): This is one I’m working on just for giggles. Basically it’s the number of games each player has played where they scored over the median number of points our league scores. For example, out of the 110 individual performances so far in our league this year, the median score is 133.2 I then calculated the percentage of games each team scored above that number. I was hoping to figure out something a little more accurate than just wins and losses, because of course it doesn’t matter what the league scores…only what your opponent scores. Here’s what I got:
    FFGOLM

    I added a line that I will dub “the line of fairness”. Basically, this is where everyone should be based on their scores. If you’re above the line, you’ve actually had a lucky season, with more wins than scores over the median. If you’re below the line, you’ve had an unlucky season. On the line is a perfectly fair season. The further away from the line, the more out of range your season has been.
  5. Normal Distribution Comparisons: This one isn’t for the overall league, but does give you a good picture of your weekly competition. I wasn’t actually sure I could do this one because I wasn’t sure my data was normally distributed, but Ryan-Joiner assured me that was an okay assumption to make in this case. Basically, I wanted to see what my chances were of beating my opponent (Ryan) this week. I wasn’t expecting much, and I didn’t get it:
    FFNormal

    I did the math to figure out my exact chances, but gave up when it got too depressing. Let’s just say my chances are rather, um, slim. Svelte even. Sigh.
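That last calculation is easy to reproduce with Python’s statistics.NormalDist, if you’re willing to make the same normality assumption I did: the difference of two independent normal scores is itself normal, and the win probability is just the chance that difference lands above zero. The means and standard deviations below are made up for illustration, not our league’s actual numbers:

```python
import math
from statistics import NormalDist

# Hypothetical weekly scoring averages (PPR points) and standard deviations.
my_mean, my_sd = 120.0, 20.0      # my sad, sad team
opp_mean, opp_sd = 150.0, 25.0    # this week's opponent

# If both weekly scores are roughly normal and independent, their difference
# (mine minus my opponent's) is normal with these parameters:
diff_mean = my_mean - opp_mean
diff_sd = math.hypot(my_sd, opp_sd)   # sqrt(20**2 + 25**2)

# Win probability = P(difference > 0)
win_prob = 1 - NormalDist(diff_mean, diff_sd).cdf(0)
print(f"Chance of winning this week: {win_prob:.1%}")
```

With these made-up numbers the chance comes out well under 50%, which matches the general mood of the exercise.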

So that’s that! Got any interesting ways of looking at small sample sizes like this? Let me know! I’ll need something to keep me entertained during the games tomorrow, as I certainly won’t be enjoying watching my team.

1. I renamed my team the Sad Pandas. That’s how bad it is. I grabbed Peyton with my first pick and everything has been downhill from there.
2. I also checked the medians for each week, then took the median of that to see if there was a significant difference on a week to week basis. That number was 135, so I didn’t worry about it.

Intro to Internet Science: A Prelude

For the past couple of years I’ve gotten the chance to give a talk to my brother’s high school class about how to read science you encounter “in the wild” and what types of things you should be looking out for.

The basic premise of my talk is that no matter how far you go in life in one particular subject, you will always be bombarded with science stories on Facebook and other social media that fall outside your expertise.  Given that, it’s pretty critical that you have a general schema to sort through popular science reporting, and develop an idea of what to look out for.

Basically, I tell the kids that every time they see something scientific mentioned on Facebook, they should immediately look like this:

download

Yes, I show this picture. Obviously.

The issue, as far as I can tell, is that most of us get used to two different tracks of scientific thought: one that we encounter when we’re expecting it, and one we encounter when we’re not. When you’re sitting around waiting for a teacher to quiz you, it’s relatively easy to remember to read carefully and think critically.  When you’re scrolling through Facebook or Twitter or whatever else is popular these days, it’s not so easy.

I have a lot of fun with this talk, and I’ve started to develop a general 10 point list of things the kids should keep in mind.  For my next series of blog posts, I’m essentially going to blog out the different points of my talk.  This will hopefully serve the dual purpose of helping me improve my points and anecdotes, and helping me go through my archives and start categorizing my old posts around these topics.

Ready? Great! Then here’s what you’re in for:

There are four main topics, each with a few subcategories. They are:

Presentation: How They Reel You In

  1. False Information, Memes, and Other Fake Stuff
  2. Headlines and First Impressions

Pictures: Trying to Distract You

  1. Narrative Images: Framing the Story
  2. Graphs: Changing the View

Proof: Using Numbers to Deceive

  1. The Anecdote Effect
  2. Experts and Balances
  3. Crazy Stats Tricks: False Positives, Failure to Replicate, Correlations, Etc

People: Our Own Worst Enemy

  1. Biased Interpretations and Motivated Reasoning
  2. Surveys and Self Reporting
  3. Acknowledging our Limitations

I’ll be attempting to put up one per week, but given life and all, we’ll see how it goes.

Want to go straight to Part 1? Click here.  Want the wrap up with all the links? Click here.

Millenials and Parenting

Recently Time Magazine ran an article called “Help! My Parents are Millennials!” that caught my interest.  Since I am both a parent and (possibly) a millennial, I figured I’d take a look to see what exactly they were presuming my child would complain about.

I was particularly interested in how they were defining “millennial”, since Amanda Hess pointed out over a year ago that many articles written about millennials actually end up interviewing Gen Xers and just hoping no one notices. Time’s article started off doing exactly that, but then they quickly clarified that they define “millennial” as those born from the late 70s to the late 90s.  This is actually about a seven year shift from what most other groups consider millennials, with the most commonly cited years of birth being 1982 to 2004 or so. Interestingly, only Baby Boomers get their own official generational definition1 endorsed by the Census Bureau: birth years 1946 to 1964.

I bring all this up because the Time article included some really interesting polling data that purports to show parental attitude differences. Those results are here. Now it looks like they polled 2,000 parents, representing 3 generations with kids under 18.  I DESPERATELY want to know what the number of respondents for each group was. See, if you do the math with the years I gave above, the only Boomers who still have kids under the age of 18 are those who had them after the age of 33….and that’s for the very youngest year of Boomers. While of course it’s not impossible to have or adopt children over that age, it does mean the available pool of Boomers that meet the criteria is going to be smaller and skewed toward those who had children later. Additionally, if you look at the Gen X range, you realize that Time cut this down to just 10 years because of how early they started the Millennials. I don’t know for sure, but I’d guess the 2,000 was heavily skewed towards Millennials.  Of course, since we couldn’t even get the numbers, we can’t possibly know which of the attitude differences they looked at were statistically significant. This annoys me, but is pretty common.

What irritated me the most though, is the idea that you can really compare parenting attitudes for parents who are in entirely different phases of parenting.  For example, there was a large discrepancy in Millennial vs Boomer parents who worried that other people judge what their kids eat. Well, yeah. Millennials are parenting small children right now, and people do judge parents more for what a 5 year old eats than a 16 year old.

Additionally, there were some other oddities in the reporting that made me think the questions were either asked differently than reported, the respondents were unclear on what they should answer, or the sample size was small.  For example, equal numbers of Boomers and Millennials said they were stay-at-home parents, which made me wonder how the question was phrased. Are 22% of Boomers really still staying home with their teenagers? My guess is some of them answered based on what they had done.  Another oddity was the number who said they’d never shared a picture of their child on social media. I would have been more interested in the results if they’d sorted this out by those who actually had a social media account. I also think this phrasing could be deceptive. I know a few Boomers who would probably say they don’t share pictures of their kids, but will post family photos. YMMV.

Anyway, I think it’s always good to keep in mind how exactly generations are being defined, and what the implications of these definitions are. Attitude surveys among generations will always be tough to do in real time, as much of what you’ll end up testing is really just some variation of “people in their 50s think differently from those in their 20s”.

1. Typical

Blog Updates

As many of you know, I used to run a blog called Bad Data Bad! and this morning I figured out how to import all of those old blog posts into this blog. I’ll be going back and tinkering a bit…tagging the posts, possibly removing some if I don’t like them anymore, etc., and I may be mucking about with other parts of the site as well.

Stay tuned.

Guns and Graphs Part 2

In the comment section on my last post about guns and graphs there was some interesting discussion about some of the data.  SJ had some good data to toss in, and DH made a suggestion that a graph of gun murders vs non-gun murders might be interesting.  I thought that sounded pretty interesting as well, so I gave it a whirl:

Gun graph 4

Apologies that not every state abbreviation is clear, but at least you get the outliers. Please note that the axes cover different ranges (the graph was unreadable when I made them the same), so Nevada is really just a 50/50 split, whereas Louisiana is actually pretty lopsided in favor of guns.  That being said, the correlation here is running at about .6, so it seems fair to say that states that have more gun homicides have more homicides in general. Now to be fair, this chart may underestimate non-gun murders, as those are likely a little harder to count than gun-related murders. I don’t have hard data on it, but I’m somewhat inclined to believe that a shooting is easier to classify than a fall off a tall building.  Anyway, I pulled the source data from here.
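If you want to run this kind of check yourself, Pearson’s r falls straight out of its definition in a few lines. The per-state (gun, non-gun) homicide rates below are invented stand-ins for illustration, not the real CDC figures:

```python
import math
from statistics import fmean

def pearson(x, y):
    """Pearson's correlation coefficient, straight from the definition."""
    mx, my = fmean(x), fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))

# Invented per-100,000 homicide rates for ten hypothetical states.
gun_rate     = [9.4, 1.1, 3.5, 5.2, 0.8, 6.1, 2.9, 4.4, 7.3, 2.0]
non_gun_rate = [3.1, 1.0, 1.8, 2.5, 1.2, 2.2, 1.4, 2.6, 2.9, 1.1]

r = pearson(gun_rate, non_gun_rate)
print(f"r = {r:.2f}")
```

Swap in the real per-100,000 rates from the source data and you should land near the .6 figure above.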

While I was looking at that data, I thought it would be interesting to see if the percent of the population that owned guns was correlated with the number of gun murders:
Gun graph 5

Aaaaaaaaand…there’s no real correlation there. It’s interesting to note that Hawaii and Wyoming are dramatically different in ownership percentage, but not gun homicide rate. Louisiana and Vermont OTOH, have nearly identical ownership rates and completely different gun homicide rates.

Then, just for giggles I decided to go back to the original gun law ranking I was using, and see if gun ownership percentage followed that trend:

Gun graph 6

There does appear to be a trend there, but as the Assistant Village Idiot pointed out after the last post, it could simply be that places with lower gun ownership have an easier time passing these laws.

 

Bitterness and Psychopathy

I’m having some insomnia problems at the moment, so it was about 4am today when I turned on my coffee maker and sat down to do some internet perusing. I was just taking my first sip, when I stumbled upon this article titled “People Who Take Their Coffee Black Have Psychopathic Tendencies“.

Oh. Huh.

As a fairly dedicated black coffee drinker, I had to take a look.  The article references a study here that tested the hypothesis that an affinity for bitter flavors might be associated with the “Dark Tetrad” traits: Machiavellianism, psychopathy, everyday sadism and narcissism.

I read through the study1, and I thought it was a good time to talk about effect sizes. First, let’s cover a few basics:

  1. This was a study done through Mechanical Turk
  2. People took personality tests and rated how much they liked different foods, the researchers ran some regressions and reported the correlations  for these results
  3. They did some other interesting stuff to make sure people really liked the bitter versions of the foods they were rating and to make sure their results were valid

Alright, so what did they find? Well, there was a correlation between preference for bitter tastes and some of the “Dark Tetrad” scores, especially everyday sadism2. The researchers pretty much did what they wanted to do, and they found statistically significant correlations.

So what’s my issue?

My issue is we need to talk about effect sizes, especially as this research gets repeated. The correlation between mean bitter taste preference and the “Dark Tetrad” scores over the two studies ranged from .14 to .20.  Now that’s a significant finding in terms of the hypothesis, but if you’re trying to figure out if a black coffee drinker you love might be a psychopath3? Not so useful.

See, an r of .14 translates into an R2 of about .02. Put in stats terms, that means that 2% of the variation in psychopathy score can be explained by4 variation in the preference for bitter foods or beverages. The other 98% is based on things outside the scope of this study. For r = .2, that goes up to 4% explained, 96% unexplained.
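For anyone who wants to check the arithmetic, squaring r gives the proportion of variance explained, and the numbers above fall out directly:

```python
# Convert the study's reported correlations to "variance explained" (R squared).
for r in (0.14, 0.20):
    explained = r ** 2
    print(f"r = {r:.2f} -> {explained:.1%} explained, {1 - explained:.1%} unexplained")
```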

Additionally, it should be made clear that no one bitter taste was associated with these traits, only the overall score on ALL bitter foods was.  So if you like coffee black, but have an issue with tonic water or celery, you’re fine.

The researchers didn’t include the full list of foods, but I was surprised to note that they included beer as one of the bitter options. Especially when looking at antisocial tendencies, it seems potentially confounding to include a highly mood altering beverage alongside foods like grapefruit. I’d be interested in seeing the numbers rerun with beer excluded.

 

1. And no, I didn’t add cream to my coffee. Fear me.
2. It’s worth noting that the mean score for this trait was lower than any other trait however…1.77 out of 5. It’s plausible that only the bottom of the range was tested.
3. Hi honey!
4. In the mathematical sense that is, this does not prove causation by itself

Popular Opinion

A few years ago, there was a brief moment in the NFL where all anyone could talk about was Tim Tebow.  Tebow was a controversial figure who I didn’t have much of an opinion on, but he sparked a comment from Chuck Klosterman (I think) that changed the way I think about political discussions.  I’ve never been able to track the exact quote down, but it was something like “half the country loves him, half the country hates him, but both sides think they’re an oppressed minority.” Now I don’t know if that was really true with Tebow, but I think about it every time someone says “what no one is talking about…” or “the conventional wisdom is….” or even just a basic “most people think….” I always get curious about how we know this.  It’s not unusual that I’ll hear someone I know and love assert that the media never talks about something I think the media constantly talks about. It’s a perplexing issue.

Anyway, that interest is why I was super excited by this Washington Post puzzle that showed how easily our opinions about what others think can be skewed, even if we’re not engaging in selection bias.  It also illustrates two things well: 1) why the opinion of well known people can be important and 2) why a well known person advocating for something does not automatically mean that issue is “settled”.

Good things to consider the next time I find myself claiming that “no one realizes” or “everyone thinks that”.

Guns and Graphs

One of my favorite stats-esque topics is graphs. Specifically how we misrepresent with graphs, or how we can present data better.  This week’s gun control debate provided a lot of good examples of how we present these things….starting with this article at Slate, States With Tighter Gun Control Laws Have Fewer Gun Deaths.  It came with this graph:

Gun graph 1

Now my first thought when looking at this graph was two-fold:

  1. FANTASTIC use of color
  2. That’s one heck of a correlation

Now because of point #2, I looked closer. I was sort of surprised to see that the correlation was almost a perfect -1….the line went almost straight from (0,50) to (50,0).  But that didn’t make much sense….why are both axes using the same set of numbers? That’s when I looked at the labels and realized they were both ranks, not absolute numbers. Now for gun laws, this makes sense. You can’t count number of laws due to variability in the scope of laws, so you have to use some sort of ranking system. The gun control grade (the color) also gives a nice overview of which states are equivalent to each other. Not bad.

For gun deaths on the other hand, this is a little annoying. We actually do have a good metric for that: deaths per 100,000.  This would help us maintain the sense of proportion as well.  I decided to grab the original data here to see if the curve changed when using the absolute numbers.  I found those here.   This is what I came up with:

Gun graph 2

Now we see a more gradual slope, and a correlation of probably around -.8 or so (Edited to add: I should be clear that because we are dealing with ordinal data for the ranking, a correlation is not really valid…I was just describing what would visually jump out at you.). We also get a better sense of the range and proportion.  I didn’t include the state labels, in large part because I’m not sure if I’m using the same year of data the original group was.1

The really big issue here though, is that this graph with its wonderful correlation reflects gun deaths, not gun homicides….and of course the whole reason we are currently having this debate is because of gun homicides. I’m not the only one who noticed this; Eugene Volokh wrote about it at the Washington Post as well. I almost canned this post, but then I realized I didn’t particularly like his graph either. No disrespect to Prof Volokh, it’s really mostly that I don’t understand what the Brady Campaign means when it gives states a negative rating.  So I decided to plot both sets of data on the same graph and see what happened.  I got the data on just gun homicides here.

Gun graph 3

That’s a pretty big difference.  Now I think there’s some good discussion to have around what accounts for this difference – suicides and accidents – and if that’s something to take into account when reviewing gun legislation, but Volokh most certainly handles that discussion better than I.  I’m just a numbers guy.

 

1. I also noticed that Slate flipped the order the Law Center to Prevent Gun Violence had originally used, so if you look at the source data you will see a difference. The original rankings had 1 as the strongest gun laws and 50 as the weakest. However, Slate flipped every state rank to reflect this change, so no meaning was lost. I think it made the graph easier to read.