Proof: Using Facts to Deceive (Part 5)

Note: This is part 5 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to part 4 here.

Okay! So we’re almost halfway done with the series, and we’re finally reaching the article itself! Huzzah! It may seem like overkill to spend this much time talking about articles without, you know, talking about the articles, but the sad truth is that by the time you’ve read the headline and looked at the picture, a huge amount of your opinion will already have formed. In this next section, we’re going to talk about the next thing that will probably be presented to you: a compelling anecdote. That’s why I’m calling this section:

The Anecdote Effect

Okay, so what’s the problem here?

The problem here isn’t really a problem, but a fundamental part of human nature that journalists have known for just about forever: we like stories. Since our ancestors gathered around fires, we have always used stories to illustrate and emphasize our points. Anyone who has even taken high school journalism has been taught something like this. I Googled “how to write an article”, and this was one of the first images that came up:

Check out point #2: “Use drama, emotion, quotations, rhetorical questions, descriptions, allusions, alliteration and metaphors”. That’s how journalists are taught to reel you in, and that’s what they do. It’s not necessarily a problem, but a story is designed to set your impressions from the get-go. That’s not always bad (and it’s pretty much ubiquitous), but it is a problem when it leaves you with the impression that the numbers are different than they actually are.

What should we be looking out for?

Repeat after me: the plural of anecdote is not data.


Article writers want you to believe that the problem they are addressing is big and important, and they will do everything in their power to make sure that their opening paragraph leaves you with that impression. This is not a bad thing in and of itself (otherwise how would any lesser-known disease or problem get traction?), but it can be abused. Stories can leave you with an exaggerated impression of the problem, an exaggerated impression of the solution, or an association between two things that aren’t actually related. If you look hard enough, you can find a story that backs up almost any point you’re trying to make. Even something with a one in a million chance happens 320 times a day in the US alone.
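That last figure is just arithmetic you can check yourself. A minimal sketch, assuming a rough US population of 320 million:

```python
# If each person has a one-in-a-million chance of some event happening
# to them on a given day, how often does it happen somewhere in the US?
us_population = 320_000_000  # rough population figure (assumption)
one_in = 1_000_000           # "one in a million"

expected_daily_events = us_population / one_in
print(expected_daily_events)  # 320.0
```

So a "freak occurrence" story is nearly guaranteed to exist somewhere, every single day.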

So don’t take one story as evidence. It could be chance. They could be lying, exaggerating, or misremembering. I mean, I bet I could find a Mainer who could tell me a very sad story about how their spouse switching to less processed food led to their divorce. I could even include this graph with it:


Source. None of this, however, would mean that margarine consumption was actually driving divorce rates in any way.

Why do we fall for this stuff?

Nassim Taleb has dubbed this whole issue “the narrative fallacy”: the idea that if we can tell a story about something, we believe we understand it. Stories allow us to tie a nice bow around things and see causes and effects where they don’t really exist.

Additionally, we tend to approach stories differently than we approach statistics. One of the most interesting meditations on the subject is from John Allen Paulos in the New York Times here. He has a great quote about this:

In listening to stories we tend to suspend disbelief in order to be entertained, whereas in evaluating statistics we generally have an opposite inclination to suspend belief in order not to be beguiled.

I think that sums it up.

So what can we do about it?

First and foremost, always remember that the story or statistic that opens an article is ultimately trying to sell you something, even if that something is just the story itself. Tyler Cowen’s theory is that the more you like the story, the more you should distrust it.

Even under the best of circumstances, people can’t always be trusted to accurately interpret events in their own life:


Of course, this often works the other way around. People can be very convinced that the Tupperware they used while pregnant caused their child’s behavior problems, but that doesn’t make it true. Correlation does not prove causation even in a large data set, and especially not when it’s just one story.

It also helps to be aware of the words that are used, and to think about the numbers behind them. A word like “doubled” can mean a large increase, or it can mean that your chances went from 1 in 1,000,000,000 to 1 in 500,000,000. Every time you hear a numbers word, ask what the underlying number was first. This isn’t always nefarious, but it’s worth paying attention to.
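To make the “doubled” point concrete, here’s a minimal sketch using the hypothetical one-in-a-billion risk from the paragraph above:

```python
def doubled_risk(baseline):
    """Return the new risk and the absolute increase when a risk 'doubles'."""
    new = 2 * baseline
    return new, new - baseline

# "Your risk doubled!" ... from 1 in a billion to 1 in 500 million.
baseline = 1 / 1_000_000_000
new, increase = doubled_risk(baseline)
print(f"baseline={baseline:.1e}, new={new:.1e}, absolute increase={increase:.1e}")
```

The relative change ("doubled!") sounds dramatic; the absolute change is still about one in a billion.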

One final thing with anecdotes: it’s an unfortunate fact of life that not everyone is honest. Even journalists with fact-checkers and large budgets can totally screw up when presented with a sympathetic-sounding source. This week, I had the bizarre experience of seeing a YouTube video of a former patient whose case several coworkers worked on. She was eventually “cured” by some alternative medicine, and has taken to YouTube to tell her story. Not. One. Part. Of. What. She. Said. Was. True. I am legally prohibited from even pointing you in her direction, but I was actually stunned at the level of dishonesty she showed. She lied about her diagnosis, prognosis and everything in between. I had always suspected that many of these anecdotes were exaggerated, but it was jarring to see someone flat-out lie so completely. I do believe that most issues with stories are more innocent than that, but don’t ever rule out “they are making it up”, especially in the more random corners of the internet.

By the way, at this point in the actual talk, I have a bit of a break-the-fourth-wall moment. I point out that I’ve been using the “tell a story to make your point” trick for nearly every part of this talk, and that the students are almost certainly enjoying it more than if I had just come in with charts and equations. The more astute students are probably already thinking this, and if they’re not, it’s good to point out how directly the lesson applies.

Until next week!  Read Part 6 here.

Pictures: Trying to Distract You (Part 4)

Note: This is part 4 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to part 3 here.

When last we met, we covered what I referred to as “narrative pictures”, or pictures that were being used to add to the narrative part of the story. In this section, we’re going to start looking at pictures where the problems are more technical in nature…i.e., graphs and sizing. This is really a blend of a picture problem and a proof problem, because these deceptions use numbers, not narratives. Since most of these issues are a problem of scale or size, I’m calling this section:

Graphs: Changing the View

Okay, so what’s the problem here?

The problem, once again, is a little bit of marketing. Have you ever judged the quality of a company based on how slick their website looks? Or judged a book by its cover, as the saying goes? Well, it turns out we’re very similar when it comes to judging science. In a 2014 study, (Update: the lab that performed this study has come under review for some questionable data practices. It is not clear if this study is affected, but you can read the details of the accusations here and here) researchers gave people two articles on the effectiveness of a made-up drug. The text of both articles was the same, but one had a graph that showed the number of people the drug helped, and the other did not. Surprisingly, 97% of the people who saw the graph believed the drug worked, whereas only 68% of the people who read the text alone did. The researchers did a couple of other experiments, and basically found that not just graphs but ANY “science-ish” pictures (chemical formulas, etc.) influenced what people thought of the results.

So basically, people add graphs or other “technical” pictures to lend credibility to their articles or infographics, and you need to watch out.

Okay, so what kind of things should we be looking out for?

Well, in many cases, this isn’t really a problem. Graphs or charts that reiterate the point of the article are not necessarily bad, but they will influence your perception. If the data warrants it, a chart reiterating the point is fantastic. It’s how nearly every scientific paper operates, and it’s not inherently deceptive….but you should be aware that these pictures, all by themselves, will shape how you read the data.

There are some cases, though, where the graph is a little trickier. Let’s go through a few:

Here’s one from Jamie Bernstein over at Skepchick, who showed this great example in a Bad Chart Thursday post:

Issue: The graph’s y-axis shows percent growth, not absolute values. This makes hospitalized pica cases look several times more numerous than anorexia or bulimia cases. In reality, hospitalized anorexia cases are 5 times as common, and bulimia cases 3 times as common, as pica cases. These numbers are given at the bottom, but the graph itself could trick you if you don’t read it carefully.

How about this screen shot from Fox News, found here?

Issue: Visually, this chart suggests the tax rate will quadruple or quintuple if the Bush tax cuts aren’t extended. If the axis started at zero, however, the first bar would be about 90% the height of the second one.
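You can check that claim with one line of arithmetic. Assuming the two bars represent top tax rates of 35% and 39.6% (the figures that chart showed):

```python
# Ratio of the two bar values -- this is how tall the first bar
# should look relative to the second on a zero-based axis.
before, after = 35.0, 39.6
ratio = before / after
print(round(ratio, 2))  # 0.88
```

A bar at 88% the height of its neighbor barely registers; a bar at a fifth the height screams "huge increase". Same numbers, very different visual.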

How about this graph tweeted by the National Review?

Issue: The problem with this one is that the axis does start at zero. The Huffington Post did a good breakdown of this graph here, along with what some other graphs would look like if you set the scale that large. Of course there can be legitimate debate over where a fair axis scale lies, but you should make sure the visual matches the numbers.

And one more example that combines two issues in one:

See those little gas pumps right there? They’ve got two issues going on. The first is a start date that had an unusually low gas price:


The infographic implies that Obama sent gas prices through the roof….but as we can see gas prices were actually bizarrely low the day he took office.  Additionally, the gas pumps involved are deceptive:


If you look, they’re claiming to show that prices doubled. However, the second pump is actually four times the size of the first one. They doubled both the height and the width:
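The gas pump trick is simple geometry: scale both dimensions of an icon by 2 and its area, which is what your eye responds to, grows by 4. A quick sketch:

```python
def icon_area(width, height, scale=1.0):
    """Area of an icon after scaling both dimensions by the same factor."""
    return (width * scale) * (height * scale)

small = icon_area(1, 2)          # original pump icon
big = icon_area(1, 2, scale=2)   # height AND width doubled
print(big / small)               # 4.0
```

An honest version would scale only one dimension (or scale both by the square root of 2) so that the area doubles along with the price.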


While I used a lot of political examples here, this isn’t limited to politics. Andrew Gelman caught the CDC doing it here, and even he couldn’t figure out why they’d have mucked with the axis.

There are lots of repositories for these, and Buzzfeed even did a listicle here if you want more. It’s fun stuff.

Why do we fall for this stuff?

Well, as we’ve said before, visual information can reinforce or skew your perceptions, and visual information with numbers can intensify that effect. This isn’t always a bad thing…after all, nearly every scientific paper ever published includes charts and graphs. When you’re reading for fun though, it’s easy to let these things slip by. If you’re trying to process text, numbers and implications AND read the x and y axes to make sure the numbers are fairly portrayed, it can be a challenge.

So what can we do about it?

A few years ago, I asked a very smart colleague how he was able to read and understand so many research papers so quickly. He seemed to read and retain a ton of highly technical literature, while I always found my attention drifting and would miss things. His advice was simple: start with the graphs. See, I would always try to read papers from start to finish, looking at graphs only when they were cited. He suggested using the graphs to get a sense of the paper, then reading the paper with an eye towards explaining the graphs. I still do this, even when I’m reading for fun. If there’s a graph, I look at it first, when I know nothing, then read the article to see if my questions about it get answered. It’s easier to notice discrepancies this way. At the very least, it reminds you that the graph should be there to help you. Any evidence that it’s not should make you suspicious of the whole article and the author’s agenda.

So that wraps up our part on Pictures! In part 5, we’ll finally reach the text of the article.

Read Part 5 here.

Pictures: How They Try to Distract You (Part 3)

Note: This is part 3 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to part 2 here.

As any good internet person knows, you can’t have a good story without a featured image…pictures being worth a thousand words and all that jazz. Nine times out of ten, if you see a story pop up on Facebook, it’s going to have an image attached. Those deceptive and possibly false memes I was talking about in Part 1? All basically words on images. For most stories, there are really two different types of images: graphs or technical images, and what I’m going to call “narrative” images. In this section I’m going to cover the narrative images, or what I call:

Narrative Images: Framing the Story

Okay, so what’s the problem here?

In Part 2 I mentioned a study that was mostly about headlines, but that had a really interesting point about pictures as well. In that study, they purposefully mismatched the headline and the picture in a story about a crime. Basically they had the headline read something like “Father of two murdered” and showed a picture of the criminal, or they had it read “Local man murders father of two” and showed a picture of the victim. Later, people who had read a “victim” headline with a picture of a murderer actually felt more sympathy towards the murderer, and those who read a “criminal” headline with a picture of a victim liked the victim less. That’s the power of pictures. We know this, which is why newspapers can end up getting sued for putting the wrong picture on a cover even if they don’t mention any names.

Any picture used to frame a story will potentially end up influencing how we remember that story. A few weeks ago, there was a big kerfuffle over some student protests at Oberlin. The NY Post decided to run the story with a picture of Lena Dunham, an alum of the college who is in no way connected to the protests. In a couple of months, my bet is that a good number of people who read that story will end up remembering that she had something to do with all this.

What should we be looking out for?

When you read an article, at the very least you should ask how the picture matches the story. Most of the time this will be innocuous, but don’t forget for a minute that the picture is part of an attempt to frame whatever the person is saying. This can be deviously subtle, too. One of the worst examples I ever heard of was pointed out by Scott Alexander after a story about drug testing welfare recipients hit the news. The story came with lots of headline/picture combos like this one from Jezebel:


Now check that out! Only .2% of welfare applicants failed a drug screening! That’s awesome.  But what Scott Alexander pointed out in that link up there is that urine drug testing actually has a false positive rate higher than .2%.  This means if you tested a thousand people that you knew weren’t on drugs, you’d get more than a .2% failure rate.  So what happened here? How’d it get so low?

The answer lies in that technically-not-inaccurate word “screening”.  Once you saw that picture, your brain probably filled in immediately what “screening” meant, and it conjured up a picture of a bunch of people taking a urine drug test. The problem is, that’s not what happened. The actual drug screening used here was a written drug screening. That’s what those people failed, and that’s why we didn’t get a whole bunch of false positives.  Now I have no idea if the author did this on purpose or not, but it certainly leaves people with a much different impression than the reality.
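Scott Alexander's false-positive point is easy to verify with a sketch. The rates below are illustrative assumptions, not the real figures for any specific urine test:

```python
def expected_positive_rate(prevalence, sensitivity, false_positive_rate):
    """Fraction of a tested population expected to test positive overall."""
    true_pos = prevalence * sensitivity              # actual users caught
    false_pos = (1 - prevalence) * false_positive_rate  # clean people flagged anyway
    return true_pos + false_pos

# Even if NO applicant used drugs, a 1% false positive rate alone
# produces a failure rate far above the reported 0.2%:
rate = expected_positive_rate(prevalence=0.0, sensitivity=0.9,
                              false_positive_rate=0.01)
print(rate)  # 0.01, i.e. 1% of applicants fail
```

Any urine test with a false positive rate above 0.2% mathematically cannot produce only a 0.2% failure rate, which is the tip-off that the "screening" in the story wasn't a urine test at all.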

Why do we fall for this stuff?

Every time we see a picture, we’re processing information with a slightly different part of our brain. In the best-case scenario, this enhances our understanding of information and engages us more fully. In the worst-case scenario, it skews our memory of written information, like in the murderer/victim study I mentioned above. This actually works in both directions….asking people questions with bad verbal information can skew their visual memory of events. Even people who are quite adept at interpreting numbers, words and data can forget to consider the picture attached.

So what can we do about it?

Awareness is key. Any time you see a picture, you have to think “what is this trying to get me to think?” You have to remember that we are visual creatures, and if text worked better than visuals, commercials wouldn’t look the way they do.

Now, before I go, I have to say a few words about infographics. These terrible things are an attempt to take an entire topic and turn it into nothing but a narrative photo. They suck. I hate them. Everything I say in this entire series can be assumed to go TRIPLE for infographics. Every time you see an infographic, you should remember that 95% of them are inaccurate. I just made that up, but if you keep that rule in mind you’ll never be caught off guard. Think Brilliant has the problem summed up very nicely with this very meta infographic about the problem with infographics:

Think Brilliant has more here.

The key with infographics is much like those little picture memes that are everywhere: consider them wrong at baseline, and only trust after verifying.

If you want more, read Part 4 here.

Presentation: How They Reel You In (Part 2)

Note: This is part 2 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to part 1 here.

In Part 1, we covered things that fake you out by presenting themselves as real when they are just pretty much made up. While those are some of the most obnoxious issues on the internet, they are not really the most insidious issues. For the remaining parts of this series, we’re going to be covering things that attempt to twist, overstate or otherwise misrepresent themselves to sound more convincing than they really are. For those of us who may be skimming through Facebook/Twitter/whatever new thing the kids are using these days, it’s good to start at the beginning….so I’ve called this section:

Headlines and First Impressions

Okay, so what’s the problem here?

NPR probably put it best in this article last year…Why Doesn’t America Read Anymore?

Did you check it out? Go ahead. I’ll wait.

Back yet?

Okay, so if you clicked you see the issue.  If you’re too lazy to click, well that’s part of my point. That was a fake article….but not in the “made up untrue” sense we covered earlier. See the good folks over at NPR started getting the impression that people were just reading the headlines and then freaking out and commenting before they read further. They decided to test this (on April Fool’s Day no less), and posted a story with the headline above paired with an article that said “Happy April Fools Day!” and explained they were performing a test to see how many people read even a few words of the article before reacting. The reaction is chronicled here, but basically they got thousands of comments about what people presumed the article said. Now clearly some portion of those people were trying to be funny, but some of the screenshots taken suggest many really were reacting to just the headline. Interestingly, the worst effect was not on the NPR website (where you’d have to scroll through the article to get to the comment section), but rather on my arch-nemesis Facebook.

Okay, so what kind of things should we be looking out for?

If nothing else, the above prank should convince you to always make sure there’s something attached to whatever article you want to comment on.  Once that’s out of the way, you should be looking for more subtle bias.  Slate Star Codex recently had a really good side by side of a few different headlines that all came out of the same study:

Same results, same press release, four different ways of framing the issue. Unsurprisingly, it’s these subtle issues that are actually the bigger problems in real life. The New Yorker did a great piece on the power of headlines to frame perceptions, and the results were a little unnerving. They focused on a recent study that paired articles with different headlines to see how people’s memories and interpretations of information were affected. Some of what they found:

  • Inaccurate headlines skewed what people remembered the article said, but their inferences from the information stayed sound
  • More subtle framing bias changed both people’s memory and their interpretation of information
  • People have trouble separating visuals from headlines. If the headline talked about a crime’s perpetrator but the picture was of the victim, people felt less sympathy for the victim later


So why do we fall for this stuff?

Well, because it was designed to make us fall for it. One of the more interesting articles on the topic is this one from Nieman Lab, and it had a quote I loved:

That new environment is, for instance, the main reason headlines have become so much more emotional and evocative. In a print newspaper, a headline is surrounded by lots of other contextual clues that tell you about the story: Is there a photo with it? Do I recognize the byline? Is it blazed across the top of Page 1 or buried on C22? Online, headlines often pop up alone and disembodied amid an endless stream of other content. They have a bigger job to do.

Basically, news sites that can get you to click will thrive, and those that can’t, won’t. More than ever, headlines are essentially mini commercials…and who better than the advertising industry to take advantage of all of our cognitive biases?

So what can we do about it?

When it comes to headlines, especially ones that make claims about science or data, I think it’s important to think of this as a group of concentric circles.  As you move outward, the claims get bolder, brasher, and all caveats get dropped:


It’s also important to remind yourself that it’s frequently editors writing headlines, not journalists. If you view headlines as a commercial and not a piece of information, it may help you spot inconsistencies between the way information was presented and the way the article actually reads. We haven’t progressed far enough in the research to know how much we can negate the impact of headlines by being more aware of them, but it seems reasonable that being a little paranoid couldn’t hurt.

For some specialized and frequently misrepresented fields, it’s also a good idea to read up on what frustrates scientists within the field. I’ve never looked at headlines about brain function or neuroscience the same way after I watched Molly Crockett’s TED talk:

On the plus side, the internet makes it easier than ever for people to complain about headlines and actually get them fixed. For example, last year a Washington Post op-ed got published with the headline “One Way to End Violence Against Women? Stop Taking Lovers and Get Married” and a sub-headline that read “the data shows that #yesallwomen would be safer married to their baby daddies”. People were upset, and many took umbrage at the headline for giving the impression that violence against women was women’s fault. Even the author of the piece jumped in and disavowed the headlines, saying they were disappointed in the tone. The paper ended up changing the headline after admitting it was causing a distraction. Now whatever you think of this particular story, it’s a good sign that this is a type of bias you can actually do something about. It’s a really good example of a place where “see something, say something” might make a difference.

Interestingly, these headline changes are pretty easy to track by checking out the URL at the top of the page:


This almost never gets changed, and sometimes shows some sneaky/unannounced updates. If you’re looking for a place to make a real difference, headline activism may be a good place to start.

We’ll dive more in to the pictures in Part 3.

Presentation: How they Reel You In (Part 1)

Note: This is part 1 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here.

When I first sat down to write a talk for high school science students about how to read science on the internet, one of my priorities was to immediately establish for the kids why this was different from anything else they were learning. The school I was going to had won awards for their science teaching, and these students were no slackers. I was concerned that given their (probably justifiable) confidence in their skills, many of these kids would assume that they would actually be great at interpreting random articles they stumbled across online. This actually made me think of a totally different study on a completely different topic: sexual harassment. In that study, college-aged women were asked what they would do if an interviewer sexually harassed them. All said they would confront/walk out/report. However, when the experimenters actually put women in fake job interview scenarios and had an interviewer harass them, none of them did any of those things. The general conclusion is that we have the right answers when we know what we’re being tested on, but in real life we often don’t know what is even being asked of us.

I didn’t end up using the harassment study, but I did feel comfortable putting that framing on my introduction. When you read science in a classroom, the teacher is going to be clear about what you’re supposed to get out of the lesson. An astute student can be relatively confident about what they’ll be tested on. I’ve had plenty of tests where I changed or re-evaluated my answer because I suspected the teacher was doing something a bit different than the initial reading would suggest. On the internet, however, no one gives you even the briefest heads-up as to what material is going to be covered. When you encounter an interesting science story or number, you are almost always going to be thinking about something that is not at all related. Because of this, the first impression of a story is important….it may be all you get. Now with that in mind, we have to realize that information can be read and absorbed in a few seconds, so instantaneous skepticism is key. The first part of this is simple, but critical: make sure whatever you’re seeing is actually true. Like, at all. That’s why Part 1 here is called:

False Information, Deceptive Memes and Other Fake Stuff

Okay, so what’s the problem here?

The problem is, some stuff on the internet is fake. I know, total shocker. But seriously, it’s actually pretty stunning how often people take entirely made-up news stories, glance at them, and end up believing they’re real. There’s a whole website called Literally Unbelievable that catalogs people’s reactions to fake news stories from well-known satire sites. Like this one:

But the issues don’t end with satire; sometimes people make things up just for the heck of it, like the guy who spent a couple of weeks posting fake facts with pictures behind them to see who would call him out. This was my favorite:

Sometimes people make things up to push a political agenda, as Abraham Lincoln warned us:

Okay, so what kind of things should we be looking out for?

Well, anything with a picture on it designed to be catchy should be immediately suspect, and that goes double if it’s political or has an agenda. Also, anything that falls into an area you feel pretty confident about should be scrutinized. It turns out people who feel confident in their expertise on a topic can be more likely to believe they know the definition of made-up terms.

Why do we fall for this stuff?

Well, a couple of reasons. Like I said in the intro, sometimes we just flat out don’t have our skepticism engaged. If you’re scrolling through Facebook thinking about your ex, or your friends, or the awkward political commentary your cousin is making, you might not even consciously register a meme about lightning and cell phones. You may find yourself believing it later because you never really thought it through to begin with.

Conversely, confirmation bias is a powerful force, and it frequently leads us to apply less scrutiny to things we’d like to believe.  That’s why political falsehoods are so easy to pass along….people believe that they have some “truthiness” to them (as Stephen Colbert would say) or that they were “fake but accurate” (as the New York Times would say).

Compounding both of these problems is our own perception of how smart we are. Earlier I linked to this study showing that people who think they know a lot about a topic can be even more susceptible to accepting fake terms. And lest we think this is just for people who only think they’re smart, I would point you to the Neil deGrasse Tyson/George W Bush quote controversy. Neil deGrasse Tyson is possibly the most famous scientist in the US today, and he was caught quoting George W Bush inaccurately. It took some rather dogged determination by an opposing journalist to get him to admit that he got the quote and its context wrong. Now if Neil deGrasse Tyson can get tripped up by wrong information, who are we to claim to be better?

So what can we do about it?


It won’t help you every time, but a good first step is simply to Google the information. If you can’t verify it, don’t post it. Some items are disputed (we’ll get to that later), and their interpretation may be questioned, but completely fake stories should have a pretty good Google history to let you know that. For satirical websites, even taking a look at other stories they post can tip you off. The site has a good list to get you started. Some hoax sites are really trying to trick you….for example is a real news site, and is not. For general viral stories can point you in the right direction. Again, this won’t help much if the story is disputed, but it should point you in the right direction for completely made-up stuff. In future posts, we’ll get into the nuances, but for now, remember that sometimes there is no nuance. Sometimes things are just fake.


Want more? Click here for Part 2.

Intro to Internet Science: A Prelude

For the past couple of years I’ve gotten the chance to give a talk to my brother’s high school class about how to read science you encounter “in the wild” and what types of things you should be looking out for.

The basic premise of my talk is that no matter how far you go in life in one particular subject, you will always be bombarded with science stories on Facebook and other social media that fall outside your expertise.  Given that, it’s pretty critical that you have a general schema to sort through popular science reporting, and develop an idea of what to look out for.

Basically, I tell the kids that every time they see something scientific mentioned on Facebook, they should immediately look like this:


Yes, I show this picture. Obviously.

The issue, as far as I can tell, is that most of us get used to two different tracks of scientific thought: one that we encounter when we’re expecting it, and one we encounter when we’re not. When you’re sitting around waiting for a teacher to quiz you, it’s relatively easy to remember to read carefully and think critically.  When you’re scrolling through Facebook or Twitter or whatever else is popular these days, it’s not so easy.

I have a lot of fun with this talk, and I’ve started to develop a general 10 point list of things the kids should keep in mind.  For my next series of blog posts, I’m essentially going to blog out the different points of my talk.  This will hopefully serve the dual purpose of helping me improve my points and anecdotes, and helping me go through my archives and start categorizing my old posts around these topics.

Ready? Great! Then here’s what you’re in for:

There are four main topics, each with a few subcategories. They are:

Presentation: How They Reel You In

  1. False Information, Memes, and Other Fake Stuff
  2. Headlines and First Impressions

Pictures: Trying to Distract You

  1. Narrative Images: Framing the Story
  2. Graphs: Changing the View

Proof: Using Numbers to Deceive

  1. The Anecdote Effect
  2. Experts and Balances
  3. Crazy Stats Tricks: False Positives, Failure to Replicate, Correlations, Etc

People: Our Own Worst Enemy

  1. Biased Interpretations and Motivated Reasoning
  2. Surveys and Self Reporting
  3. Acknowledging our Limitations

I’ll be attempting to put up one per week, but given life and all, we’ll see how it goes.

Want to go straight to Part 1? Click here.  Want the wrap up with all the links? Click here.