Internet Science: Some Updates

Well, it’s that time of year again: back to school. Next week I am again headed back to high school to give the talk that spawned my Intro to Internet Science series last year.

This is my third year talking to this particular class, and I have a few updates that I thought folks might be interested in. It makes more sense if you read the series (or at least the intro to what I try to do in this talk), so if you missed that you can check it out here.

Last year, the biggest issue we ran into was kids deciding they couldn't believe ANY science, which I wrote about here. We're trying to correct that a bit this year, without losing the "be skeptical" idea. Since education research kinda has a replication problem of its own, all the things we're trying are generally just a discussion between the teacher and me.

  1. Skin in the game/eliminating selection bias In order to make the class a little more interactive, I've normally given the kids a quiz to kick things off. We've had some trouble over the years getting the kids' answers compiled, so this year we're actually giving them the quiz ahead of time. This means I'll have the results available before the talk, so I can show them during it. I'm hoping this will help me figure out my focus a bit. When I only know the feedback of the kids who want to raise their hands, it can be hard to tell which issues really trip the class up.
  2. Focus on p-values and failure to replicate In the past, during my Crazy Stats Tricks part, I've tried to cram a lot in. I've decided this is too much, so I'm just going to include a bit about failed replications. Specifically, I'm going to talk about how popular studies get repeated even after it turns out they weren't true. Talking about Wonder Woman and power poses is a pretty good attention getter, and I like to point out that the author's TED talk page contains no disclaimer that her study failed to replicate (Update: As of October 2016, the page now makes note of the controversy). It does, however, tell us it's been viewed 35,000,000 times.
  3. Research checklist As part of this class, these kids are eventually going to have to write a research paper. This is where the whole “well we can’t really know anything” issue got us last year. So to end the talk, we’re going to give the kids this research paper checklist, which will hopefully help give them some guidance. Point #2 on the checklist is “Be skeptical of current findings, theories, policies, methods, data and opinions” so our thought is to basically say “okay, I got you through #2….now you have the rest of the year to work through the rest”. I am told that many of the items on that list meet the learning objectives for the class, so this should give the teacher something to go off of for the rest of the year as well.

Any other thoughts or suggestions (especially from my teacher readers!) are more than welcome. Wish me luck!

Intro to Internet Science: A Postlude

All right, we did it! 10 topics, 10 weeks, and a whole slew of examples. I've had a lot of fun, gotten some great feedback, and had some very kind comments from some very lovely teachers. It's also given me some good ideas for some ongoing posts.  In the talks I give I almost never have time to get into any actual math, but hey, what's the point of having a blog if you can't go on and on about the stuff you like? I'll probably be calling that "crazy stats tricks" and at a minimum I'll cover some of the topics I complained about in Part 7.  Any suggestions for that series, or feedback on this one, are welcome either in the comments or on the feedback page.

Now that I have that out of the way, let's take a moment to reflect on what we've learned, eh?  Overall, there are four P's:

Presentation
Pictures
Proof
People

We spent a little time on all four:

Presentation: How They Reel You In
In Parts 1 and 2, we learned how quickly the internet spreads completely false information, and to always make sure what you are quoting is actually real. We also learned that headlines are marketing tools, and to be wary of what they are selling.

Pictures: Trying to Distract You
In Parts 3 and 4, we added some visuals. Narrative pictures, or those that help illustrate the story, can set impressions that can be ridiculously hard to correct. It gets even worse when you add graphs. Even a little bit of technical information can make things look more credible than they deserve.

Proof: Using Facts to Deceive
In Parts 5, 6, and 7, we covered “the truths people use to lie with”. Here we covered information that is true, but used to give false impressions. We started with stories and anecdotes, which are often used to humanize and emphasize various points. Next we moved on to experts and balance, and how we need to be careful who we listen to and who we dismiss.  Finally I gave a woefully short and incomplete overview of some statistical tricks that get used a lot.

People: Our Own Worst Enemy
And now we come to the part where we have only ourselves to blame. First we took a look at how our own pre-existing beliefs color our views of facts and even impact our ability to do math. Next, we take a look at how our tendency to not be entirely honest can screw up surveys and research based on them.  Finally, we had a bit of a discussion about the limits of scientific understanding, research ethics and things we may never know.

And that’s a wrap!

People: Our Own Worst Enemy (Part 10)

Note: This is part 10 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to Part 9 here.

Wow folks….10 weeks later we are coming to the end. This is a shorter one than the rest of the series, but I think it's still important. Up until now I've been referencing science as though it could always provide the guidance we need if we just know where to look. Unfortunately that's not always true. It's at this point that I like to step back and get a little bit reflective about evidence and science in general, and how we acknowledge what we may never know. That's why I call this section:

Acknowledging our Limitations

Okay, so what’s the problem here?

The problem is that just like research and evidence can be manipulated, so can lack of research and evidence. The reality is that there are practical, financial, moral and ethical issues facing all researchers, and there are limits on both what we know at the moment and what we can ever know. A lack of evidence doesn’t always mean someone’s hiding something. Unfortunately, none of this stops people from claiming it does. This normally comes up when someone is explaining why their opponent’s evidence doesn’t count.

What kinds of things should we be looking out for? 

Mostly calls for more research. It’s tricky business because sometimes this is a perfectly reasonable claim, but sometimes it’s not. Sometimes it’s just a smokescreen for an agenda.

For example, in 2012 two doctors from the CDC were called in front of Congress to discuss vaccine safety. As part of the hearing, Congressman Bill Posey asked the doctors if they had done a study on autism in vaccinated vs unvaccinated children. You can read the whole exchange here, but the answer to the question was no. Why? Well, a double-blind placebo-controlled trial of vaccines would be unethical to do. For non-fatal diseases you can sometimes run them, but you can't knowingly put people in the way of harm no matter how much you need or want the data. Giving a placebo (i.e. fake) measles vaccine to a child just to see whether they get sick and die would be unethical. The NIH requires studies to have a "fair risk-benefit ratio", so there either has to be low risk or high benefit. I work in oncology and have actually seen trials closed to enrollment immediately because data suggested a new treatment might have more side effects than we suspected.

A Congressman looking in to vaccine safety should know this, but to anyone listening it might have sounded like a reasonable question. Why aren’t we doing the gold standard research? What are they hiding?

Other examples of this include demands like "prove to me my treatment DOESN'T work".

Why do we fall for this stuff?

Well, mostly because many of us have never considered it. If you're not working in research, it can be hard to notice when someone's asking for something that would never get past the IRB (institutional review board). Even when a study would be ethical, it's easy to underestimate how tricky it would be to run. This is rampant in a field like nutrition science. I mean, how much money would it take for you to change your diet for the next 30 years so scientists could study you?

I took a “Statistics in Clinical Trials” class a few years ago, and I was surprised that nearly half of it was really an ethics class. Every two years I (and everyone else at my institute) also have to take 8 hours of training in human subject research, just to make sure I stay clear on the guidelines. It’s not easy stuff, but you have to remember the data can’t always come first.

So what can we do about it?

Well first, recognize these limitations exist. We can and should always be refining our research, but we have to respect limits. Read about famous cases where this has gone wrong, if you’ve got the stomach for it. The Tuskegee Syphilis Experiments  and the Doctor’s Trial that resulted in the Nuremberg Code are two of the most famous examples of this, but there are others.  The more you know, the more you’ll be prepared for this one when you see it.

 

All right, that wraps up part 10! I think I’m going to cut this off here and do my wrap up next week. See you then!

People: Our Own Worst Enemies (Part 9)

Note: This is part 9 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to Part 8 here.

Okay, we’re in the home stretch here! In part 8 I talked about how we as individuals work to confuse ourselves when we read and interpret data. Today I’m going to talk about how we as a society collectively work to undermine our own understanding of science, one little step at a time.  Oh that’s right, we’re talking about:

Surveys and Self Reporting

Okay, so what’s the problem here?

The problem is that people are weird. Not any individual really (ed note: this is false, some people really are weird), but collectively we have some issues that add up. Nowhere is this more evident than on surveys. There is something about those things that brings out the worst in us.  For example, in this paper from 2013, researchers found that 59% of men and 67% of women in the National Health and Nutrition Examination Survey (NHANES) database had reported calorie intakes that were "physiologically implausible" and "incompatible with life".  The NHANES database has been widely used for nutrition research for about 40 years, and these findings have caused some to call for an end to self-reporting in nutrition research.  Now I doubt any individual was intending to mislead, but as a group those effects add up.
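Checks like the ones those researchers ran can be sketched in a few lines: flag any self-reported value that falls outside a physiologically plausible range. The data and cutoffs below are my own illustrative numbers, not the paper's actual criteria:

```python
# Flag self-reported daily calorie intakes outside a plausible range.
# Data and cutoffs are illustrative, NOT the paper's actual criteria.

reported_kcal = [450, 1800, 2200, 9500, 600, 2600, 150, 3100]

LOW, HIGH = 800, 6000  # assumed plausibility bounds for daily intake

implausible = [kcal for kcal in reported_kcal if not LOW <= kcal <= HIGH]
share = len(implausible) / len(reported_kcal)
print(f"{share:.0%} of reports implausible: {implausible}")
# -> 50% of reports implausible: [450, 9500, 600, 150]
```

The same idea scales up to a real database: decide on bounds first, then see what fraction of the self-reports could not possibly be true.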

Nutrition isn’t the only field with a problem though. Any field that studies something where people think they can make themselves look better has an issue. For example, the Bureau of Labor Statistic found that most people exaggerate how many hours they work per week. People who say they work 40 hours normally only work 37. People who say they work 75 hours a week typically work about 50. One or two people exaggerating doesn’t make a difference, but when it’s a whole lot of people it adds up.

So what kinds of things should we be looking out for?

Well, any time something says it's based on a survey, you may want to get the particulars. Before we even get to some of the reporting bias I mentioned above, we also have to contend with questions that are asked one way and reported another.  For example, back in 2012 I wrote about an article that said "1/3rd of women resent that their husbands don't make more money". When you read the original question, it asked if they "sometimes" resent that their husbands don't make more money.  It's a one word difference, but it changes the whole tone of the question.  Every time you see a headline about what "people think", be a little skeptical.  Especially if it looks like this:

[Image: headline claiming 12 million Americans believe in lizard people]

That one’s from a survey about conspiracy theories, and they got that 12 million number from extrapolating out the 4% of respondents to the survey who said they believed in lizard people to the entire US population.  In the actual survey, this represented 50 people.  Do you think it’s more plausible that the pollsters found 50 people who believed in lizard people or 50 people who thought this was an amusing thing to say yes to?

But people who troll polls aren't the only problem; polling companies play this game too, asking questions designed to grab a headline. For example, recently a poll found that 10% of college graduates believe a woman named Judith Sheindlin sits on the Supreme Court.  College graduates were given a list of names and told to pick the one who was a current Supreme Court justice.  So what's the big deal, other than a wrong answer? Well, Judith Sheindlin is the real life name of "Judge Judy", a TV show judge. News outlets had a field day with the "college grads think Judge Judy is on the Supreme Court" headlines. However, the original question never used the phrase "Judge Judy", only the nearly unrecognizable name "Judith Sheindlin". The Washington Post thankfully called this out, but all the headlines had already been run. Putting a little-known celebrity name in your question and then writing the headline with the well-known name is beyond obnoxious. It's a question designed to make people look dumb and make everyone reading feel superior. I mean, quick, who is Caryn Elaine Johnson? Thomas Mapother IV? People taking a quiz will often guess things that sound vaguely right or familiar, and I wouldn't read too much into it.

Why do we fall for this stuff?

This one I fully blame on the people reporting things for not giving proper context. This is one area where journalists really don't seem to be able to help themselves. They want the splashy headline, methodology or accuracy be damned. They're playing to one of our worst tendencies and desires….the desire to feel better about ourselves. I mean, it's really just a basic ego boost. If you know that Judge Judy isn't on the Supreme Court, then you must clearly be smarter than all those people who didn't, right?

So what can we do about it?

The easiest thing to do is not to trust the journalists. Don't let someone else tell you what people said; try to find the question itself.  Good surveys will always provide the actual questions they asked people. Remember that tiny word shifts can change answers enormously.  Words like "sometimes", "maybe" and "occasionally" can be used up front, then dropped later when reported. Even more innocuous word choices can make a difference. For example, in 2010 CBS found that asking if "gays and lesbians" should be able to serve in the military instead of "homosexuals" caused quite the change in people's opinions:

[Image: CBS poll results showing support for military service shifted when the question said "gays and lesbians" rather than "homosexuals"]

So watch the questions, watch the wording, watch out for people lying, and watch out for the reporting.  Basically, paranoia is just good sense when lizard people really are out to get you.

See you in Week 10! Read Part 10 here.

People: Our Own Worst Enemy (Part 8)

Note: This is part 8 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here or go back to Part 7 here.

I love this part of the talk because I get to present my absolute favorite study of all time. Up until now I've mostly been covering how other people try to fool you to get you to their side, but now I'm going to wade into how we seek to fool ourselves.  That's why I'm calling this part:

Biased Interpretations and Motivated Reasoning

Okay, so what’s the problem here?

The problem here, my friend, is you. And me. Well, all of us really…..especially if you're smart.  The unfortunate truth is that for all the brain power we put towards things, our application of that brain power can vary tremendously when we're evaluating information that we like, that we're neutral towards, or that we don't like.  How tremendously? Well, in the 2013 working paper "Motivated Numeracy and Enlightened Self-Government", some researchers asked people to work out whether patients with a rash got better when they used a new skin cream.  They provided this data:

[Image: 2×2 table of patients who did or did not use the skin cream vs whose rash got better or worse]

The trick here is that you are comparing an absolute count to a proportion.  More people got better in the "use the skin cream" group, but more people in that group also got worse. The proportion is better for those who did not use the cream (about 5:1) than for those who did use it (about 3:1). This is a classic math skills problem, because you have to really think through what question you're trying to answer, and what you are actually comparing, before you calculate. At baseline, about 40% of people in the study got this right.
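You can check the trap yourself by working the comparison both ways. The counts below are illustrative, chosen to match the roughly 3:1 and 5:1 ratios described above rather than taken from the paper:

```python
# Skin-cream problem: absolute counts point one way, proportions the other.
# Counts are illustrative, chosen to match the ~3:1 and ~5:1 ratios above.

cream_better, cream_worse = 223, 75  # patients who used the cream
none_better, none_worse = 107, 21    # patients who did not

# Absolute counts: the cream group has more improvers (223 > 107)...
print(cream_better > none_better)  # True

# ...but the proportions tell the opposite story:
cream_ratio = cream_better / cream_worse  # ~3:1
none_ratio = none_better / none_worse     # ~5:1
print(f"cream {cream_ratio:.1f}:1 vs no cream {none_ratio:.1f}:1")
# The right question is "what fraction of each group improved?",
# not "which group has the bigger raw number?"
```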

What the researchers did next was really cool. For some participants, they took the original problem, kept the numbers the same, but changed “patients” to “cities”, “skin cream” to “strict gun control laws” and “rash” to “crime”.  They also flipped the outcome around for both problems, so participants had one of four possible questions.  In one the skin cream worked, in one it didn’t, in one strict gun control worked, in one it didn’t. The numbers in the matrix remained the same, but the words around them flipped.  They also asked people their political party and a bunch of other math questions to get a sense of their overall mathematical ability. Here’s how people did when they were assessing rashes and skin cream:

[Image: graph of percent answering the skin cream version correctly, by numeracy score and political party]

Pretty much what we’d expect. Regardless of political party, and regardless of the outcome of the question, people with better math skills did better1.

Now check out what happens when people were shown the same numbers but believed they were working out a problem about the effectiveness of gun control legislation:

[Image: graph of percent answering the gun control version correctly, by numeracy score and political party]

Look at the end of that graph there, where we see the people with high mathematical ability. If using their brain power got them an answer they liked politically, they did it. However, when the answer didn't fit what they liked politically, they were no better than those with very little skill at getting the right answer.  Your intellectual capacity does NOT make you less likely to make an error….it simply makes you more likely to be a hypocrite about your errors.  Yikes.

Okay, so what kind of things should we be looking out for?

Well, this sort of thing is most common on debates where strong ethical or moral stances intersect with science or statistics. You’ll frequently see people discussing various issues, then letting out a sigh and saying “I don’t know why other people won’t just do their research!”. The problem is that if you believe something strongly already, you’re quite likely to think any research that agrees with you is more compelling than it actually is. On the other hand, research that disagrees with you will look less compelling than it may be.

This isn’t just a problem for the hoi polloi either. I just wrote earlier this week about two research groups who were accusing the other of choosing statistical methods that would support their own pet conclusions. We all do it, we just see it more clearly when it’s those we disagree with.

Why do we fall for this stuff?

Oh so many reasons.  In fact Carol Tavris has written an excellent book about this (Mistakes Were Made (but Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts) that should be required reading for everyone. In most cases though it's pretty simple: we like to believe that all of our beliefs are perfectly well reasoned and that all the facts back us up. When something challenges that assumption, we get defensive and stop thinking clearly.  There's also some evidence that the internet may be making this worse by giving us access to other people who will support our beliefs and stop us from reconsidering our stances when challenged.

In fact, researchers have found that the stronger your stance towards something, the more likely you are to hold simplistic beliefs about it (ie “there are only two types of people, those who agree with me and those who don’t”).

An amusing irony: the paper I cited in that last paragraph was widely reported on because it showed evidence that liberals are as bad about this as conservatives. That may not surprise most of you, but in the overwhelmingly liberal field of social psychology this finding was pretty unusual. Apparently when your field is >95% liberal, you mostly find that bias, dogmatism and simplistic thinking are conservative problems. Probably just a coincidence.

So what can we do about it?

Richard Feynman said it best: "The first principle is that you must not fool yourself, and you are the easiest person to fool."

If you want to see this in action, watch your enemy. If you want to really make a difference, watch yourself.

Well that got heavy.  See you next week for Part 9!

Read Part 9 here.

1. You’ll note this is not entirely true at the lowest end. My guess is if you drop below a certain level of mathematical ability, guessing is your best bet.

 

 

Proof: Using Facts to Deceive (Part 7)

Note: This is part 7 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to Part 6 here.

Okay, now we come to the part of the talk that is unbelievably hard to get through quickly. This is really a whole class, and I will probably end up putting some appendices on this series just to make myself feel better.  If the only thing I ever do in life is to teach as many people as possible the base rate fallacy, I'll be content. Anyway, this part is tough because I at least attempt to go through a few statistical tricks that actually require some explaining. This could be my whole talk, but I've decided against it in favor of some of the softer stuff. Anyway, this part is called:

Crazy Stats Tricks: False Positives, Failure to Replicate, Correlations, Etc

Okay, so what’s the problem here?

Shenanigans, chicanery, and folks otherwise not understanding statistics and numbers. I’ve made reference to some of these so far, but here’s a (not-comprehensive) list:

  1. Changing the metric (ie using growth rates vs absolute rates, saying “doubled” and hiding the initial value, etc)
  2. Correlation and causation confusion
  3. Failure to Replicate
  4. False Positives/False Negatives

They each have their own issues. Number 1 deceives by confusing people, Number 2 makes people jump to conclusions, Number 3 presents splashy new conclusions that no one can make happen again, and Number 4 involves too much math for most people but yields some surprising results.

Okay, so what kind of things should we be looking out for?

Well, each one is a little different. I touched on 1 and 2 a bit previously with graphs and anecdotes. For failure to replicate, it's important to remember that you really need multiple papers to confirm findings; one study saying something doesn't necessarily mean subsequent studies will say the same thing. The quick overview is that many published studies don't bear out. It's important to realize that any shiny new study (especially in psychology or social science) could turn out not to be reproducible, and the initial conclusions invalid. This warning is given as boilerplate "more research is needed" at the end of articles, but it's meant literally.
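One reason a single study can mislead: even when there is no real effect at all, the standard p < 0.05 cutoff flags a "significant" result about one time in twenty. This little simulation (with sample sizes of my own choosing) runs a pile of experiments where the null is true and counts the false alarms:

```python
import random

# Simulate experiments where the null is TRUE (no real difference between
# the groups) and count how often we'd still declare "significance".

random.seed(42)

def null_experiment(n=500):
    """Two groups drawn from the SAME distribution; return the z-statistic."""
    a = sum(random.random() for _ in range(n)) / n  # mean of group A
    b = sum(random.random() for _ in range(n)) / n  # mean of group B
    # Uniform(0,1) has variance 1/12; standard error of the mean difference:
    se = (2 * (1 / 12) / n) ** 0.5
    return (a - b) / se

trials = 2000
false_alarms = sum(abs(null_experiment()) > 1.96 for _ in range(trials))
print(f"{false_alarms / trials:.1%} of null studies looked 'significant'")
# Expect roughly 5%: with thousands of studies run every year, plenty of
# headline-grabbing "effects" are just this noise, and won't replicate.
```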

False positives/negatives are a different beast that I wish more people understood.  While this applies to a lot of medical research, it’s perhaps clearest to explain in law enforcement.  An example:

In 2012, a (formerly CIA) couple was in their home getting their kids ready for school when they were raided by a SWAT team. They were accused of being large scale marijuana growers, and their home was searched. Nothing was found.  So why did they come under investigation? Well, it turns out they had been seen buying gardening equipment frequently used by marijuana growers, and the police had then tested their trash for drug residue. The tests came back positive twice, and the police raided the house.

Now if I had heard this reported in a news story, I would have thought that was all very reasonable. However, the couple eventually discovered that the drug test used on their trash had a 70% false positive rate. Even if their trash had been perfectly fine, there was still about a 50% chance they'd get two positive tests in a row (and that assumes nothing in their trash was triggering this). So given a street with ZERO drug users, you could have found evidence to raid half the houses.  The worst part of this is that the courts ruled that the police themselves were not liable for not knowing that the test was that inaccurate, so their assumptions and treatment of the couple were okay. Whether that's okay is a matter for legal experts, but we should all feel a little uneasy that we're more focused on how often our tests get things right than how often they're wrong.
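That "about 50%" figure is just the false positive rate multiplied by itself, assuming the two tests are independent (in practice, whatever tripped the first test probably trips the second too, which makes it even worse):

```python
# A 70% false positive rate means even perfectly innocent trash
# "fails" two independent tests in a row about half the time.

false_positive_rate = 0.70

two_in_a_row = false_positive_rate ** 2
print(f"P(two false positives) = {two_in_a_row:.0%}")  # 49%

# For a street of entirely drug-free houses, that's the expected
# fraction of homes the tests would flag as raid-worthy:
houses = 100
print(f"~{round(houses * two_in_a_row)} of {houses} innocent houses flagged")
```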

Why do we fall for this stuff?

Well, some of this is just a misunderstanding or lack of familiarity with how things work, but the false positive/false negative issue is a very specific type of confirmation bias. Essentially, we often don't realize that there is more than one way to be wrong, and in avoiding one inaccuracy, we increase our chances of a different type of inaccuracy.  In the case of the police departments using the inaccurate tests, they likely wanted something that would detect drugs whenever they were present. They focused on making sure they'd never get a false negative (ie a test that said no drugs when there were). This is great, until you realize that they traded that for lots of innocent people potentially being searched. In fact, since there are more people who don't use drugs than people who do, the chance that someone with a positive test doesn't have drugs can actually be higher than the chance that they do….that's the base rate fallacy I was talking about earlier.
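The base rate fallacy is easiest to see with Bayes' rule in a few lines of arithmetic. The prevalence and sensitivity figures below are assumptions I've picked for illustration, not numbers from the case:

```python
# Base rate fallacy: when the condition is rare, even a sensitive test's
# positive results are mostly false positives. Numbers are illustrative.

prevalence = 0.05           # assume 5% of trash tested actually has drugs
sensitivity = 0.95          # assumed P(positive | drugs present)
false_positive_rate = 0.70  # P(positive | no drugs), as in the trash test

true_pos = prevalence * sensitivity            # positives from real drugs
false_pos = (1 - prevalence) * false_positive_rate  # positives from nothing

# Bayes: probability drugs are really there, given a positive test
p_drugs_given_positive = true_pos / (true_pos + false_pos)
print(f"P(drugs | positive test) = {p_drugs_given_positive:.0%}")
# With these numbers a positive test is wrong far more often than right:
# most positives come from the much larger drug-free group.
```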

To further prove this point, there’s an interesting experiment called the Wason Selection task that shows that when it comes to numbers in particular, we’re especially vulnerable to only confirming an error in one direction. In fact 90% of people fail this task because they only look at one way of being wrong.

Are you confused by this? That's pretty normal. So normal, in fact, that the thing we use to keep it all straight is literally called a confusion matrix, and it looks like this:

                   Condition present     Condition absent
  Test positive    True positive         False positive
  Test negative    False negative        True negative

If you want to do any learning about stats, learn about this guy, because it comes up all the time. Very few people can do this math well, and that includes the majority of doctors. Yup, the same people most likely to tell you "your test came back positive" frequently can't accurately calculate how worried you should really be.

So what can we do about it?

Well, learn a little math! Like I said, I'm thinking I need a follow up post just on this topic so I have a reference for it. However, if you're really not mathy, just remember this: there's more than one way to be wrong. Any time you reduce your chances of being wrong in one direction, you probably increase them in another. In criminal justice, if we make sure we never miss a guilty person, we might also increase the number of innocent people we falsely accuse. The reverse is also true. Tests, screenings, and judgment calls aren't perfect, and we shouldn't fool ourselves into thinking they are.

Alright, on that happy note, I’ll bid you adieu for now. See ya next week!

Read Part 8 here.

Proof: Using Facts to Deceive (Part 6)

Note: This is part 6 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to part 5 here.

Okay, I'll be honest here: this part is one of the hardest parts of my talk to cover. The issue I'm going to talk about here is another framing issue, and it has to do with which experts get quoted on which issues and in what proportions. This is a huge issue open to broad interpretations and many legitimate approaches, so I'm going to intentionally tread lightly here.  How much of this strikes you as deceptive will depend heavily on what you already believe.  Additionally, when I cover this in the classroom, I have a pretty good idea that there won't be any raging conspiracy theorists sitting in the seats.  Not so on the internet. I've been blogging off and on for about a decade now, and you would not believe how many people do key word searches so they can pop in and spew their theories….so forgive me if I speak in generalities. Oh yes, it's the controversial issue of:

Experts and Balance

Okay, so what’s the problem here?

The problem, in general, is a public misunderstanding about how science works. Not everyone is a scientist, and that’s okay. We often rely on experts to interpret information for us. This is also okay. In the age of the internet though, almost anyone can find an expert or two to back up their own view. Everyone wants to be the first to break a story, and much can get lost in the rush to be on top of the latest and greatest thing. Like, you know, evidence.

Okay, so what kinds of things should we be looking out for?

Well, there are two sides to this coin. The classic logical fallacy here is argumentum ad verecundiam, or “argument from authority”.  Kinda like this guy:

…though my three-year-old tells me he's pretty cool.  In all seriousness though, "TV doctors" love to get up and use their credentials to emphasize their points. Their reach can be enormous, but research has found that over half the claims of people like Dr. Oz are either totally unsubstantiated or flat out contradicted by the evidence. Just because someone has a certain set of credentials doesn't mean they're always right.

 

So if the popular credentialed people aren't always right, then good old common sense can guide us, right? No, sadly, that's not true either. The flip side of arguing from authority is "appeal to the common man", where you respect someone's opinion because they're not an authority. For medicine you frequently hear this as "the secret doctors don't want you to know!" or "my doctor said my child was ______, but as a mother my heart knew that was wrong" (side note: remember that mothers who turn out to be wrong almost never get interviewed). For some people, this type of argument is even stronger than the one above….but that doesn't mean it's not fallacious.

So basically, the water gets really murky because almost anyone can claim to know stuff, can throw out credentials that may or may not be valid or relevant, can throw out research that may or may not be valid, and can otherwise sound very compelling. Yeesh.

Complicating matters even further is the idea of balance and false balance. Balance is when a reporter/news cast presents two opposing sides and gives them both time to state their case. False balance is when you give two wildly unequal sides the same amount of time.

All of this can seem pretty reasonable when it comes to hotly debated topics, like say, nutrition and what we should be eating. If you want to pit the FDA vs Gary Taubes vs Carbsane, I will watch the heck out of that. But there are other issues where the debate gets a little hairier, and the stakes get much higher….like say criminal trials. Do you want a psychic on the stand getting time to explain why they think you're a murderer? Do you want them getting as much time as the forensics experts who say you aren't?

At some point we have to say it….science does back up certain opinions more than others, and some experts are more reliable than others. Where you draw the line, sadly, probably depends on what you already believe about the topic.

Why do we fall for this stuff?

Well, partially because we should. On the whole, experts are probably a pretty good bet when it comes to most scientific matters. They may be wrong at times (just ask Barry Marshall), but I have a lot of faith in science on the whole to move forward and self correct. The scientific process is quite literally man’s attempt to correct for all of our fallacies in order to move forward based on reality. It’s a lofty goal, and we’re not always perfect, but it’s a start. We listen to experts because people with more training, more experience and more context than us frequently do better at controlling their biases.

On the other hand, those of us who have been burned might start to love the anti-hero instead. The idea that a lone wolf can take on the establishment is so cool! Because truthfully sometimes the establishment sucks. People are misdiagnosed, treated rudely, and otherwise incorrectly cast aside. Sometimes “crazy” ideas do turn out to be right. Being a contrarian isn’t always the worst way to go….as my favorite George Bernard Shaw quote says “The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”  Every progress maker is a bit unreasonable at times.

So what can we do about it?

So I may have made it sound like it’s not possible to know anything. That’s not true. Science is awesome, but you have to do a little work to figure out what’s going on. So if you really care about an issue, do a little homework. Read the best and brightest minds that defend the side you don’t agree with. Don’t just read what your side says the other side is saying, read the other side. Read the best of what they have to offer….but check their sources. If they prove to be not-credible on one topic, treat them with suspicion going forward. Scientists should not have to play fast and loose with the truth to get where they need to go. Be suspicious of anyone who does this. Beware of anything that sounds too neat, too clean, too cutting edge. Science and proof move slowly.

Also, follow the money….but remember that works both ways. I work in oncology and there are people who will tell you the treatments we offer are crap and that they have better ones. Evidence to the contrary is dismissed as us not wanting to lose our money. However, the people making these claims frequently make 10 to 20 times what our doctors make. They throw out numbers that represent our whole hospital, while neglecting to mention that their personal income far exceeds any individual employee we have. People make tons of money peddling alternatives.

And if all else fails, just ask a math teacher:

No one’s questioning that a² + b² = c² stuff. Well, except this guy.

Alright, that’s it for part 6….see you next week for part 7!  Read Part 7 here.

Proof: Using Facts to Deceive (Part 5)

Note: This is part 5 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to part 4 here.

Okay! So we’re almost halfway done with the series, and we’re finally reaching the article itself! Huzzah! It may seem like overkill to spend this much time talking about articles without, you know, talking about the articles, but the sad truth is that by the time you’ve read the headline and looked at the picture, a huge amount of your opinion will already have formed. For this next section, we’re going to talk about the next thing that will probably be presented to you: a compelling anecdote. That’s why I’m calling this section:

The Anecdote Effect

Okay, so what’s the problem here?

The problem here isn’t really a problem, but a fundamental part of human nature that journalists have known for just about forever: we like stories. Since our ancestors gathered around fires, we have always used stories to illustrate and emphasize our points. Anyone who has even taken high school journalism has been taught something like this. I Googled “how to write an article”, and this was one of the first images that came up:

Check out that point #2 “Use drama, emotion, quotations, rhetorical questions, descriptions, allusions, alliteration and metaphors”. That’s how journalists are being taught to reel you in, and that’s what they do. It’s not necessarily a problem, but a story is designed to set your impressions from the get go.  That’s not always bad (and pretty much ubiquitous) but it is difficult when it leaves you with an impression that the numbers are different than they actually are.

What should we be looking out for?

Repeat after me: the plural of anecdote is not data.

From smbc.com

Article writers want you to believe that the problem they are addressing is big and important, and they will do everything in their power to make sure that their opening paragraph leaves you with that impression. This is not a bad thing in and of itself (otherwise how would any lesser known disease or problem get traction?), but it can be abused. Stories can leave you with an exaggerated impression of the problem, an exaggerated impression of the solution, or an association between two things that aren’t actually related. If you look hard enough, you can find a story that backs up almost any point you’re trying to make. Even something with a one in a million chance happens 320 times a day in the US alone.
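That “one in a million” line is just arithmetic. Here’s a minimal sketch, assuming a US population of roughly 320 million (about right when this was written) and one independent shot at the event per person per day:

```python
# Expected daily count of a one-in-a-million event across the US.
# Assumption: ~320 million people, each with one chance per day.
us_population = 320_000_000
chance_per_person = 1 / 1_000_000

expected_per_day = us_population * chance_per_person
print(expected_per_day)  # 320.0
```

So even a vanishingly rare event generates hundreds of true stories every single day, which is exactly why a single anecdote proves nothing.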

So don’t take one story as evidence. It could be chance. They could be lying, exaggerating, or mis-remembering.  I mean, I bet I could find a Mainer who could tell me a very sad story about how their wife changing their shopping habits to less processed food led to their divorce.  I could even include this graph with it:

[chart: margarine consumption vs the divorce rate in Maine]

Source.  None of this however, would mean that margarine consumption was actually driving divorce rates in any way.

Why do we fall for this stuff?

Nassim Taleb has dubbed this whole issue “the narrative fallacy”, the idea that if you can  tell a story about something, you can understand it. Stories allow us to tie a nice bow around things and see causes and effects where they don’t really exist.

Additionally, we tend to approach stories differently than we approach statistics. One of the most interesting meditations on the subject is from John Allen Paulos in the New York Times here. He has a great quote about this:

In listening to stories we tend to suspend disbelief in order to be entertained, whereas in evaluating statistics we generally have an opposite inclination to suspend belief in order not to be beguiled.

I think that sums it up.

So what can we do about it?

First and foremost, always remember that the story or statistic that opens an article is ultimately trying to sell you something, even if that something is just the story itself. Tyler Cowen’s theory is that the more you like the story, the more you should distrust it.

Even under the best of circumstances, people can’t always be trusted to accurately interpret events in their own life:

Source.

Of course, this also works the opposite way. People can be very convinced that the Tupperware they used while pregnant caused their child’s behavior problems, but that doesn’t make it true. Correlation does not prove causation even in a large data set, and especially not when it’s just one story.

It also helps to be aware of the words that are used, and to think about the numbers behind them. A word like “doubled” can mean a large increase, or it can mean your chances went from 1 in 1,000,000,000 to 1 in 500,000,000. Every time you hear a numbers word, ask what the underlying number was first. This isn’t always nefarious, but it’s worth paying attention to.
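To make the “doubled” point concrete, here’s a quick sketch using the hypothetical odds from the paragraph above:

```python
# Relative vs. absolute change: this IS a doubling of risk, but the
# absolute change is one extra case per billion people.
risk_before = 1 / 1_000_000_000   # 1 in a billion
risk_after = 1 / 500_000_000      # 1 in half a billion

relative_change = risk_after / risk_before   # 2.0 -- "your risk DOUBLED!"
absolute_change = risk_after - risk_before   # 1e-09 -- one extra case per billion
print(relative_change, absolute_change)
```

Both numbers are true; only one of them makes a good headline.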

One final thing with anecdotes: it’s an unfortunate fact of life that not everyone is honest. Even journalists with fact checkers and large budgets can totally screw up when presented with a sympathetic sounding source. This week, I had the bizarre experience of seeing a YouTube video of a former patient whose case several coworkers worked on. She was eventually “cured” by some alternative medicine, and has taken to YouTube to tell her story. Not. One. Part. Of. What. She. Said. Was. True. I am legally prohibited from even pointing you in her direction, but I was actually stunned at the level of dishonesty she showed. She lied about her diagnosis, prognosis and everything in between. I had always suspected that many of these anecdotes were exaggerated, but it was jarring to see someone flat out lie so completely. I do believe that most issues with stories are more innocent than that, but don’t ever rule out “they are making it up”, especially in the more random corners of the internet.

By the way, at this point in the actual talk, I have a bit of a break-the-fourth-wall moment. I point out that I’ve been using the “tell a story to make your point” trick for nearly every part of this talk, and that they are most certainly appreciating it more than if I had just come in with charts and equations. The more astute students are probably already thinking this, and if they’re not, it’s good to point out how it immediately relates.

Until next week!  Read Part 6 here.

Pictures: Trying to Distract You (Part 4)

Note: This is part 4 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to part 3 here.

When last we met, we covered what I referred to as “narrative pictures”, or pictures that were being used to add to the narrative part of the story. In this section, we’re going to start looking at pictures where the problems are more technical in nature, i.e. graphs and sizing. This is really a blend of a picture problem and a proof problem, because these deceptions use numbers, not narratives. Since most of these issues are a problem of scales or size, I’m calling this section:

Graphs: Changing the View

Okay, so what’s the problem here?

The problem, once again, is a little bit of marketing. Have you ever judged the quality of a company based on how slick their website looks? Or judged a book by its cover, to coin a phrase? Well, it turns out we’re very similar when it comes to judging science. In a 2014 study (Update: the lab that performed this study has come under review for some questionable data practices. It is not clear if this study is affected, but you can read the details of the accusations here and here), researchers gave people two articles on the effectiveness of a made up drug. The text of both articles was the same, but one had a graph that showed the number of people the drug helped, and the other did not. Surprisingly, 97% of the people who saw the graph believed the drug worked, whereas only 68% of people who read the text alone did. The researchers did a couple of other experiments, and basically found that not just graphs, but ANY “science-ish” pictures (chemical formulas, etc.) influenced what people thought of the results.

So basically, people add graphs or other “technical” pictures to lend credibility to their articles and infographics, and you need to watch out.

Okay, so what kind of things should we be looking out for?

Well, in many cases, this isn’t really a problem. Graphs or charts that reiterate the point of the article are not necessarily bad. If the data warrants it, a chart reiterating the point is fantastic; it’s how nearly every scientific paper operates, and it’s not inherently deceptive. But be aware that these pictures, all by themselves, will influence your perception of the data.

There are some cases though, where the graph is a little trickier. Let’s go through a few:

Here’s one from Jamie Bernstein over at Skepchick, who showed this great example in a Bad Chart Thursday post:

pica

Issue: The graph y-axis shows percent of growth, not absolute value. This makes hospitalized pica cases look several times larger than anorexia or bulimia cases. In reality, hospitalized anorexia cases are 5 times as common and bulimia cases are 3 times as common as pica cases. These numbers are given at the bottom, but the graph itself could be tricky if you don’t read it carefully.
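The pica chart’s trick, putting percent growth on the y-axis instead of counts, is easy to reproduce. Here’s a minimal sketch with illustrative numbers (not the real hospitalization counts):

```python
# A rare condition growing fast vs. a common one growing slowly.
# A percent-growth chart makes the rare condition look like the big problem.
rare_before, rare_after = 1_000, 2_000       # rare condition: 100% growth
common_before, common_after = 5_000, 5_500   # common condition: 10% growth

rare_growth = (rare_after - rare_before) / rare_before          # 1.0, i.e. 100%
common_growth = (common_after - common_before) / common_before  # 0.1, i.e. 10%

# Plot the growth rates and the rare condition towers over the common one,
# even though the common condition still has nearly 3x as many cases.
print(rare_growth, common_growth, common_after / rare_after)
```

Same data, two honest-looking charts, two opposite impressions. That’s why you always check what the y-axis is actually measuring.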

How about this screen shot from Fox News, found here?

 Issue: Visually, this chart shows the tax rate will quadruple or quintuple if the Bush tax cuts aren’t extended. If the axis started at zero however, the first bar would be about 90% the size of the second one.
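Here’s roughly how to quantify that distortion. The 35% and 39.6% rates are the ones in the chart; the 34% axis floor is my assumption about how the chart was drawn:

```python
# True bar ratio vs. the ratio a truncated axis actually draws.
rate_now, rate_later = 35.0, 39.6
axis_floor = 34.0   # assumed starting point of the truncated y-axis

true_ratio = rate_now / rate_later                                 # ~0.88
drawn_ratio = (rate_now - axis_floor) / (rate_later - axis_floor)  # ~0.18

print(round(true_ratio, 2), round(drawn_ratio, 2))  # 0.88 0.18
```

So a roughly 13% increase gets drawn as a bar more than five times taller, which is exactly the quadruple-or-quintuple impression the chart gives.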

How about this graph shared by the National Review?

Issue: The problem with this one is that the axis does start with zero. The Huffington Post did a good breakdown of this graph here, along with what some other graphs would look like if you set the scale that large. Now of course there can be legitimate discussion over where a fair axis scale would be, but you should make sure the visual matches the numbers.

And one more example that combines two issues in one:

See those little gas pumps right there? They’ve got two issues going on. The first is a start date that had an unusually low gas price:

gas

The infographic implies that Obama sent gas prices through the roof….but as we can see gas prices were actually bizarrely low the day he took office.  Additionally, the gas pumps involved are deceptive:

b1fb2-gas

If you look, they’re claiming to show that prices doubled. However, the second pump is actually four times the size of the first: they doubled both the height and the width:

76fed-gas2
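The pump trick is just geometry: scale both dimensions by 2 and the area, which is what your eye actually compares, grows by 4. A one-line sketch:

```python
# Doubling height AND width quadruples the area the eye compares.
height_scale, width_scale = 2, 2
area_scale = height_scale * width_scale
print(area_scale)  # 4
```

An honest “prices doubled” graphic would double only one dimension, or use a bar twice as tall at the same width.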

While I used a lot of political examples here, this isn’t limited to politics. Andrew Gelman caught the CDC doing it here, and even he couldn’t figure out why they’d have mucked with the axis.

There are lots of repositories for these, and Buzzfeed even did a listicle here if you want more. It’s fun stuff.

Why do we fall for this stuff?

Well, as we’ve said before, visual information can reinforce or skew your perceptions, and visual information with numbers can intensify that effect. This isn’t always a bad thing…after all, nearly every scientific paper ever published includes charts and graphs. When you’re reading for fun though, it’s easy to let these things slip by. If you’re trying to process text, numbers, and implications AND read the x and y axes and make sure the numbers are fairly portrayed, it can be a challenge.

So what can we do about it?

A few years ago, I asked a very smart colleague how he was able to read and understand so many research papers so quickly. He seemed to read and retain a ton of highly technical literature, while I always found my attention drifting and would miss things. His advice was simple: start with the graphs. See, I would always try to read papers from start to finish, looking at graphs only when they were cited. He suggested using the graphs to get a sense of the paper, then reading the paper with an eye towards explaining the graphs. I still do this, even when I’m reading for fun. If there’s a graph, I look at it first when I know nothing, then read the article to see if my questions about it get answered. It’s easier to notice discrepancies this way. At the very least, it reminds you that the graph should be there to help you. Any evidence that it’s not should make you suspicious of the whole article and the author’s agenda.

So that wraps up our part on Pictures! In part 5, we’ll finally reach the text of the article.

Read Part 5 here.

Pictures: How They Try to Distract You (Part 3)

Note: This is part 3 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to part 2 here.

As any good internet person knows, you can’t have a good story without a featured image…pictures being worth a thousand words and all that jazz. Nine times out of ten if you see a story pop up on Facebook, it’s going to have an image attached.  Those deceptive and possibly false memes I was talking about in Part 1? All basically words on images. For most stories, there are really two different types of images: graphs or technical images and what I’m going to call “narrative” images.  In this section I’m going to cover the narrative images or what I call:

Narrative Images: Framing the Story

Okay, so what’s the problem here?

In Part 2 I mentioned a study that was mostly about headlines, but that had a really interesting point about pictures as well. In that study, they purposefully mismatched the headline and the picture in a story about a crime. Basically they had the headline read something like “Father of two murdered” and showed a picture of the criminal, or they had it read “Local man murders father of two” and showed a picture of the victim. Later, people who had read a “victim” headline with a picture of a murderer actually felt more sympathy towards the murderer, and those who read a “criminal” headline with a picture of a victim liked the victim less. That’s the power of pictures. We know this, which is why newspapers can end up getting sued for putting the wrong picture on a cover even if they don’t mention any names.

Any picture used to frame a story will potentially end up influencing how we remember that story. A few weeks ago, there was a big kerfuffle over some student protests at Oberlin. The NY Post decided to run the story with a picture of Lena Dunham, an alum of the college who is in no way connected to the protests. In a couple of months, my bet is a good number of people who read that story will end up remembering that she had something to do with all this.

What should we be looking out for?

When you read an article, at the very least you should ask how the picture matches the story. Most of the time this will be innocuous, but don’t forget for a minute the picture is part of an attempt to frame whatever the person is saying.  This can be deviously subtle too.  One of the worst examples I ever heard of was pointed out by Scott Alexander after a story about drug testing welfare recipients hit the news.  The story came with lots of headline/picture combos like this one from Jezebel:

Drugtesting

Now check that out! Only .2% of welfare applicants failed a drug screening! That’s awesome.  But what Scott Alexander pointed out in that link up there is that urine drug testing actually has a false positive rate higher than .2%.  This means if you tested a thousand people that you knew weren’t on drugs, you’d get more than a .2% failure rate.  So what happened here? How’d it get so low?

The answer lies in that technically-not-inaccurate word “screening”.  Once you saw that picture, your brain probably filled in immediately what “screening” meant, and it conjured up a picture of a bunch of people taking a urine drug test. The problem is, that’s not what happened. The actual drug screening used here was a written drug screening. That’s what those people failed, and that’s why we didn’t get a whole bunch of false positives.  Now I have no idea if the author did this on purpose or not, but it certainly leaves people with a much different impression than the reality.
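Scott Alexander’s argument is really just a base-rate calculation. Here’s a sketch with an assumed, purely illustrative 1% urine-test false positive rate (the real rate varies by test, but his point holds for any rate above 0.2%):

```python
# If even a completely drug-free population would fail at the test's false
# positive rate, a reported 0.2% failure rate couldn't have come from urine
# tests -- so "screening" must mean something else.
applicants = 1_000
assumed_false_positive_rate = 0.01  # illustrative assumption, not a measured rate
reported_failure_rate = 0.002       # the 0.2% from the headline

expected_false_positives = applicants * assumed_false_positive_rate  # 10.0
reported_failures = applicants * reported_failure_rate               # 2.0
print(expected_false_positives, reported_failures)
```

Ten expected failures from false positives alone versus two reported failures total: the arithmetic itself tells you the picture of a urine cup is misleading you about what “screening” meant.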

Why do we fall for this stuff?

Every time we see a picture, we’re processing information with a slightly different part of our brain. In the best case scenario, this enhances our understanding of information and engages us more fully. In the worst case scenario, it skews our memory of written information, like in the murderer/victim study I mentioned above. This actually works in both directions….asking people questions containing bad verbal information can skew their visual memory of events. Even people who are quite adept at interpreting numbers, words and data can forget to consider the picture attached.

So what can we do about it?

Awareness is key. Any time you see a picture, you have to think “what is this trying to get me to think?” You have to remember that we are visual creatures, and if text worked better than visuals, commercials wouldn’t look the way they do.

Now, before I go, I have to say a few words about infographics. These terrible things are an attempt to take an entire topic and turn it into nothing but a narrative photo. They suck. I hate them. Everything I say in this entire series can be assumed to go TRIPLE for infographics. Every time you see an infographic, you should remember that 95% of them are inaccurate. I just made that up, but if you keep that rule in mind you’ll never be caught off guard. Think Brilliant has the problem summed up very nicely with this very meta infographic about the problem with infographics:

Think Brilliant has more here.

The key with infographics is much like those little picture memes that are everywhere: consider them wrong at baseline, and only trust after verifying.

If you want more, read Part 4 here.