Internet Science: Some Updates

Well, it’s that time of year again: back to school. Next week I am again headed back to high school to give the talk that spawned my Intro to Internet Science series last year.

This is my third year talking to this particular class, and I have a few updates that I thought folks might be interested in. It makes more sense if you read the series (or at least the intro to what I try to do in this talk), so if you missed that you can check it out here.

Last year, the biggest issue we ran into was kids deciding they couldn't believe ANY science, which I wrote about here. We're trying to correct that a bit this year without losing the "be skeptical" idea. Since education research kinda has a replication problem, all the things we're trying are generally just a discussion between the teacher and me.

  1. Skin in the game/eliminating selection bias. In order to make the class a little more interactive, I've normally given the kids a quiz to kick things off. We've had some trouble over the years getting the kids' answers compiled, so this year we're actually giving them the quiz ahead of time. This means I'll have the results available before the talk, so I can show them during it. I'm hoping this will help me figure out my focus a bit. When I only know the feedback of the kids who want to raise their hands, it can be hard to know which issues really trip the class up.
  2. Focus on p-values and failure to replicate. In the past, during my Crazy Stats Tricks part, I've tried to cram a lot in. I've decided this is too much, so I'm just going to include a bit about failed replications. Specifically, I'm going to talk about how popular studies keep getting repeated even after it turns out they didn't hold up. Talking about Wonder Woman and power poses is a pretty good attention getter, and I like to point out that the author's TED talk page contains no disclaimer that her study failed to replicate. It does, however, tell us it's been viewed 35,000,000 times.
  3. Research checklist. As part of this class, these kids are eventually going to have to write a research paper. This is where the whole "well, we can't really know anything" issue got us last year. So to end the talk, we're going to give the kids this research paper checklist, which will hopefully give them some guidance. Point #2 on the checklist is "Be skeptical of current findings, theories, policies, methods, data and opinions", so our thought is to basically say "okay, I got you through #2….now you have the rest of the year to work through the rest". I am told that many of the items on that list meet the learning objectives for the class, so this should give the teacher something to work from for the rest of the year as well.

Any other thoughts or suggestions (especially from my teacher readers!) are more than welcome. Wish me luck!

Intro to Internet Science: A Postlude

All right, we did it! 10 topics, 10 weeks, and a whole slew of examples. I've had a lot of fun, gotten some great feedback, and had some very kind comments from some very lovely teachers. It's also given me some good ideas for ongoing posts. In the talks I give I almost never have time to get into any actual math, but hey, what's the point of having a blog if you can't go on and on about the stuff you like? I'll probably be calling that "crazy stats tricks", and at a minimum I'll cover some of the topics I complained about in Part 7. Any suggestions for that series, or feedback on this series, are welcome either in the comments or on the feedback page.

Now that I have that out of the way, let's take a moment to reflect on what we've learned, eh? Overall, there are four P's:

Presentation
Pictures
Proof
People

We spent a little time on all four:

Presentation: How They Reel You In
In Parts 1 and 2, we learned how quickly the internet spreads completely false information, and to always make sure what you are quoting is actually real. We also learned that headlines are marketing tools, and to be wary of what they are selling.

Pictures: Trying to Distract You
In Parts 3 and 4, we added some visuals. Narrative pictures, or those that help illustrate the story, can set impressions that can be ridiculously hard to correct. It gets even worse when you add graphs. Even a little bit of technical information can make things look more credible than they deserve.

Proof: Using Facts to Deceive
In Parts 5, 6, and 7, we covered “the truths people use to lie with”. Here we covered information that is true, but used to give false impressions. We started with stories and anecdotes, which are often used to humanize and emphasize various points. Next we moved on to experts and balance, and how we need to be careful who we listen to and who we dismiss.  Finally I gave a woefully short and incomplete overview of some statistical tricks that get used a lot.

People: Our Own Worst Enemy
And now we come to the part where we have only ourselves to blame. First we took a look at how our own pre-existing beliefs color our views of facts and even impact our ability to do math. Next, we took a look at how our tendency to not be entirely honest can screw up surveys and research based on them. Finally, we had a bit of a discussion about the limits of scientific understanding, research ethics, and things we may never know.

And that’s a wrap!

People: Our Own Worst Enemy (Part 10)

Note: This is part 10 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to Part 9 here.

Wow folks….10 weeks later, we are coming to the end. This is a shorter one than the rest of the series, but I think it's still important. Up until now I've been referencing science as though it could always provide the guidance we need if we just know where to look. Unfortunately, that's not always true. It's at this point that I like to step back and get a little bit reflective about evidence and science in general, and how we acknowledge what we may never know. That's why I call this section:

Acknowledging our Limitations

Okay, so what’s the problem here?

The problem is that just like research and evidence can be manipulated, so can lack of research and evidence. The reality is that there are practical, financial, moral and ethical issues facing all researchers, and there are limits on both what we know at the moment and what we can ever know. A lack of evidence doesn’t always mean someone’s hiding something. Unfortunately, none of this stops people from claiming it does. This normally comes up when someone is explaining why their opponent’s evidence doesn’t count.

What kinds of things should we be looking out for? 

Mostly calls for more research. It’s tricky business because sometimes this is a perfectly reasonable claim, but sometimes it’s not. Sometimes it’s just a smokescreen for an agenda.

For example, in 2012 two doctors from the CDC were called in front of Congress to discuss vaccine safety. As part of the hearing, Congressman Bill Posey asked the doctors if they had done a study on autism in vaccinated vs unvaccinated children. You can read the whole exchange here, but the answer to the question was no. Why? Well, a double-blind placebo controlled trial of vaccines would be unethical to do. For non-fatal diseases you can sometimes run them, but you can't knowingly put people in the way of harm no matter how much you need or want the data. Giving a placebo (i.e. fake) measles vaccine to a child just to see if they get sick would never be approved. The NIH requires studies to have a "fair risk-benefit ratio", so there either has to be low risk or high benefit. I work in oncology and have seen trials closed to enrollment immediately because data suggested a new treatment might have more side effects than we suspected.

A Congressman looking in to vaccine safety should know this, but to anyone listening it might have sounded like a reasonable question. Why aren’t we doing the gold standard research? What are they hiding?

Other examples of this include demands for evidence such as "prove to me my treatment DOESN'T work".

Why do we fall for this stuff?

Well, mostly because many of us have never considered it. If you're not working in research, it can be hard to notice when someone's asking for something that would never get past the IRB. Even when something would be ethical, it's hard to realize how tricky some studies would be to run. In something like nutrition science this is rampant. I mean, how much money would it take for you to change your diet for the next 30 years so scientists could study you?

I took a “Statistics in Clinical Trials” class a few years ago, and I was surprised that nearly half of it was really an ethics class. Every two years I (and everyone else at my institute) also have to take 8 hours of training in human subject research, just to make sure I stay clear on the guidelines. It’s not easy stuff, but you have to remember the data can’t always come first.

So what can we do about it?

Well first, recognize these limitations exist. We can and should always be refining our research, but we have to respect limits. Read about famous cases where this has gone wrong, if you’ve got the stomach for it. The Tuskegee Syphilis Experiments  and the Doctor’s Trial that resulted in the Nuremberg Code are two of the most famous examples of this, but there are others.  The more you know, the more you’ll be prepared for this one when you see it.


All right, that wraps up part 10! I think I’m going to cut this off here and do my wrap up next week. See you then!

People: Our Own Worst Enemies (Part 9)

Note: This is part 9 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to Part 8 here.

Okay, we’re in the home stretch here! In part 8 I talked about how we as individuals work to confuse ourselves when we read and interpret data. Today I’m going to talk about how we as a society collectively work to undermine our own understanding of science, one little step at a time.  Oh that’s right, we’re talking about:

Surveys and Self Reporting

Okay, so what’s the problem here?

The problem is that people are weird. Not any individual really (ed note: this is false, some people really are weird), but collectively we have some issues that add up. Nowhere is this more evident than on surveys. There is something about those things that brings out the worst in us. For example, in this paper from 2013, researchers found that 59% of men and 67% of women in the National Health and Nutrition Examination Survey (NHANES) database had reported calorie intakes that were "physiologically implausible" and "incompatible with life". The NHANES database has been widely used for nutrition research for about 40 years, and these findings have caused some to call for an end to self-reporting in nutrition research. Now, I doubt any individual was intending to mislead, but as a group those effects add up.

Nutrition isn't the only field with a problem, though. Any field that studies something people think they can make themselves look better at has an issue. For example, the Bureau of Labor Statistics found that most people exaggerate how many hours they work per week. People who say they work 40 hours typically work about 37. People who say they work 75 hours a week typically work about 50. One or two people exaggerating doesn't make a difference, but when it's a whole lot of people, it adds up.

So what kinds of things should we be looking out for?

Well, any time something says it's based on a survey, you may want to get the particulars. Before we even get to the reporting bias I mentioned above, we also have to contend with questions that are asked one way and reported another. For example, back in 2012 I wrote about an article that said "1/3rd of women resent their husbands don't make more money". When you read the original question, it asked if they "sometimes" resent that their husband doesn't make more money. It's a one-word difference, but it changes the whole tone of the question. Every time you see a headline about what "people think", be a little skeptical. Especially if it looks like this:

[Image: poll headline claiming 12 million Americans believe lizard people run the country]

That one’s from a survey about conspiracy theories, and they got that 12 million number from extrapolating out the 4% of respondents to the survey who said they believed in lizard people to the entire US population.  In the actual survey, this represented 50 people.  Do you think it’s more plausible that the pollsters found 50 people who believed in lizard people or 50 people who thought this was an amusing thing to say yes to?
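The extrapolation itself is just one multiplication, which is exactly why it's so fragile. A quick sketch (the sample size and population figures here are my own round-number assumptions, not the poll's exact ones):

```python
# How a handful of survey respondents becomes "12 million Americans".
# Sample size and population are illustrative assumptions.

sample_size = 1250          # roughly the number of poll respondents
believers_in_sample = 50    # people who answered "yes" to lizard people
us_population = 310_000_000

share = believers_in_sample / sample_size   # 0.04, i.e. 4%
extrapolated = share * us_population        # ~12.4 million

print(f"{share:.0%} of the sample -> {extrapolated / 1e6:.1f} million people")
```

Fifty trolls and twelve million believers are the same data point; only the multiplier differs.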

But people who troll polls aren't the only problem; polling companies play this game too, asking questions designed to grab a headline. For example, a recent poll found that 10% of college graduates believe a woman named Judith Sheindlin sits on the Supreme Court. College graduates were given a list of names and told to pick the one who was a current Supreme Court justice. So what's the big deal, other than a wrong answer? Well, apparently Judith Sheindlin is the real-life name of "Judge Judy", a TV show judge. News outlets had a field day with the "college grads think Judge Judy is on the Supreme Court" headlines. However, the original question never used the phrase "Judge Judy", only the nearly unrecognizable name "Judith Sheindlin". The Washington Post thankfully called this out, but all the headlines had already run. Putting a little-known celebrity name in your question and then writing the headline with the well-known name is beyond obnoxious. It's a question designed to make people look dumb and make everyone reading feel superior. I mean, quick: who is Caryn Elaine Johnson? Thomas Mapother IV? People taking a quiz will often guess things that sound vaguely right or familiar, and I wouldn't read too much into it.

Why do we fall for this stuff?

This one I fully blame on the people reporting things for not giving proper context. This is one area where journalists really don't seem to be able to help themselves. They want the splashy headline, methodology or accuracy be damned. They're playing to our worst tendencies and desires….the desire to feel better about ourselves. I mean, it's really just a basic ego boost. If you know that Judge Judy isn't on the Supreme Court, then you must clearly be smarter than all those people who didn't, right?

So what can we do about it?

The easiest thing to do is not to trust the journalists. Don't let someone else tell you what people said; try to find the question itself. Good surveys will always provide the actual questions that they asked people. Remember that tiny word shifts can change answers enormously. Words like "sometimes", "maybe" and "occasionally" can be used up front, then dropped later when reported. Even more innocuous word choices can make a difference. For example, in 2010 CBS found that asking if "gays and lesbians" should be able to serve in the military instead of "homosexuals" causes quite the change in people's opinions:

[Image: CBS poll results comparing support for "gays and lesbians" vs "homosexuals" serving in the military]

So watch the questions, watch the wording, watch out for people lying, and watch out for the reporting.  Basically, paranoia is just good sense when lizard people really are out to get you.

See you in Week 10! Read Part 10 here.

People: Our Own Worst Enemy (Part 8)

Note: This is part 8 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here or go back to Part 7 here.

I love this part of the talk because I get to present my absolute favorite study of all time. Up until now I’ve mostly been covering things about how other people are trying to fool you to get them to your side, but now I’m going to wade in to how we seek to fool ourselves.  That’s why I’m calling this part:

Biased Interpretations and Motivated Reasoning

Okay, so what’s the problem here?

The problem here, my friend, is you. And me. Well, all of us really…..especially if you're smart. The unfortunate truth is that for all the brain power we put towards things, our application of that brain power can vary tremendously when we're evaluating information that we like, that we're neutral towards, or that we don't like. How tremendously? Well, in 2013, in the working paper "Motivated Numeracy and Enlightened Self-Government", some researchers asked people whether patients with a rash got better when they used a new skin cream. They provided this data:

[Image: 2×2 table of patients who did or did not use the skin cream vs rashes that got better or got worse]

The trick here is that you are comparing absolute counts to proportions. More people got better in the "use the skin cream" group, but more people also got worse. The proportion of better to worse is higher for those who did not use the cream (about 5:1) than for those who did use it (about 3:1). This is a classic math-skills problem, because you have to really think through what question you're trying to answer before you calculate, and what you are actually comparing. At baseline, about 40% of people in the study got this right.
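To see the trick in numbers, here's a sketch using illustrative counts chosen to match the ratios described above (not the study's exact figures):

```python
# Absolute counts vs proportions, with made-up numbers matching the
# roughly 3:1 and 5:1 better-to-worse ratios described in the text.

cream = {"better": 225, "worse": 75}      # used the skin cream: ~3:1
no_cream = {"better": 105, "worse": 21}   # did not use it:      ~5:1

# Absolute comparison (the misleading one): more cream users improved.
print(cream["better"] > no_cream["better"])

# Proportional comparison (the right one): what fraction of each group improved?
cream_rate = cream["better"] / (cream["better"] + cream["worse"])
no_cream_rate = no_cream["better"] / (no_cream["better"] + no_cream["worse"])
print(f"cream: {cream_rate:.0%}, no cream: {no_cream_rate:.0%}")
```

The group that skipped the cream improved at a higher rate, even though it has fewer improvers in absolute terms, because it is a much smaller group.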

What the researchers did next was really cool. For some participants, they took the original problem, kept the numbers the same, but changed “patients” to “cities”, “skin cream” to “strict gun control laws” and “rash” to “crime”.  They also flipped the outcome around for both problems, so participants had one of four possible questions.  In one the skin cream worked, in one it didn’t, in one strict gun control worked, in one it didn’t. The numbers in the matrix remained the same, but the words around them flipped.  They also asked people their political party and a bunch of other math questions to get a sense of their overall mathematical ability. Here’s how people did when they were assessing rashes and skin cream:

[Image: graph of accuracy on the skin cream version of the question by numeracy, for both parties]

Pretty much what we'd expect. Regardless of political party, and regardless of the outcome of the question, people with better math skills did better.[1]

Now check out what happens when people were shown the same numbers but believed they were working out a problem about the effectiveness of gun control legislation:

[Image: graph of accuracy on the gun control version of the question by numeracy, for both parties]

Look at the end of that graph there, where we see the people with high mathematical ability. If using their brain power got them an answer they liked politically, they did it. However, when the answer didn't fit what they liked politically, they were no better than those with very little skill at getting the right answer. Your intellectual capacity does NOT make you less likely to make an error….it simply makes you more likely to be a hypocrite about your errors. Yikes.

Okay, so what kind of things should we be looking out for?

Well, this sort of thing is most common in debates where strong ethical or moral stances intersect with science or statistics. You'll frequently see people discussing various issues, then letting out a sigh and saying "I don't know why other people won't just do their research!". The problem is that if you already believe something strongly, research that agrees with you will look more compelling than it actually is, while research that disagrees with you will look less compelling than it may be.

This isn't just a problem for the hoi polloi, either. Just earlier this week I wrote about two research groups who were accusing each other of choosing statistical methods that would support their own pet conclusions. We all do it; we just see it more clearly in those we disagree with.

Why do we fall for this stuff?

Oh, so many reasons. In fact, Carol Tavris has co-written an excellent book about this (Mistakes Were Made (but Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts) that should be required reading for everyone. In most cases though it's pretty simple: we like to believe that all of our beliefs are perfectly well reasoned and that all the facts back us up. When something challenges that assumption, we get defensive and stop thinking clearly. There's also some evidence that the internet may be making this worse by giving us access to other people who will support our beliefs and stop us from reconsidering our stances when challenged.

In fact, researchers have found that the stronger your stance towards something, the more likely you are to hold simplistic beliefs about it (ie “there are only two types of people, those who agree with me and those who don’t”).

An amusing irony: the paper I cited in that last paragraph was widely reported on because it showed evidence that liberals are as bad about this as conservatives. That may not surprise most of you, but in the overwhelmingly liberal field of social psychology this finding was pretty unusual. Apparently when your field is >95% liberal, you mostly find that bias, dogmatism and simplistic thinking are conservative problems. Probably just a coincidence.

So what can we do about it?

Richard Feynman said it best: "The first principle is that you must not fool yourself, and you are the easiest person to fool."

If you want to see this in action, watch your enemy. If you want to really make a difference, watch yourself.

Well that got heavy.  See you next week for Part 9!

Read Part 9 here.

[1] You'll note this is not entirely true at the lowest end. My guess is that if you drop below a certain level of mathematical ability, guessing is your best bet.


Proof: Using Facts to Deceive (Part 7)

Note: This is part 7 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to Part 6 here.

Okay, now we come to the part of the talk that is unbelievably hard to get through quickly. This is really a whole class, and I will probably end up putting some appendices on this series just to make myself feel better. If the only thing I ever do in life is teach as many people as possible the base rate fallacy, I'll be content. Anyway, this part is tough because I at least attempt to go through a few statistical tricks that actually require some explaining. This could be my whole talk, but I've decided against that in favor of some of the softer stuff. Anyway, this part is called:

Crazy Stats Tricks: False Positives, Failure to Replicate, Correlations, Etc

Okay, so what’s the problem here?

Shenanigans, chicanery, and folks otherwise not understanding statistics and numbers. I’ve made reference to some of these so far, but here’s a (not-comprehensive) list:

  1. Changing the metric (ie using growth rates vs absolute rates, saying “doubled” and hiding the initial value, etc)
  2. Correlation and causation confusion
  3. Failure to Replicate
  4. False Positives/False Negatives

They each have their own issues. Number 1 deceives by confusing people, Number 2 makes people jump to conclusions, Number 3 presents splashy new conclusions that no one can make happen again, and Number 4 involves too much math for most people but yields some surprising results.

Okay, so what kind of things should we be looking out for?

Well, each one is a little different. I touched on 1 and 2 a bit previously with graphs and anecdotes. For failure to replicate, the quick overview is that many published findings don't hold up: you really need multiple papers to confirm a finding, and having one study say something doesn't necessarily mean subsequent studies will say the same thing. It's important to realize that any shiny new study (especially in psychology or social science) could turn out not to be reproducible, and the initial conclusions invalid. This warning is given as a boilerplate "more research is needed" at the end of articles, but it's meant literally.

False positives/negatives are a different beast that I wish more people understood.  While this applies to a lot of medical research, it’s perhaps clearest to explain in law enforcement.  An example:

In 2012, a (formerly CIA) couple was in their home getting their kids ready for school when they were raided by a SWAT team. They were accused of being large-scale marijuana growers, and their home was searched. Nothing was found. So why did they come under investigation? Well, it turns out they had been seen buying gardening equipment frequently used by marijuana growers, and the police had then tested their trash for drug residue. The trash tested positive twice, and the police raided the house.

Now if I had heard this reported in a news story, I would have thought that was all very reasonable. However, the couple eventually discovered that the drug test used on their trash has a 70% false positive rate. Even if their trash had been perfectly fine, there was still about a 49% chance (0.7 × 0.7) they'd get two positive tests in a row (and that assumes nothing in their trash was triggering this). So given a street with ZERO drug users, you could have found evidence to raid nearly half the houses. The worst part is that the courts ruled the police themselves were not liable for not knowing the test was that inaccurate, so their assumptions and treatment of the couple were okay. Whether that's okay is a matter for legal experts, but we should all feel a little uneasy that we're more focused on how often our tests get things right than how often they get things wrong.
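The math on those two "confirming" positive tests is short enough to show in full:

```python
# With a 70% false positive rate, two independent positive results on
# perfectly clean trash are not unlikely at all.

false_positive_rate = 0.70

p_two_positives = false_positive_rate ** 2
print(f"Chance clean trash tests positive twice: {p_two_positives:.0%}")

# On a street of 100 houses with zero drug users, the expected number of
# houses "confirmed" by two positive tests:
houses = 100
print(f"Expected falsely flagged houses: {houses * p_two_positives:.0f}")
```

Repeating a bad test doesn't rescue it nearly as much as intuition suggests; two coin-flips that both favor "guilty" are still mostly coin-flips.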

Why do we fall for this stuff?

Well, some of this is just a misunderstanding or lack of familiarity with how things work, but the false positive/false negative issue is a very specific type of confirmation bias. Essentially, we often don't realize that there is more than one way to be wrong, and in avoiding one inaccuracy, we increase our chances of a different type of inaccuracy. In the case of the police departments using the inaccurate tests, they likely wanted something that would detect drugs whenever they were present. They focused on making sure they'd never get a false negative (ie a test that said no drugs when there were drugs). This is great, until you realize that they traded that for lots of innocent people potentially being searched. In fact, since there are more people who don't use drugs than people who do, the chance that someone with a positive test doesn't have drugs can actually be higher than the chance that they do….that's the base rate fallacy I was talking about earlier.

To further prove this point, there's an interesting experiment called the Wason selection task that shows that when it comes to numbers in particular, we're especially vulnerable to checking for errors in only one direction. In fact, about 90% of people fail this task because they only look at one way of being wrong.
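If you want to try the logic yourself, here's the classic card version of the task sketched in code (the card values are the standard textbook example, not from any particular study). Rule to test: "if a card has a vowel on one side, it has an even number on the other." Which cards must you flip?

```python
# The Wason selection task, classic card form. Most people flip the vowel
# and the even number (confirming), when the correct picks are the vowel
# and the ODD number (the only cards that could falsify the rule).

cards = ["A", "K", "4", "7"]

def must_flip(card: str) -> bool:
    if card.isalpha():
        # A vowel could hide an odd number (rule violated) -> flip it.
        # A consonant can't violate the rule, whatever is on the back.
        return card in "AEIOU"
    # An even number can't violate the rule even if the back is a vowel.
    # An odd number could hide a vowel (rule violated) -> flip it.
    return int(card) % 2 == 1

print([c for c in cards if must_flip(c)])  # ['A', '7']
```

Flipping the "4" can only confirm the rule, never refute it, which is exactly the one-directional error checking the task exposes.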

Are you confused by this? That's pretty normal. So normal, in fact, that the tool we use to keep it all straight is literally called a confusion matrix: a 2×2 table crossing what the test said (positive or negative) against what was actually true.

If you want to do any learning about stats, learn about this one, because it comes up all the time. Very few people can do this math well, and that includes the majority of doctors. Yup, the same people most likely to tell you "your test came back positive" frequently can't accurately calculate how worried you should really be.
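To make that concrete, here's a sketch for a hypothetical screening test; the prevalence, sensitivity, and specificity figures are made up for illustration:

```python
# A confusion matrix for a hypothetical screening test, and the question
# doctors famously get wrong: "my test is positive -- how worried should I be?"

population = 100_000
prevalence = 0.01     # 1% actually have the condition
sensitivity = 0.90    # P(test positive | sick)
specificity = 0.91    # P(test negative | healthy)

sick = population * prevalence
healthy = population - sick

true_pos = sick * sensitivity            # sick, correctly flagged
false_neg = sick - true_pos              # sick, missed
false_pos = healthy * (1 - specificity)  # healthy, wrongly flagged
true_neg = healthy - false_pos           # healthy, correctly cleared

# P(actually sick | positive test) -- the base rate fallacy in one line:
ppv = true_pos / (true_pos + false_pos)
print(f"Positives who are actually sick: {ppv:.0%}")
```

Even with a test that catches 90% of cases, most positives come from the huge healthy majority, so a positive result here means less than a 10% chance of actually being sick.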

So what can we do about it?

Well, learn a little math! Like I said, I'm thinking I need a follow-up post just on this topic so I have a reference for it. However, if you're really not mathy, just remember this: there's more than one way to be wrong. Any time you reduce your chances of being wrong in one direction, you probably increase them in another. In criminal justice, if we make sure we never miss a guilty person, we might also increase the number of innocent people we falsely accuse. The reverse is also true. Tests, screenings, and judgment calls aren't perfect, and we shouldn't fool ourselves into thinking they are.

Alright, on that happy note, I’ll bid you adieu for now. See ya next week!

Read Part 8 here.

Proof: Using Facts to Deceive (Part 6)

Note: This is part 6 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to part 5 here.

Okay, I'll be honest here: this part is one of the hardest parts of my talk to cover. The issue I'm going to talk about is another framing issue, and it has to do with which experts get quoted on which issues and in what proportions. This is a huge issue open to broad interpretations and many legitimate approaches, so I'm going to intentionally tread lightly here. How much of this strikes you as deceptive will depend a lot on what you already believe. Additionally, when I cover this in the classroom, I have a pretty good idea that there won't be any raging conspiracy theorists sitting in the seats. Not so on the internet. I've been blogging off and on for about a decade now, and you would not believe how many people do keyword searches so they can pop in and spew their theories….so forgive me if I speak in generalities. Oh yes, it's the controversial issue of:

Experts and Balance

Okay, so what’s the problem here?

The problem, in general, is a public misunderstanding about how science works. Not everyone is a scientist, and that’s okay. We often rely on experts to interpret information for us. This is also okay. In the age of the internet though, almost anyone can find an expert or two to back up their own view. Everyone wants to be the first to break a story, and much can get lost in the rush to be on top of the latest and greatest thing. Like, you know, evidence.

Okay, so what kinds of things should we be looking out for?

Well, there are two sides to this coin. The classic logical fallacy here is argumentum ad verecundiam, or “argument from authority”.  Kinda like this guy:

…though my three year old tells me he's pretty cool. In all seriousness though, "TV doctors" love to get up and use their credentials to emphasize their points. Their reach can be enormous, but research has found that over half of the claims made by people like Dr Oz are either totally unsubstantiated or flat-out contradicted by the evidence. Just because someone has a certain set of credentials doesn't mean they're always right.


So if the popular credentialed people aren't always right, then good old common sense can guide us, right? No, sadly, that's not true either. The flip side of arguing from authority is the "appeal to the common man", where you respect someone's opinion precisely because they're not an authority. In medicine you frequently hear this as "the secret doctors don't want you to know!" or "my doctor said my child was ______, but as a mother my heart knew that was wrong" (side note: remember that mothers who turn out to be wrong almost never get interviewed). For some people, this type of argument is even stronger than the one above….but that doesn't mean it's not fallacious.

So basically, the water gets really murky because almost anyone can claim to know stuff, can throw out credentials that may or may not be valid or relevant, can cite research that may or may not be valid, and can otherwise sound very compelling. Yeesh.

Complicating matters even further is the idea of balance and false balance. Balance is when a reporter/news cast presents two opposing sides and gives them both time to state their case. False balance is when you give two wildly unequal sides the same amount of time.

All of this can seem pretty reasonable when it comes to hotly debated topics, like, say, nutrition and what we should be eating. If you want to pit the FDA vs Gary Taubes vs Carbsane, I will watch the heck out of that. But there are other issues where the debate gets a little hairier and the stakes get much higher….like, say, criminal trials. Do you want a psychic on the stand getting time to explain why they think you're a murderer? Do you want them getting as much time as the forensics experts who say you aren't?

At some point we have to say it….science does back up certain opinions more than others, and some experts are more reliable than others. Where you draw the line, sadly, probably depends on what you already believe about the topic.

Why do we fall for this stuff?

Well, partially because we should. On the whole, experts are probably a pretty good bet when it comes to most scientific matters. They may be wrong at times (just ask Barry Marshall), but I have a lot of faith in science on the whole to move forward and self-correct. The scientific process is quite literally man's attempt to correct for all of our fallacies in order to move forward based on reality. It's a lofty goal, and we're not always perfect, but it's a start. We listen to experts because people with more training, more experience and more context than us really do tend to be better at controlling their biases.

On the other hand, those of us who have been burned might start to love the anti-hero instead. The idea that a lone wolf can take on the establishment is so cool! Because truthfully sometimes the establishment sucks. People are misdiagnosed, treated rudely, and otherwise incorrectly cast aside. Sometimes “crazy” ideas do turn out to be right. Being a contrarian isn’t always the worst way to go….as my favorite George Bernard Shaw quote says “The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”  Every progress maker is a bit unreasonable at times.

So what can we do about it?

So I may have made it sound like it’s not possible to know anything. That’s not true. Science is awesome, but you have to do a little work to figure out what’s going on. So if you really care about an issue, do a little homework. Read the best and brightest minds that defend the side you don’t agree with. Don’t just read what your side says the other side is saying, read the other side. Read the best of what they have to offer….but check their sources. If they prove to be not-credible on one topic, treat them with suspicion going forward. Scientists should not have to play fast and loose with the truth to get where they need to go. Be suspicious of anyone who does this. Beware of anything that sounds too neat, too clean, too cutting edge. Science and proof move slowly.

Also, follow the money….but remember that works both ways. I work in oncology, and there are people who will tell you the treatments we offer are crap and that they have better ones. Evidence to the contrary is dismissed as us not wanting to lose our money. However, the people making these claims frequently make 10 to 20 times what our doctors make. They throw out revenue numbers that represent our whole hospital, while neglecting to mention that their personal income far exceeds that of any individual employee we have. People make tons of money peddling alternatives.

And if all else fails, just ask a math teacher:

No one's questioning that a² + b² = c² stuff. Well, except this guy.

Alright, that’s it for part 6….see you next week for part 7!  Read Part 7 here.