People: Our Own Worst Enemy (Part 10)

Note: This is part 10 in a series for high school students about reading and interpreting science on the internet. Read the intro and get the index here, or go back to Part 9 here.

Wow folks… 10 weeks later, we are coming to the end. This is a shorter one than the rest of the series, but I think it’s still important. Up until now I’ve been referencing science as though it can always provide the guidance we need if we just know where to look. Unfortunately, that’s not always true. It’s at this point that I like to step back and get a little bit reflective about evidence and science in general, and about how we acknowledge what we may never know. That’s why I call this section:

Acknowledging our Limitations

Okay, so what’s the problem here?

The problem is that just like research and evidence can be manipulated, so can lack of research and evidence. The reality is that there are practical, financial, moral and ethical issues facing all researchers, and there are limits on both what we know at the moment and what we can ever know. A lack of evidence doesn’t always mean someone’s hiding something. Unfortunately, none of this stops people from claiming it does. This normally comes up when someone is explaining why their opponent’s evidence doesn’t count.

What kinds of things should we be looking out for? 

Mostly, calls for more research. It’s tricky business, because sometimes that’s a perfectly reasonable request and sometimes it’s just a smokescreen for an agenda.

For example, in 2012 two doctors from the CDC were called in front of Congress to discuss vaccine safety. As part of the hearing, Congressman Bill Posey asked the doctors if they had done a study on autism in vaccinated vs unvaccinated children. You can read the whole exchange here, but the answer to the question was no. Why? Well, a double-blind placebo-controlled trial of vaccines would be unethical to do. For non-fatal diseases you can sometimes run them, but you can’t knowingly put people in harm’s way no matter how much you need or want the data. Giving a placebo (i.e. fake) measles vaccine to a child just to see whether they get sick, and possibly die, would be unethical. The NIH requires studies to have a “favorable risk-benefit ratio”, so there either has to be low risk or high benefit. I work in oncology and have actually seen trials closed to enrollment immediately because the data suggested a new treatment might have more side effects than we suspected.

A Congressman looking into vaccine safety should know this, but to anyone listening it might have sounded like a reasonable question. Why aren’t we doing the gold-standard research? What are they hiding?

Another example of this is demanding evidence like “prove to me my treatment DOESN’T work”, which asks someone to prove a negative that no study can ever really deliver.

Why do we fall for this stuff?

Well, mostly because many of us have never considered it. If you’re not working in research, it can be hard to notice when someone’s asking for something that would never get past an Institutional Review Board (IRB). Even when a study would be ethical, it’s easy to underestimate how hard some of them would be to run. In something like nutrition science, this problem is rampant. I mean, how much money would it take for you to change your diet for the next 30 years so scientists could study you?

I took a “Statistics in Clinical Trials” class a few years ago, and I was surprised that nearly half of it was really an ethics class. Every two years I (and everyone else at my institute) also have to take 8 hours of training in human subject research, just to make sure I stay clear on the guidelines. It’s not easy stuff, but you have to remember the data can’t always come first.

So what can we do about it?

Well, first, recognize these limitations exist. We can and should always be refining our research, but we have to respect its limits. Read about famous cases where this has gone wrong, if you’ve got the stomach for it. The Tuskegee Syphilis Experiment and the Doctors’ Trial that resulted in the Nuremberg Code are two of the most famous examples, but there are others. The more you know, the more prepared you’ll be for this one when you see it.

All right, that wraps up Part 10! I think I’m going to cut this off here and do my wrap-up next week. See you then!

Bad Data vs False Data

We here at Bad Data Bad would like to note that when we pick studies to criticize, we operate under the assumption that what the studies actually published is accurate, and that most of the mistakes are made in the interpretation or translation of those findings into news.

This article from the New York Times last week reminds us that this may not always be a good assumption.

A few fabricated papers have managed to make news headlines over the past few years… the Korean researcher who said he’d cloned human embryonic stem cells… the UConn researcher who falsified data in a series of papers on the health benefits of red wine… and a Dutch social scientist who faked entire experiments to get his data.

This is where the scientific principle of replication is supposed to step in, and why it’s always a decent idea to withhold judgment until somebody else can find the same thing the first study did. Without replication, it’s nearly impossible to know if someone falsified their data unless people in their own lab blow the whistle.

If you’re curious about these retractions, the Retraction Watch blog is a pretty good source for papers that get yanked.

There’s bad data, and then there’s data that’s just plain mean….

I’ve worked at teaching hospitals for pretty much my whole post-college career, so I generally heave a bit of a sigh when I hear the initials “IRB”. IRBs (Institutional Review Boards) are set up to protect patients and approve research, but they also have the power to reject proposed studies and generate lots of paperwork. Sometimes, though, you need a good reminder of why they were invented.

Apparently, some scientists in the 1940s tried to develop a pain scale based on burning people and rating the pain. Then, to make sure they had a good control, they burned pregnant women in between contractions.

While it actually wasn’t a half-bad way of figuring out what their numerical scale should look like, it is just WRONG. As a pregnant woman, I can pretty confidently say that anyone coming at me with a flat iron during labor will be kicked. Hard.

Unethically gathered data is not only wrong to collect, it’s also frequently wasted. In the study mentioned above, the data proved useless, as pain is too subjective to be quantified that way. After this fiasco, it wouldn’t be until 2010 that someone came up with a really workable pain scale.