So Why ARE Most Published Research Findings False? The Corollaries

Welcome to “So Why ARE Most Published Research Findings False?”, a step-by-step walkthrough of the John Ioannidis paper “Why Most Published Research Findings Are False”. It probably makes more sense if you read this in order, so check out the intro here, Part 1 here, and Part 2 here.

Okay, first a quick recap: up until now, Ioannidis has spent most of the paper building a statistical justification for looking beyond study power and p-values when trying to figure out whether a published finding is true, making the case for also considering pre-study odds, bias measures, and the number of teams working on a problem. Because he was writing a scientific paper and not a blog post, he did a lot less editorializing than I did when I was breaking his work down. In this section he changes all that, going through a point-by-point breakdown of what this all means with a set of 6 corollaries. The words here in bold are his, but I’ve simplified the explanations. Some of this is a repeat from the previous posts, but hey, it’s worth repeating.

Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true. In Part 1 and Part 2, we saw a lot of graphs showing that good study power has a huge effect on result reliability. Larger sample sizes = better study power.
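To make that concrete, here’s a quick sketch (mine, not from the paper) of how power climbs with sample size for a simple two-group comparison, using a normal approximation and an assumed effect size of d = 0.3:

```python
# A rough sketch of power vs. sample size for a two-sided, two-sample test,
# using the normal approximation (effect size and alpha are my assumptions).
from scipy.stats import norm

def approx_power(effect_size, n_per_group, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)                 # two-sided critical value
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return norm.cdf(noncentrality - z_crit)          # P(reject | effect is real)

for n in [10, 25, 50, 100, 400]:
    print(f"n = {n:>3} per group -> power ~ {approx_power(0.3, n):.2f}")
# Power climbs from ~0.10 at n = 10 to ~0.99 at n = 400.
```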

Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true. This is partially just intuitive, but effect size is also part of the calculation for study power. Larger effect sizes = better study power. Interestingly, Ioannidis points out here that given all the math involved, any field looking for effect sizes smaller than about 5% (think relative risks under 1.05) is pretty much never going to be able to confirm its results.
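To put a number on that, here’s the same normal-approximation sketch turned around to ask how many subjects you’d need per group to hit a standard 80% power as the effect shrinks (again my own illustration, not a calculation from the paper):

```python
# Per-group sample size needed for 80% power in a two-sided, two-sample test,
# via the normal approximation; the requirement explodes as effects shrink.
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided critical value
    z_power = norm.ppf(power)           # quantile matching the target power
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

for d in [0.8, 0.5, 0.2, 0.05]:
    print(f"effect size {d:.2f} -> ~{n_per_group(d):,.0f} subjects per group")
# d = 0.8 needs ~25 per group; d = 0.05 needs ~6,279 per group.
```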

Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true. That R value we talked about in Part 1 is behind this one. Pre-study odds matter, and fields that are generating new hypotheses or exploring new relationships are always going to produce more false positives than replication studies or meta-analyses.
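If you want to play with this yourself, the paper’s basic formula (before bias enters the picture) is PPV = (1 − β)R / (R + α − βR). A quick sketch of how the pre-study odds move it, assuming a well-powered study:

```python
# The paper's pre-bias formula: PPV = (1 - beta) * R / (R + alpha - beta * R),
# the chance a positive finding is true, given power and pre-study odds R.
def ppv(R, power=0.80, alpha=0.05):
    beta = 1 - power
    return power * R / (R + alpha - beta * R)

print(f"Confirmatory field, R = 2:1   -> PPV ~ {ppv(2):.2f}")     # ~0.97
print(f"Even odds,          R = 1:1   -> PPV ~ {ppv(1):.2f}")     # ~0.94
print(f"Exploratory field,  R = 1:100 -> PPV ~ {ppv(0.01):.2f}")  # ~0.14
```

Note these numbers all assume 80% power and no bias at all, so they’re best-case scenarios.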

Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. This should be intuitive, but it’s often forgotten. I work in oncology, and we tend to use a pretty clear-cut end point for many of our studies: death. Our standards around this are so strict that if you die in a car crash less than 100 days after your transplant, you get counted in our mortality statistics. Other fields have more wiggle room. If you are looking for mortality OR quality of life OR reduced cost OR patient satisfaction, you’ve roughly quadrupled your chance of a false positive.
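Here’s the arithmetic behind “quadrupled” (assuming the four outcomes are independent, which is generous):

```python
# Chance of at least one false positive across k independent endpoints,
# each tested at alpha = 0.05: 1 - (1 - alpha)^k, roughly k * alpha when small.
alpha = 0.05
for k in [1, 2, 4, 10]:
    print(f"{k:>2} endpoints -> {1 - (1 - alpha) ** k:.1%} chance of a false positive")
# 1 -> 5.0%, 2 -> 9.8%, 4 -> 18.5%, 10 -> 40.1%.
```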

Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. This one’s pretty obvious. Worth noting: he points out that “trying to get tenure” and “trying to preserve one’s previous findings” are both sources of potential bias.
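The paper folds bias in as u, the proportion of analyses that get reported as “findings” only because of bias. Here’s a sketch of how fast it erodes the PPV for an otherwise well-powered study at even pre-study odds:

```python
# The paper's bias-adjusted formula:
# PPV = ((1 - beta) * R + u * beta * R)
#       / (R + alpha - beta * R + u - u * alpha + u * beta * R)
def ppv_biased(R, u, power=0.80, alpha=0.05):
    beta = 1 - power
    return (power * R + u * beta * R) / (
        R + alpha - beta * R + u - u * alpha + u * beta * R
    )

for u in [0.0, 0.1, 0.3, 0.5]:
    print(f"bias u = {u:.1f} -> PPV ~ {ppv_biased(1, u):.2f}")
# u = 0.0 -> 0.94; u = 0.1 -> 0.85; u = 0.3 -> 0.72; u = 0.5 -> 0.63.
```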

Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true. This was part of our discussion last week. Essentially it’s saying that if 10 people hold tickets to a raffle, the chance that one of you wins is higher than the chance that you personally win. If each team independently has a 5% chance of turning up a positive finding purely by chance, having multiple teams work on the same question will inevitably lead to more false positives.
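The paper models this too: with n independent teams probing the same question, PPV = R(1 − β^n) / (R + 1 − (1 − α)^n − Rβ^n). A quick sketch of a “hot” field at even pre-study odds:

```python
# The paper's multiple-teams formula: as n teams chase the same question,
# the chance that at least one false positive surfaces keeps growing.
def ppv_teams(R, n, power=0.80, alpha=0.05):
    beta = 1 - power
    return R * (1 - beta ** n) / (R + 1 - (1 - alpha) ** n - R * beta ** n)

for n in [1, 5, 10, 25]:
    print(f"{n:>2} teams -> PPV ~ {ppv_teams(1, n):.2f}")
# 1 team -> 0.94; 5 -> 0.82; 10 -> 0.71; 25 -> 0.58.
```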

Both before and after listing these 6 corollaries, Ioannidis reminds us that none of these factors are independent or isolated. He gives some specific examples from genomics research, but then also gives this helpful table (Table 4 in the paper). To refresh your memory, the 1-beta column is study power (influenced by sample size and effect size), R is the pre-study odds (varies by field), u is bias, and the “PPV” column over on the side is the chance that a paper with a positive finding is actually true. Oh, and “RCT” is “Randomized Controlled Trial”:
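If you’d rather recompute than squint, here’s a sketch that rebuilds the PPV column from the bias-adjusted formula above, with several scenarios paraphrased from the paper’s Table 4:

```python
# Recomputing PPV for several of the paper's Table 4 scenarios from their
# inputs: power (1 - beta), pre-study odds R, and bias u (alpha fixed at 0.05).
def ppv_biased(R, u, power, alpha=0.05):
    beta = 1 - power
    return (power * R + u * beta * R) / (
        R + alpha - beta * R + u - u * alpha + u * beta * R
    )

scenarios = [  # (description, power, R, u) -- paraphrased from the paper
    ("Adequately powered RCT, little bias",            0.80, 1 / 1,    0.10),
    ("Meta-analysis of good-quality RCTs",             0.95, 2 / 1,    0.30),
    ("Meta-analysis of small, inconclusive studies",   0.80, 1 / 3,    0.40),
    ("Underpowered but well-performed phase I/II RCT", 0.20, 1 / 5,    0.20),
    ("Adequately powered exploratory epi study",       0.80, 1 / 10,   0.30),
    ("Discovery research with massive testing",        0.20, 1 / 1000, 0.80),
]
for name, power, R, u in scenarios:
    print(f"{name:<48} PPV ~ {ppv_biased(R, u, power):.4f}")
# From ~0.85 for the powered RCT down to ~0.0010 for massive discovery testing.
```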

I feel a table of this sort should hang over the desk of every researcher and/or science enthusiast.

Now all this is a little bleak, but we’re still not entirely at the bottom. We’ll get to that next week.

Part 4 is up! Click here to read it.

5 thoughts on “So Why ARE Most Published Research Findings False? The Corollaries”

  1. #7? Or is that what you meant by “next?”
    I appreciate that #5 is not just about cash. I can see being very suspicious of the researcher receiving cash. But tenure, and support for one’s research, and popularity with peers are also powerful motivators that get ignored.


    • Yeah, the seven was my mistake. I kept forgetting to go back and fix it. Done now.

      There’s a common joke at the conferences I go to, at least: when you get to the financial disclosure piece, you say something like “I don’t have any, but I’m open to it if anyone’s offering.” Most researchers simply aren’t going to be offered money from any company of any kind to do research. Social and career pressure, though? Can’t escape that.


  2. Hmmm. Corollaries 5 and 6 seem like good reasons to be somewhat skeptical of a lot of ‘climate’ research (from both sides of the question). There are enormous financial consequences from the findings, both in terms of funding for research, and things like tax consequences, expensive public policy, etc. Also, I don’t think there is any area that is ‘hotter’ right now, at least in the public eye. I suspect Corollary 2 applies as well, although I’m not sure what qualifies as a small effect in climate science.

    Any thoughts?


    • I think climate science has some unique challenges that fall outside of this structure, though you’re right about #5 and 6. So much of what they do is based on models of the future, so it’s really hard to call that a “finding”. The Signal and the Noise has a really good chapter on the pitfalls of predictive modeling for climate science, and I think it gets it pretty right. There’s a danger in saying “if the model is not exactly right then it’s worthless” AND in saying “well the model says this will happen so clearly we should do x”.


