Calling BS Read-Along Week 9: Predatory Publishing and Scientific Misconduct

Welcome to the Calling Bullshit Read-Along based on the course of the same name from Carl Bergstrom and Jevin West at the University of Washington. Each week we’ll be talking about the readings and topics they laid out in their syllabus. If you missed my intro and want the full series index, click here, or if you want to go back to Week 8, click here.

Welcome back to Week 9 of the Calling Bullshit Read-Along! This week our focus is on predatory publishing and scientific misconduct. Oh boy. This is a slight change of focus from what we’ve been talking about up until now, and not for the better. In week one we established that, in general, bullshit is different from lying in that it is not solely attempting to subvert the truth. Bullshit may be characterized by a (sometimes reckless) disregard for truth, but most bullshitters would be happy to stick to the truth if it fit their agenda. The subjects of this week’s readings are not quite so innocent, as most of our focus is going to be on misconduct by people who should have known better. Of course, sometimes the lines between intentional and unintentional misconduct are a little less clear than one would hope, but for our purposes the outcome (less reliable research) is the same. Let’s take a look.

To frame the topic this week, we start with a New York Times article, “A Peek Inside the Strange World of Fake Academia,” which takes a look at, well, fake academia. In general, “fake academia” refers to conferences and journals set up with very little oversight (one man runs 17 of them) or review but high price tags. The article looks at a few examples that agreed to publish abstracts created using the iPhone autocomplete feature or that “featured” keynote speakers who never agreed to speak. Many of these are run by a group called OMICS International, which has gotten into legal trouble over its practices. However, some groups/conferences are much harder to classify. As the article points out, there’s a supply and demand problem here: more PhDs need publication credits than can get their work accepted by legitimate journals or conferences, so anyone willing to loosen the standards can make some money.

To show how bad the problem of “pay to play” journals/conferences is, the next article (by the same guys who brought us Retraction Watch) talks about a professor who decided to make up some scientists just to see if there was any credential checking going on at these places. My favorite of these was his (remarkably easy) quest to get Borat Sagdiyev (a senior researcher at the University of Kazakhstan) on the editorial board of the journal Immunology and Vaccines. Due to the proliferation of journals and conferences with low quality control, these fake people ended up with surprisingly impressive-sounding resumes. The article goes on to talk about researchers who make up co-authors, and comes to the troubling conclusion that fake co-authors seem to help publication prospects. There are other examples provided of “scientific identity fraud”: researchers finding their data has been published by other scientists (none of whom are real), researchers recommending that made-up scientists review their work (the email addresses route back to the researchers themselves), and the previously mentioned pay-for-publication journals. The article wraps up with a discussion of even harder-to-spot chicanery: citation stuffing and general metrics gaming. As we discussed in Week 7 with Jevin West’s article, attempting to rank people in a responsive system will create incentives to maximize your ranking. If there is an unethical way of doing this, at least some people will find it.

That last point is also the focus of one of the linked readings, “Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition.” The focus of this paper is the current academic climate and its negative effect on research practices. To quote Goodhart’s law: “when a measure becomes a target, it ceases to be a good measure.” It covers a lot of ground, including increased reliance on performance metrics, decreased access to funding, and an oversupply of PhDs. My favorite part of the paper was its table contrasting the intended effect of each of these incentives with what actually happens.

That’s a great overview of how the best intentions can go awry. This is all a setup to get to the meat of the paper: scientific misconduct. In a high-stakes, competitive environment, the question is not whether someone will try to game the system, but how often it’s already happening and what you’re going to do about it. Just like in sports, you need to acknowledge the problem (*cough* steroid era *cough*) and then come up with a plan to address it.

Of course the problem isn’t all on the shoulders of researchers, institutions or journals. Media and public relations departments tend to take the problem and run with it, as this Simply Statistics post touches on. According to them, the three stories that seem to get the most press are:

  1. The exaggerated big discovery
  2. Over-promising
  3. Science is broken

Sounds about right to me. They then go on to discuss how the search for the sensational story or the sensational shaming seems to be the bigger draw at the moment. If everyone focuses on short-term attention to a problem rather than the sometimes boring work of making actual tiny advancements or incremental improvements, what will we have left?

With all of this depressing lead in, you may be wondering how you can tell if any study is legitimate. Well, luckily the Calling Bullshit overlords have a handy page of tricks dedicated to just that! They start with the caveat that any paper anywhere can be wrong, so no list of “things to watch for” will ever catch everything. However, that doesn’t mean that every paper is at equal risk, so there are some ways to increase your confidence that the study you’re seeing is legitimate:

  1. Look at the journal. As we discussed earlier, some journals are more reputable than others. Unless you know a field well, it can be tough to tell a prestigious, legitimate journal from a made-up but official-sounding one. That’s where journal impact factor can help: it gives you a sense of how important the journal is to the scientific community as a whole. There are different ways of calculating impact factors, but they all tend to focus on how often the articles published in a journal end up being cited by others, which is a pretty good proxy for how others view the journal. Bergstrom and West also link to their Google Chrome extension, the Eigenfactorizer, which color codes journals that appear in PubMed searches based on their Eigenfactor ranking (there’s a rough sketch of both kinds of calculation after this list). I downloaded this and spent more time playing around with it than I probably want to admit, and it’s pretty interesting. To give you a sense of how it works, I typed in a few keywords from my own field (stem cell transplant/cell therapies) and took a look at the results. Since it’s kind of a niche field, it wasn’t terribly surprising to see that most of the published papers are in yellow or orange journals. The journals under those colors are great and very credible, but most of the papers have little relevance to anyone not in the hem malignancies/transplant world. A few recent ones on CAR-T therapy showed up in red journals, as that’s still pretty groundbreaking stuff. That leads us to the next point…
  2. Compare the claim to the venue. As I mentioned, the most exciting new thing going on in the hematologic malignancies world right now is CAR-T and other engineered cell therapies. These therapies hold promise for previously incurable leukemias and lymphomas, and research institutions (including my employer) are pouring a lot of time, money, and effort into development. Therefore it’s not surprising that big discoveries are getting published in top-tier journals, as everyone’s interested in seeing where this goes and what everyone else is doing. That’s the normal pattern. Thus, if you see a “groundbreaking” discovery published in a journal that no one’s ever heard of, be a little skeptical. It could be that the “establishment” is suppressing novel ideas, or it could be that the people in the field thought something was off with the research. Spoiler alert: it’s probably that second one.
  3. Are there retractions or questions about the research? Just this week I got pointed to an article about 107 retracted papers from the journal Tumor Biology due to a fake peer review scandal, the second time this has happened to this journal in the last year. No matter what the field, retractions are worth keeping an eye on.
  4. Is the publisher predatory? This can be hard to figure out without some inside knowledge, so check out the resources they link to.
  5. Preprints, Google Scholar, and finding out who the authors are. Good tips and tricks about how to sort through your search results. Could be helpful the next time someone tells you it’s a good idea to eat chocolate ice cream for breakfast.
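Since I just threw around two different citation metrics, here’s a rough sketch of how each one works, in Python with completely made-up numbers and journal names. This isn’t the exact formula Clarivate uses for impact factor or the real Eigenfactor algorithm, just the basic idea: the impact factor is a simple citations-per-article ratio, while an Eigenfactor-style score weights each citation by the importance of the journal it comes from.

```python
# Toy sketch of two citation metrics -- all numbers are invented for illustration.
import numpy as np

# --- Two-year impact factor (simplified) -------------------------------------
# Citations received this year to articles the journal published in the
# previous two years, divided by the number of citable items from those years.
citations_to_last_two_years = 480   # hypothetical
citable_items_last_two_years = 160  # hypothetical
impact_factor = citations_to_last_two_years / citable_items_last_two_years
print(f"Impact factor: {impact_factor:.1f}")  # 3.0

# --- Eigenfactor-style score (simplified) ------------------------------------
# A citation counts for more if it comes from a journal that is itself heavily
# cited. C[i, j] = citations from journal j to journal i (self-citations
# excluded), for three hypothetical journals A, B, and C.
C = np.array([
    [0.0, 30.0, 10.0],   # citations received by A
    [20.0, 0.0,  5.0],   # citations received by B
    [5.0,  5.0,  0.0],   # citations received by C
])

# Normalize each column so every journal hands out one unit of citation weight.
P = C / C.sum(axis=0)

# Power iteration: converge on the leading eigenvector of the citation network.
scores = np.full(3, 1.0 / 3.0)
for _ in range(100):
    scores = P @ scores
    scores /= scores.sum()

for name, score in zip("ABC", scores):
    print(f"Journal {name}: {score:.2f}")
```

The difference matters for gaming: a raw citations-per-article ratio can be pumped up with self-citation or citation rings, but a score that discounts citations from low-ranked journals (and, as I understand it, the real Eigenfactor calculation also ignores journal self-citations) is harder to inflate.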

Whew, that’s a lot of ground covered. While it can be disappointing to realize how many ways there are to commit scientific misconduct, it’s worth noting that we currently have access to more knowledge than at nearly any other time in human history. In week one, we covered that the more opportunities for communication there are, the more bullshit we will encounter. Science is no different.

At the same time, science has a unique opportunity to lead the way in figuring out how to correct for the bullshit being created within its ranks and to teach the public how to interpret what gets reported. Some of the tools provided this week (not to mention the existence of this class!) are a great step in the right direction.

Well, that’s all I have for this week! Stay tuned for next week when we cover some more ethics, this time from the perspective of the bullshit-callers as opposed to the bullshit producers.

Week 10 is up! Read it here.

2 thoughts on “Calling BS Read-Along Week 9: Predatory Publishing and Scientific Misconduct”

  1. In paragraph 5, please correct “it’s” by removing the apostrophe so that it accurately represents the intended possessive instead of the contraction of “it is,” which does not make sense in the sentence.

    BTW, I absolutely love the material on the course and the reading companion! This is important, valuable work that I believe all academicians and librarians can benefit from reading.


    • Done.

      How is it I can get a Chrome extension that color codes journals by impact factor, but not one that auto-highlights the word “it’s” every time I use it and forces me to double check it? My Ctrl+F strategy appears to have some holes in it.

      Agree about the readings from the course! I’ve learned something every week I’ve done this.

