How the Sausage Gets Made and the Salami Gets Sliced

Ever since James the Lesser pointed me to this article about some problems with physics, I’ve been thinking a lot about salami slicing. For those of you who don’t know, salami slicing (aka using the least publishable unit) is the practice of taking one data set and publishing as many papers as possible from it. Some of this is done through data dredging, and some of it is done simply by breaking up one set of conclusions into a series of much smaller conclusions in order to publish more papers. This is really not a great practice, as it can give potentially shaky conclusions more weight (500 papers can’t be wrong!) and multiply the effects of any errors in data gathering. This can then have other effects, like inflating citation counts for papers or padding resumes.

A few examples:

  1. Physics: I’ve talked about this before, but the (erroneous) data set mentioned here resulted in 500 papers on the same topic. Is it even possible to retract that many?
  2. Nutrition and obesity research: John Ioannidis took a shot at this practice in his paper on obesity research, where he points out that the PREDIMED study (a large randomized trial looking at the Mediterranean diet) has resulted in 95 published papers.
  3. Medical findings: This paper found that nearly 17% of papers on the specific topics examined had at least some overlapping data.

To be fair, not all of this is done for bad reasons. Sometimes grants or other time pressures encourage researchers to release their data in slices rather than in one larger paper. The line between “salami slicing” and “breaking up data into more manageable parts” can be a grey one; this article gives a good overview of some case studies and shows it’s not always straightforward. Regardless, if you see multiple studies supporting the same conclusion, it’s worth at least checking for independence among the data sources. This paper breaks down some of the less obvious problems with salami slicing:

  1. Dilution of content/reader fatigue: More papers mean a smaller chance anyone will actually read all of them.
  2. Over-representation of some findings: Fewer people will read these papers, but all the titles will make it look like there are lots of new findings.
  3. Clogging journals/peer review: Peer reviewers and journal space are still limited resources. Too many papers on one topic take resources away from other topics.
  4. Increasing author fatigue/sanctions: An interesting case that this is actually bad for the authors in the long run. Publishing takes a lot of work, and publishing two smaller papers is twice the work of one. Also, duplicate publishing increases the chance you’ll be accused of self-plagiarism and be sanctioned.

Overall, I think this is one of those possibilities many lay readers don’t even consider when they look at scientific papers. We assume that each paper equals one independent event, and that lots of papers means lots of independent verification. With salami slicing, we do away with that independence and increase the noise. Want more? This quick video gives a good overview as well:

One thought on “How the Sausage Gets Made and the Salami Gets Sliced”

  1. Speaking of salami, I am reminded of the Bismarck quote about sausages and politics. Or perhaps now we should say sausages and scientific papers: “Laws are like sausages. It’s better not to see them being made.”
