Cornell Food and Brand Lab: an Update

After I mentioned the embattled Brian Wansink and the Cornell Food and Brand Lab in my post last week, a friend passed along the most recent update on the story. Interestingly, Buzzfeed appears to be the news outlet doing the most reporting on this as it continues to develop.

A quick review:
Brian Wansink is the head of the Cornell Food and Brand Lab, which publishes all sorts of interesting research about the psychology behind eating and how we process information on health. Even if you’ve never heard his name, you may have heard about his work: studies like “people eat more soup if their bowl refills so they don’t know how much they’ve eaten” or “kids eat more vegetables when they have fun names” tend to be from his lab.

About a year ago, he published a blog post praising one of his grad students for taking a data set that didn’t really show much and turning it into 5 publishable papers. This turned into an enormous scandal, as many people quickly pointed out that a feat like that almost certainly involved lots of data tricks that would make the papers’ results very likely to be false. As the scrutiny went up, things got worse: now people were poring over his previous work.

Not only did this throw Wansink’s work into question, but a lot of people (myself included) who had used his work in their own now had to figure out whether or not to retract or update what they had written. Ugh.

So where are we now?
Well, as I mentioned, Buzzfeed has been making sure this story doesn’t drop. In September, they reported that the aforementioned “veggies with fun names” study had a lot of problems. Worse yet, Wansink couldn’t produce the data when asked. What is incredibly concerning is that this particular paper is part of a program Wansink was piloting for school lunches. With his work under scrutiny, it’s possible that over $20 million in research and training grants went towards strategies that aren’t actually effective. To be clear, the “fun veggie names” study wasn’t the only part of this program, but it’s definitely not encouraging to find out that parts of it are so shaky.

To make things even worse, they are now reporting that several of his papers, allegedly covering three different topics in three different years with surveys sent to three different sample populations, show the exact same number of survey respondents: 770. Those papers are being reviewed.

Finally, they report he has a 4th paper being retracted, this one on WWII veterans and their cooking habits. An interview with the researcher who helped highlight the issues with the paper is up here at Retraction Watch, and some of the problems with the paper are pretty amazing. When asked where he first noted problems, he said: “First there is the claim that only 80% of people who saw heavy, repeated combat during WW2 were male.” Yeah, that seems a little off. Wansink has responded to the Buzzfeed report to say that this was due to a spreadsheet error.

Overall, the implications of this are going to be felt for a while. While only 4 papers have been retracted so far, Buzzfeed reports that 8 more have planned corrections and over 50 are being looked at. With such a prolific lab and results that are used in so many places, this story could go on for years. I appreciate the journalists staying on this story, as it’s an incredibly important cautionary tale for members of the scientific community and the public alike.

Retractions, while sometimes necessary, are never as good as the real thing

Since starting this blog, I’ve become quite the fan of the website Retraction Watch.

One of the more interesting ongoing stories has been the number of retractions from Dipak Das, the UCONN researcher who faces massive misconduct charges for fabricating data in his research about the health benefits of red wine.

His current retraction count stands at 13 papers, with 145 counts of misconduct being investigated.

While the role of his work in the field is contested, one can’t debate that his results were widely reported and certainly helped the public perception that red wine is good for you. Thus, I found it interesting that Jezebel was running an article at the same time about further proof that red wine is good for you. In the background, they mention some of the studies Das did that have since been retracted. Not that this is necessarily their fault: it was recently found that only a quarter of retracted articles in online databases carry a retraction notice, and this drops to 5% if you look at downloadable PDFs.

People have complained about this with newspapers for years: large headlines, tiny retractions. But with the ever-increasing retraction rate and the centrality of the internet, this is liable to get worse before it gets better.

Paranoia is just good sense if people really are out to get you

Yesterday I posted about retractions in scientific journals, and the assertion that they are going up. I actually woke up this morning thinking about that study, wishing I could see more data on how the rate has changed year to year (yes, I’m a complete nerd…but what do you ponder while brushing your teeth?). Anyway, that brought to mind a post I did a few weeks ago on how conservatives’ trust in the scientific community has gone steadily down.

It occurred to me that if you superimposed the retraction rate of various journals over the trust-in-the-scientific-community numbers, it could actually make an interesting picture. It turns out PubMed actually has a retraction rate by year available here. For purposes of this graph, I figured that would be a representative enough sample.

I couldn’t find the raw numbers for the original public trust study, so those values are eyeballed from the original graph and plotted in blue, with the exact numbers from the PubMed database in green.
So it looks like decreasing trust in the scientific community may actually be a rational thing*.
It’s entirely possible, by the way, that the increased scrutiny the internet enables led to the higher retraction rate…but that would still have given people more reasons not to blindly trust. As the title of this post suggests, skepticism isn’t crazy if you actually should be skeptical.
Speaking of trust, I obviously had to manipulate the axes a bit to get this all to fit. I’m still not sure I got it quite right, but if anyone wants to check my work, the raw data for the retraction rate is here and the data for the public trust study is here. These links are included earlier as well; I just wanted to be thorough.
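For the curious, here’s roughly how an overlay like this gets built. This is just a sketch in Python using matplotlib (the numbers below are made-up placeholders, not my actual series; the trick is twinx(), which puts a second y-axis scale on the same chart):

```python
# Sketch of a dual-axis overlay: trust on the left axis, retraction
# rate on the right. All values here are illustrative placeholders.
import matplotlib.pyplot as plt

years = [1998, 2002, 2006, 2010]        # hypothetical survey years
trust = [48, 45, 41, 38]                # eyeballed % expressing trust
retraction_rate = [2.0, 3.5, 6.0, 9.5]  # retractions per 100k papers

fig, ax_trust = plt.subplots()
ax_trust.plot(years, trust, color="blue", label="Public trust (%)")
ax_trust.set_ylabel("Trust in scientific community (%)", color="blue")
ax_trust.set_xlabel("Year")

# twinx() adds a second y-axis sharing the same x-axis, which is
# what lets two very differently scaled series share one picture
ax_rate = ax_trust.twinx()
ax_rate.plot(years, retraction_rate, color="green",
             label="Retraction rate")
ax_rate.set_ylabel("Retractions per 100k PubMed papers", color="green")

plt.title("Trust in science vs. retraction rate")
plt.show()
```

The axis “manipulation” I mentioned is exactly this: each series gets its own scale, so the visual alignment is a choice, not something in the data.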
*Gringo requested that I run the correlation coefficients: Conservatives r = -0.81, Liberals r = 0.52, Moderates r = 0. I can’t fully stand by these numbers since my data points were all estimates based on the original chart, but they should be roughly correct.
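If anyone wants to rerun those, the calculation is just a Pearson r over the paired yearly values. A quick sketch using scipy (again with placeholder numbers, since mine were eyeballed off the chart):

```python
# Pearson correlation between a trust series and the retraction rate.
# These numbers are placeholders, not the real eyeballed values.
from scipy.stats import pearsonr

trust_conservatives = [48, 45, 41, 38]  # hypothetical eyeballed values
retraction_rate = [2.0, 3.5, 6.0, 9.5]

r, p_value = pearsonr(trust_conservatives, retraction_rate)
print(f"r = {r:.2f}, p = {p_value:.3f}")  # toy data gives r near -1
```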