A Loss for so Many

I was greatly saddened to hear late on Monday that a longtime friend of mine, Carolyn Scerra, had died of ovarian cancer. She was 35, and leaves behind a husband and a two-year-old daughter.

Carolyn was a high school science teacher, and she had promoted my Intro to Internet Science series and given me feedback based on her experiences in the classroom. A year ago, before her illness had made its ugly appearance, I got to speak to her class in person and see her at work. A fire alarm went off halfway through my presentation, and we actually finished most of it in the parking lot. We laughed as she held her laptop up so the class could see my slides and I talked about lizard people, while other classes looked on in confusion. Through it all she kept the class orderly, calm, and engaged. We had a wonderful discussion about science education and how to support kids and science teachers, and it was a great day despite the interruptions. She was great at what she did, and I was honored to be part of it.

When she got sick in November, she ended up at my workplace for her treatment. I was able to see her a few times during some of her hospitalizations and chemo treatments, and we still talked about science. I would tell her about the latest clinical trials we were working on, and we would talk about genetics research and cancer, some of which I turned into a post. For many people that would not have been a soothing conversation, but it was for Carolyn. She liked to think about the science behind what was going on and where the science was heading, even as the best science was failing her. When another friend taught her how to paint, she started painting representations of how the chemotherapy looked in her blood and how it would interact with the cancer cells. That’s the kind of person she was.

This is a huge loss for so many, and I will truly miss her. Science has lost an advocate, a community has lost an amazing person, kids lost a great teacher, her family has lost a daughter/sister/cousin, and her husband and daughter have lost a wife and mother. A fundraiser has been set up for her family here.

May peace find all of them.

Calling BS Read-Along Week 12: Refuting Bullshit

Welcome to the Calling Bullshit Read-Along based on the course of the same name from Carl Bergstrom and Jevin West at the University of Washington. Each week we’ll be talking about the readings and topics they laid out in their syllabus. If you missed my intro and want the full series index, click here, or if you want to go back to Week 11 click here.

Well guys, we made it! Week 12, the very last class. Awwwwwwwe, time flies when you’re having fun.

This week we’re going to take a look at refuting bullshit, and as usual we have some good readings to guide us. Amusingly, there are only three readings this week, which puts the course total for “readings about bullshit” at an order of magnitude higher than the count for “readings about refuting bullshit”. I am now dubbing this the “Bullshit Assignment Asymmetry Principle: In any class about bullshit, the number of readings dedicated to learning about bullshit will be an order of magnitude higher than the number of readings dedicated to refuting it”. Can’t refute what you can’t see.

Okay, so first up in the readings is the short-but-awesome “Debunking Handbook” by John Cook and Stephan Lewandowsky. This pamphlet lays out a compelling case that truly debunking a bad fact is a lot harder than it looks and must be handled with care. When most of us encounter an error, we believe throwing information at the problem will help. The Debunking Handbook points out a few issues:

  1. Don’t make the falsehood familiar. A familiar fact feels more true than an unfamiliar one, even if we’re only familiar with it because it’s an error.
  2. Keep it simple. Overly complicated debunkings confuse people and don’t work.
  3. Watch the worldview. Remember that sometimes you’re arguing against a worldview rather than a fact, and tread lightly.
  4. Supply an alternative explanation. Stating “that’s not true” is unlikely to work without replacing the falsehood with an alternative.

They even give some graphic/space arranging advice for those trying to put together a good debunking. Check it out.

The next paper is a different version of calling bullshit, one that starts to tread into the academic trolling territory we discussed a few weeks ago but stops short by letting everyone in on the joke. It’s the paper “Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: An argument for multiple comparisons correction“, and it answers the age-old question of what happens when you put a dead fish in an MRI machine. The answer, as it turns out, is “more than you’d think”: the authors found statistically significant brain activity, even after death.

Or did they?

As the authors point out, when you are looking at 130,000 voxels, there’s going to be “significant” noise somewhere, even in a dead fish. Even at a p-value threshold of .001, chance alone will hand you some significant voxel activity (on the order of 130 voxels out of 130,000), and some of those will almost certainly be near each other, leading to the “proof” that there is brain activity. There are statistical methods that correct for this, and they are widely available, but often underused.
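The multiple comparisons problem above is easy to see for yourself. Here’s a minimal simulation of pure noise across many voxels; the voxel count and threshold follow the post, but everything else is an invented sketch, not the paper’s actual analysis:

```python
import random

random.seed(42)
n_voxels = 130_000  # voxel count from the salmon paper
alpha = 0.001       # the "strict" significance threshold

# Under the null hypothesis (no brain activity, which is a safe bet
# for a dead fish), each voxel's p-value is uniform on [0, 1], so
# noise alone "activates" voxels at rate alpha.
p_values = [random.random() for _ in range(n_voxels)]
false_positives = sum(p < alpha for p in p_values)

print(f"Expected false positives: {n_voxels * alpha:.0f}")
print(f"Simulated false positives: {false_positives}")
```

Run it a few times with different seeds and you’ll reliably get somewhere around 130 “significant” voxels out of a signal that is, by construction, nothing but noise. That’s the whole dead-salmon argument in a dozen lines.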

By using traditional methods in such an absurd circumstance, the authors are able to call out a bad practice while not targeting anyone individually. Additionally, they make everyone a little more aware of the problem (reviewers and authors) in a memorable way. They also followed the debunking schema above and immediately provided alternative methods for analysis. Overall, a good way of calling bullshit with minimal fallout.

Finally, we have one more paper, “Athletics: Momentous sprint at the 2156 Olympics?”, and its corresponding Calling Bullshit case study. This paper used a model to determine that women would start beating men in the 100-meter dash at the Olympic level in 2156. While the suspicion appears to be that the authors were not entirely serious and meant this to be a critique of modeling in general, some of the responses were pretty great. It turns out this model also proves that by 2636 races will end before they begin. I, for one, am looking forward to this teleportation breakthrough.

Yet again here we see a good example of what is sometimes called “highlighting absurdity by being absurd”. Saying that someone is extrapolating beyond the scope of their model sounds like a nitpicky math argument (ask me how I know this), but pointing out the techniques being used can prove ridiculous things makes your case pretty hard to argue with.
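To see how this style of critique works mechanically, here’s a toy version of the argument: fit straight lines to winning 100 m times for men and women, then extrapolate. The times below are invented for illustration (they are not the paper’s data); only the shape of the argument matches.

```python
def fit_line(xs, ys):
    """Ordinary least squares: return (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

years = [1928, 1948, 1968, 1988, 2008]
men = [10.8, 10.3, 9.95, 9.92, 9.69]       # invented winning times (s)
women = [12.2, 11.9, 11.08, 10.54, 10.78]  # invented winning times (s)

m_slope, m_int = fit_line(years, men)
w_slope, w_int = fit_line(years, women)

# The (invented) women's times fall faster, so the two lines cross...
cross = (w_int - m_int) / (m_slope - w_slope)
# ...and, extrapolated far enough, each line predicts a time of zero seconds.
m_zero = -m_int / m_slope

print(f"Lines cross around year {cross:.0f}")
print(f"Men's line hits 0 seconds around year {m_zero:.0f}")
```

The model fits the historical points perfectly reasonably; it’s only when you push the lines centuries past the data that they start “proving” instantaneous races. Same math, same model, absurd conclusion: that’s the critique.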

Ultimately, a lot of calling bullshit in statistics or science comes down to the same things we have to consider when confronting any other bad behavior in life. Is it worth it? Is this the hill to die on? Is the problem frequent? Are you attacking the problem or the person? Do you know the person? Is anyone listening to the person/do they have a big platform? Is there a chance of making a difference? Are you sure you are not guilty of the same thing you’re accusing someone else of? Can humor get the job done?

While it’s hard to set any universal rules, these are about as close as I get:

  1. Media outlets are almost always fair game. They have a wide reach and are (at least ostensibly) aiming to inform, so they should have bullshit called whenever you see it, especially for factual inaccuracies.
  2. Don’t ascribe motive. I’ve seen a lot of people ruin a good debunking by immediately informing the person that they shared some incorrect fact because they are hopelessly biased/partisan/a paid shill/sheeple. People understandably get annoyed by that, and they react more defensively because of it. Even if you’re right about the fact in question, if you’re wrong about their motive that’s all they’ll remember. Don’t go there.
  3. Watch what you share. Seriously, if everyone just did this one, we wouldn’t be in this mess.
  4. Your field needs you. Every field has its own particular brand of bullshit, and having people from within that field call bullshit helps immensely.
  5. Strive for improvement. Reading things like the Debunking Handbook and almost any of the readings in this course will help you up your game. Some ways of calling bullshit are simply more effective than others, and learning how to improve can be immensely helpful.

Okay, well that’s all I’ve got!

Since this is the end of the line for the class, I want to take this opportunity to thank Professors Bergstrom and West for putting this whole syllabus and class together, for making it publicly available, and for sharing the links to my read-along. I’d also like to thank all the fun people who have commented on Twitter or the blog or sent me messages. I’m glad people enjoyed this series!

If you’d like to keep up with the Calling Bullshit class, they have Twitter, Facebook, and a mailing list.

If you’d like to keep up with me, then you can either subscribe to the blog in the sidebar, or follow me on Twitter.

Thanks everyone and happy debunking!

Final Exam Rollercoasters

Last week I managed to take what I think is the last exam of my current degree program. I only have a practicum left, and since those are normally papers and projects, I’m feeling pretty safe in this assumption.

Now, as someone who has gotten off and on the formal education carousel more than a few times, racked up a few degrees, and been in the workforce for over a decade, you’d think I’d have learned how to control my emotions around test taking.

You’d be wrong.

I literally have the exact same reaction to tests that I had in first grade, though I use more profanity now. Every time I get near a test, my emotions go something like this:

I should note that this reaction is entirely unrelated to the following variables:

  1. How much I like the class
  2. How well I am doing prior to the test
  3. How much I have studied
  4. How much the test is worth
  5. How I actually do on the test

The following things are also true:

  1. Every time a test is put in front of me, I have a dreamlike moment where I believe I have sat down in the wrong class and that’s why nothing looks familiar. For language-related tests, I believe all the words are in a different language.
  2. I have doubted every grade I have ever been given, believing that both good grades and bad grades are mistakes the professor is about to correct.
  3. The question “how do you think you did?” is completely flummoxing for me. I struggle to answer something other than “I have envisioned scenarios everywhere between a 20% and 100%, and they all feel equally plausible at the moment.”

Once I realized this pattern wasn’t going to stop, I actually felt much better. Now when I get the test I merely do one of those CBT-type things where I go “ah yes, this is the part where I believe the test is written in Chinese. It’ll pass in a few minutes, just slog on until then”. It’s not that bad if you know it’s coming.

(I did fine by the way, thanks for asking)

Calling BS Read-Along Week 11: Fake News

Welcome to the Calling Bullshit Read-Along based on the course of the same name from Carl Bergstrom and Jevin West at the University of Washington. Each week we’ll be talking about the readings and topics they laid out in their syllabus. If you missed my intro and want the full series index, click here, or if you want to go back to Week 10 click here.

Guys, guys, it’s week 11! We’re down to our second-to-last read-along, and this week we’re tackling a topic that has recently skyrocketed in public awareness: fake news. To give you a sense of how new this term is to public discussion, check out the Google Trends history:

Now Google Trends isn’t always the best way of figuring out how popular a search term is (the y-axis shows interest relative to the term’s own peak popularity, not absolute search volume), but it does let us know the interest in this term really took off after the US election in November and has not settled back down. Apparently when online fake news started prompting real-life threats of nuclear war, people took notice.
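As a side note, that peak-relative scaling is easy to mimic, which helps make clear why two Trends charts can’t be compared by eyeballing their heights. A minimal sketch, with invented monthly search counts:

```python
# Google Trends reports "interest" scaled so a term's own peak = 100,
# not absolute search volume. The raw counts below are invented.
raw_counts = [120, 340, 95, 4800, 2100, 1900]

peak = max(raw_counts)
trends_style = [round(100 * c / peak) for c in raw_counts]

print(trends_style)  # the peak month becomes 100; everything else is relative to it
```

A term that peaked at 4,800 searches and one that peaked at 4.8 million would both show a “100” at their peaks, so the chart tells you about a term’s trajectory, not its absolute popularity.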

But what is fake news exactly, and is it really a new phenomenon? That’s the question our first reading sets out to answer. The article “Before Fake News Came False Prophecy” uses some British history to frame our current debate, and makes the assertion that “prophecy” about one’s political opponents was the old-time version of fake news. I hadn’t previously connected the idea of prophecy to the idea of fake news, but it really is the historic equivalent: guarantees that dark and terrible things will happen (or may already be happening) if your enemies are allowed to be (or remain) in charge. As the article says, “Prophecy didn’t describe the world as it was, but as it was to be—or as it might become. That fantasy was more powerful than any lived reality. People killed and died for fantasies. People didn’t act politically because of what they had lost but because of what, in their most potent fantasy, they feared losing.”

With that framing, fake news becomes not just a tool of misinformation, but actually something that’s playing off our imagination. It blurs the line between “this is true because it happened” and “this is true because it might happen”.

Okay, so fake news is bad, but what is it really? The next reading, from FactCheck.org, takes that on a bit before going into the ever-important “how to spot it” topic. They quote the guy who started Snopes, who points out that “fake news” (fake stories made up by websites trying to make money) is really a subset of “bad news,” which is (as he puts it) “shoddy, unresearched, error-filled, and deliberately misleading reporting that does a disservice to everyone”. I think this “just one branch of a tree of bad” framing is important to keep in mind, and I’ll circle back to it later. That being said, there is something a little bit different about entirely fictitious stories, and there are some red flags you should look for. FactCheck gives a list of these, such as anonymous authors, lots of exclamation points, links to sources that don’t support the story, and quoting “Fappy the Anti-Masturbation Dolphin” as a source. They also caution that you should always check the date on stories, as sometimes people attempt to connect true older stories to current events as though they were more recent, and they highlight people who don’t realize known satirists are satire (see the website Literally Unbelievable for a collection of people who don’t know about the Onion).

So why are we so worried about fake news? Can’t we just ignore it and debunk as needed? Well….maybe, but some of this is a little more organized than you may think. The next reading, “The Agency”, is a long but chilling New York Times investigation into some real-world accounts of some rather scary fake news moments. They start with a bizarre case of a reported but non-existent chemical plant explosion in Louisiana. This story didn’t just get reported to the media: it was texted to people who lived near the plant and posted on Twitter with doctored local Louisiana news reports, and the whole thing started trending on Twitter and getting picked up nationally while the actual chemical plant employees were still pulling themselves out of bed and trying to figure out what was going on. While no one really identified a motivation for that attack, the New York Times found links suggesting it was orchestrated by a group from Russia that employs 400 people to do nothing but spread fake news. This group works 12-hour days trolling the internet, causing chaos in comments sections on various sites all over the place for purposes that aren’t clear to almost anyone, but with an end result of lots of aggravation for everyone.

Indeed, this appears to be part of the point. Chen’s investigation suggests that after the internet was used to mobilize protests in Russia a few years ago, the government decided to hit back. If you could totally bog down political websites and comments sections with angry dreck, normal people wouldn’t go there. At best, you’d convince someone that the angry pro-government opinions were the majority and that they should play along. Failing that, you’d cut off a place where people might have otherwise gathered to express their discontent. Chen tracks the moves of the Russian group into US events, which ultimately ends up including a campaign against him. The story of how they turned him from a New York Times reporter into a neo-Nazi CIA recruiter is actually so bizarre and incredible I cannot do it justice, so go read the article.

Not every fake news story is coming out of a coordinated effort however, as yet another New York Times article discusses. Some of it is just from regular people who discovered that this is a really good way of making money. Apparently bullshit is a fairly lucrative business.

Slight tangent: After the election, a close family member of mine was reading an article on the “fake news” topic and discovered he had actually gone to college with one of the people interviewed. The guy had created a Facebook group we had heard of that spewed fake and inflammatory political memes, and was now (allegedly) making a six-figure monthly salary to do so. The guy in question was also fairly insane (like “committed assault over a minor dorm dispute” insane), and had actually expressed no interest in politics during college. In real life he had trouble making friends or influencing people, but on the internet he turned his knack for conflict and pissing people off into a hefty profit. Now, I know this is just a friend-of-a-friend story and you have no real reason to believe me, but I think the fundamental premise of “these news stories/memes might have creators who you wouldn’t let spend more than 5 minutes in your living room” is probably a good thing to keep in mind.

So why do people share these things? Well, as the next reading goes into, the answer is really social signaling. When you share a news story that fits your worldview, you proclaim allegiance to your in-group. When you share a fake news story, you also probably enrage your out-group. By showing your in-group that you are so dedicated to your cause that you’re willing to sacrifice your reputation with your out-group, you increase your ties with them. The deeper motivations here are why simply introducing contradictory facts doesn’t always work (though it sometimes does – more on the most recent research on the “backfire effect” here), particularly if you get snarky about it. People may not see it as a factual debate, but rather a debate about their identity. Yikes. The article also mentions three things you can personally do to help:

  1. Correct normal errors, but don’t publicly respond to social signaling.
  2. Make “people who value truth” your in-group and hold yourself to high standards.
  3. Leave room for satire, including satire of your own beliefs.

I endorse this list.

Finally, we take a look at how technology is adding to the problem, not just by making this stuff easier to share, but sometimes by promoting it. In the reading “Google’s Dangerous Identity Crisis“, we take a look at Google’s “featured” search results. These are supposed to be used to highlight basic information like “what time does the Super Bowl start” but can also end up highlighting things like “Obama is planning a coup”. The identity crisis in question is whether Google exists simply to index sites on the web or whether it is verifying some of those sites as more accurate than others. The current highlighting feature certainly looks like an endorsement of a fact, but it’s really just an advanced algorithm that can be fooled pretty easily. What Google can or should be doing about that is up for debate.

Whew, okay, that was a lot of fake news, and a lot of depressing stuff. What would I add to this whole mess? Well, I really liked the list given in the CNN article. Not boosting people’s signaling, watching your own adherence to truth, and keeping a sense of humor are all good things. The other thing I’ve found very effective is to try to have more discussions about politics with people I know and trust. I think online news stories (fake or not) are frequently like junk food: easy to consume and easy to overdo. Even discussions with friends who don’t agree with me can never match the quick-hit vitriolic rush of 100 Twitter hot takes.

The second thing I’d encourage is to not let the “fake news” phenomenon distract from the “bad news” issues that can be perpetuated by even respectable news sources. The FactCheck article quoted the guy from Snopes.com on this topic, and I think it’s important. Since the rise of the “fake news” phenomenon, I’ve had a few people tell me that fact checking traditional media is no longer as important. That seems horribly off to me. Condemning fake news should be part of a broader movement to bring more accuracy to all of our news.

Okay, that’s all I’ve got for today. Check back in next week for the last class!

Mistakes Were Made, Sometimes By Me

A few weeks ago, I put out a call asking for opinions on how a blogger should correct errors in their own work. I was specifically interested in errors that were a little less clear cut than normal: quoting a study that later turned out to be less convincing than it initially appeared (failed to replicate), studies whose authors had been accused of misconduct, or studies that had been retracted.

I got a lot of good responses, so thanks to everyone who voted/commented/emailed me directly. While I came to realize there is probably not a perfect solution, there were a few suggestions I think I am going to follow up on:

  1. Updating the individual posts (as I know about them). It seemed pretty unanimous that updating old posts was the right thing to do. Given that Google is a thing and that some of my most popular posts are from over a year ago, I am going to try to update old posts if I know there are concerns about them. My one limitation is that I don’t always index well which studies I have cited where, so this one isn’t perfect. I’ll be putting a link up in the sidebar to let people know I correct stuff.
  2. Creating a “corrected” tag to attach to all posts I have to update. This came out of jaed’s comment on my post and seemed like a great idea. This will make it easier to track which types of posts I end up needing to update.
  3. Creating an “error” page to give a summary of different errors, technical or philosophical, that I made in individual posts, along with why I made them and what the correction was. I want to be transparent about the types of errors that trip me up. Hopefully this will help me notice patterns I can improve upon. That page is up here, and I kicked it off with the two errors I caught last month. I’m also adding it to my sidebar.
  4. Starting a 2-4 times a year meta-blog update. Okay, this one isn’t strictly just because of errors, though I am going to use it to talk about them. It seemed reasonable to do a few posts a year mentioning errors or updates that may not warrant their own post. If the correction is major, it will get its own post, but this will be for the smaller stuff.

If you have any other thoughts or want to take a look at the initial error page (or have things you think I’ve left off), go ahead and meander over there.

Calling BS Read-Along Week 10: The Ethics of Calling Bullshit

Welcome to the Calling Bullshit Read-Along based on the course of the same name from Carl Bergstrom and Jevin West at the University of Washington. Each week we’ll be talking about the readings and topics they laid out in their syllabus. If you missed my intro and want the full series index, click here, or if you want to go back to Week 9 click here.

Wow, week 10 already? Geez, time flies when you’re having fun. This week the topic is “the ethics of Calling Bullshit”, and man is that a rich topic. With the advent of social media, there are more avenues than ever for both the perpetuation and correction of bullshit. While most of us are acutely aware of the problems that arise with the perpetuation of bullshit, are there also concerns with how we go about correcting it? Spoiler alert: yes. Yes there are. As the readings below will show, academia has been a bit rocked by this new challenge, and the whole thing isn’t even close to being sorted out yet. There are a lot more questions than answers raised this week.

Now, as a blogger who frequently blogs about things I think are kinda bullshit, I admit I have a huge bias in the “social media can be a huge force for good” direction. While I doubt this week’s readings will change my mind on that, I figured pointing out how biased I am and declaring my intention to fairly represent the opposing viewpoint might help keep me honest. We’ll see how it goes.

For the first reading, we’re actually going to take a look at a guy who was trolling before trolling was a thing and who may have single-handedly popularized the concept of “scientific trolling”: Alan Sokal. Back in the mid-90s, long before most of the folks on 4chan were born, Sokal became famous for creating a parody paper called “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity” and getting it published in the journal Social Text as a serious work. His paper contained claims like “physical reality is a social construct” and that quantum field theory is the basis for psychoanalysis. Unfortunately for Social Text, they published it in a special “Science Wars” edition of their journal unchallenged. Why did he do this? In his own words: “So, to test the prevailing intellectual standards, I decided to try a modest (though admittedly uncontrolled) experiment: would a leading North American journal of cultural studies….publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors’ ideological preconceptions?” When he discovered the answer was yes, he published his initial response here.

Now whether you consider this paper a brilliant and needed wake-up call or a cheap trick aimed at tarring a whole swath of people with the same damning brush depends largely on where you’re sitting. The oral history of the event (here, for subscribers only; I found a PDF copy here) does a rather fair job of getting a lot of perspectives on the matter. On the one hand, you have the folks who believe that academic culture needed a wake-up call, and that the editors should be deeply embarrassed that no one knew the difference between a serious paper and one making fun of the whole field of cultural studies. On the other hand, you have those who felt that Sokal exploited a system that was acting in good faith and gave its critics an opportunity to dismiss everything that comes out of that field. Both sides probably have a point. Criticizing the bad parts of a field while encouraging people to maintain faith in the good parts is an incredibly tough game to play. I got a taste of this after my first presentation to a high school class, when some of the kids walked away declaring that the best course of action was to never believe anything scientific. Whether you agree with Sokal or not, I suspect every respectable journal editor has been on the lookout for hoaxes a little more vigilantly ever since that incident.

Next up is an interesting example of a more current scientific controversy that appears to be spinning way out of control: nano-imaging. I’ll admit, I had no idea this feud was even going on, but this article reads more like a daytime TV plot than typical science reporting. There are accusations of misconduct, anonymous blog postings, attacks and counterattacks, all over the not particularly well-known controversy of whether or not you can put stripes on nanoparticles. While the topic may be unfamiliar to most of us, the debate over how the argument is being approached is pretty universal. If you have a problem with someone else’s work and believe traditional venues for resolution are too slow, what do you do? Alternatively, what do you make of a critic who is mostly relying on social media to voice their concerns? These are not simple questions. As we’ve seen in many areas of life (most recently the airline industry), traditional venues do at times love to cover up their issues, silence critics, and impede progress. On the other hand, social media is easily abused and sometimes can enable people with an agenda to spread a lot of criticism with minimal fact checking. From the outside, it’s hard to know what’s what. I had no opinion on the “stripes on nanoparticles” debate, and I have no way of judging who has the better evidence. I’m left going with my gut on who sounds more convincing, which is completely the opposite of how we’re supposed to evaluate evidence. I’m intrigued for all the wrong reasons.

Going even further down the rabbit hole of “lots of questions, not many answers”, the next reading is from Susan Fiske, “Mob Rule or Wisdom of the Crowds”, where she explains exactly how bad the situation is getting in psychology. She explains (though without names or sources) many of the vicious attacks she’s seen on people’s work and how concerning the current climate is. She sees many of the attacks as personal vendettas more focused on killing people’s careers than improving science, and calls the critics “methodological terrorists”. Her basic thesis is that hurting people is not okay, drives good people out of the field, and makes things more adversarial than they need to be.

Fiske’s letter got a lot of attention and prompted some really good responses. One is from a researcher, Daniel Lakens, who wrote about his experience being impolitely called out on an error in his work. He realized that the criticism stung and felt unfair, but the more he thought about it the more true he realized it was. He changed his research practices going forward, and by the time a meta-analysis showed that the original criticism was correct, he wasn’t surprised or defensive. So really what we’re talking about here is a setup that looks like this:

Yeah, this probably should have had a z-axis for the important/unimportant measure, but my computer wasn’t playing nice.

It is worth noting that (people being people and all) it is very likely we all think our own critiques are more polite and important than they are, and that our critics are less polite and their concerns less important than they may be.

Lakens had a good experience in the end, but he also was contacted privately via email. Part of Fiske’s point was that social media campaigns can get going, and then people feel attacked from all sides. I think it’s important that we don’t underestimate the social media effect here either, as I do think it’s different from a one on one conversation. I have a good friend who has worked in a psychiatric hospital for years, and he tells me that one of the first things they do when a patient is escalating is to get everyone else out of the room. The obvious reason for this is safety, but he said it is also because having an audience tends to amp people up beyond where they will go on their own. A person alone in a room will simply not escalate as quickly as someone who has a crowd watching. With social media of course, we always have a crowd watching. It’s hard to dial things back once they get going.

Some fields have felt this acutely. Andrew Gelman responds to Fiske’s letter here by giving his timeline of how quickly the perspective on the replication crisis changed, fueled in part by blogs and Twitter. From something that was barely talked about in 2011 to something that is pretty much a given now, we’ve seen people come under scrutiny they’ve never had before. Again, this is an issue shared by many fields (just ask your local police officer about cell phone cameras), but the idea that people were caught off guard by the change is pretty understandable. Gelman’s perspective, however, is that this was a needed ending to an artificially secure spot. People were counting on being able to cut a few corners with minimal criticism, then weren’t able to anymore. It’s hard to feel too much sympathy for that.

Finally we have an article that takes a look at PubPeer, a site that allows users to make anonymous post-publication comments on published articles. This goes about as well as you’d expect: some nastiness, some usefulness, lots of feathers ruffled. The site has helped catch some legitimate frauds, but has also given people (allegedly) an outlet to pick on their rivals without fear of repercussion or disclosing conflicts of interest. The article comes out strongly against the anonymity provided and calls the whole thing “Vigilante Science”. The author goes particularly hard after the concept that anonymity allows people to speak more freely than they would otherwise, and points out that this also allows people to be much meaner, more petty, and possibly push an agenda harder than they could otherwise.

Okay, so we’ve had a lot of opinions here, and they’re all over the graph I made above. If you add in the differences in perception of tone and importance of various criticisms, you can easily see why even well-meaning people end up all over the map on this one. Additionally, it’s worth noting that there actually are some non-well-meaning people exploiting the chaos in all of this, and they complicate things too. Some researchers really are using bad practices and then blaming others when they get called out. Some anonymous commenters really are just mouthing off or have other motivations for what they’re saying.

As I said up front, it should not come as a shock to anyone that I tend to fall on the side of appreciating the role of social media in science criticism. However, being a blogger, I have also received my fair share of criticism from anonymous sources and have a lot of sympathy for the idea that criticism is not always productive. The graph I made a few paragraphs ago really reflects the three standards I apply to criticism I give and receive. There’s no one-size-fits-all recommendation for every situation, but in general I try to look at these three things:

  1. Correct/incorrect: This should be obvious, but your criticism should be correct. If you’re going to take a shot at someone else’s work, for the love of God make sure you’re right. Double points if you have more than your own assertion to back you up. On the other hand, if you screw up, you can expect some criticism (and you will screw up at some point). I’m doing a whole post on this later this week.
  2. Polite/impolite: In general, polite criticism is received better than impolite criticism. It’s worth noting of course that “polite” is not the same as “indirect”, and that frequently people mistake “direct” for “rude”. Still, politeness is just….polite. Particularly if you’ve never raised the criticism before, it’s probably best to start out polite.
  3. Important/unimportant: How important is it that the error be pointed out? Does it change the conclusions or the perspective?

These three are not necessarily independent variables. A polite note about a minor error is almost always fine. On the other hand, it can be hard to find a way of saying “I think you’ve committed major fraud” politely, though if you’re accusing someone of that you DEFINITELY want to make sure you have your ducks in a row. I think the other thing to consider is how easy the criticism is to file through other means. If you create a system where people have little recourse, where all complaints or criticisms are dismissed or minimized, people will start using other means to make complaints. This was part of Sokal’s concern in the first reading: how was a physicist supposed to make a complaint to the cultural studies department and actually be listened to? I’m no Sokal, but personally I started this blog because I was irritated with the way numbers and science were getting reported in the media, and putting all my thoughts in one place seemed to help more than trying to email journalists who almost never seemed to update anything.

When it comes to the professional realm, I think similar rules apply. We’re all getting used to the changes social media has brought, and it is not going away any time soon. We’re headed into a whole new world of ethics where many good people are going to disagree. Whether you’re talking about research you disagree with or just debating with your uncle at Thanksgiving, it is worth thinking about where your lines are, which battles you want to fight, and how you want to fight them.

Okay, that wraps up a whole lot of deep thoughts for the week, see you next week for some Fake News!

Week 11 is up! Get your fake news here!

State Level Representation: Graphed

I got into an interesting email discussion this past weekend about a recent Daily Beast article, “The Republican Lawmaker Who Secretly Created Reddit’s Women-Hating ‘Red Pill’“, which ended up sparking a train of thought mostly unrelated to the original topic (not uncommon for me). The story is an investigation into a previously anonymous user who started an infamous subreddit, and the Daily Beast’s discovery that he was actually an elected official in the New Hampshire House of Representatives.

Given that I am originally from New Hampshire and all my family still lives there, I was intrigued by the story both on the “hey! that’s my state!” level and the “oh man, the New Hampshire House of Representatives is really hard to explain to a national audience” level. Everyone I was emailing with either lives in New Hampshire or grew up there (as I did), so the topic quickly switched to how unusual the New Hampshire state legislature is, and how hard it is for a national news outlet to truly capture that. For starters, the NH state House of Representatives has nearly as many seats (400) as the US House of Representatives (435), and double the seats of the next closest state (Pennsylvania, with 200), all while serving a state population of a little over 1 million people. Next is the low pay. For their service, those 400 people make a whopping $200 for a two-year term. Some claim this is not the lowest-paying gig in the state-level representation game, since other states like New Mexico pay no salary, but a quick look at this page shows that those states pay a daily per diem that would quickly exceed $200. New Hampshire has no per diem, meaning most members of the House will spend more in gas money than they make during their term.

As you can imagine, this set up does not pull from a random sample of the population.
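To put some rough numbers on how unusual that setup is, here’s a quick back-of-the-envelope comparison of constituents per seat. The seat counts are the ones quoted above; the population figures are approximations I’m assuming for illustration, not exact census values:

```python
# Back-of-the-envelope constituents-per-seat numbers. Seat counts are from
# the text above; the population figures are rough assumptions, not exact
# census values.
seats = {"NH House": 400, "PA House": 200, "US House": 435}
population = {
    "NH House": 1_330_000,    # New Hampshire, "a little over 1 million"
    "PA House": 12_800_000,   # Pennsylvania, approximate
    "US House": 320_000_000,  # United States, approximate
}

for chamber, n in seats.items():
    print(f"{chamber}: ~{population[chamber] / n:,.0f} constituents per seat")
```

Even with fuzzy population numbers, the gap is stark: a NH House member answers to a few thousand people, while a US House member answers to hundreds of thousands.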

This conversation got me thinking about how often state level politicians get quoted in news articles, and got me wondering about how we interpret what those officials do. Growing up in NH gave me the impression that most state level representatives didn’t have much power, but in my current state (Massachusetts) they actually do have some clout and frequently move on to higher posts.

This of course got me curious about how other states do things. When lawmakers from individual states make the news, I suspect most of us assume that they operate much the same way as lawmakers in our own state do, and that could lead to confusion about how powerful (or not) the person we’re talking about really is. Ballotpedia breaks state legislatures down into three categories: full-time or close to it (10 states), high part-time (23 states), and low part-time (17 states). A lot of that appears to have to do with the number of people each legislator represents. I decided to do a few graphs to illustrate.

First, here is the size of each state’s “lower house” vs the number of people each lower house member represents:

Note: Nebraska doesn’t have a lower house, at least according to Wikipedia. NH and CA are pretty clear outliers in terms of size and population, respectively.

State senates appear much less variable:

So next time you read an article about a state level representative doing something silly, keep this graph in mind. For some states, you are talking about a fairly well compensated person with lots of constituents, who probably had to launch a coordinated campaign to get their spot and may have higher ambitions. For other states, you’re talking about someone who was willing to show up.

Here’s the data if you’re into that sort of thing. I got the salary data here, the state population data here, and the number of seats in the house here. As always, please update me if you see any errors!

Calling BS Read-Along Week 9: Predatory Publishing and Scientific Misconduct

Welcome to the Calling Bullshit Read-Along based on the course of the same name from Carl Bergstrom and Jevin West at the University of Washington. Each week we’ll be talking about the readings and topics they laid out in their syllabus. If you missed my intro and want the full series index, click here, or if you want to go back to Week 8 click here.

Welcome back to Week 9 of the Calling Bullshit Read-Along! This week our focus is on predatory publishing and scientific misconduct. Oh boy. This is a slight change of focus from what we’ve been talking about up until now, and not for the better. In week one we established that in general, bullshit is different from lying in that it is not solely attempting to subvert the truth. Bullshit may be characterized by a (sometimes reckless) disregard for truth, but most bullshitters would be happy to stick to the truth if it fit their agenda. The subjects of this week’s readings are not quite so innocent, as most of our focus is going to be on misconduct by people who should have known better. Of course, sometimes the lines between intentional and unintentional misconduct are a little less clear than one would hope, but for our purposes the outcome (less reliable research) is the same. Let’s take a look.

To frame the topic this week, we start with a New York Times article, “A Peek Inside the Strange World of Fake Academia“, which takes a look at, well, fake academia. In general, “fake academia” refers to conferences and journals set up with very little oversight (one man runs 17 of them) or review, but high price tags. The article looks at a few examples that agreed to publish abstracts created using the iPhone autocomplete feature or that “featured” keynote speakers who never agreed to speak. Many of these are run by a group called OMICS International, which has gotten into legal trouble over its practices. However, some groups/conferences are much harder to classify. As the article points out, there’s a supply and demand problem here: more PhDs need publication credits than can get their work accepted by legitimate journals or conferences, so anyone willing to loosen the standards can make some money.

To show how bad the problem of “pay to play” journals/conferences is, the next article (by the same guys who brought us Retraction Watch) talks about a professor who decided to make up some scientists just to see if there was any credential checking going on at these places. My favorite of these was his (remarkably easy) quest to get Borat Sagdiyev (a senior researcher at the University of Kazakhstan) on the editorial board of the journal Immunology and Vaccines. Due to the proliferation of journals and conferences with low quality control, these fake people ended up with surprisingly impressive-sounding resumes. The article goes on to talk about researchers who make up co-authors, and comes to the troubling conclusion that fake co-authors seem to help publication prospects. There are other examples provided of “scientific identity fraud”: researchers finding their data has been published by other scientists (none of whom are real), researchers recommending that made-up scientists review their work (the email addresses route back to themselves), and the previously mentioned pay-for-publication journals. The article wraps up with a discussion of even harder-to-spot chicanery: citation stuffing and general metrics gaming. As we discussed in Week 7 with Jevin West’s article, attempting to rank people in a responsive system will create incentives to maximize your ranking. If there is an unethical way of doing this, at least some people will find it.

That last point is also the focus of one of the linked readings, “Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition“. The focus of this paper is on the current academic climate and its negative effect on research practices. To quote Goodhart’s law: “when a measure becomes a target, it ceases to be a good measure”. The paper covers a lot of ground, including increased reliance on performance metrics, decreased access to funding, and an oversupply of PhDs. My favorite part of the paper was this table:

That’s a great overview of how the best intentions can go awry. This is all a set up to get to the meat of the paper: scientific misconduct. In a high stakes competitive environment, the question is not if someone will try to game the system, but how often it’s already happening and what you’re going to do about it. Just like in sports, you need to acknowledge the problem (*cough* steroid era *cough*) then come up with a plan to address it.

Of course the problem isn’t all on the shoulders of researchers, institutions or journals. Media and public relations departments tend to take the problem and run with it, as this Simply Statistics post touches on. According to them, the three stories that seem to get the most press are:

  1. The exaggerated big discovery
  2. Over-promising
  3. Science is broken

Sounds about right to me. They then go on to discuss how the search for the sensational story or the sensational shaming seems to be the bigger draw at the moment. If everyone focuses on short-term attention to a problem rather than the sometimes boring work of making actual tiny advancements or incremental improvements, what will we have left?

With all of this depressing lead in, you may be wondering how you can tell if any study is legitimate. Well, luckily the Calling Bullshit overlords have a handy page of tricks dedicated to just that! They start with the caveat that any paper anywhere can be wrong, so no list of “things to watch for” will ever catch everything. However, that doesn’t mean that every paper is at equal risk, so there are some ways to increase your confidence that the study you’re seeing is legitimate:

  1. Look at the journal: As we discussed earlier, some journals are more reputable than others. Unless you know a field pretty well, it can be pretty tough to tell a prestigious legitimate journal from a made-up but official-sounding journal. That’s where journal impact factor can help…it gives you a sense of how important the journal is to the scientific community as a whole. There are different ways of calculating impact factors, but they all tend to focus on how often the articles published in the various journals end up being cited by others, which is a pretty good way of figuring out how others view the journal. Bergstrom and West also give a link to their Google Chrome extension, the Eigenfactorizer, which color-codes journals that appear in PubMed searches based on their Eigenfactor ranking. I downloaded this and spent more time playing around with it than I probably want to admit, and it’s pretty interesting. To give you a sense of how it works, I typed in a few key words from my own field (stem cell transplant/cell therapies) and took a look at the results. Since it’s kind of a niche field, it wasn’t terribly surprising to see that most of the published papers are in yellow or orange journals. The journals under those colors are great and very credible, but most of the papers have little relevance to anyone not in the hem malignancies/transplant world. A few recent ones on CAR-T therapy showed up in red journals, as that’s still pretty groundbreaking stuff. That leads us to the next point…
  2. Compare the claim to the venue: As I mentioned, the most exciting new thing going in the hematologic malignancies world right now is CAR-T and other engineered cell therapies. These therapies hold promise for previously incurable leukemias and lymphomas, and research institutions (including my employer) are pouring a lot of time, money, and effort into development. Therefore it’s not surprising that big discoveries are getting published in top-tier journals, as everyone’s interested in seeing where this goes and what everyone else is doing. That’s the normal pattern. Thus, if you see a “groundbreaking” discovery published in a journal that no one’s ever heard of, be a little skeptical. It could be that the “establishment” is suppressing novel ideas, or it could be that the people in the field thought something was off with the research. Spoiler alert: it’s probably that second one.
  3. Are there retractions or questions about the research? Just this week I got pointed to an article about 107 papers retracted from the journal Tumor Biology due to a fake peer review scandal, the second time this has happened to this journal in the last year. No matter what the field, retractions are worth keeping an eye on.
  4. Is the publisher predatory? This can be hard to figure out without some inside knowledge, so check out the resources they link to.
  5. Preprints, Google Scholar, and finding out who the authors are: Good tips and tricks about how to sort through your search results. Could be helpful the next time someone tells you it’s a good idea to eat chocolate ice cream for breakfast.
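As a side note on point 1: the classic two-year impact factor is just a ratio, so it’s easy to compute if you can get the citation counts. Here’s a minimal sketch (the journal numbers are made up purely for illustration):

```python
def impact_factor(citations, citable_items):
    """Classic two-year impact factor: citations received this year to
    articles the journal published in the previous two years, divided by
    the number of citable items it published in those two years."""
    return citations / citable_items

# Hypothetical journal: 400 citations in 2017 to its 2015-2016 output
# of 150 citable items:
print(round(impact_factor(400, 150), 2))  # 2.67
```

The denominator (“citable items”) is where much of the gaming happens, since publishers can argue about which article types count.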

Whew, that’s a lot of ground covered. While it can be disappointing to realize how many ways there are of committing scientific misconduct, it’s worth noting that we currently have more access to more knowledge than at nearly any other time in human history. In week one, we covered that the more opportunities for communication there are, the more bullshit we will encounter. Science is no different.

At the same time, the sciences have a unique opportunity to lead the way in figuring out how to correct for the bullshit being created within their own ranks and to teach the public how to interpret what gets reported. Some of the tools provided this week (not to mention the existence of this class!) are a great step in the right direction.

Well, that’s all I have for this week! Stay tuned for next week when we cover some more ethics, this time from the perspective of the bullshit-callers as opposed to the bullshit producers.

Week 10 is up! Read it here.

The Bullshit Two-Step

I’ve been thinking a lot about bullshit recently, and I’ve started to notice a bit of a pattern in the way bullshit gets relayed on social media. These days, it seems like bullshit is turning into a multi-step process that goes a little something like this: someone posts/publishes something with lots of nuances and caveats. Someone else translates that thing for more popular consumption, and loses quite a bit of the nuance. This happens with every share until the finished product is almost completely unrecognizable. Finally the story encounters someone who doesn’t agree with it, who then points out there should be more caveats. The sharers/popularizers promptly point at the original creator, and the creator throws their hands up and says “but I clarified those points in the original!!!!”. In other words:

The Bullshit Two-Step: A dance in which a story or research with nuanced points and specific parameters is shared via social media. With each share some of the nuance or specificity is eroded, finally resulting in a story that is almost total bullshit but that no one individually feels responsible for.

Think of this as the science social media equivalent of the game of telephone.

This is a particularly challenging problem for people who care about truth and accuracy, because so often the erosion happens one word at a time. Here’s an example of this happening with a Census Bureau statistic I highlighted a few years ago. Steps 1 and 2 are where the statistic started, step 4 is how it ended up in the press:

  1. The Census Bureau reports that half of all custodial (single) parents have court ordered child support.
  2. The Census Bureau also states (when talking about just the half mentioned in #1) that “In 2009, 41.2 percent of custodial parents received the full amount of child support owed them, down from 46.8 percent in 2007, according to a report released today by the U.S. Census Bureau. The proportion of these parents who were owed child support payments and who received any amount at all — either full or partial — declined from 76.3 percent to 70.8 percent over the period.”
  3. That got published in the New York Times as “In 2009, the latest year for which data are available, only about 41 percent of custodial parents (predominantly women) received the child support they were owed. Some biological dads were deadbeats.” No mention that this only covered half of custodial parents.
  4. This ended up in Slate (citing the Times) as “…. in a substantial number of cases, the men just quit their families. That’s why only 41 percent of custodial parents receive child support.” The “full amount” part got lost, along with all those with no court mandate who may or may not be getting money.

As you can see, very little changed between each piece, but a lot changed by the end. We went from “Half of all custodial parents have court-ordered child support. Of that half, only 41% received the full amount this year.” to “only 41% of custodial parents receive child support at all”. We didn’t get there all at once, but we got there. No one’s fully responsible, but no one’s innocent either. It’s the bullshit two-step.
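If you want to see just how far the final claim drifted, you can recompose the original numbers directly. A quick sketch using the figures quoted above:

```python
# Step 1: half of all custodial parents have court-ordered child support.
share_with_order = 0.50
# Step 2: of THAT half, 41.2% received the full amount owed in 2009.
full_given_order = 0.412

# Share of ALL custodial parents who received the full court-ordered amount:
full_overall = share_with_order * full_given_order
print(f"{full_overall:.1%}")  # 20.6%
```

So the “41 percent” figure only ever applied to the half with a court order; restated over all custodial parents it’s about 21 percent receiving the full amount, which is a very different claim from “only 41 percent of custodial parents receive child support.”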

I doubt there’s any one real source for this….sometimes I think these are legitimate errors in interpretation, sometimes people were just reading quickly and missed the caveat, sometimes people are just being sloppy. Regardless, I think it’s interesting to track the pathway and see how easy it is to lose meaning one or two words at a time. It’s also a good case for only citing primary sources for statistics, as it makes it harder to carry over someone else’s error.


Calling BS Read-Along Week 8: Publication Bias

Welcome to the Calling Bullshit Read-Along based on the course of the same name from Carl Bergstrom and Jevin West at the University of Washington. Each week we’ll be talking about the readings and topics they laid out in their syllabus. If you missed my intro and want the full series index, click here, or if you want to go back to Week 7 click here.

Well hello Week 8! How’s everyone doing this week? A quick programming note before we get going: the videos for the lectures for the Calling Bullshit class are starting to be posted on the website here. Check them out!

This week we’re taking a look at publication bias and all the problems it can cause. And what is publication bias? As one of the readings so succinctly puts it, publication bias “arises when the probability that a scientific study is published is not independent of its results.” This is a problem because it not only skews our view of what the science actually says, but also leaves most of us with no way of gauging how extensive an issue it is. How do you go about figuring out what you’re not seeing?

Well, you can start with the first reading, the 2005 John Ioannidis paper “Why Most Published Research Findings Are False“. This provocatively titled yet stats-heavy paper does a deep dive into the math behind publication and why our current research practices and statistical analysis methods may lead to lots of false positives in the literature. I find this paper so fascinating/important that I actually did a seven-part deep dive into it a few months ago, because there’s a lot of statistical meat in there that I think is important. If that’s TL;DR for you though, here’s the recap: the statistical methods we use to control for false positives and false negatives (alpha and beta) are insufficient to capture all the factors that might make a paper more or less likely to reach an erroneous conclusion. Ioannidis lays out quite a few factors we should be looking at more closely, such as:

  1. Prior probability of a positive result
  2. Sample size
  3. Effect size
  4. “Hotness” of field
  5. Bias

Ioannidis also flips the typical calculation of “false positive rate” or “false negative rate” to one that’s more useful for those of us reading a study: positive predictive value. This is the chance that any given study with a “positive” finding (as in a study that reports a correlation/significant difference, not necessarily a “positive” result in the happy sense) is actually correct. He adds all of the factors above (except hotness of field) into the typical p-value calculation, and gives an example table of results (1 − β is study power, which includes sample size and effect size; R is his symbol for the pre-study odds that the relationship being tested is true; u is the bias factor):
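For the curious, the formula behind that table can be written out directly. Here’s a sketch of it as a function (the function name is mine; the formula is the one Ioannidis gives, with u = 0 reducing to the no-bias case):

```python
def ppv(alpha, power, R, u=0.0):
    """Positive predictive value from Ioannidis (2005).

    alpha: type I error rate
    power: 1 - beta (study power)
    R:     pre-study odds that the tested relationship is true
    u:     bias factor (fraction of would-be negative results that
           end up reported as positive)
    """
    beta = 1.0 - power
    numerator = power * R + u * beta * R
    denominator = R + alpha - beta * R + u - u * alpha + u * beta * R
    return numerator / denominator

# Adequately powered study, 1:1 pre-study odds, 10% bias:
print(round(ppv(alpha=0.05, power=0.80, R=1.0, u=0.10), 2))  # 0.85
```

Notice that even this fairly optimistic scenario leaves a 15% chance that a published positive finding is wrong, and the number falls fast as R shrinks or u grows.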

Not included is the “hotness” factor, where he points out that multiple research teams working on the same question will inevitably produce more false positives than just one team will. This is likely true even if you only consider volume of work, before you even get to corner cutting due to competition.
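This isn’t Ioannidis’ exact multi-team correction, but a simple way to see the volume effect: if a relationship is truly null, the chance that at least one of n independent teams stumbles on a “significant” result grows quickly with n:

```python
alpha = 0.05  # conventional per-study false positive rate

# If a relationship is truly null, the chance that at least one of n
# independent teams finds a "significant" result is 1 - (1 - alpha)^n:
for n in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** n
    print(f"{n:>2} teams: {p_any:.0%} chance of at least one false positive")
```

With twenty teams chasing a hot (but null) question, a false positive somewhere is more likely than not, and that false positive is the result most likely to get published.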

Ultimately, Ioannidis argues that we need bigger sample sizes, more accountability aimed at reducing bias (such as telling others your research methods up front or trial pre-registration), and to stop rewarding researchers only for being the first to find something (this is aimed at both the public and at journal editors). He also makes a good case that fields should be setting their own “pre-study odds” numbers and that researchers should have to factor in how often they should be getting null results.

It’s a short paper that packs a punch, and I recommend it.

Taking the issues a step further is a real-life investigation contained in the next reading, “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy” by Turner et al in the New England Journal of Medicine. They reviewed all the industry-sponsored antidepressant trials that had pre-registered with the FDA, and then reviewed journals to see which ones got published. Since the FDA gets the results regardless of publication, this was a chance to see what made it to press and what didn’t. The results were disappointing, but probably not surprising:

Positive results that showed the drugs worked were almost always published; negative results that showed no difference from placebo often went unpublished. Now, the study authors did note they don’t know why this is: they couldn’t differentiate between the “file drawer” effect (where researchers put negative findings in their drawer and don’t publish them) and journals rejecting papers with null results. It seems likely both may be a problem. The study authors also found that the positive papers were presented as very positive, whereas some of the negative papers had “bundled” their results.

In defense of the antidepressants and their makers, the study authors did find that a meta-analysis of all the results generally showed the drugs were superior to a placebo. Their concern was that the magnitude of the effect may have been overstated. With few negative results to look at, the positive results are never balanced out and the drugs appear much more effective than they actually are.

The last reading is “Publication bias and the canonization of false facts“ by Nissen et al, a pretty in-depth look at the effects of publication bias on our ability to distinguish between true and false facts. They set out to create a model of how we move an idea between theory and “established fact” through scientific investigation and publication, and then test what publication bias would do to that process. A quick caveat from the end of the paper I want to give up front: this model is supposed to represent the trajectory of investigations into “modest” facts, not highly political or big/sticky problems. Those beasts have their own trajectory, much of which has little to do with publication issues. What we’re talking about here is the type of fact that would get included in a textbook with no footnote/caveat after 12 or so supportive papers.

They start out by looking at the overwhelming bias towards publishing “positive” findings. Those papers that find a correlation, reject the null hypothesis, or find statistically significant differences are all considered “positive” findings. Almost 80% of all published papers are “positive” findings, and in some fields this is as high as 90%. While hypothetically this could mean that researchers just pick really good questions, the Turner et al paper and the Ioannidis analysis suggest that this is probably not the full story. “Negative” findings (those that fail to reject the null or find no correlation or difference) just aren’t published as often as positive ones. Now again, it’s hard to tell if this is the journals not publishing or researchers not submitting, or a vicious circle where everyone blames everyone else, but here we are.

The paper goes on to develop a model to test how often this type of bias may lead to the canonization of false facts. If negative studies are rarely published and almost no one knows how many might be out there, it stands to reason that at least some “established facts” are merely those theories whose counter-evidence is sitting in a file drawer. The authors base their model on the idea that every positive publication will increase belief, and negative ones will decrease it, but they ALSO assume we are all Bayesians about these things and constantly updating our priors. In other words, our chances of believing in a particular fact as more studies get published probably look a bit like that line in red:

This is probably a good time to mention that the initial model was designed only to look at publication bias, they get to other biases later. They assumed that the outcomes of studies that reach erroneous conclusions are all due to random chance, and that the beliefs in question were based only on the published literature.

The building of the model was pretty interesting, so you should definitely check that out if you like that sort of thing. Overall though, it is the conclusions that I want to focus on. A few things they found:

  1. True findings were almost always canonized
  2. False findings were canonized more often when the “negative” publication rate was low
  3. High standards for evidence and well-designed experiments are not enough to overcome a bias against publishing negative results

That last point is particularly interesting to me. We often ask for “better studies” to establish certain facts, but this model suggests that even great studies are misleading if we’re seeing a non-random sample. Indeed, their model showed that if we have a negative publication rate of under 20%, false facts would be canonized despite high evidence standards. This is particularly alarming since the antidepressant study found around a 10% negative publication rate.
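To see why that threshold matters, here’s a toy simulation in the spirit of the Nissen et al model. This is my own simplified sketch, not the authors’ actual code (the function name, thresholds, and parameter values are all assumptions): a false claim gets tested repeatedly, positives are always published, negatives are published at some rate, and readers update their beliefs from the published record as if it were unbiased.

```python
import math
import random

def simulate_canonization(p_negative_published, alpha=0.05, power=0.80,
                          trials=500, max_experiments=200, seed=42):
    """Toy model of belief in a FALSE claim under publication bias.

    Each experiment comes back "positive" with probability alpha (a false
    positive). Positives are always published; negatives are published
    with probability p_negative_published. Readers update log-odds with
    the standard likelihood ratios, unaware of the bias. Returns the
    fraction of runs where belief reaches 0.99 (canonized) before 0.01.
    """
    rng = random.Random(seed)
    beta = 1.0 - power
    up = math.log(power / alpha)           # update for a published positive
    down = math.log(beta / (1.0 - alpha))  # update for a published negative
    upper = math.log(0.99 / 0.01)          # canonization threshold (log-odds)
    lower = -upper                         # rejection threshold

    canonized = 0
    for _ in range(trials):
        log_odds = 0.0  # start at 50/50
        for _ in range(max_experiments):
            if rng.random() < alpha:       # false positive: always published
                log_odds += up
            elif rng.random() < p_negative_published:
                log_odds += down           # true negative, sometimes published
            if log_odds >= upper:
                canonized += 1
                break
            if log_odds <= lower:
                break
    return canonized / trials

# Almost no negatives published vs. all negatives published:
print(simulate_canonization(p_negative_published=0.02))
print(simulate_canonization(p_negative_published=1.00))
```

Even in this crude sketch, when negatives almost never see print the false claim is usually canonized, and when they are always published it is almost always rejected, which matches the qualitative result from the paper.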

To depress us even further, the authors then decided to add researcher bias into the mix and put some p-hacking into play. Below is their graph of the likelihood of canonizing a false fact vs the actual false positive rate (alpha). The lightest line is what happens when alpha = .05 (a common cut-off), and each darker line shows what happens if people are monkeying around to get more positive results than they should:

Figure 8 from “Research: Publication bias and the canonization of false facts”

Well that’s not good.

On the plus side, the paper ends by throwing yet another interesting parameter into the mix. What happens if people start publishing contradictory evidence when a fact is close to being canonized? While it would be ideal if negative results were published in large numbers up front, does last-minute pushback work? According to the model, yes, though not perfectly. This is a ray of hope, because it seems like in at least some fields this is what happens. Negative results that may have been put in the file drawer or considered uninteresting when a theory was new can suddenly become quite interesting if they contradict the current wisdom.

After presenting all sorts of evidence that publishing more negative findings is a good thing, the discussion section of the paper goes into some of the counterarguments. These are:

  1. Negative findings may lead to more true facts being rejected
  2. Publishing too many papers may make the scientific evidence really hard to wade through
  3. Time spent writing up negative results may take researchers away from other work

The model created here predicts that #1 is not true, and #2 and #3 are still fairly speculative. On the plus side, the researchers do point to some good news about our current publication practices that may make the situation better than the model predicts:

  1. Not all results are binary positive/negative. They point out that if results are continuous, you could get “positive” findings that contradict each other. For example, if a correlation was positive in one paper and negative in another paper, it would be easy to conclude later that there was no real effect even without any “negative” findings to balance things out.
  2. Researchers drop theories on their own. Even if there is publication bias and p-hacking, most researchers are going to figure out that they are spending a lot more time getting some positive results than others, and may drop lines of inquiry on their own.
  3. Symmetry may not be necessary. The model assumes that we need equal certainty to reject or accept a claim, but this may not be true. If we reject facts more easily than we accept them, the model may look different.
  4. Results are interconnected. The model here assumes that each “fact” is independent and only reliant on studies that specifically address it. In reality, many facts have related/supporting facts, and if one of those supporting facts gets disproved it may cast doubt on everything around it.

Okay, so what else can we do? Well, first, recognize the importance of “negative” findings. While “we found nothing” is not exciting, it is important data. They call on journal editors to weigh the damage done by dismissing such papers as uninteresting. Next, they point to new journals springing up dedicated just to “negative results” as a good trend. They also suggest that perhaps some negative findings should be published as pre-prints without peer review. This wouldn’t help settle questions, but it would give people a sense of what else might be out there, and it would ease some of the time commitment problems.

Finally, a caveat which I mentioned at the beginning but is worth repeating: this model was created with “modest” facts in mind, not huge, sticky social/public health problems. When a problem has a huge public interest/impact (like, say, the smoking and lung cancer link), people on both sides come out of the woodwork to publish papers and duke it out. Those issues probably operate under very different conditions than less glamorous topics.

Okay, over 2000 words later, we’re done for this week! Next week we’ll look at an even darker side of this topic: predatory publishing and researcher misconduct. Stay tuned!

Week 9 is up! Read it here.