Short Takes: Perception vs Reality vs Others

I was going to do a normal “what I’m reading” column this week, but I thought so much about the first two links that I decided to turn it into a short takes instead. I’m seeing a lot of interesting parallels between these two articles, so I wanted to highlight a few things.

The first link was a Medium post called “How to Change a Mind”, an excerpt from an upcoming book called “Stop Being Reasonable: How We Really Change Our Minds”. It tells the story of a woman named Missy and how she got her husband Dylan to leave a cult. The whole story is worth reading, but the ultimate conclusion is worth pondering. Dylan didn’t leave because she was able to point out some of the ridiculousness in what the cult believed (though she tried), but because one of the leaders ended up delivering a lengthy and objectively unfair critique of Missy, ending with an encouragement to leave her.

This was the proverbial straw that broke the camel’s back. Dylan knew his wife had been nothing but kind and supportive, and the attempt to cast her in a different light caused him to doubt the leaders in a way he never had. As the article says, “Dylan did not need to lose his faith in what his elders were saying; he needed to lose his faith in them.” And lose it he did. He spent two days straight Googling every critique of the group he could find, then severed his ties. He describes his faith in them as a faucet that was suddenly shut off.

The article does a good job of contextualizing this and pointing out the lessons here for all of us. While most of us have never joined a cult, many of us take the word of others for granted on many topics. We have faith in certain sources, and barring any challenges we will continue to believe what they tell us. Maybe the subject is history, chemistry, math, or some other field we are aware of but didn’t study much personally. Even something as simple as another person’s name is mostly taken on faith. We can’t check every single thing that comes across our path, so we all have shortcuts and rubrics to decide what information we believe and what we don’t. The point of this story is that the “who” part of that rubric can at times be more important than the “what”.

Given that, it was interesting that this next link landed in my inbox this morning: “The Dangers of Fluent Lectures”. The article is based on a study that compared Harvard freshmen who took a physics class with lots of well-polished lectures (passive learning) to those who took a class that made students work through problems on their own before the answers were explained (active learning). The results were interesting: those who sat through the nicely polished lectures believed they learned more, but those who sat through the active classes actually learned more.

There are a couple of theories about why this happens, but I think at least some of it has to do with the first article. Feeling that you are in the presence of someone hyper-competent can give you the impression that you are more competent than you actually are. Active learning forces students to focus on their own deficiencies, while passive learning lets them ignore those and focus on the professor. As the study authors say, “novice students are poor at judging their actual learning and thus rely on inaccurate metacognitive cues such as fluency of instruction when they attempt to assess their own learning.” Again, it’s not always what you believe, it’s who.

Now there are a couple of caveats with this study: it’s not clear what would have happened if it had been run on 4th-year students doing more advanced work, or at a state school rather than Harvard. The authors also mention that the students in the study weren’t given any warning about the teaching methods up front. In a later version of the study, they spent a few minutes in the first lecture teaching students about active learning methods and the evidence that they help students learn more. The students subsequently rated those classes as more effective and said they felt better about the learning methods.

As always, we continue to be poor judges of our own objectivity.

Short Takes: Anti-Depressants, Neurogenesis, and #Marchforourlives

Three good articles, three different topics.

First up, the New York Times profiles people who are on anti-depressants long term and find they have trouble quitting. It’s an interesting article both because it impacts a lot of people (7% of US adults have been on anti-depressants for 5+ years) and because it’s an interesting insight into the limitations of our clinical trial/drug approval system. Basically, drugs get approved based on a timeframe that can reasonably be covered in a clinical trial: 6 to 9 months or so. In this case later studies went out as far as 2 years, but no further. This has caused issues when trying to get long-term users back off. Some studies have reported 50-70% of long-term users experiencing serious withdrawal symptoms, with many continuing on the medications just to avoid the withdrawal. I don’t really see a clear way around this…trials can’t go on forever…but it is an unfortunate limitation of our current system.

Next up, Slate Star Codex does a somewhat unsettling review of adult neurogenesis. He goes through dozens of highly cited papers talking about how useful/involved neurogenesis is in so many things in our lives, just to follow it up with the new study that shows it probably doesn’t exist. Uuuuuuugh. Apparently a lot of the confusion started because it definitely exists in rats, and things kind of snowballed from there. It sounds like just another scientific squabble, but in the words of SSC: “We know many scientific studies are false. But we usually find this out one-at-a-time. This – again, assuming the new study is true, which it might not be – is a massacre. It offers an unusually good chance for reflection.” Yikes.

Finally, some interesting stats about the March For Our Lives that took place recently, and who actually participated. Contrary to what I’d heard, this march actually had a higher average participant age (49) than many we’ve seen, and fewer than 10% of participants were under 18. Most interesting (to me) is that first-time protesters there were more likely to say they were motivated to march by Trump (42%) than by gun rights (12%).

Short Takes: Gerrymandering, Effect Sizes, Race Times and More

I seem to have a lot of articles piling up that I have something to say about, but not enough for a full post. Here’s 4 short takes on 4 current items:

Did You Hear the One About the Hungry Judges?
The AVI sent me an article this week about a hungry judge study I’ve heard referenced multiple times in the context of willpower and food articles. Basically, the study showed that judges ruled in favor of prisoners requesting parole 65% of the time at the beginning of the day and 0% of the time right before lunch. The common interpretation is that we are so driven by biological forces that we override our higher order functioning when those forces are compromised. The article rounds up some of the criticisms of the paper, and makes a few of its own…namely that an effect size that large could never have gone unnoticed. It’s another good example of “this psychological effect is so subtle we needed research to tease it out, but so large that it noticeably impacts everything we do” type research, and that should always raise an eyebrow. Statistically, the difference in rulings is as profound as the difference between male and female height. The point is, everyone would already know this if it were true. So what happened here? Well, this PNAS paper covers it nicely, but here’s the short version: 1) the study was done in Israel; 2) the court does parole hearings by prison, 3 prisons a day with a break between each; 3) prisoners who have legal counsel go first; 4) lawyers often represent multiple people, and they choose the order of their own cases; 5) the original authors lumped “case deferred” and “parole denied” together as one category. So basically the cases are roughly ordered from best to worst up front, and each break restarts the process. Kinda makes the results look a little less impressive, huh?
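To put a rough number on that height comparison: the usual yardstick here is Cohen’s d, the difference in means divided by a pooled standard deviation, and for adult male vs. female height it works out to around 2. A minimal sketch, using approximate height figures I’m supplying for illustration (these numbers are mine, not from the judge study or its critique):

```python
def cohens_d(mean1, mean2, sd1, sd2):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = ((sd1 ** 2 + sd2 ** 2) / 2) ** 0.5
    return (mean1 - mean2) / pooled_sd

# Approximate adult heights in cm (illustrative figures)
d_height = cohens_d(175.3, 161.5, 7.1, 6.8)
print(f"Male vs. female height, Cohen's d ~ {d_height:.2f}")
# prints roughly 1.99 -- an enormous effect by behavioral-science standards
```

The critique’s argument is that the 65%-to-0% parole swing implies an effect of comparable magnitude, and effects that large don’t stay hidden until a clever study uncovers them.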

On Inter-Country Generalization and Street Harassment
I can’t remember who suggested it, but I recently saw someone suggest that biology or nutrition papers in PubMed or other journal listings should have to include a little icon/picture at the top indicating what animal the study was done on. They were attempting to combat the whole “Chemical X causes cancer!” hoopla that arises when we’re overdosing mice on something. I would like to suggest we do the same thing with countries, maybe using their flags or something. Much like with the study above, I think it would help to tip people off that we can’t assume things work the same way they do in the US or whatever country you hail from. I was thinking about that when I saw this article from Slate with the headline “Do Women Like Being Sexually Harassed? Men in a New Survey Say Yes”. The survey has some disturbing statistics about how often men admit to harassing or groping women on the street (31-64%) and why they do it (90% say “it’s fun”), but it’s important to note it surveyed men exclusively in the Middle East and Northern Africa. Among the 4 countries, results and attitudes varied quite a bit, making it pretty certain that there’s a lot of cultural variability at play here. While I thought the country-neutral headline was a little misleading on this point, the author gets some points for illustrating the story with signs (in Arabic) from a street harassment protest in Cairo. I only hope other stories reporting surveys from other countries do the same.

Gerrymandering Update: Independent Commissions May Not be That Great (or Computer Models Need More Validating)
In my last post about gerrymandering, I mentioned that some computer models showed that independent commissions did a much better job of redrawing districts than state legislatures did. Yet another computer model is disputing this idea, showing that they may not. To be honest I didn’t read the working paper here and I’m a little unclear about what they compared to what, but it may lend credibility to the Assistant Village Idiot’s comment that those drawing district maps may be grouping together similar types of people rather than focusing on political party. That’s the sort of thing humans of all sorts would do naturally and computers would call biased. Clearly we need a few more checks here.

Runner Update: They’re still slow and my treadmill is wrong
As an update to my marathon times post, I was recently sent this website’s report showing that US runners at all distances are getting slower. They sliced and diced the data a bit and found some interesting patterns: men are slowing down more than women, and slower runners are getting even slower. However, even the fastest runners have slowed down about 10% in the last two decades. They pose a few possible reasons: increased obesity in the general population, elite runners avoiding races due to the large numbers of slower runners, or runners in general leaving to do ultras/trail races/other activities. On an only tangentially related plus side, I thought I was seriously slowing down in my running until I discovered that my treadmill was incorrectly calibrated to the tune of over 2 min/mile. Yay for data errors in the right direction.
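For anyone curious how a calibration error turns into a pace error: pace in minutes per mile is just 60 divided by speed in mph, so a modest belt-speed error produces a surprisingly large pace difference. A quick sketch with made-up numbers (these are illustrative, not my treadmill’s actual readings):

```python
def pace_min_per_mile(speed_mph):
    """Convert a speed in mph to a pace in minutes per mile."""
    return 60.0 / speed_mph

# Hypothetical example: the display reads 5.0 mph while the belt
# is actually moving at 6.0 mph (numbers chosen for illustration).
displayed_pace = pace_min_per_mile(5.0)  # 12.0 min/mile
actual_pace = pace_min_per_mile(6.0)     # 10.0 min/mile
print(f"Pace error: {displayed_pace - actual_pace:.1f} min/mile")
# prints "Pace error: 2.0 min/mile"
```

Note the error isn’t linear: the same 1 mph belt error matters more at slow speeds than at fast ones, which is part of why a miscalibration can feel like you personally are slowing down.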