What I’m Reading: November 2018

Happy post-Thanksgiving, everyone! Hope yours was lovely. I went mostly computer-free, so if you’ve emailed me or sent me something recently, I promise it’s not off my radar. I didn’t get much reading in, but I did get sent two interesting pop culture graphics that are worth a gander.

First up, a visual representation of how accurate “based on a true story” movies are. It shows not only how often each movie is inaccurate, but where those inaccuracies take place and how severe they are. For example, here’s Selma (the highest rated) vs The Imitation Game (one of the lowest rated). Bright red means false, light red means false-ish, grey is unknown, light blue is true-ish, and dark blue is true.

Check out the actual site, as you can click on each bar to see exactly which scene got the rating. I was interested to see what they called “unknown”, and it appears those are mostly things like conversations between two characters who definitely spoke, and almost certainly about that topic, but for which no specific record or reference exists.

Next up, from John: Are pop lyrics getting more repetitive? Using the same algorithm used to compress digital photos into smaller file sizes, this guy tries to measure how repetitive the lyrics of the Billboard Top 100 songs have been over the last few decades. Not only is this an interesting project, but he spells out his methodology, his assumptions, the outliers, and his step-by-step process REALLY nicely. He shows examples of songs ranked highly repetitive, why he chose to use a log scale for his axis, and how his algorithm would evaluate a regular paragraph of text. Seriously, if scientific papers in general had methodology sections this robust we wouldn’t have a replication crisis.
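(A quick aside: the core trick is easy to play with yourself. Below is a minimal sketch in Python, my own and not the author’s actual code, using the built-in zlib module, whose DEFLATE compression is built on the same LZ77-style repeated-phrase matching. The more a lyric repeats itself, the more it shrinks when compressed, which is roughly what the “size reduction” percentages below are measuring.)

```python
import zlib

def size_reduction(lyrics: str) -> float:
    """Percent size reduction after LZ77-style (DEFLATE) compression.
    More repeated phrases -> better compression -> higher reduction."""
    raw = lyrics.encode("utf-8")
    compressed = zlib.compress(raw, 9)  # level 9 = maximum compression
    return 100 * (1 - len(compressed) / len(raw))

# Hypothetical inputs: a very repetitive chorus vs. ordinary prose.
chorus = "around the world, " * 100
prose = ("Happy post-Thanksgiving everyone! Hope yours was lovely. "
         "I went mostly computer-free, so I did not get much reading in.")

print(f"Repetitive chorus: {size_reduction(chorus):.0f}% size reduction")
print(f"Ordinary prose:    {size_reduction(prose):.0f}% size reduction")
```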

So what was the most repetitive song of the 15,000 he looked at? Around the World by Daft Punk. Considering that song is just the phrase “Around the World” repeated 100+ times, this makes sense. He also breaks down the most repetitive songs by decade, which I thought might be of interest to folks here. Remember, these are only songs that made it to the Billboard Hot 100:

1960s top 3:

  • Chain of Fools (Part 1) – Jimmy Smith, 1968 (92% size reduction)
  • Jingo – Santana, 1969 (85% size reduction)
  • Any Way You Want It – The Dave Clark Five, 1964 (83% size reduction)

(Note to my Dad – You Really Got Me by the Kinks was #5 for the decade at 81%)

1970s top 3:

  • Let’s All Chant – The Michael Zager Band, 1978 (88% size reduction)
  • Keep it Comin’ Love – KC and the Sunshine Band, 1977 (87%)
  • Who’d She Coo? – Ohio Players, 1976 (86%)

1980s top 3:

  • Pump Up the Jam – Technotronic, 1989 (85%)
  • Funkytown – Lipps Inc., 1980 (85%)
  • Got My Mind Set On You – George Harrison, 1987 (80%)

1990s top 3:

  • Around the World – Daft Punk, 1997 (98%)
  • The Rockafeller Skank – Fatboy Slim, 1998 (95%)
  • Send Me On My Way – Rusted Root, 1995 (85%)

2000s top 3:

  • Better Off Alone – Alice Deejay, 2000 (84%)
  • Thong Song – Sisqo, 2000 (81%)
  • Dance With Me – 112, 2001 (81%)

2010s top 3:

  • Get Low – Dillon Francis & DJ Snake, 2015 (90%)
  • Barbra Streisand – Duck Sauce, 2011 (89%)
  • Feliz Navidad – Jose Feliciano, 2017 (89%)

Overall, songs did get more repetitive over time, both across the Hot 100 as a whole and within the top 10 from each year. In 1960 the average song on the Top 100 was 46% compressible, while in 2015 it was 56% compressible. Interestingly, the top 10 songs are always more repetitive than the rest by about 2-6%.

There’s also a lot of interesting breakdowns by artist. I learned that the Guess Who was particularly repetitive for the 70s, and that country is much less repetitive than pop music. Apparently this even applies within artists, as Taylor Swift showed a sharp rise in repetitive lyrics after she switched from country to pop.

Anyway, go check it out, the graphics are great!

 

5 Things About the Many Analysts, One Data Set Paper

I’ve been a little slow on this, but I’ve been meaning to get around to the paper “Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results.” This paper was published back in August, but I think it’s an important one for anyone looking to understand why science can often be so difficult.

The premise of this paper was simple, but elegant: give 29 teams the same data set and the same question to answer, then see how everyone does their analysis and whether all of those analyses yield the same results. In this case, the question was “do soccer referees give red cards to dark-skinned players more often than light-skinned players?” The purpose of the paper was to highlight how seemingly minor choices in data analysis can yield different results, and all participants had volunteered for the study with full knowledge of what the purpose was. So what did they find? Let’s take a look!

    1. Very few teams picked the same analysis methods. Every team in this study was able to pick whatever method they thought best fit the question they were trying to answer, and boy did the choices vary. First, the choice of statistical method varied. Next, the choice of covariates varied wildly: the data set contained 14 covariates, and the 29 teams ended up coming up with 21 different combinations to look at.
    2. Choices had consequences. As you can imagine, this variability produced some interesting consequences. Overall, 20 of the 29 teams found a significant effect, and 9 didn’t. The effect sizes they found also varied wildly, with odds ratios running from 0.89 to 2.93. While that shows a definite trend in favor of the hypothesis, it’s way less reliable than the p<.05 model would suggest.
    3. Analytic choices didn’t necessarily predict who got a significant result. Because all of these teams signed up knowing what the point of the study was, the next step was pretty interesting. All the teams’ methods (but not their results) were presented to all the other teams, who then rated them. The highest rated analyses gave a median odds ratio of 1.31, and the lower rated analyses gave a median odds ratio of… 1.28. The presence of experts on the team didn’t change much either. Teams with previous experience teaching or publishing on statistical methods generated odds ratios with a median of 1.39, and the ones without such members had a median OR of 1.30. They noted that those with statistical expertise seemed to pick more similar methods, but that didn’t necessarily translate into significant results.
    4. Researchers’ beliefs didn’t influence outcomes. Of course, the researchers involved had self-selected into a study where they knew other teams were doing the same analysis they were, but it’s interesting to note that those who said up front they believed the hypothesis was true were not more likely to get significant results than those who were more neutral. Researchers did change their beliefs over the course of the study, however, as one of the charts in the paper shows. While many of the teams updated their beliefs, it’s good to note that the most likely update was “this is true, but we don’t know why,” followed by “this is true, but may be caused by something we didn’t capture in this data set (like player behavior).”
    5. The key differences in analysis weren’t things most people would pick up on. At one point in the study, the teams were allowed to debate back and forth and look at each other’s analyses. One researcher noted that the teams that had included league and club as covariates were the ones who got non-significant results. As the paper states, “A debate emerged regarding whether the inclusion of these covariates was quantitatively defensible given that the data on league and club were available for the time of data collection only and these variables likely changed over the course of many players’ careers.” This is a fascinating debate, and one that would likely not have happened had the data just been analyzed by one team. The choice was buried deep in the methods section, and I doubt under normal circumstances anyone would have thought twice about it. (There’s a small simulated sketch of how a covariate choice like this can move the result right after this list.)
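To make the covariate point concrete, here’s a minimal sketch, entirely my own and with purely simulated data (nothing from the actual paper), of how adding or dropping a single covariate can shift the estimated odds ratio for the same question. Remember that an odds ratio above 1 means “more red cards for darker-skinned players” and an odds ratio of exactly 1 means no effect.

```python
# Minimal sketch with simulated data (not the paper's models or numbers).
# Shows how including vs. excluding one covariate shifts the odds ratio
# estimated for the same variable of interest.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

skin_tone = rng.integers(0, 2, n)   # toy coding: 0 = lighter, 1 = darker
league = rng.integers(0, 2, n)      # hypothetical confounder, e.g. league

# Simulated red-card probability depends on both variables (made-up numbers).
log_odds = -3 + 0.25 * skin_tone + 0.8 * league + 0.3 * skin_tone * league
red_card = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

def odds_ratio(extra_covariates):
    """Fit a logistic regression and return the odds ratio on skin_tone."""
    X = sm.add_constant(np.column_stack([skin_tone] + extra_covariates))
    fit = sm.Logit(red_card, X).fit(disp=False)
    return float(np.exp(fit.params[1]))  # exponentiated skin_tone coefficient

print("OR, skin tone only:        ", round(odds_ratio([]), 2))
print("OR, controlling for league:", round(odds_ratio([league]), 2))
```

Same data, same question, two defensible models, two different answers, which is essentially what the 29 teams ran into at a much larger scale.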

That last point gets to why I’m so fascinated by this paper: it shows that lots of well-intentioned teams can get different results even when no one is trying to be deceptive. These teams had no motivation to fudge their results or skew anything, and in fact were incentivized in the opposite direction. They still got different results, however, for reasons so minute and debatable that it took multiple teams arguing with each other to even surface them. This is a nice illustration of Andrew Gelman’s “garden of forking paths”: small choices can lead to big changes in outcomes. With no standard way of analyzing data, tiny, boring-looking choices in analysis can actually be a big deal.

The authors of the paper propose that more group approaches may help mitigate some of these problems and give us all a better sense of how reliable results really are. After reading this, I’m inclined to agree. Collaborating up front also takes the adversarial part out, as you don’t just have people challenging each other’s research after the fact. Things to ponder.

Book Recommendation: Bad Blood

Well, my audit went well last week. The inspector called us “the most boring audit he’d ever had”, which quite frankly is what you want to hear from a regulator. Interest = violations = citations = sad BS King.

As someone who has now dealt with quite a few inspectors over the years, I am always interested to see how exactly they choose to go about surveying everything given the time constraints. This particular inspector had an interesting tactic: he ran down the list of regulations we should be following and asked us verbally if we followed each one or not. Every tenth one or so, he would suddenly pivot and ask us to provide proof. He mentioned afterwards that he put a lot of weight on how quickly we were able to produce what he asked for. From what I can tell, his theory was that if you can produce proof for random questions easily and without hesitation, you probably prepared for everything fairly well. Not a bad theory. Luckily for me, our preparation strategy had been to read through every standard, then prepare a response for it. Thus, we were boring, and my sanity is restored.

I was thinking about all this as I sat down to relax this weekend and picked up the book “Bad Blood: Secrets and Lies in a Silicon Valley Startup” by John Carreyrou. This book covers the rise and fall of Theranos and its founder Elizabeth Holmes, a topic I’ve mentioned on this blog before. To say I couldn’t put it down is a near literal statement: I started it at 5pm last night and finished it by noon today. The book converges on many of my interests: health, medicine, technology, data, and how very smart people can be deceived into believing something that isn’t true. It also doesn’t hurt that the company’s founder is a woman about my age who was once touted as the youngest self-made female billionaire, in a field I have actually worked in.

For those unfamiliar with Theranos, I’ll give the short version. Theranos was a company started in 2003 by then 19-year-old Stanford dropout Elizabeth Holmes. Her vision was to create a blood analyzer that could run regular lab tests on just a few drops of blood, so patients could use a finger stick (like with home glucose monitoring) rather than get their blood drawn the conventional way. Ten years in, the company was worth almost $10 billion, but there was an issue: their product didn’t really work the way they claimed, and the company was using extreme tactics to cover this up. Eventually, in a bid to get somebody to pay attention, the story was brought to a Wall Street Journal reporter (John Carreyrou, who wrote the book), and he managed to untangle the web. Despite the highlights all being pretty well publicized at the time, I found the details and timeline reconstruction to be a fascinating read.

What interested me most about the book was that my characterization in my blog post two years ago was a little bit wrong. I had snarked that Carreyrou was one of the first to question them, but as I read the book I discovered that a lot of people had questioned Theranos, even during its prime. It actually restored my faith in humanity to see how many people had attempted to raise concerns about what they saw. Many of these people were young and carrying student debt, or marketing people unfamiliar with science, or simply people with ethics who got uncomfortable, and many of them only stopped pushing when they were on the receiving end of some downright frightening legal (and sometimes not-so-legal) intimidation tactics. Additionally, many of the people who were deceived really couldn’t be blamed. In one particularly bizarre anecdote, Carreyrou mentions that a fellow Wall Street Journal reporter had gone to a meeting with Theranos where they had promised to show him how the machine worked. It turns out the machine didn’t work, but they’d written a program to hide any error messages behind a progress screen, and when he left the room they swapped out his sample and ran it on a regular analyzer they had in another room. Not really his fault for not picking up on that. Holmes got her deal with Walgreens by performing a similar sleight of hand. Since the initial WSJ articles, Theranos has paid out millions in lawsuits claiming that it intentionally deceived investors, and Holmes and Ramesh Balwani (her #2 and former boyfriend) are under indictment.

Throughout the book, Carreyrou returns to two related but slightly different central points:

  1. Holmes and her investors wanted to believe she was the next Steve Jobs or Bill Gates.
  2. Healthcare doesn’t work like other tech sector products. Claiming your technology works before it’s ready could kill someone.

It was interesting for me to reflect that if Holmes hadn’t entered the healthcare realm, she might have actually succeeded. While the biographies of people like Steve Jobs are littered with stories of broken promises, many of the people who flipped on Holmes stated that they felt compelled to resign their jobs or talk to reporters because they feared the shoddy work was going to kill someone.

So if this was so obvious, how did Theranos get to $10 billion? And how did they end up with people like Henry Kissinger, George Shultz and James Mattis on the board? A few lessons I gleaned:

  1. Watch out for the narrative, ask for data. One of the few things everyone agrees upon in this story was that Holmes was a compelling CEO. She could spin a strong narrative to anyone who asked, and was kind and easy to work with as long as you let her stick to the story. Throughout the story though, anyone who asked for proof of anything she said was met with responses ranging from frosty to belligerent. This is what initially reminded me of my inspection. We were able to provide proof just as readily as we were able to provide verbal confirmation, which is why our inspector ended up believing us.
  2. Look for real experts. After Carreyrou published his first article about the concerns with the company, Theranos issued quite a few strongly worded denials and legal threats to the Wall Street Journal. Luckily for him, post-publication several other media outlets jumped in and started asking questions. One of the reasons they were so quick to pounce is that a quick look at Theranos’s board and investors revealed that no one involved really knew anything about biotech. While names like Henry Kissinger are impressive, people quickly started noting that the board was mostly military men and diplomats. The lack of any medical leadership seemed out of place. Additionally, some investing groups that specialize in biotech (like Google Ventures) had passed on Theranos. This was enough to cause other news outlets to turn up the heat on Holmes, as the lack of real experts struck everyone as suspicious.
  3. Look at the history. In an interview he gave, Carreyrou pointed out that it wasn’t the initial investors in Theranos who screwed up, as early investors are often gambling on half-baked ideas. The people who failed their due diligence were those who invested a decade in. He notes that those people should have been pushing harder for financial statements and peer reviewed studies, and that didn’t happen. For Theranos not to have peer reviewed studies in their first year was understandable. To still be lacking them in their tenth year was a very bad sign.
  4. Apply the right standards to the right industry. Healthcare isn’t the same as a cell phone. There are laws, and regulating bodies that can and will shut you down. A 1% product failure rate can kill people. Don’t get so excited by the idea of “disruption” that you ignore reality.

Come to think of it, with a few tweaks these are all pretty good life lessons about how to avoid bad actors in your personal life as well. I really do recommend this book, if only as a counter-narrative to the whole “everyone said we couldn’t do it, but we proved the naysayers wrong!” thing. Sometimes naysayers are right.

Although maybe not forever. As an interesting end note: according to this article, Holmes is currently fundraising in Silicon Valley for another startup.

Voter Turnout vs Does My Vote Count

Welp, we have another election day coming up. I’ll admit I’ve been a little further removed from this election cycle than most people, for two reasons:

  1. We are undergoing a massive inspection at work tomorrow (gulp) and have been swamped preparing for it. Any thoughts or prayers for this welcome.
  2. I live in a state where most of the races are pretty lopsided.

For point #2, we have Democratic Senator Elizabeth Warren currently up by 22 points, and Republican Governor Charlie Baker currently up by almost 40 points. My rep for the House of Representatives is running unopposed. The most interesting race in our state was actually two Democrats with major streets/bridges named after their families duking it out, but that got settled during the primaries. I’ll vote anyway because I actually have strong feelings about some of our ballot questions, but most of our races are the very definition of “my vote doesn’t make a difference”.

However, I still think there are interesting reasons to vote even if your own personal vote counts minimally. In an age of increasing market segmentation and use of voter files, the demographic groups that consistently turn out to vote will always get more attention from politicians. I mentioned this a while ago in my post about college-educated white women. As a group they are only 10% of the voting public, but they are one of the demographics most likely to actually vote, and thus they get more attention than others.

This shows up in some interesting ways. For example, according to Pew Research, in this election Gen Xers and younger will make up the majority of eligible voters, yet are not expected to make up the majority of actual voters.

There are race-based differences as well. Black voters and white voters vote at similar rates, but Hispanic and Asian voters vote less often. Additionally, those with more education and those who are richer tend to vote more often. While that last link mentions that it’s not clear that extra voters would change election results, I still think it’s likely that if some groups with low turnout turned into groups with high turnout, we might see some changes in messaging.

While this may be a mixed blessing for people who don’t tend to vote the way their demographic does, it does seem like getting on the electoral radar is probably a good thing.

So go vote Tuesday!