Short Takes: Gerrymandering, Effect Sizes, Race Times and More

I seem to have a lot of articles piling up that I have something to say about, but not enough for a full post. Here are four short takes on four current items:

Did You Hear the One About the Hungry Judges?
The AVI sent me an article this week about a hungry judge study I’ve heard referenced multiple times in the context of willpower and food articles. Basically, the study shows that judges rule in favor of prisoners requesting parole 65% of the time at the beginning of the day and nearly 0% of the time right before lunch. The common interpretation is that we are so driven by biological forces that they override our higher-order functioning when they’re compromised. The article rounds up some of the criticisms of the paper, and makes a few of its own…namely that an effect size that large could never have gone unnoticed. It’s another good example of “this psychological effect is so subtle we needed research to tease it out, but so large that it noticeably impacts everything we do” type research, and that should always raise an eyebrow. Statistically, the difference in rulings is as profound as the difference between male and female height. The point is, everyone would know this already if it were true. So what happened here? Well, this PNAS paper covers it nicely, but here’s the short version: 1) the study was done in Israel, 2) the court does parole hearings by prison, three prisons a day, with a break between each, 3) prisoners who have legal counsel go first, 4) lawyers often represent multiple people, and they chose the order of their own cases, and 5) the original authors lumped “case deferred” and “parole denied” together as one category. So basically the cases are roughly ordered from best to worst up front, and each break starts the process over again. Kinda makes the results look a little less impressive, huh?
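The ordering story is easy to demonstrate with a toy simulation (all numbers below are invented for illustration, not taken from the study): if the strongest cases are heard first, approval rates fall over a session even with a perfectly consistent judge.

```python
import random

def session_grant_rates(n_cases=10, n_sessions=1000, seed=0):
    """Simulate parole sessions where the strongest cases are heard first.
    Each case's 'strength' is its probability of being granted parole."""
    rng = random.Random(seed)
    grants = [0] * n_cases
    for _ in range(n_sessions):
        # Lawyers order their own cases, so sort strengths best-first.
        strengths = sorted((rng.random() for _ in range(n_cases)), reverse=True)
        for position, p in enumerate(strengths):
            if rng.random() < p:
                grants[position] += 1
    return [g / n_sessions for g in grants]

rates = session_grant_rates()
# The first case of a session is approved far more often than the last,
# with no hungry judge anywhere in the model.
print(rates[0], rates[-1])
```

The simulated judge applies the exact same standard to every case; the before-lunch cliff falls out of the scheduling alone.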

On Inter-Country Generalization and Street Harassment
I can’t remember who suggested it, but I recently saw someone propose that biology or nutrition papers in PubMed and other journal listings should have to include a little icon/picture at the top indicating what animal the study was done on. They were attempting to combat the whole “Chemical X causes cancer!” hoopla that arises when we’re overdosing mice on something. I would like to suggest we do the same thing with countries, maybe using their flags or something. Much like with the study above, I think tipping people off that we can’t assume things work the same way they do in the US (or whatever country you hail from) would do some good. I was thinking about that when I saw this article from Slate with the headline “Do Women Like Being Sexually Harassed? Men in a New Survey Say Yes“. The survey has some disturbing statistics about how often men admit to harassing or groping women on the street (31-64%) and why they do it (90% say “it’s fun”), but it’s important to note it surveyed men exclusively in the Middle East and North Africa. Among the four countries, results and attitudes varied quite a bit, making it pretty certain that there’s a lot of cultural variability at play here. While I thought the country-neutral headline was a little misleading on this point, the author gets some points for illustrating the story with signs (in Arabic) from a street harassment protest in Cairo. I only hope other stories reporting surveys from other countries do the same.

Gerrymandering Update: Independent Commissions May Not be That Great (or Computer Models Need More Validating)
In my last post about gerrymandering, I mentioned that some computer models showed that independent commissions did a much better job of redrawing districts than state legislatures did. Yet another computer model disputes this idea, showing that they may not. To be honest, I didn’t read the working paper here and I’m a little unclear on what they compared to what, but it may lend credibility to the Assistant Village Idiot’s comment that those drawing district maps may be grouping together similar types of people rather than focusing on political party. That’s the sort of thing that humans of all sorts would do naturally and computers would call biased. Clearly we need a few more checks here.

Runner Update: They’re still slow and my treadmill is wrong
As an update to my marathon times post, I recently got sent this website’s report showing that US runners at all distances are getting slower. They sliced and diced the data a bit and found some interesting patterns: men are slowing down more than women, and slower runners are getting even slower. However, even the fastest runners have slowed down about 10% in the last two decades. They pose a few possible reasons: increased obesity in the general population, elite runners avoiding races due to the large numbers of slower runners, or runners in general leaving to do ultras/trail races/other activities. On an only tangentially related plus side, I thought I was seriously slowing down in my running until I discovered that my treadmill was incorrectly calibrated to the tune of over 2 min/mile. Yay for data errors in the right direction.
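For what it’s worth, the pace arithmetic is simple: treadmill pace in min/mile is just 60 divided by belt speed in mph, so a modest speed miscalibration translates into a big pace error. The speeds below are hypothetical, not my treadmill’s actual numbers.

```python
def pace_min_per_mile(mph):
    """Convert a treadmill speed in mph to a pace in minutes per mile."""
    return 60.0 / mph

# Hypothetical miscalibration: the console reads 6.0 mph while the
# belt is actually moving at 7.5 mph.
displayed = pace_min_per_mile(6.0)  # console says 10:00/mile
actual = pace_min_per_mile(7.5)     # you're really running 8:00/mile
print(displayed - actual)           # a full 2.0 min/mile of error
```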

A Pie Chart Smorgasbord

This past week I was complaining about pie charts to a friend of mine, and I was trying to locate this image to show what I was complaining about:

Source.

I knew I had come across this on Twitter, and in finding the original thread, I ALSO discovered all sorts of people defending/belittling the lowly pie chart. Now I generally fall in the anti-pie chart camp, but these made me happy. I sourced what I could find a source on, but will update if anyone knows who I should credit.

First, we have the first and best use for a pie chart:

No other chart represents that data set quite as well.

Sometimes though, you feel like people are just using them to mess with you:

Source.

Sometimes the information they convey can be surprising:

But sometimes the conclusions are just kind of obvious:

And you have to know how to use them correctly:

They’re not all useless, there are some dramatic exceptions:

If you want more on pie charts, try these 16, uh, creative combinations, or read why they’re just the worst here.

Premature Expostulation

In my last post, I put out a call for possible names for the phenomenon of people erroneously asserting that some ideological opponent hadn’t commented on a story, without properly verifying that this was true. Between Facebook and the comments section I got a few good options, but the overall winner was set up by bluecat57 and perfected by the Assistant Village Idiot: Premature Expostulation. I have to admit, expostulation was one of those words whose meaning I only sort of knew, but the exact definition is great for this situation: “to reason earnestly with someone against something that person intends to do or has done; remonstrate.” Therefore, the definition for this phrase is:

Premature Expostulation: The act of claiming definitively that a person, group or media outlet has not reported on, responded to or commented on an event or topic, without first establishing whether or not this is true.

Premature expostulation frequently occurs in the context of a broader narrative (they NEVER talk about thing X, they ALWAYS prioritize thing Y), though it can also occur due to bad search results, carelessness, inattention, or simply different definitions of what “covered the story” means. Be especially alert when someone is discussing a news outlet they already don’t like or that you are not familiar with. It’s easy to miss a statement from someone if you don’t frequent what they write or don’t keep up with them.

To note, premature expostulation is a specific claim of fact NOT subjective opinion. The more specific the claim, the more likely it is (if proven wrong) to be premature expostulation. Saying a story was “inadequate” can cause endless argument, but is mostly a matter of opinion. If you say that a news outlet “stayed silent” however, showing that they ran even one story can disprove the claim.

I think there are a lot of reasons this happens, but some of the common ones I see seem to be:

  • Search algorithm weirdness/otherwise just missing it. Some people do quick searches or scans and simply miss it. I have speculated that there’s some sort of reverse inattentional blindness going on, where you’re so convinced you’ll see something if it’s there that you actually miss it.
  • Attributing a group problem to an individual. I can’t find it right now, but I once saw a great video of a feminist writer on a panel being questioned by an audience member about why she had hypocritically stayed silent on a particular issue it seemed she should have commented on. It turns out she actually had written columns on the issue and offered to send them to him. Poor kid had no idea what to do. Now I suspect at the time there were feminist writers being breathtakingly hypocritical over this issue, but that didn’t mean all of them were. Even if there were hundreds of feminist writers being hypocritical, you still should double check that the one you’re accusing is one of them before you take aim.
  • Attributing an individual problem to a group. Sometimes a prominent figure in a group is so striking that people end up assuming everyone in the group acts exactly as the one person they know about does.
  • Assuming people don’t write when you’re not reading. When I had a post go mini-viral a few months ago, I got a huge influx of new people who had never visited this blog. I got many good comments/criticisms, but there were a few that truly surprised me. At least a few people decided that the biggest problem I had was that I never took on big media outlets and only picked on small groups, or that I never talked about statistics that might challenge something liberals said. Now regular readers know this is ridiculous. I do that stuff all the time. For whatever reason though, some people assumed that the one post of mine they had read somehow represented everything I’d ever written. That’s a personal anecdote, but we see this happen with other groups as well. During the gay marriage debate I once had a friend claim that Evangelicals never commented on straight divorce. Um, okay. No. You just don’t listen to them until they comment on something you are upset by, then you act like that’s all they ever say.
  • The emotional equivalency metric. If someone doesn’t feel the same way you do, they must not have seen the story the way you have. Therefore they can’t have covered the story until they mirror your feelings.

I’m sure there are other ways this comes up as well; feel free to leave me your examples.

Sharing Your Feelings

Yesterday morning, during some random Twitter scrolling, I saw two interesting tweets in my feed that seemed a bit related. The first was one complaining about a phenomenon that has been irritating the heck out of me recently:


If the embed doesn’t work, here’s the link. The first shot is some text from a Pacific Standard article about Lisa Durden’s firing. In it, the author claims that “In contrast to other free speech-related controversies on college campuses, there has been almost no media coverage of Durden’s ouster.” The Google News search, however, shows a different story…in fact, many media outlets have covered it.

Now this type of assertion always seems a little surprising to me for two reasons:

  1. We have absolutely unprecedented access to what people and news outlets are/are not reporting on, and any claim like this should be easy to verify.
  2. It’s an easy claim to modify in a way that makes it a statement of opinion, not fact. “there has been far less media outrage” would seem to preserve the sentiment without being a statement of fact.

Once I started thinking about it, I realized I hear this type of assertion made quite frequently. Which of course got me wondering if that sort of hyper-attention was part of the phenomenon. I think everyone knows the feeling of “I heard one reference to this issue/unusual word/obscure author and now I have seen it 5 places in two days”. I got to wondering: could a related (but opposite) phenomenon happen when it came to people you disagreed with saying things? Were people purposefully ignoring or discounting reporting from outlets that didn’t fit their narrative, or were they actually not hearing/registering things that were getting said?

I started wondering further when, in one recent case, a writer for the Federalist actually tweeted out the links to her search results that “proved” the New York Times wasn’t covering a story about NSA abuses under Obama. However, the NYT had actually covered the story (they broke it, actually), and clicking on her links shows that their story was among the results she had been scanning over. She issued a correction tweet a few hours later when someone pointed that out, which makes me doubt she was really trying to deceive anyone. So what made her look at the story and not see it?

Well, this brings me to the second Tweet I saw, which was about a new study about the emotional drivers of political sharing across social networks. I don’t have access to the full text of the paper, but two interesting findings are making headlines:

  1. For the issues studied (gun control, same-sex marriage, climate change), including moral-emotional language in your headline increased sharing by 20%
  2. This sharing increase occurred almost exclusively within your in-group. Liberals and conservatives weren’t sharing each other’s stories.

I’m speculating wildly here, but I wonder if this difference in the way we share stories contributes to perceptions that the other side is “not talking” about something. When something outrages my liberal (or conservative) friends, the same exact article will show up in my news feed 10 times. When the opposing party comments on or covers it, they almost never share the exact same story; they comment on and share different ones. They only share the same story when they oppose the coverage.

For example, in the NSA case above, the story that got Mollie Hemingway looking at search results was titled “Obama intel agency secretly conducted illegal searches on Americans for years.” The ones she missed in the NYT results were “N.S.A. Halts Collection of Americans’ Emails About Foreign Targets” and “How Trump’s N.S.A. Came to End a Disputed Type of Surveillance“. Looking at those 3 headlines, it’s easy to see how you could miss that they were all talking about the same thing. At the same time, if you’re going to claim that a story isn’t being reported, you need to double check that it’s not just your feelings on the story that aren’t being mirrored.

And lest I be a hypocrite here, I should talk about the time I committed this error myself because I failed to update my information. Back in February I claimed that TED didn’t update their webpage to reflect the controversy over Amy Cuddy’s research. I was right the first time I claimed it and wrong the second time. I could have sworn I rechecked it, but either I didn’t recheck when I thought I did, or I simply didn’t see the correction that got added. Was it because I was looking for a more dramatic correction, bold letters or some other sort of red flag? Yeah, I’d say that was part of it. TED does not appear nearly as concerned about the controversy as I am, but that doesn’t mean they failed to talk about it.

I need a name for this one I think.

Statisticians and Gerrymandering

Okay, I just said I was blogging less, but this story was too interesting to pass without comment. A few days ago it was announced that the Supreme Court had agreed to hear a case about gerrymandering, the practice of redrawing voting district lines to influence the outcome of elections. This was a big deal because previously the court had only heard these cases when the lines had something to do with race, and it had no comment on redraws that were based on politics. The case they agreed to hear is from Wisconsin, where a lower court found that a 2011 redistricting plan was so partisan that it potentially violated the rights of all minority-party voters in the affected districts.

Now obviously I’ll leave it to better minds to comment on the legal issues here, but I found this article on how statisticians are getting involved in the debate quite fascinating. Obviously both parties want the district lines to favor their own candidates, so it can be hard to cut through the noise and figure out what a “fair” plan would actually look like. Historically, this came down to two parties bickering over street maps, but now, with more data available, there’s actually a chance that both the existence and the extent of gerrymandering can be measured.

One way of doing this is called the “efficiency gap” and is the work of Eric McGhee and Nicholas Stephanopoulos, who explain it here. Basically this measures “wasted” votes, which they explain like this:

Suppose, for example, that a state has five districts with 100 voters each, and two parties, Party A and Party B. Suppose also that Party A wins four of the seats 53 to 47, and Party B wins one of them 85 to 15. Then in each of the four seats that Party A wins, it has 2 surplus votes (53 minus the 51 needed to win), and Party B has 47 lost votes. And in the lone district that Party A loses, it has 15 lost votes, and Party B has 34 surplus votes (85 minus the 51 needed to win). In sum, Party A wastes 23 votes and Party B wastes 222 votes. Subtracting one figure from the other and dividing by the 500 votes cast produces an efficiency gap of 40 percent in Party A’s favor.

Basically this metric highlights unevenness across the state. If one party is winning dramatically in one district and yet losing in all the others, you have some evidence that those lines may not be fair. If this is only happening to one party and never to the other, your evidence grows. Now there are obvious responses to this…maybe some party members really are clustering together in certain locations…but it does provide a useful baseline measure. If your current plan increases this gap in favor of the party in power, then that party should have to offer some explanation. The authors’ proposal is that if the other party could show a redistricting plan with a smaller gap, the initial plan would be considered unconstitutional.
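The worked example in the quote is straightforward to turn into code. This is a minimal sketch of the metric as described there (using the quote’s convention that 51 of 100 votes are needed to win), not the authors’ own implementation:

```python
def efficiency_gap(districts, votes_to_win=51):
    """Efficiency gap for a list of (party_a_votes, party_b_votes) districts.
    A positive result means the map wastes more of Party B's votes,
    i.e. it favors Party A."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        total += a + b
        if a > b:
            wasted_a += a - votes_to_win  # winner's surplus votes
            wasted_b += b                 # all of the loser's votes
        else:
            wasted_b += b - votes_to_win
            wasted_a += a
    return (wasted_b - wasted_a) / total

# The five-district example from the quote: A wastes 23 votes, B wastes 222.
plan = [(53, 47)] * 4 + [(15, 85)]
print(efficiency_gap(plan))  # 0.398, the ~40% gap in Party A's favor
```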

To help with that last part, two mathematicians have created a computer algorithm that draws districts according to state laws but irrespective of voting histories. They then compare these hypothetical districts’ “average” results to the proposed maps to see how far off the new plans are. In other words, they basically create a distribution of plausible results, then see how the current proposals line up against it. To give context, of the 24,000 maps they drew for North Carolina, all were less gerrymandered than the one the legislature came up with. When a group of retired judges tried to draw new districts for North Carolina, their maps were less gerrymandered than 75% of the computer models.
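The comparison step can be sketched the same way: score thousands of neutrally drawn maps, then ask where the proposed plan falls in that distribution. The baseline numbers below are invented stand-ins for the simulated maps, purely to illustrate the test:

```python
import random

def fraction_less_extreme(simulated_gaps, proposed_gap):
    """Fraction of computer-drawn maps whose efficiency gap is smaller
    in magnitude than the proposed plan's."""
    smaller = sum(1 for g in simulated_gaps if abs(g) < abs(proposed_gap))
    return smaller / len(simulated_gaps)

# Invented baseline: 24,000 neutrally drawn maps with modest gaps.
rng = random.Random(42)
baseline = [rng.gauss(0.0, 0.03) for _ in range(24000)]

# A plan with a 12% gap would be more extreme than essentially every
# neutrally drawn map -- the North Carolina situation in miniature.
print(fraction_less_extreme(baseline, 0.12))
```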

It’s interesting to note that some of the most gerrymandered states by this metric are actually not the ones being challenged. Here are all the states with more than 8 districts and how they fared in 2012. The ones in red are the ones facing a court challenge. The range is based on plausible vote swings:

Now again, none of these methods may be perfect, but they do start to point the way towards less biased ways of drawing districts and neutral tests for accusations of bias. The authors note that the courts already employ a simple mathematical test to evaluate whether districts have equal populations: +/- 10%. It will be interesting to see if any of these new tests are considered straightforward enough for a legal standard. Stay tuned!

What I’m Reading: June 2017

Happy Father’s Day folks! As summer approaches I’m probably going to be blogging just once a week for a bit as I fill my time with writing papers for my practicum/vacation/time on the beach. Hopefully some of those distractions will be more frequent than others. I figured that means it’s a great time to put up some links to other stuff.

First up, after 12 weeks of doing my Calling Bullshit read-along, I got a chance to interview the good professors for this piece for Misinfocon. Check it out! Also, they got a nice write up in the New Yorker in a piece about problems with big data. I have to say, reading a New Yorker writer’s take on a topic I had just attempted to write about was definitely one of the more humbling experiences of my life. Whatever, I was an engineering major, y’all should be glad I can even string a sentence together (she said bitterly).

I don’t read Mother Jones often, but I’ve seen some great stuff from them lately calling their own team out on the potential misuses of science they let fly. This piece about the World Health Organization’s decision to declare RoundUp a possible carcinogen raises interesting questions about certain data that wasn’t presented to the committee making the decision. It turns out there was a large study that suggested RoundUp was safe that was actually not shown to the committee, for reasons that continue to be a bit murky. While the reasons may or may not be valid, it’s hard to imagine that if that had been Monsanto’s data and it showed a safety issue anyone would have let that fly.

Speaking of calling out errors (and after spending some time mulling over my own) I picked up Megan McArdle’s book “The Up Side of Down: Why Failing Well is the Key to Success“. I just started it, but in the first few chapters she makes an interesting point about the value of blogging for development: unlike traditional media folks, bloggers can fail at a lower level and faster than regular journalists. By essentially working in public, they can get criticism faster and react more quickly, which over time can make their (collective) conclusions (potentially) better. This appears to be why so many traditional media scandals (she highlights the Dan Rather incident) were discovered and called out by bloggers. It’s not that the bloggers were more accurate, but that their worst ideas were called out faster and their best ones could more quickly rise to the top. Anyway, so far I’d recommend it.

This post about how the world-wide rate of population growth is slowing was interesting to me for two reasons: 1) I didn’t actually know the rate of growth had slowed that much 2) it’s a great set of charts to show the difference between growth and rate of growth and why extrapolation from visuals can sometimes be really hard.
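The distinction those charts make is worth a tiny worked example (the population figures below are made up, not the post’s data): totals can keep climbing while the growth rate falls at every step.

```python
# Made-up population series (billions) at five-year intervals.
population = [5.0, 5.4, 5.75, 6.05, 6.3]

# Absolute growth per step vs. growth *rate* per step.
growth = [b - a for a, b in zip(population, population[1:])]
rate = [(b - a) / a for a, b in zip(population, population[1:])]

print(growth)                       # still positive every step
print([round(r, 3) for r in rate])  # but shrinking every step
```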

I also learned interesting things from this Economist article about worldwide beer consumption. Apparently beer consumption is falling, and with it the overall consumption of alcohol. This seems to be driven by economic development in several key countries like China, Brazil and Russia. The theory is that when countries start to develop, people immediately start using their new-found income to buy beer. When development continues, they start becoming more concerned about health and actually buy less beer and move on to more expensive types of alcohol. I never thought about this before, but it makes sense.

On a “things I was depressed to learn” note, apparently we haven’t yet figured out the best strategy for evacuating high-rises during fires. Most fire safety efforts for high rises are about containing and controlling the blaze, but if that fails there’s not a great strategy for how to evacuate or even who should evacuate. You would assume everyone should just take the stairs, but they point out that this could create a fluid mechanics problem for getting firefighters in to the building. Huh.

This post on why women are underrepresented in philosophy provides a data set I thought was interesting: percent of women expressing interest in a field as a college major during their freshman year vs percent of women receiving a PhD in that field 10 years later, with a correlation of .95. I’d be interested to see if there’s some other data points that could be worked in there (like % of women graduating with a particular undergrad degree) to see if the pattern holds, but it’s an interesting data point all on its own. Note: there’s a small data error in the calculations that I pointed out in the comments, and the author acknowledged. Running a quick adjustment I don’t think it actually changes the correlation numbers, which is why I’m still linking. Update: the author informs me he got a better data set that fixed the error and confirmed the correlation held. 
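For anyone who wants to sanity-check a correlation like that themselves, the Pearson coefficient is a few lines of arithmetic. The field numbers below are invented stand-ins, not the post’s actual data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented numbers: % of freshman women interested in a field vs.
# % of PhDs going to women ten years later, for five fake fields.
interest = [10, 20, 30, 45, 55]
phd = [12, 18, 33, 47, 52]
print(round(pearson_r(interest, phd), 2))  # 0.99
```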

Born to Run Fact Check: USA Marathon Times

I’ve been playing/listening to a lot of Zombies, Run! lately, and for a little extra inspiration I decided to pull out my copy of “Born to Run” and reread it. Part way through the book I came across a statistic I thought was provocative enough that I decided to investigate it. In a chapter about the history of American distance running, McDougall is talking about the Greater Boston track club and says the following:

“…by the early ’80s, the Greater Boston Track club had half a dozen guys who could run a 2:12 marathon. That’s six guys, in one amateur club, in one city. Twenty years later, you couldn’t find a single 2:12 marathoner anywhere in the country.”

Now this claim seemed incredible to me. Living in Boston, I’d imagine I’m exposed to more marathon talk every year than most people, and I had never heard this. I had assumed that, like in most sports, those who participated in the ’70s would be getting trounced by today’s high-performance-gear/nutrition/coached/sponsored athletes. Marathoning in particular seems like it would have benefited quite a bit from the entry of money into the sport, given the training time required.

So what happened?

Well, the year 2000 happened, and it got everyone nervous.

First, some background. In order to make the US Olympic marathon team, you have to do two things: 1) finish as one of the top 3 in a one-off qualifying race, and 2) be under the Olympic qualifying time. In 1984, pro marathoners were allowed to enter the Olympics. In 1988, the US started offering a cash prize for winning the Olympic trials. Here’s how the men did, starting from 1972:

I got the data from this website and the USATF. I noted a few things on the chart, but it’s worth spelling it out: the winners from 1976 and 1984 would have qualified for every team except 2008 and 2012. The 1980 winner would have qualified for every year except 2012, and that’s before you consider that the course was specifically chosen for speed after the year 2000 disaster.

So it appears to be relatively well supported that the guys who were running marathons for fun in the 70s really would keep pace with the guys today, which is pretty strange. It’s especially weird when you consider how much marathoning has taken off with the general public in that time. The best estimates I could find say that 25,000 people in the US finished a marathon in 1976, and by 2013 that number was up to about 550,000. You would think that would have swept up at least a few extra competitors, but it doesn’t look like it did. All that time and popularity and the winning time was 2 minutes faster for a 26 mile race.

For women it appears to be a slightly different story. Women got their start with marathoning a bit later than men, and as late as 1967 had to dodge race officials when they ran. Women’s marathoning was added to the Olympics in 1984, and here’s how the women did:

A bit more of a dropoff there.

If you’ve read Born to Run, you know that McDougall’s explanation for the failure to improve has two main threads: 1) that shoe companies potentially ruined our ability to run long distances and 2) that running long distances well requires you to have some fun with your running and should be built on community. Both seem plausible given the data, but I wanted to compare it to a different running event to see how it stacked up. I picked the 5000 m run since that’s the most commonly run race length in the US. The history of winning times is here, and the more recent times are here. It turns out the 5k hasn’t changed much either:

So that’s held steady too, but there still wasn’t a year where we couldn’t field a team. Also complicating things are the different race strategies employed by 5000m runners vs marathon runners. To qualify for the 5k, you run the race twice in a matter of a few days, so it’s plausible that 5k runners don’t run faster than they have to in order to qualify. Marathon runners, on the other hand, may only run a few per year, especially at the Olympic level; they are more likely to go all out.

Supporting this theory is how the runners do when they get to the Olympics. The last man to win a 5000m Olympic medal for the US is Paul Chelimo. He qualified with a 13:35 time, then ran a 13:03 in the Olympics for the silver medal. Ryan Hall, on the other hand (the only American to ever run a sub-2:05 marathon), set the Olympic trials record in 2008 running a 2:09 marathon; he placed 10th in the Olympics with a 2:12. Galen Rupp won the bronze in Rio in 2016 with a time 1 minute faster than his qualifying time. I doubt that’s an unusual pattern…you have far more control over your time when you’re running 3 miles than when you’re running 26.

To further parse it, I decided to pull the data from the Association of Road Racing Statisticians website and get ALL men from the US who had run a sub-2:12 marathon. Since McDougall’s original claim was that there were none to be found around the year 2000, I figured I’d see if this was true. Here’s the graph:

So he was exaggerating. There were 5.

Pedantry aside, there was a remarkable lack of good marathoners in those years, though it appears the pendulum has started to swing back. McDougall’s book came out in 2009 and was credited with a huge resurgence of interest in distance racing, so he may have partially caused that 2010-2014 spike. Regardless, it does not appear that Americans have recaptured whatever happened in the early ’80s, even with the increase in nearly every resource you would think would be helpful. Interestingly enough, two of the most dominant marathoners in the post-2000 spike (Khalid Khannouchi and Meb Keflezighi) came here as immigrants, in poverty, at ages 29 and 12 respectively. Between the two of them they are actually responsible for almost a third of the sub-2:12 marathon times posted between 2000 and 2015. It seems resources simply don’t help marathon times that much. Genetics may play a part, but it doesn’t explain why the US had such a drop-off. As McDougall puts it, “this isn’t about why other people got faster; it’s about why we got slower.”

So there may be something to McDougall’s theory, or there may be something about US running in general. It may be that money in other sports siphoned off potential runners, or that our shoes screwed us, or that camaraderie and love of the sport matter more than you’d think. Good runners may run fewer races these days, just out of fear that they’ll get injured. I don’t really know enough about it, but the stagnation is a little striking. It does look like there was a bit of an uptick after the year 2000 disaster…I suspect seeing the lack of good marathon runners encouraged a few who might have focused on other sports to dive in.

As an interesting data point for the camaraderie/community influence point, I did discover that women can no longer set a marathon world record in a race where men also run.  From what I can tell, the governing bodies decided that being able to run with a faster field/pace yourself with men was such an advantage that it didn’t count. The difference is pretty stark (2:15 vs 2:17), so they may have a point. The year Paula Radcliffe set the 2:15 record in London, she was 16th overall and presumably had plenty of people to pace herself with. Marathoning does appear to be a sport where your competition is particularly important in driving you forward.

My one and only marathon experience biases me in this direction. In 2009 I ran the Cape Cod Marathon and finished second to last. At mile 18 or so, I had broken out in a rash from the unusually hot October sun, had burst into tears and was ready to quit. It was at that moment that I came across another runner, also in tears due to a sore knee. We struck up a conversation and laughed/talked/yelled/cried at each other for the remaining 7 miles to the finish line. Despite my lack of bragging rights for my time, I was overjoyed to have finished, especially when I realized over 400 people (a third of entrants) had dropped out. I know for a fact I would not have made it if I hadn’t run into my new best friend at that moment of despair, and she readily admitted the same thing. McDougall makes the point that this type of companionship running is probably how our ancestors ran, though for things like food and safety as opposed to a shiny medal with the Dunkin Donuts logo. Does this sort of thing make a difference at the Olympic level? Who knows, but the data and anecdotes do suggest there’s some interesting psychological stuff going on when you get to certain distances.

Race on folks, race on.