Are You Rich?

A few weeks ago the New York Times put up an interactive “Are You Rich?” calculator that I found rather fascinating. While I always appreciate “richness” calculators that take metro region into account (a surprising number don’t), I think the most interesting part is that they ask you to define “rich” before they give you the results.

This is interesting because many people use the word “rich” to simply mean “has more than I do”, so asking for a definition before giving results could surprise some people. In fact, they include this graph showing that about a third of people in the 90th percentile for income still say they are “average”:

Now they include some interesting caveats here, and point out that not all of these people are delusional. Debt is not taken into account in these calculations, so a doctor graduating med school with $175,000 in debt might quite rightfully feel their income was not the whole story. Everyone I know (myself included) who finishes up with daycare and moves their kid into public school jokes about the massive “raise” you get when you do that. On the flip side, many retirees have very low active income but might have a lot in assets that would give them a higher ranking if those were included.

That last part is relevant for this graph here, showing perceived vs actual income ranking. The data’s from Sweden, but it’s likely we’d see a similar trend in the US:

The majority of those who thought they were better off than they were fell below the 25th percentile, but we don’t know what they had in assets.

For the rest of it, someone pointed out on Twitter that while “rich people who don’t think they’re rich” get a lot of flak, believing you’re less secure than you are is probably a good thing. It likely pushes you to prepare for a rainy day a bit more. A country where everyone thought they were better off than they were would likely be one where many people made unwise financial decisions.

Interesting to note that the Times published this in part because finding out where you are on the income distribution curve is known to change your feelings about various taxation plans. In the Swedish study that generated the graph above, they found that those discovering they were in the upper half tended to be less supportive of social welfare taxation programs after they got the data. One wonders if some enterprising political candidate is eventually going to figure out how to put kiosks at rallies, or calculators in emails, to help people figure out whether they benefit or not.

Blue Zones Update: the Response

After my post last week about the pre-print paper calling the “Blue Zones” (aka areas with unusual longevity) into question, an anonymous commenter stopped by to drop the link to the Blue Zones group’s response. I thought their response was rather formidable, so I wanted to give it a whole post. They had three major points, all of which I was gratified to see I had raised in my initial read-through:

  1. Being designated as a Blue Zone is no small matter, and they have well-published criteria. Some places that come to their attention are invalidated. There are also some pretty extensive papers published on how they validated each of the 5 existing regions, which they linked to. Sardinia here, here and here. Okinawa had a paper written on it literally called “They really are that old: a validation study of centenarian prevalence in Okinawa”. Dr Poulain (who did much of the age validation for Sardinia) wrote a paper 10 years ago called “On the age validation of supercentenarians” where he points out that the first concerns about validating supercentenarian ages were raised in 1873. This book has more information about their methods, but notably starts by mentioning 5 regions they were unable to certify. Basically they responded with one big link dump saying “yeah, we thought of that too”. From what I can tell there actually is some really fascinating work being done here, which was very cool to read about. In every place they not only looked at individual records, cross-checking them with numerous sources and doing personal interviews with people and their families, but also calculated overall population metrics to look for evidence of fraud. In Okinawa, they mention asking people about the animal for the year of their birth, something people would be unlikely to forget or want to change. It seems pretty thorough to my eye, but I was also struck that none of the papers above were included as references in the original paper. I have no idea if the author knew about them or not, but given that he made statements like “these findings raise serious questions about the validity of an extensive body of research based on the remarkable reported ages of populations and individuals,” it seems like a gap not to include work that had already been done.
  2. Supercentenarians are not the focus of the Blue Zones. Again, they publish their criteria, and this is not a focal point. They have focused much more heavily on reaching 90 or 100, particularly with limited chronic disease. As I was scanning through the papers they linked to, I noticed an interesting anecdote about an Okinawan man who for a time was thought to have lived to 120. After he got into the Guinness Book of World Records, it came out that he had likely been given the name of an older brother who died, and thus was actually “only” 105. This is interesting because it’s a case where his age was fraudulent, but the correction wouldn’t impact the “Blue Zone” status.
  3. Relative poverty could be correlated with old age. I raised this point in my initial post, and I was glad to see they echoed it here. Again, most of the way modernity raises life expectancy is by eliminating child mortality and decreasing accidents or repairing congenital defects. Those are the things that will kill you under 55. Over 55, it’s a whole new set of issues.

Now I want to be clear: no one has questioned the fundamental mathematical finding of the paper, namely that US supercentenarian records are probably shaky before birth registration. What’s being questioned is whether that finding is generalizable to specific areas that have been heavily studied. This is important because in the US “old age” type benefits kick in at 65, and there is no additional tier after that. So basically a random 94 year old claiming to be 111 might get a social bump out of the whole thing, but none of the financial benefits that might have caused people to really look into it. Once we start getting to things like Blue Zones or international attention though, there are actually whole groups dedicated to looking into these claims. One person faking their age won’t cause much of an issue, but if your claim is that a dozen people in one town are faking their ages, that’s going to start to mess up population curves and show other anomalies. The poverty of the regions actually helps with this case as well. If you’re talking to people in relative poverty with high illiteracy, it’s hard to argue that they could have somehow been criminal masterminds in their forgeries. One or two people can get away with things, but a group deception is much harder to pull off.

I’m still going to keep an eye on this paper, and my guess is it will be published somewhere with some of the suggestions of generalizability toned down, and more references to previous work at validating ages added.

Life Expectancy and Record Keeping

Those of you who follow any sort of science/data/skepticism news on Twitter will almost certainly have heard of the new pre-print taking the internet by storm this week: “Supercentenarians and the oldest-old are concentrated into regions with no birth certificates and short lifespans“.

This paper is making a splash for two reasons:

  1. It is taking on a hypothesis that has turned into a cottage industry over the years.
  2. The statistical reasoning makes so much sense it makes you feel a little silly for not questioning point #1 earlier.

Of course #2 may be projection on my part, because I have definitely read the whole “Blue Zone” hypothesis (and one of the associated books) and never questioned the underlying data. So let’s go over what happened here, shall we?

For those of you not familiar with the whole “Blue Zone” concept, let’s start there. The Blue Zones were popularized by Dan Buettner, who wrote a long article about them for National Geographic magazine back in 2005. The article highlighted several regions in the world that seemed to have extraordinary longevity: Sardinia (Italy), Okinawa (Japan) and Loma Linda (California, USA). All of these areas seemed to have a well above average number of people living to be 100. Researchers studied their habits to see if they could find anything the rest of us could learn. In the original article, that was this:

This concept proved so incredibly popular that Dan Buettner was able to write a book, then follow up books, then a whole company around the concept. Eventually Ikaria (Greece) and Nicoya Peninsula (Costa Rica) were added to the list.

As you can see, the ultimate advice list obtained from these regions looks pretty good on its face. The idea that not smoking, good family and social connections, daily activity, and fruits and vegetables are good for you certainly isn’t turning conventional wisdom on its head. So what’s being questioned?

Basically the authors of the paper didn’t feel that alternative explanations for longevity had been adequately tested, specifically the hypothesis that maybe not all of these people were as old as they said they were, or that otherwise bad record keeping was inflating the numbers. While many of the countries didn’t have clean data sets, they were able to pull some data sets from the US, and discovered that the chances of having people in your district live until they were 110 fell dramatically once statewide birth registration was introduced:

Now this graph is pretty interesting, and I’m not entirely sure what to make of it. There seems to be a peak at around 15 years before implementation, which is interesting, with some notable falloff before birth registration is even introduced. One suspects birth registration might be a proxy for expanding records/increased awareness of birth year. Actually, now that I think about it, I bet we’re catching some WWI and WWII related things in here. I’m guessing the falloff before complete birth registration had something to do with the draft around those wars, when proving your age would have been very important. The paper notes that the most supercentenarians were born in the years 1880 to 1900, and there was a draft in 1917 for men 21-30. It would be interesting to see if there’s a cluster of men at birth years just prior to 1887. Additionally the WWII draft starting in 1941 went up to age 45, so I wonder if there’s a cluster at 1897 or just before. Conversely, family lore says my grandfather exaggerated his age to join the service early in WWII, so it’s possible there are clusters at the young end too.

The other interesting thing about this graph is that it focuses on supercentenarians, aka those who live to 110 or beyond. I’d be curious to see the same data for centenarians (those who live to 100) to see if it’s as dramatic. A quick Google suggests that being a supercentenarian is really rare (300ish in the US out of 320 million people), while there are around 72,000 centenarians. Those living to 90 or over number well over a million. It’s much easier to overwhelm very rare event data with noise than more frequent data. I have the Blue Zone book on Kindle, so I did a quick search and noticed that he mentioned “supercentenarians” 5 times, all on the same page. Centenarians are mentioned 167 times.
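To make the rarity gap concrete, here’s a quick back-of-the-envelope sketch in Python. The counts are the rough figures cited above, not official statistics:

```python
# Back-of-the-envelope prevalence, using the rough counts cited above:
# ~300 supercentenarians and ~72,000 centenarians out of ~320 million people.
US_POPULATION = 320_000_000
SUPERCENTENARIANS = 300
CENTENARIANS = 72_000

def per_million(count, population=US_POPULATION):
    """Prevalence per million people."""
    return count / population * 1_000_000

print(per_million(SUPERCENTENARIANS))  # ~0.94 per million
print(per_million(CENTENARIANS))       # 225 per million

# Thirty stray records (typos, misfiled birth years) would inflate the
# supercentenarian count by 10%, but the centenarian count by only ~0.04%.
errors = 30
print(errors / SUPERCENTENARIANS)  # 0.1
print(errors / CENTENARIANS)       # ~0.0004
```

The same absolute number of errors barely registers for centenarians but swamps the supercentenarian count, which is the “noise overwhelming rare events” problem in a nutshell.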

This is relevant because if we saw a drop off in all advanced ages when birth registrations were introduced, we’d suspect widespread misrepresentation. However, if we see that only the rarest ages were impacted, then we start to get into issues like typos or other very rare events as opposed to systematic misrepresentation. Given the splash this paper has made already, I suspect someone will do that study soon. Additionally, the only US based “Blue Zone”, Loma Linda, California, does not appear to have been studied specifically at all. That also may be worth looking at to see if the pattern still holds.

The next item the paper took a shot at was the non-US locations, specifically Okinawa and Sardinia. From my reading I had always thought those areas were known for being healthy and long lived, but the paper claims they are actually some of the poorest areas with the shortest life expectancies in their countries. This was a surprise to me as I had never seen this mentioned before. But here’s their data from Sardinia:

The Sardinian provinces are in blue, and you’ll note that there is eventually a negative correlation between “chance of living to 55” and “chance of living to 110”. Strange. In the last graph in particular there seem to be 3 provinces that are causing the correlation to go negative, and one wonders what’s going on there. Considering Sardinia as a whole has a population of 1.6 million, it would only take a few errors to produce that rate of longevity.

On the other hand, I was a little surprised to see the author cite Sardinia as having one of the lowest life expectancies. The exact quote: “Italians over the age of 100 are concentrated into the poorest, most remote and shortest-lived provinces”. In looking for a citation for this, I found on Wikipedia this report (in Italian). It had this table:

If I’m using Google Translate correctly, Sardegna is Sardinia and this is a life expectancy table from 2014. While it doesn’t show Sardinia having the highest life expectancy, it doesn’t show it having the lowest either. I tried pulling the Japanese reports, but unfortunately the one that looks most useful is in Japanese. As noted though, the paper hasn’t yet gone through peer review, so it’s possible some of this will be clarified.

Finally, I was a little surprised to see the author say “[these] patterns are difficult to explain through biology, but are readily explained as economic drivers of pension fraud and reporting error.” While I completely agree about errors, I do actually think there’s a plausible mechanism that would cause poor people who didn’t live to 55 as often to have longer lifespans. Deaths under 55 tend to be from things like accidents, suicide, homicide and congenital anomalies….external forces. The CDC lists the leading causes of death by age group here:

Over 55, we mostly switch to heart disease and cancer. A white collar office worker with a high stress job and bad eating habits may be more likely to live to 55 than a shepherd who could get trampled, but once they’re both 75 the shepherd may get the upper hand.

I’m not doubting the overall hypothesis, by the way….I do think fraud or errors in record keeping can definitely introduce issues into the data. Checking outliers to make sure they aren’t errors is key, and having some skepticism about source data is always warranted. After writing most of this post though, I decided to check back in on the Blue Zones book to see if they addressed this. To my surprise, the book claims that at least in Sardinia, this was actually done. On pages 25 and 26, they mention specifically how much doubt they faced and how one doctor personally examined about 200 people to help establish the truthfulness of their stated ages. Dr Michel Poulain (a Belgian demographer) was apparently nominated by a professional society specifically to go to Sardinia to check for signs of fraud. According to the book, he visited the region ten times to review records and interview people. I have no idea how thorough he was or how his methods hold up, but his work seems at odds with the idea that someone just blindly pulled ages out of a database, or with the paper’s claim that “These results may reflect a neglect of error processes as a potential generative factor in remarkable age records”. Interestingly, I’d imagine WWI and WWII actually help with much of this work. Since most people likely have very vivid memories of where they were and what they were doing during the war years, those stories might go far toward establishing age.

Basically, it seems like sporadic exaggeration, error or fraud might give mistaken impressions about how many supercentenarians there are overall, but I do wonder if having an unusual cluster brings enough scrutiny that we don’t have to worry as much that something was missed. In the Blue Zone book, they mention that the group that brought attention to the Sardinians had helped debunk 3 other similar claims. Also, as mentioned, the paper doesn’t say whether the one US blue zone was among the areas that got late birth registration, but I do know the Seventh Day Adventists are one of the most intensely studied groups in the country.

Anyway, given the attention and research these areas have received, I’d imagine we’re going to hear some responses soon. Dr Poulain appears to still be active, and one suspects he will be responding to this questioning of his work. This post is getting my “things to check back in on” tag. Stay tuned!


Beard Science

As long as I’ve been alive, my Dad has had a full beard [1].

When I was a kid, this wasn’t terribly common. Over the years beards have become surprisingly popular, and now the fact that his is closely trimmed is the uncommon part.

With the sudden increase in the popularity of beards, studying how people perceive bearded vs clean-shaven men has gotten more popular. Some of this research is about how women perceive men with beards, and there’s actually a “peak beard” theory suggesting that women’s preferences for beards go up as the number of men with beards goes down, and vice versa.

This week though, someone decided to study a phenomenon that has always fascinated me: small children’s reactions to men with beards. Watching my Dad (a father of 4 who is pretty good with kids) over the years, we have noted that kids do seem a little unnerved by the beard. Babies who have never met him seem to cry more often when handed to him, and toddlers seem more frightened of him. The immediacy of these reactions has always suggested that there’s something about his appearance that does it, and the beard is the obvious choice.

Well, some researchers must have had the same thought, because a few weeks ago a paper, “Children’s judgements [sic] of facial hair are influenced by biological development and experience”, was published that looks at children’s reactions to bearded men. The NPR write-up that caught my eye is here, and it led with this line: “Science has some bad news for the bearded: young children think you’re really, really unattractive.” Ouch.

I went looking for the paper to see how true that was, and found that the results were not quite as promised (shocking!). The study had an interesting setup. They had 37 male volunteers get pictures taken of themselves clean shaven, then had them all grow a beard for the next 4-8 weeks and took another picture. This controls for any sort of selection bias, though note that the subjects were all of European descent. Children were then shown the two pictures of the same man and asked things like “which face looks best?” and “which face looks older?”. The results are here:


So basically the NPR lead-in contained two slight distortions of the findings: kids never ranked people as “unattractive”, they just picked which face they thought looked best, and young kids actually weren’t the most down on beards, tweens were.

Interestingly, I did see a few people on Twitter note that their kids love their father with a beard, and it’s good to note the study actually looked at this too. The rankings used to make the graph above were done purely on preferences about strangers, but they did ask kids if they had a father with a beard. For at least some measures in some age groups, having exposure to beards made kids feel more positively about beards. For adults, having a father or acquaintances with beards in childhood resulted in finding beards more attractive in adulthood. It’s also good to note that the authors used the Bonferroni correction to account for multiple comparisons, so they were rigorous in looking for these associations.
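For anyone unfamiliar with it, here’s a minimal sketch of how the Bonferroni correction works. The comparison count and p-values below are invented for illustration, not taken from the beard study:

```python
# A minimal sketch of the Bonferroni correction: with m comparisons, each
# individual test is held to alpha/m, so the chance of any false positive
# across the whole family of tests stays at most alpha.
def bonferroni_threshold(alpha, m):
    """Per-test significance threshold when running m comparisons."""
    return alpha / m

alpha, m = 0.05, 12  # e.g. 12 age-group x question comparisons (hypothetical)
threshold = bonferroni_threshold(alpha, m)
print(round(threshold, 5))  # 0.00417

# Only p-values below the corrected threshold still count as significant:
p_values = [0.001, 0.004, 0.02, 0.04]
significant = [p for p in p_values if p < threshold]
print(significant)  # [0.001, 0.002] would survive; here: [0.001, 0.004]
```

Note that 0.02 and 0.04 would have passed the usual 0.05 bar on their own, which is exactly the kind of chance finding the correction is designed to screen out.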

Overall, some interesting findings. Based on the discussion, the working theory is that early on kids are mostly exposed to people with smooth faces (their peers, women), so they find smooth faces preferable. Apparently early adolescence is associated with an increased sensitivity to sex-specific traits, which may be why the dip occurs at ages 10-13. They didn’t report the gender breakdown, so I don’t know if it’s girls or boys changing their preference, or both.

No word if anyone’s working on validating this scale:

[1] Well, this isn’t entirely true; there were two exceptions. Both times he looked radically different in a way that unnerved his family, but I was fascinated to note that some of his acquaintances/coworkers couldn’t figure out what was different. The beard covers about a third of his face. This is why eyewitness testimony is so unreliable.

What I’m Reading: July 2019

As always, Our World in Data provides some interesting numbers to think about, this time with food supply and caloric intake by country.

This article on chronic Lyme disease and the whole “medical issue as personal identity” phenomenon was REALLY good and very thought provoking.

Ever want to know where the Vibranium from Black Panther would land on the periodic table of elements? Well, now there’s a paper out to help guide your thinking. More than just a fun paper to write up, the professors involved here actually asked their students this on an exam to see how they would reason through it. I’m in favor of questions like this (provided students know in advance to watch the movie), as I think it can engage some different types of critical thinking in a way that can be more fun than traditional testing.

I mentioned to someone recently that I have a white noise app on my phone, but after testing it out I found that brown noise tends to be more effective in helping me sleep than white noise. They asked what the difference was, and I found this article that explains the different color noises. In my experience the noises that are loudest and most likely to interfere with sleep hang out at the low end of the spectrum, but YMMV.
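If you’re curious what “brown” actually means here: white noise has roughly equal power at all frequencies, while brown (Brownian) noise is the running sum of white noise, which piles its energy up at the low end of the spectrum. A rough sketch in plain Python, no audio library required:

```python
# White noise: independent random samples, equal power across frequencies.
# Brown noise: the running sum (random walk) of white noise, which shifts
# power toward low frequencies and sounds deeper/rumblier.
import random

def white_noise(n, seed=0):
    rng = random.Random(seed)
    return [rng.gauss(0, 1) for _ in range(n)]

def brown_noise(n, seed=0):
    """Cumulative sum of white noise: each sample drifts from the last."""
    out, total = [], 0.0
    for step in white_noise(n, seed):
        total += step
        out.append(total)
    return out

def lag1_corr(xs):
    """Correlation between consecutive samples (a cheap 'smoothness' check)."""
    mean = sum(xs) / len(xs)
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(len(xs) - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

# White noise jumps around (neighbor correlation near 0); brown noise
# drifts slowly (correlation near 1), i.e. its energy sits at low frequencies.
print(lag1_corr(white_noise(10_000)))  # near 0
print(lag1_corr(brown_noise(10_000)))  # near 1
```

The slow drift is why brown noise sounds like a low rumble rather than a hiss, and presumably why it masks low-frequency disturbances better.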

A new study, “Surrogate endpoints in randomised controlled trials: a reality check”, gives an interesting word of warning to the cancer world. It’s common in clinical trials to use surrogate endpoints like “progression free survival” or “response rate” to figure out if drugs are working. This is done because overall survival data can take a long time to collect, researchers/patients/drug companies want results faster, and it seems like if the surrogate markers are good the drugs can’t possibly hurt.

Unfortunately, it appears this isn’t the case. A new drug, venetoclax, was studied, and patients on it were found to have better progression free survival, but eventually twice as many deaths as those on the standard treatment. Ugh. The lead author has a great Twitter thread on his paper here, where he suggests this means that either the drug is a “double-edged sword” with both better efficacy and higher toxicity than alternatives, or a “wolf in sheep’s clothing” that makes things look good for a while but causes changes that mean relapse is swift and deadly. Lots to think about here.

Finally, SSC has a good post up on bias arguments here. I especially like his points about when they are relevant.

The Evangelical Voter Turnout that (Maybe) Wasn’t

There was an interesting graph in a recent New York Times article that got Twitter all abuzz:

Visually, this graph is pretty fascinating, showing an increasingly motivated white Evangelical group, whose voter participation rates must put every other group to shame. I was so taken aback by this I actually did share it with a few people as part of a point about voter turnout.

After sharing though, I started to wonder how this turnout rate compared to other religious groups, so I went looking for the source data. A quick Google took me to this Pew Research page, which contained this table:

Two things surprised me about this:

  1. Given the way the data is presented, it appears the Evangelical question was asked by itself as a binary yes/no, as opposed to being part of a list of other options.
  2. The question was not simply “are you Evangelical” but “are you Evangelical/born again”.

Now from researching all sorts of various things for this blog, I happen to know that one of the most common ways of calculating how many white Evangelicals there are in the population is to ask people their denominational affiliation from a menu of choices, then classify those denominations into Evangelical/Catholic/etc. That’s what PRRI (the group that got the 15% number) does.

For the voting bloc question, however, respondents were only asked if they were a “White born-again or evangelical Christian?”

Now, not to get too far into the theological nuances, but there are plenty of folks I know who would claim the “born again” label who don’t go to traditionally “Evangelical” churches. In fact, according to Mark Silk over at Religion News (who noted this discrepancy at the time), he’s been involved with research that “found that 38.6 percent of mainline Protestants and 18.4 percent of Catholics identified as ‘born again or evangelical.’” So yes, the numbers may be skewed. It’s also worth noting that Pew Research puts the number of Evangelical Protestants at 25%, in a grouping that categorizes historically black groups separately (and thus is presumably mostly white).

So is the Evangelical turnout better than other groups? Well, it might still be. However, it’s good to know that this graph isn’t strictly comparing apples to apples, but rather slightly different questions given to different groups for different purposes. As we know slight changes in questions can yield very different results, so it’s worth noting. Caveat emptor, caveats galore on this one.

Fentanyl Poisoning and Passive Exposure

The AVI sent along this article this week, which highlights the rising concern about passive fentanyl exposure among law enforcement. They have a quote from a rehab counselor who claims that just getting fentanyl on your skin would be enough to addict you, and that merely entering a room where it’s in the air could cause instant addiction. Given that it’s Reason Magazine, they then promptly dispute the idea that this is actually happening.

I was interested in this article in part because my brother’s book contained the widely reported anecdote about the police officer who overdosed just by brushing fentanyl off of a fellow police officer. This anecdote has been seriously questioned since. Tim expressed concerns afterwards that had he realized this he would have left it out. I’ll admit that since my focus was mostly on his referenced scientific studies, I didn’t end up looking up various anecdotes he included.

This whole story illustrates an interesting problem in health reporting. STAT News has more here, but there are a couple of things I noted. First, the viral anecdote really was widely reported, so I’m not surprised my brother heard about it. It has never technically been disproven….outside experts have said “it almost certainly couldn’t have happened this way”, but neither the police officer nor the department have commented further. This makes it hard for the “probably not” articles to gain much traction.

Second, the “instant addiction” part was being pushed by a rehab counselor, not toxicologists who actually study how drugs interact with our body. Those experts point out that it took years to create a fentanyl patch that would get the drug to be absorbed through the skin, so the idea that skin contact is as effective as ingesting or breathing it in seems suspect.

Third, looking at the anecdotes, we realize these stories are NOT being reported by the highest risk groups. Pharmacists would be far more likely to accidentally brush away fentanyl than police officers, yet we do not hear these stories arising in hospital pharmacies. Plenty of patients have been legally prescribed fentanyl and do not suffer instant addiction. The fact that the passive exposure risk seems to only be arising in those who are around fentanyl in high stress circumstances suggests other things may be complicating this picture.

While this issue itself may be small in the grand scheme of things, it’s a good anecdote to support the theory that health fake news may actually cause the most damage. While political fake news seems to have most of our attention, fake or overblown stories about health issues can actually influence public policy or behavior. As the Reason article points out, if first responders delay care to someone who has overdosed because they are taking precautions against a risk that turns out to be overblown, the concern will hurt more people than it helps. Sometimes an abundance of precaution really can have negative outcomes.