5 Nutrition and/or Food Research Blogs I Like to Read

I’ve written a bit here over the years about nutrition research and my own general journey with weight management, but I realized I’ve only really referred in passing to the people I read when I want to catch up on the field. I figured this was a pretty good time to do a post on that.

  1. For all things vegan: Anyone who followed my old old blog knows that I actually spent several years as a vegan. I eventually gave it up, but I still like to read up on what’s going on in the world of plant-based nutrition. Ginny Messina (aka the Vegan RD) is a registered dietitian who is a vegan primarily for ethical reasons. As such, she uses her dietitian training to help vegans be as healthy as possible, while also helping lead the charge for veganism to be more evidence based when vegans stray out of ethics and into nutrition claims. She writes critiques of other vegans’ work if she feels they overstate the evidence, and she even coauthored a book called “Even Vegans Die”. Overall a pretty awesome example of someone who advocates for a particular diet while also adhering to evidence.
  2. For the ancestral health crowd: If you’re paleo or just interested in how our evolutionary history influences how we think about food, Stephan Guyenet is a must-read. A neuroscientist who specializes in obesity-related research, his focus is on why we overeat and what we can do about it. His book The Hungry Brain is one of the most well-balanced, science-based nutrition books I’ve ever read, and has received quite a bit of praise for being honest and evidence based.
  3. For deep dives into the science: There are not many bloggers who make me go “holy crap did this person dig deep into this paper”, but CarbSane is one blogger who gets that reaction from me on nearly every post. She doesn’t just read papers and give you the gist; she posts tables, cites other literature, and is basically a blog equivalent of a nutritional biochemistry class. She is probably the person most responsible for making me aware of the problem of misrepresentation in nutrition science, because she has the patience, knowledge and wherewithal to figure out exactly what commonly cited papers do and do not say. Oh, and she’s lost over 100 lbs too, so she actually has a good eye for what is and isn’t useful for real people to know. For a taste of what she does, try her posts on the “Biggest Loser Regain Study” that made headlines.
  4. For weight management and health policy: There’s really a bit of a tie here, as I really like both Yoni Freedhoff’s Weighty Matters blog and Darya Rose’s Summer Tomato for this topic.  Freedhoff is a Canadian MD who runs a weight loss center, and he blogs from the medical/health policy perspective. His book “The Diet Fix” covers evidence based ways of making any diet more effective, and he encourages people to take the approach (vegan, low carb, paleo, etc etc) that they enjoy the most. Darya Rose is a neuroscientist who also gives advice about how to make your “healthstyle” more practical and easier to stick to,  and her book “The Foodist” is on the same topic. I like them because they both continuously emphasize that anything too difficult or complicated is ultimately going to be tough to maintain. It’s all about making things easier on yourself.
  5. For those in the greater New Hampshire area: Okay, this one’s pretty region-specific, but the Co-op News blog from the Hanover Co-op has a special place in my heart. An online version of a newsletter that’s been going since 1936, it features frequent posts from my (dietitian) cousin and my (incredible chef) uncle. It’s a great place to check out if you need advice on anything from using up summer vegetables to figuring out if macaroni and cheese is okay to eat. It also serves to remind me that I should invite myself over to their house more often. That food looks awesome.

Bonus round: if you’re looking for some one-off reads, this week I read this takedown of the science in the vegan documentary “What the Health” and enjoyed it. I also liked this paper that reviewed the (now infamous) Ancel Keys “7 Countries Study” and shed some light on what the original study did and did not say.

Of course if you have a favorite resource, I’d love to hear it!

Dietary Variability and Fasting Traditions

This is one of those posts that started with a conversation with friends, then sort of spiraled into way too much time with Google, then I realized there was a stats tie-in and a post was born. Bear with me.

Some background:  Ramadan started this week, so I’ve been thinking a lot about dietary traditions in different cultures. In the book Antifragile, there is a moment where author Nassim Nicholas Taleb takes a surprising detour into the world of human health and nutrition. For a statistician/trader best known for making predictions about the stability of financial markets, this seems like an odd road to go down. His take on diet is, unsurprisingly, unique: every Wednesday and Friday, he is vegan. Apparently in the Greek Orthodox tradition, on Wednesdays, Fridays, during Lent (48 days) and in the lead-up to Christmas (40 days), you give up all animal products and oil. I am not clear how widely this is followed, but the US Greek Orthodox website calendar confirms this is the general set-up. Since the thesis of the book is that some things actually improve when subject to disorder/inconsistency, Taleb wonders if the much-touted benefits of the Mediterranean diet are due to the overall consumption, or to the inherent variability in the diet due to the religious practices in the area.

Research tie-in: I was intrigued by this, as I’d definitely heard about the Mediterranean diet and its health benefits, but I’d never heard that this tradition was so common in that area. When it came back up last week I decided to ask a few other people if they’d ever heard of it. It was hardly a scientific poll, but out of the dozen or so people I asked, everyone knew the Mediterranean diet was supposed to be very healthy but no one had heard of the Wednesday/Friday fasting tradition. I even asked a few vegetarian and vegan friends, and they were similarly surprised. Given that two days a week plus all of Lent works out to over a third of the year, this seemed relevant.
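As a quick sanity check on that claim, here’s the back-of-envelope math (assuming the calendar as described, and counting Lent’s own Wednesdays and Fridays only once):

```python
DAYS = 365
LENT = 48  # full fasting period, which already includes its own Wednesdays and Fridays

# Wednesdays and Fridays in the rest of the year
wed_fri = (DAYS - LENT) * 2 / 7

fasting_days = LENT + wed_fri
print(f"~{fasting_days:.0f} fasting days, or {fasting_days / DAYS:.0%} of the year")
```

And that’s before adding the 40-day lead-up to Christmas, which pushes the share higher still.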

Of course I am not sure what this might prove, but it did strike me as an interesting example of a time an average might be lying to you. The Greek Orthodox adherents who spawned the interest in the Mediterranean diet didn’t have one way of eating…they really had two: normal days and fasting days. (Note: It appears not many American Greek Orthodox still follow the fasting calendar, but since Crete got on the map 70 years ago with the 7 Countries Study, it’s likely those who kicked this whole Mediterranean craze off were following it). In reporting only the average, some information got lost. Given that food recall questionnaires and epidemiological reports tend to come up with only one set of recommendations, I decided to take a look around and see if I could find other examples of populations whose “average” consumption might be deceptive. While many religions have a tradition of fasting, I’m only including the ones where the duration is substantial according to my own arbitrary standards. I’m also not including traditions that prohibit or discourage certain foods all the time, as that’s not the type of variability I was interested in.
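To make the “lying average” concrete, here’s a toy simulation (every number below is invented for illustration): a diet that alternates between normal days and zero-animal-product fasting days produces an “average” day that nobody actually eats.

```python
import random

random.seed(42)

# Hypothetical animal-product calories per day (numbers invented for illustration)
NORMAL_DAY = 900   # typical day: meat, dairy, eggs
FASTING_DAY = 0    # Wednesday/Friday/Lent-style fast: no animal products

# A bit over a third of days spent fasting, per the tradition described above
days = [FASTING_DAY if random.random() < 0.38 else NORMAL_DAY for _ in range(365)]

average = sum(days) / len(days)
print(f"average animal-product calories/day: {average:.0f}")
# Every actual day is either 0 or 900; the average describes neither.
```

A food-frequency questionnaire averaging over the year would report the middle number, even though it corresponds to no day this person ever ate.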

Greek Orthodox I was curious if Taleb’s question had been addressed by any research, and it actually has been. This group noticed the same gap he did, and decided to follow a bunch of people on the island of Crete for one year. They were all eating the same famous Mediterranean diet, but those who followed the fasting traditions had better health markers after the holy days. This lends some credibility to the idea that something about the fasting itself affects the health outcomes, though it could be that those who follow the fasting traditions are different in some other way.

Muslims This paper shows some interesting effects of Ramadan (no eating during daylight hours for 28-30 days) on health outcomes, but reaches no direct conclusions. Many of the studies didn’t include things like smoking status, so it’s hard to tell if there’s any benefit. Still, changing your eating patterns dramatically for a full month every year is probably enough to throw off your “average” consumption a bit.

Ethiopian Orthodox According to this NPR story, the Ethiopian Orthodox Church observes a 40 day vegan fast prior to Christmas, where they only eat one meal a day.

Vacations and Holidays On the flip side, there are also occasions where people seem to consistently overeat in ways that may change their “average”. Vacations appear to be correlated with weight gain that doesn’t immediately disappear, as does the holiday season. Interestingly, neither of these gains is that large (a little less than a pound overall for each), but if those gains persist after each holiday season and vacation, you could eventually see a real increase. Regardless, few of us call our vacation or holiday eating “typical”, but since holidays and vacations can actually take up a lot of days (November, December, a two-week vacation or so), this very well might skew our perception of what’s “typical”.

I’d be interested to hear any other examples anyone has.

 

Blood Sugar Model Magik?

An interesting new-to-me study came across my radar this week: “Personalized Nutrition by Prediction of Glycemic Responses”, published by Zeevi et al. in 2015. Now, if you’ve ever had the unfortunate experience of talking about food with me in real life, you probably know I am big on quantifying things and particularly obsessed with blood sugar numbers. The blood sugar thing started when I was pregnant with my son and got gestational diabetes. Four months of sticking yourself with a needle a couple of times a day will do that to a person.

Given that a diagnosis of gestational diabetes is correlated with a much higher risk of an eventual Type 2 diabetes diagnosis, I’ve been pretty interested in what affects blood sugar numbers. One of those things is the post-prandial glucose response (PPGR), or basically how high your blood sugar goes after you eat a meal. Unsurprisingly, chronically high numbers after meals tend to correlate with overall elevated blood sugar and diabetes risk. To try to help people manage this response, the glycemic index was created, which attempted to measure the “average” glucose response to particular foods. This sounds pretty good, but the effects of using it as a basis for food choices in non-diabetics have been kind of mixed. While it appears that eating all high glycemic index foods (aka refined carbs) is bad, it’s not clear that parsing things out further is very helpful.

There are a lot of theories about why the glycemic index may not work that well: measurement issues (it measures an area under a curve without taking into account the height of the spike), the quantities of food eaten (watermelon has a high glycemic index, but it’s hard to eat too much of it calorie-wise), or the effects of mixing foods with each other (the values were determined by having people eat just one food at a time). Zeevi et al. had yet another theory: maybe the problem was taking the “average” response. Given that averages can often hide important information about the population they’re describing, they wondered if individual variability was mucking about with the accuracy of the numbers.
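That first measurement issue is easy to see with a little arithmetic. Here’s a sketch (the glucose numbers are invented for illustration) of two post-meal curves with identical area under the curve but very different peaks — an area-only metric treats them the same:

```python
# Two hypothetical post-meal glucose curves (mg/dL above baseline),
# sampled every 30 minutes for two hours. Numbers are invented to make a point.
times = [0, 30, 60, 90, 120]       # minutes after eating
slow_burn = [0, 30, 40, 30, 0]     # gentle rise and fall
spike = [0, 70, 30, 0, 0]          # sharp spike, quick return

def trapezoid_auc(t, g):
    """Area under the glucose curve via the trapezoid rule."""
    return sum((g[i] + g[i + 1]) / 2 * (t[i + 1] - t[i]) for i in range(len(t) - 1))

print(trapezoid_auc(times, slow_burn), max(slow_burn))  # 3000.0 area, peak 40
print(trapezoid_auc(times, spike), max(spike))          # 3000.0 area, peak 70
```

Same area, very different experience for the eater — one reason a single index number can mislead.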

To test this theory, they recruited 800 people, got a bunch of information about them, hooked them up to continuous glucose monitors, and had them log what they ate. They discovered that while some foods caused a similar reaction in everyone (white bread, for example), some foods actually produced really different responses (pizza or bananas, for example). They then used factors like BMI, activity level, and gut microbiome data to build a model that they hoped would predict who would react to what food.

To give this study some real teeth, they then took the model they built and applied it to 100 new study participants. This is really good because it means they tested whether they had overfit their model…i.e., tailored it too closely to the original group to get an exaggerated correlation number. They showed that their model worked just as well on the new group as the old group (r=.68 vs r=.70). To take it a step further, they recruited 26 more people, got their data, then fed them a diet predicted to be either “good” or “bad” for them.  They found overall that eating the “good” diet helped keep blood sugar in check as compared to just regular carbohydrate counting.
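That hold-out step is worth dwelling on, because it’s the standard defense against overfitting anywhere in modeling, not just nutrition. Here’s a toy sketch (fake data, invented numbers, stdlib only) contrasting a sensibly fit model with one that just memorizes its training group — the memorizer looks perfect in-sample and gives some of that back on the held-out group:

```python
import random
import statistics

random.seed(0)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Fake data: the response is 2*x plus noise (all numbers invented for illustration)
def make_group(n):
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [2 * x + random.gauss(0, 3) for x in xs]
    return xs, ys

train_x, train_y = make_group(800)  # "original" group
test_x, test_y = make_group(100)    # held-out group

# Honest model: one slope fit on the training group (least squares through origin)
slope = sum(x * y for x, y in zip(train_x, train_y)) / sum(x * x for x in train_x)

# Overfit model: "predict" by recalling the nearest training example's outcome
def memorize(x):
    return min(zip(train_x, train_y), key=lambda pair: abs(pair[0] - x))[1]

results = {}
for name, predict in [("linear", lambda x: slope * x), ("memorizing", memorize)]:
    r_train = pearson_r([predict(x) for x in train_x], train_y)
    r_test = pearson_r([predict(x) for x in test_x], test_y)
    results[name] = (r_train, r_test)
    print(f"{name}: train r={r_train:.2f}, held-out r={r_test:.2f}")
```

The study’s r=.68 on new participants versus r=.70 in the original group is the “holds up out of sample” pattern, not the “memorizing” one.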

The Atlantic did a nice write up of the study here, but a few interesting/amusing things I wanted to note:

  1. Compliance was high Nutrition research has been plagued by self-reporting bias and low compliance to various diets, but apparently that wasn’t a problem in this study. The researchers found that by emphasizing to people what the immediate benefit to them would be (a personalized list of “good” and “bad” foods), people were extremely motivated to be honest. Not sure how this could be used in other studies, but it was interesting.
  2. They were actually able to double blind the study Another chronic issue with nutrition research is the inability to blind people to what they’re eating. However, since people didn’t know what their “good” foods were, it actually was possible to do some of that for this study. For example, some people were shocked to find that their “good” diet had included ice cream or chocolate.
  3. Carbohydrates  and fat content were correlated with PPGR, but not at the same level for everyone At least for glucose issues, it turns out the role of macronutrients was more pronounced in some people than others. This has some interesting implications for broad nutrition recommendations.
  4. Further research confirmed the issues with glycemic index  In the Atlantic article, some glycemic index proponents were cranky because this study only compared itself to carb counting, not the glycemic index. Last year some Tufts researchers decided to focus just on the glycemic index response and found that inter-person variability was high enough that they didn’t recommend using it.
  5. The long term effects remain to be seen It’s good to note that the nutritional intervention portion of this study was just one week, so it’s not yet clear if this information will be helpful in the long run. On the one hand, it seems like personalized information could be really helpful to people…it’s probably easier to avoid cookies if you know you can still have ice cream. On the other hand, we don’t yet know how stable these numbers are. If you cut out cookies entirely but keep ice cream in your diet, will your body react to it the same way in two years?

That last question, along with “how does this work in the real world”, is where the researchers are going next. They want to see if people getting personalized information are less likely to develop diabetes over the long term. I can really see this going either way. Will people get bored and revert to old eating patterns? Will they overdo it on foods they believe are “safe”? Or will finding out you can allow some junk food increase compliance and avoid disease? As you can imagine, they are having no trouble recruiting people: 4,000 (in Israel) are already on their waiting list, begging to sign up for future studies. I’m sure we’ll hear more about this in the years to come.

Personally, I’m fascinated by the whole concept. I read about this study in Robb Wolf’s new book “Wired to Eat”, in which he proposes a way people can test their own tolerance for various carbohydrates at home. Essentially you follow a low-to-moderate-carbohydrate paleo (no dairy, no legumes, no grain) plan for 30 days, then test your blood glucose response to a single source of carbohydrates every day for 7 days. I plan on doing this and will probably post the results here. Not sure what I’ll do with them, but like I said, I’m a sucker for data experiments like this.

6 Examples of Correlation/Causation Confusion

When I first started blogging about correlation and causation (literally my third and fourth post ever), I asserted that there were three possibilities whenever two variables were correlated. Now that I’m older and wiser, I’ve expanded my list to six:

  1. Thing A caused Thing B (causality)
  2. Thing B caused Thing A (reversed causality)
  3. Thing A causes Thing B which then makes Thing A worse (bidirectional causality)
  4. Thing A causes Thing X causes Thing Y which ends up causing Thing B (indirect causality)
  5. Some other Thing C is causing both A and B (common cause)
  6. It’s due to chance (spurious or coincidental)

The obvious conclusion is that years spent blogging about statistics directly correlates to the number of possible ways of confusing correlation and causation you recognize.

Anyway, I’ve talked about this a lot over the years, and this lesson is pretty fundamental in any statistics class…though options #3 and #4 up there often aren’t covered at all. It’s easily forgotten, so I wanted to use this post to pull together an interesting example of each type.

  1. Smoking cigarettes causes lung cancer (Thing A causes Thing B): This is an example I use in the Intro to Internet Science talk I give to high school students. Despite my continued pleading to be skeptical of various claims, I like to point out that sometimes disbelieving a true claim also has consequences. For years tobacco companies tried to cast doubt on the link between smoking and lung cancer, often using “correlation is not causation!” type propaganda.
  2. Weight gain in pregnancy and pre-eclampsia (Thing B causes Thing A): This is an interesting case of reversed causation that I blogged about a few years ago. Back in the 1930s or so, doctors had noticed that women who got pre-eclampsia (a potentially life threatening condition) also had rapid weight gain. They assumed the weight gain was causing the pre-eclampsia, and thus told women to severely restrict their weight gain. Unfortunately it was actually the pre-eclampsia causing the weight gain, and it is pretty likely the weight restrictions did more harm than good.
  3. Dating and desperation (Thing A causes Thing B which makes Thing A worse): We’ve all had that friend. The one who strikes out with everyone they try to date, and then promptly doubles down on their WORST behaviors. This is the guy who stops showering before he takes girls out because “what’s the point”. Or the girl who gets dumped after bringing up marriage on the third date, so she brings it up on the first date instead. This  is known as “bidirectional causality” and is less formally known as “a vicious cycle”. In nature this can cause some really amusing graph behavior, as in the case of predators and prey.  An increase in prey can cause an increase in predators, but an increase in predators will cause a decrease in prey. Thus, predator and prey populations can be both positively AND negatively correlated, depending on where you are in the cycle.
  4. Vending machines in Schools and obesity (Thing A causes Thing X causes Thing Y which then causes Thing B): One obvious cause of obesity is eating extra junk food. One obvious source of extra junk food is vending machines. One obvious place to find vending machines is in many schools. So remove vending machines from schools and reduce obesity, right? No, sadly, not that easy.  In a longitudinal study that surprised even the authors, it was found that kids who moved from schools without vending machines to those with vending machines didn’t gain weight. What’s interesting is that you can find a correlation between kids who were overweight and eating food from vending machines, but it turns out the causal relationship is convoluted enough that removing the vending machines doesn’t actually fix the original endpoint.
  5. Vitamins and better health (Some other Thing C is causing Thing A and Thing B): This one is similar to #4, but I consider it more applicable when it turns out Thing A and Thing B weren’t even really connected at all. Eating a bag of chips out of a vending machine every day CAN cause you to gain weight, even if removing the vending machine doesn’t help you lose it again. With many vitamin supplements, on the other hand, initial correlations are often completely misleading. Many people who get high levels of certain vitamins (Thing A) are actually just those who pay attention to their health (Thing C), and those people tend to have better health outcomes (Thing B).  Not all vitamins should be tarred with the same brush though; this awesome visualization shows where the evidence stands for 100 different supplements.
  6. Spurious Correlations (spurious or due to chance): There’s a whole website of these, but my favorite is the one pairing the number of films Nicolas Cage appeared in with swimming pool drownings.
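That last category is surprisingly easy to manufacture. Here’s a quick sketch (purely synthetic data): generate a few dozen unrelated random-walk series — the kind of drifting annual time series the spurious-correlations site mines — and some pair will correlate strongly by chance alone.

```python
import random
import statistics

random.seed(7)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def random_walk(n=10):
    """An unrelated, trendless series: cumulative sum of random steps."""
    out, total = [], 0.0
    for _ in range(n):
        total += random.gauss(0, 1)
        out.append(total)
    return out

series = [random_walk() for _ in range(40)]  # 40 series -> 780 possible pairs

best_r = max(
    abs(pearson_r(series[i], series[j]))
    for i in range(len(series))
    for j in range(i + 1, len(series))
)
print(f"strongest |r| among 780 unrelated pairs: {best_r:.2f}")
```

Search enough unrelated series and a strong-looking correlation is practically guaranteed, despite there being no causal story at all.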

Causal inference, not for the faint of heart.

 

Does egg = cigarette?

Oh CNN, your headlines make me sad sometimes.

Is eating egg yolks as bad as smoking?

No.  No it is not.  The study you’re reporting on does in fact claim that eating egg yolks accelerates heart disease about 2/3rds as much as smoking does, but acceleration of heart disease is not actually the health problem smoking is most known for.  But you know that.  Sigh.

Not that I’m buying the study anyway.  They asked people, who already had heart disease, to report their egg yolk consumption over the course of their lives.  How accurately can you recall your average egg yolk consumption over the course of your life?  Additionally, people who have heart disease have most likely been told to cut down on consumption of saturated fat and cholesterol.  Those still eating more eggs have likely heard this advice, and disregarded it.  What are the chances that they’re disregarding other advice as well?  Lastly, it does not appear the study asked about the consumption of any other food, meaning egg consumption could actually just be co-occurring with the consumption of something else that was even worse.  Surveys that ask only about very specific foods tend to see what they want to see.

So basically, another correlation/causation issue here, combined with those terrible consumption recollection surveys, with a sprinkle of obnoxious headline writing.   Yeehaw.

Moral obligations and Lazy Truth

I was going to include this in a Friday link post, but I really felt it deserved its own spotlight.

There’s a new Gmail gadget called “Lazy Truth” that promises to send you a fact-check email every time you receive a (forwarded) email it deems to be of dubious content.

I haven’t tried it, so I’m not sure what it’s set up to flag, or how accurate the “fact check” email is, but I was immediately intrigued.  I’ve actually been working on a much longer post that covers just this topic, so it’s something I’ve been giving a lot of thought.

I’ve been mulling over the rise of Facebook/email/Twitter lately, and wondering: for those of us who value our integrity and our truthfulness, and do not believe the ends justify the means, what exactly are the moral implications of hitting forward or share on information that we could have easily proven to be false if we’d checked?

I was wondering if I was the only one worried about this when I came across a blog post from Dr. Michael Eades.  He’s a pro-low-carb physician who spends much of his time critiquing nutritional research.  In a post about the book “The China Study”, he describes finding what he considered a great critique of it on another person’s blog.  Then this:

…. I had fallen victim to the confirmation bias.  My bias was that Dr. Campbell was wrong, so I was more than happy to uncritically accept evidence confirming his error without lifting a finger to double check said evidence myself.  I knew that if a blogger somewhere had come out with a long post describing an analysis of the China study demonstrating the validity of all of Dr. Campbell’s notions of the superiority of the plant-based diet, I would’ve been all over it looking for analytical errors.  But since Ms. Minger’s work accorded with my own beliefs, my confirmation bias ensured that I accepted it at face value. 

Once the fact that I had succumbed to my confirmation bias settled in around me, I became suffused with angst.  I had tweeted and retweeted Ms. Minger’s analysis a number of times, giving the impression that I had at least minimally checked it out and had approved it.  I had emailed it to a number of people, many of whom, I’m sure, had forwarded it on.  I’m sure I played a fairly large role in the rapid dissemination of the anti Campbell/China study info.

In the end, he went back and realized that the post was good, but his panic attack was intriguing to me.  How many of us have had this same panic?  How many of us should have?  How many lousy graphs rip through Facebook like wildfire because no one bothers to double check if they’re even valid?  Is the liar the person who created the graph, or do those who share it share some blame?

I don’t pretend I have an answer for this.  I feel most of the people interested enough to read this blog probably do not fall in the category of those who would easily share skewed information without thinking about it, but I am hoping for some thoughts/feedback from you all.

Are we so used to hearing politicians of all stripes seamlessly repeat bad data that we’ve come to view it as acceptable?  Is this just a fact of life?  Is it possible that we will be saved by widgets like the one above?   Does religion matter, or is this an overall moral issue? Does confrontation work with this sort of thing?  Or is this something I just have to learn to live with?

More thoughts on the soda ban

Yesterday I found out the soda ban is potentially hitting a bit closer to home.

For those of you not familiar with Cambridge, MA, it’s affectionately known as “The People’s Republic” (and even has a communist bar of the same name).  Thus the proposed ban was pretty unsurprising.

Coincidentally, Ben Goldacre put up a new post yesterday publicizing a paper he coauthored to try to push governments in the UK to actually conduct trials of their policies before implementing them.

Best quote:

We also show that policy people need to have a little humility, and accept that they don’t necessarily know if their great new idea really will achieve its stated objectives. We do this using examples of policies which should have been great in principle, but turned out to be actively harmful when they were finally tested.

Contrast this to the Mayor of Cambridge’s statement on the soda ban:

“As much free will as you can have in a society is a good idea,” Davis said Tuesday. “… But with a public health issue, you look at those things that are dangerous for people, that need government regulation.”

Is no one interested in finding out if this idea will actually work before implementing it?  The leading researchers in the field seem to think it won’t.   I tend to agree with them.  You know what though?  I’m game.  Let’s put it to a randomized trial.  There are those who think the constitutionality of this should be worked out first, but I think a well run trial could open the door for an opt in system rather than a mandatory one.

Hey, maybe if politicians stayed a little more open to testing their ideas, you wouldn’t wind up with cartoons like this one: