Saturday, 9 April 2016

15 Reasons Not to Trust That Latest Nutritional Study



Nutritional studies are often the best we’ve got. Without them, we’d be plucking anecdotes from a swirling vortex of hearsay, old wives’ tales, and prejudices. Some actionable information would definitely emerge, but we wouldn’t have the broader vision and clarity of thinking offered by the scientific method. Most nutritional studies are deeply flawed, though. And to know which ones are worth incorporating into your vision of reality and which only muddy the waters, you have to know what to watch out for.
Today, I’m going to discuss many of the reasons you shouldn’t trust the latest nutritional study without looking past the headlines.

1. Industry distorts the research.

Last year, Marion Nestle looked at 152 industry-funded nutrition studies. Of the 152, 140 had results favorable to the company that funded them. An earlier analysis of milk, soda, and fruit juice nutrition studies found that those sponsored by milk, soda, and juice companies were far more likely to report favorable results than independently funded studies. The same thing happens in cardiovascular disease trials and orthopedics trials.

2. Ego distorts the research.

People become wedded to their theories. Imagine spending 30 years conducting research to support your idea that saturated fat causes heart disease. How hard will you hold on to that hypothesis? How devastating would opposing evidence be to your sense of self-worth? Your research is your identity. It’s what you do. It’s how you introduce yourself when chit-chatting at cocktail parties. You’re the “saturated fat” guy. Everything’s riding on it being true.
Scientists are used to being the smartest people in the room. It’s not easy to relinquish that status or admit mistakes. Heck, that goes for everyone in the world. Scientists are not immune.

3. Correlation masquerading as causation.

In day-to-day life, correlated events usually do have causal explanations. You cut someone off, they honk at you. A man holds a door open, you thank him. You flip a light switch, the light turns on. We’re used to causation explaining correlations. So when two variables are presented together in a nutritional study, especially when the link seems plausible (meat causes colon cancer) or reaffirms popular advice (saturated fat causes heart disease), we’re likely to assume the relationship is causal. Correlations provoke interesting hypotheses and tests of those hypotheses, but they’re very often spurious. Everything we eat is associated with cancer if we look hard enough. Does that actually tell us anything useful?
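To see how easy it is to “find” associations in pure noise, here’s a minimal Python sketch using completely made-up random data: 60 imaginary “foods” tested against one imaginary health outcome. A handful come out “statistically significant” even though, by construction, nothing is going on.

```python
import numpy as np
from scipy.stats import pearsonr

# Made-up data: 60 "foods" and one health outcome for 200 people.
# Everything is pure random noise, so no food truly affects the outcome.
rng = np.random.default_rng(42)
n_people, n_foods = 200, 60
foods = rng.normal(size=(n_people, n_foods))   # hypothetical intake scores
outcome = rng.normal(size=n_people)            # hypothetical health marker

# Test every food against the outcome at the usual p < 0.05 threshold.
false_hits = [i for i in range(n_foods)
              if pearsonr(foods[:, i], outcome)[1] < 0.05]

print(f"{len(false_hits)} of {n_foods} foods are 'significantly' "
      f"associated with the outcome by chance alone")
```

Run enough comparisons and a few will always clear the significance bar, which is roughly how “everything causes cancer” headlines get made.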

4. No control group.

If you want to know the effects of an experimental intervention, you need a group of people who don’t receive the intervention. That’s the control group. Without a control group to compare against the group that received the intervention, a clinical trial doesn’t mean much. You can’t truly know that the experimental variable caused the change without it.

5. Fake controls.

The presence of a control group doesn’t by itself make a study good. The control has to be a real control. Take this paper from last year claiming that oatmeal for breakfast promotes satiety. Sure, when you’re comparing oatmeal to a cornflake breakfast. It doesn’t take much to beat the satiating (non)effects of cornflakes. How would oatmeal compare to bacon and eggs, or a big ass salad, or sweet potato hash? This study doesn’t tell you that.

6. Small sample size.

The smaller the sample size, the less impressive the results. The larger the sample size, the more meaningful the results and the more likely they are to apply to the larger population. That’s why the results of n=1 self-experiments are mostly useful to the person running them and less useful to everyone else; a sample of one isn’t enough to generalize from.
Small sample size studies shouldn’t be ignored. They can lead to interesting questions and hypotheses that larger studies can tackle. But they shouldn’t sway public policy, scientific consensus, or your decision about what to eat and how to live.
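For a sense of just how noisy small studies are, here’s a rough Python sketch with invented numbers: the same true effect (a 2-point drop in some marker, against a lot of individual variation) is estimated over and over at different sample sizes. The small trials swing wildly; the big ones converge.

```python
import numpy as np

# Invented numbers: the "true" effect of a diet is a 2-point drop in some
# marker, but individual results vary with a standard deviation of 10.
rng = np.random.default_rng(0)
true_effect, sd = -2.0, 10.0

def run_study(n):
    """Simulate one two-arm trial with n people per arm; return the observed effect."""
    control = rng.normal(0.0, sd, n)
    treated = rng.normal(true_effect, sd, n)
    return treated.mean() - control.mean()

for n in (10, 100, 1000):
    estimates = [run_study(n) for _ in range(1000)]
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    print(f"n={n:4d} per arm: observed effects range roughly from {lo:+.1f} to {hi:+.1f}")
```

With 10 people per arm, a genuinely beneficial diet can easily look harmful (and vice versa) just by luck of the draw.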

7. Demographics.

Make sure you know who participated in the study. If you’re a 20-year-old Latino male, the study on low-carb diets in post-menopausal black women may not apply to you.

8. Food frequency questionnaires (FFQ).

FFQs require people to recall their typical diet over the last year. That’s hard. Here’s a sample FFQ (PDF); see how you do trying to recall the foods you ate over the last 12 months. Suffice it to say, they aren’t very reliable. People lie. People forget. People tell you what they think you want to hear, downplaying the unhealthy stuff and overstating the healthy stuff. FFQs are probably the best option available for assessing what large groups of people habitually eat, but they aren’t good enough.

9. The adherer effect.

Michael Eades calls it the “adherer effect.” I’ve called it the “healthy user effect.” Whatever phrase you prefer, it describes the fact that there’s “something intrinsic to people who religiously take their medicine that makes them live longer,” even if that medicine is a completely inert placebo. Perhaps they’re also more likely to heed other medical advice, like exercising regularly, getting checkups, and eating healthy foods, and those health-improving behaviors could explain some of the apparent benefit. But it’s a real thing, and it has a real effect on the results of nutritional studies.

10. Statistical significance versus clinical significance.

You see the phrase “significantly associated with” a lot in scientific papers. “Fat is significantly associated with type 2 diabetes” sounds like “dietary fat has a large effect on your risk of type 2 diabetes.” But what that phrase really means is “the association between fat and type 2 diabetes is unlikely to be a coincidence.” It says nothing about the size of the association. It doesn’t mean eating fat doubles your chance of getting type 2 diabetes. The clinical significance, the actual size of the biological effect, can be trivial even when the statistical significance is real.
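Here’s a quick Python illustration with invented numbers: the exact same trivially small difference between two groups becomes “statistically significant” once the sample gets big enough. Significance tracks sample size as much as it tracks importance.

```python
import numpy as np
from scipy.stats import ttest_ind

# Invented numbers: a tiny true difference (0.05 units, against noise with a
# standard deviation of 1.0) tested at three sample sizes.
rng = np.random.default_rng(1)
for n in (100, 10_000, 1_000_000):
    group_a = rng.normal(0.00, 1.0, n)
    group_b = rng.normal(0.05, 1.0, n)
    t_stat, p_value = ttest_ind(group_a, group_b)
    print(f"n={n:>9,} per group: observed difference = "
          f"{group_b.mean() - group_a.mean():+.3f}, p = {p_value:.2g}")
```

The difference never gets any bigger or more meaningful; only the p-value changes.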

11. Relative risk versus absolute risk.

Papers will often talk about “the risk” of something. More often than not, that’s a relative risk. Take something like colon cancer. Though it’s the third most common cancer (and cause of cancer-related deaths), the absolute risk of developing colorectal cancer, even in old age when the risk is at its highest, isn’t exactly high. For the average 50-year-old, the lifetime absolute risk of colorectal cancer is 1.8%. If that 50-year-old has a relative with colon cancer, the absolute risk is 3.4%. Having two relatives with a history of colon cancer pushes it up to 6.9%. On the big scale of things that can kill you, colorectal cancer isn’t even in the top five.
So anything that increases the risk of colon cancer starts from that otherwise meager degree of absolute risk.
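To put actual numbers on it, here’s a tiny Python calculation using the lifetime-risk figures quoted above. A headline could truthfully report “nearly double the risk” for having one affected relative, while the absolute increase is well under two percentage points.

```python
# Lifetime colorectal cancer risk figures quoted above (illustrative arithmetic).
baseline = 0.018        # average 50-year-old
one_relative = 0.034    # one relative with colon cancer
two_relatives = 0.069   # two relatives with colon cancer

for label, risk in [("one affected relative", one_relative),
                    ("two affected relatives", two_relatives)]:
    relative_risk = risk / baseline
    absolute_increase = (risk - baseline) * 100
    print(f"{label}: about {relative_risk:.1f}x the relative risk, "
          f"but only +{absolute_increase:.1f} percentage points of absolute risk")
```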

12. Nutrients versus foods.

Most nutrition studies attempt to measure the effect of specific nutrients on health outcomes. But people don’t eat palmitic acid. They eat dairy and meat. People don’t eat linoleic acid. They eat almonds, or soybean oil, or pumpkin seeds. People don’t eat glucose, fructose, resistant starch, and prebiotic fiber; they don’t even eat “carbs.” They eat cold potatoes, sweet potatoes, blueberries, wild rice. Studies that look at specific nutrients can’t tell you accurate information about the effects of foods, because foods contain far more than just single nutrients.

13. Most research is wrong.

In 2005, John Ioannidis published a paper called “Why Most Published Research Findings Are False,” citing conflicts of interest, small sample sizes, insignificant clinical effects, and failures to replicate—in other words, most of the stuff mentioned in today’s post. It eventually became the most widely cited paper ever published in PLoS Medicine, and it’s still true today. Keep it in mind.

14. Journals prefer to publish and researchers prefer to submit exciting studies with strong results.

You’re more likely to have your paper published if it presents a new, exciting finding with a strong result. If two researchers run similar studies and only one gets a positive result, a journal will usually publish the “successful” study and ignore the other one. For their part, researchers are more likely to submit “successful” papers to journals in the first place. The end result is a dearth of published negative results, even though negative results are informative and vital for accurate science.
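Here’s a rough Python simulation, again with made-up numbers, of what that filtering does: lots of small trials of a diet with a genuinely modest effect, where only the “significant” ones get published. The published literature ends up exaggerating the true effect.

```python
import numpy as np
from scipy.stats import ttest_ind

# Made-up numbers: a diet with a modest true effect (0.2 units), tested in
# many underpowered trials of 30 people per arm.
rng = np.random.default_rng(7)
true_effect, n_per_arm, n_trials = 0.2, 30, 2000

all_effects, published_effects = [], []
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    observed = treated.mean() - control.mean()
    all_effects.append(observed)
    # Journals and researchers favor "exciting" results: only p < 0.05 gets out.
    if ttest_ind(treated, control)[1] < 0.05:
        published_effects.append(observed)

print(f"true effect:                        {true_effect:.2f}")
print(f"average effect across all trials:   {np.mean(all_effects):.2f}")
print(f"average effect in 'published' ones: {np.mean(published_effects):.2f}")
```

Nothing in that scenario is fraudulent; the bias comes entirely from which results see the light of day.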

15. We know very little.

“Blueberries improve memory.” In whom? People with dementia, or people at high risk for it? Can kids improve school performance by eating blueberries? What about college students? What if the college students are female—does that change anything?
“Nuts reduce mortality risk by 40%.” How long do you have to eat the nuts? Does the type of nut matter? Does your age affect the protective effects of nuts?
There’s too much we don’t know. There are too many variables we can’t control.
This isn’t to suggest that nutritional studies are useless. I cite and refer to them all the time. They’re often the best, most objective angle on the situation we have. Like democracy, they’re the worst option except for all the others. But we have to recognize and consider their limitations. Hopefully after today’s post, you’ll know what to look for.
That’s it for today, folks. Let’s hear from you. What do you think? How do you analyze a nutritional study? What do you look out for?


