Nutrient Profiling Systems: Why None of the Methods Work

WARNING: THIS CHART IS EXTREMELY MISLEADING WITHOUT CONTEXT

Have you seen this chart? It’s everywhere on social media because it feeds the popular narrative that nutrition science is completely controlled by corporate interests. There is some truth to that, but this isn’t actually a good example, and I’ll explain why after providing some important background information. It turns out the same issue is behind those green/yellow/red cards at the DFAC that everyone makes fun of.

In this age of data-driven everything, one of the holy grails of nutrition science is a universal food scoring system (also known as a nutrient profiling system, or NPS). In theory, this would be an algorithm into which you could plug the nutrition facts for any food, and it would spit out a numerical score telling you how “healthy” that food is. Many extremely smart teams have published proposed versions, but they all fall short for a similar set of reasons.

There is a subcategory of these algorithms that are not “universal,” meaning they require users to categorize foods in order to select the appropriate algorithm. I’m not going to discuss these here because they’re much more complex, but that’s exactly the problem: that added layer of complexity introduces a bunch of user errors. Universal scoring systems seek to eliminate that by creating one algorithm that can theoretically handle any type of food.
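To make the idea concrete, here is a toy sketch of what a universal scoring function looks like. To be clear, this is not the Food Compass, Nutri-Score, or any published NPS; the nutrients, weights, and example foods are invented purely for illustration.

```python
# A toy illustration of how a "universal" nutrient profiling system works.
# This is NOT any published algorithm; the nutrients and weights here
# are invented for demonstration only.

def toy_nps_score(per_100g: dict) -> float:
    """Score a food from its nutrition facts (per 100 g).

    Higher = "healthier" under this toy model: penalty points for
    sugar, saturated fat, and sodium; credit points for fiber and protein.
    """
    penalties = (
        per_100g.get("sugar_g", 0) * 1.0
        + per_100g.get("sat_fat_g", 0) * 2.0
        + per_100g.get("sodium_mg", 0) * 0.01
    )
    credits = (
        per_100g.get("fiber_g", 0) * 2.0
        + per_100g.get("protein_g", 0) * 1.0
    )
    return round(credits - penalties, 1)

# The appeal: the same function accepts any food, no categorization step.
cereal = {"sugar_g": 25, "sat_fat_g": 0.5, "sodium_mg": 400, "fiber_g": 2, "protein_g": 4}
lentils = {"sugar_g": 2, "sat_fat_g": 0.1, "sodium_mg": 2, "fiber_g": 8, "protein_g": 9}
print(toy_nps_score(cereal))   # -22.0 (penalties dominate)
print(toy_nps_score(lentils))  # 22.8 (credits dominate)
```

Note that the same function handles any food with no categorization step. That simplicity is exactly the appeal of a universal NPS, and also the source of the misleading edge cases discussed below.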


The study receiving all the criticism lately is the Tufts Food Compass, published in Nature Food in October 2021. Interestingly, the study itself is rarely mentioned by name in the social media posts; I wonder if these commentators have taken the time to read the whopping 9 pages. If they had, they might have noticed something odd: the chart that’s getting shared everywhere is nowhere to be found. Nor are there any “government recommendations” included. The boldest claim the authors make is that “This publicly available tool will help guide consumer choice, research, food policy, industry reformulations and mission-focused investment decisions” (italics mine).


So where did the hot take come from suggesting that the government is saying that Lucky Charms are healthier than steak? While the Food Compass has higher construct validity than previously published algorithms (like the Nutri-Score and Health Star Rating), I’m sure its own authors would happily admit that there are plenty of individual instances where it will fail. In February 2022, a team of authors published “Limitations of the Food Compass Nutrient Profiling System,” which identified many examples where the algorithm produced misleading scores.


The graph that has received so much attention came from this follow-up article, which was also US-government funded and which explicitly presents the chart as examples of misleading scores. Contrary to so many commentators’ claims that the government is anti-meat and pro-processed foods, the primary conclusion of this follow-up study (which, again, was also funded by the government) is that the Food Compass scores “exaggerate the risks associated with animal-source foods (ASF), and underestimate the risks associated with ultra-processed foods (UPF).”

The authors of this follow-up investigation offer an important caution against all of these scoring systems, stating that “there is no universally accepted gold standard, as all existing NPS face technical and conceptual limitations.” It is functionally impossible to account for every factor that determines whether a food is “healthy,” because healthfulness is inextricable from variables none of these systems can capture (the activity demands of the person eating the food, how it fits into the context of their overall diet, etc.). Realistically, even Food Compass’s “universal” score is only really helpful for comparing foods within the same category.

It’s important to note that neither of these articles is a government recommendation. They’re both simply research that received government funding. You should not draw strong conclusions from any individual study, and you should definitely not draw strong conclusions from social media posts by people misattributing charts to a single study they never read in the first place, no matter how mad it makes you.

It turns out all of this is relevant to the military as well. If you’re not familiar, those red/yellow/green cards you see in military dining facilities are part of the Go for Green program. Since the Army loves a one-size-fits-all solution (despite the limitations discussed earlier), it should come as no surprise that these placards are determined by an NPS, one that is much simpler (and therefore easier to use, but also more error-prone) than the Food Compass. In fact, you can find the calculator right here.
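For a sense of how a simpler placard-style system behaves, here is a toy green/yellow/red classifier. These are not the actual Go for Green criteria; the nutrients and cutoffs are invented to show why blunt thresholds are easy to apply but misfire on some foods.

```python
# Toy traffic-light classifier in the spirit of a simple dining-facility NPS.
# NOT the actual Go for Green criteria; thresholds are invented for illustration.

def traffic_light(per_100g: dict) -> str:
    """Return 'green', 'yellow', or 'red' from a couple of crude cutoffs."""
    sugar = per_100g.get("sugar_g", 0)
    sat_fat = per_100g.get("sat_fat_g", 0)
    if sugar > 15 or sat_fat > 5:
        return "red"
    if sugar > 5 or sat_fat > 1.5:
        return "yellow"
    return "green"

print(traffic_light({"sugar_g": 25, "sat_fat_g": 0.5}))  # red: sugary cereal
print(traffic_light({"sugar_g": 0, "sat_fat_g": 6}))     # red: a fatty cut of meat gets the same flag
print(traffic_light({"sugar_g": 8, "sat_fat_g": 1.0}))   # yellow
print(traffic_light({"sugar_g": 1, "sat_fat_g": 0.2}))   # green
```

A two-threshold rule like this is trivial to apply at the serving line, but it treats a nutrient-dense fatty food and a sugary dessert identically, which is exactly the kind of error-proneness that comes with the simplicity.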


So what is the bottom line? Unfortunately, as usual, when it comes to determining whether individual foods are healthy, the answer is it depends. You should probably try and maximize whole foods, minimize ultra-processed foods, fuel appropriately for your training, and find ways to include common shortfall nutrients. What you probably shouldn’t do is rely on algorithms to tell you whether something is healthy or not. If you have access to experts like Registered Dietitians, seek their advice instead of getting your information from social media (even mine).
