The Ketogenic Diet for Health
“I like to start with an evolutionary perspective” — Jennie Brand-Miller
Today at the Food for Thought Conference, Jennie Brand-Miller argued that dependence on exogenous glucose played a critical role in our evolution. I and others disagree for several reasons. Let’s look at the main arguments Brand-Miller put forward in support of exogenous glucose.
- The brain requires a lot of energy
- The brain runs on glucose
- The need for dietary glucose is particularly acute in fetuses
- The cooking of starch allowed us to get that energy
- Some modern hunter-gatherers make significant use of exogenous glucose
- We have many more copies of amylase than other primates
- Some of us have developed persistent lactase
- We are hard wired to love sweetness
Yes, the brain requires a lot of energy; no, it does not have to come from dietary glucose
I agree wholeheartedly that our brains require a lot of energy, much more than other organs, and that our needs are many times more acute than in other primates. Getting this energy was critical for our evolution. However, the idea that the brain “runs” on glucose, and that this shows a requirement for exogenous glucose, is incorrect, and omits well-known evidence.
First, our bodies are capable of synthesising enough glucose in the absence of dietary sources to fulfill the most conservative estimates of requirements. As conceded early in the presentation, we are known to be able to survive without exogenous glucose. If we could not supply our brains’ needs in this way, this would simply not be possible. That glucose is mostly synthesised out of protein, and the process is called gluconeogenesis. This fact alone is enough to render this argument irrelevant, but there is more. When no dietary glucose is provided, not only can we still make enough glucose endogenously to meet those needs, but in practice our needs change. Instead of running primarily on glucose, our brains run mostly on ketone bodies, and use glucose for only a small portion of their needs, far less than our capacity to generate it.
Fetal and infant growth does not depend on dietary glucose
Brand-Miller also insists that “The fetus grows on the mother’s maternal blood glucose”, as if this should settle the matter once and for all. However, she neglects to mention that fetuses make extensive use of ketones. I’ve covered infant brain growth and the importance of ketone bodies in this context several times, so I won’t go into it here. See Babies thrive under a ketogenic metabolism, Meat is best for growing brains, What about the sugars in breast milk?, and Optimal Weaning from an Evolutionary Perspective, for fully referenced discussion of the fuel used by growing babies. In any case, maternal blood glucose is maintained just fine without dietary sources, so even if babies did not use ketones, the point would be moot.
The evolutionary argument
Since our brain energy needs are met perfectly well with either a high glucose intake or a low glucose intake, it cannot be reasonably argued that our large brains must have developed under conditions of high glucose intake. There are still at this point two equally plausible evolutionary hypotheses that would enable the evolutionary development of large brains: increased consumption of exogenous glucose, and increased consumption of exogenous fat. An increase in both is also a plausible hypothesis, either together or in alternation. For simplicity, let’s start by considering one state or the other as the predominant evolved state. Let’s review what evolutionary circumstances would be required for each hypothesis, and what other circumstances would support it without being necessary. Then we can see what evidence we have for those circumstances.
Persistent adequate availability of the predominant energy source and essential micronutrients
- For the exogenous glucose condition to have been the predominant evolved state, we would have required a consistent source of exogenous glucose on a regular basis, year round, for multiple generations.
- For the endogenous glucose condition to have been the predominant evolved state, we would have required a consistent source of exogenous fat and protein on a regular basis, for multiple generations.
The reason we would need fat, and not just protein, in the gluconeogenesis case is that we are limited in our ability to metabolise protein. Protein is better conceived of as mainly a micronutrient, rather than a macronutrient, because of its structural importance. Besides water, our bodies are primarily made of amino acids and fatty acids. This is one reason why, when we rely on gluconeogenesis for all of our glucose, our glucose needs are also reduced: it spares protein for more important things.
Protein availability is also of crucial importance even for the exogenous glucose hypothesis, because it is still a fundamental nutritional need outside of energy requirements.
Beyond protein, we would need to supply all of the nutrients that proper brain development requires. These include the minerals iodine, selenium, iron, and zinc, vitamins B12, A, and D, the vitamin-like choline, and the fatty acids DHA and arachidonic acid. Note that of the minerals listed here, animal sources are much more bioavailable, and plant sources contain substances that actively interfere with absorption. Of the vitamins and fatty acids, one (B12) is not available from plants even in precursor form, and the others only in precursor form. Humans are known to have a low and variable ability to synthesise the necessary components out of precursors. It is generally agreed upon in the scientific community that, because of these hard requirements of the brain, a significant level of animal sourced food must have been part of our evolutionary heritage. This is supported by the absence of evidence of a single indigenous society that did not include some form of animal sourced food.
If the energy source were dietary glucose, we would require:
A year round, abundant source of glucose coming from
- starchy tubers AND widespread use of cooking
- sugar in the form of low-fibre fruit and honey
The evidence for cooked tubers is weak at best. First, the tubers then in existence were seasonal and limited in supply. Second, they were highly fibrous, much more so than today’s bred varieties, so they didn’t yield much. The wild types still used by the Hadza, for example, have a low yield of glucose even when cooked. According to Schnorr, who studied this, “When roasted, they produce more energy, although the difference is not great. In the real world, the additional calorie content is probably lower than the cost of making a fire.” (https://www.universiteitleiden.nl/en/news/2016/03/roasting-turnips-for-science)
Third, there is little evidence to support access to fire in the time period in question. The most avid advocate of this theory, Wrangham (whose book cover is featured on the slide), must resort to theorising that fire was available and widespread long before the current evidence supports. His date estimates for the use of fire were developed through backwards reasoning based on the assumption that starch was the evolutionary reason for our brains! Since the only way we could have eaten starch was by cooking, and since our brains changed around 2 million years ago, Wrangham then infers that we must have had the use of fire at that time. While we cannot dispute the theory based on absence of evidence, it does remain theoretical, and is less probable given that we would expect to have found at least some such evidence by now.
As to evidence for or against fruit and honey, these are unlikely year round sources of energy in the evolved environment, particularly during ice ages, and their harvest time would not complement tubers. The so-called fruit based diet of other primates is actually a fibre based diet (i.e. a fibre-derived fat based diet), as Miki Ben-Dor has shown quantitatively. That’s because, as in the case of tubers, wild fruits were mainly fibre.
This points to the drastic difference in digestive systems between humans and our closest relatives. They clearly have extensive ability for hindgut fermentation, and we clearly do not. If fruits were a major source of energy for humans, it would not have been the fibrous ones our primate cousins eat. There is no coherent continuity argument available from that perspective.
In this scenario we would also still require animal sourced food for protein and other micronutrients, as discussed above.
If the energy source were dietary fat and protein, we would require:
A year round, abundant source of fat and protein from
- hunted or scavenged game
Evidence for at least some hunted or scavenged game is well supported in various ways. We have direct evidence of hunting and scavenging going back long before evidence of fire, in the form of tools and bones. Unlike in the case of tubers, cooking is not necessary to obtain nutrients and energy from meat and animal fat. Because of our protein and nutrient requirements, we would already have to be eating animal sourced food anyway, regardless of our source of energy.
If the meat were lean, this would not by itself solve the energy problem. However, evidence suggests that the game available during the time in question was much fattier megafauna, not today’s leaner game. Insofar as this is true, it seems a stretch to suggest that early humans would have hunted game that met their protein and nutrient requirements, and then discarded the abundant energy source that came right along with it in favour of tubers. Moreover, evidence suggests that our meat eating began with scavenging bones and skulls from other carnivores’ kills, eating some scraps of meat, but primarily the marrow and high-fat brains we were able to crack our way into.
In fact, the diversity of human diet after the extinction of the megafauna can be viewed as a variety of adaptations to the loss of our evolved diet, rather than evidence that we evolved to eat exactly like any one of them in particular. This brings us to the supplemental arguments from Brand-Miller.
What can we conclude from the diets of modern hunter-gatherer societies?
It seems disingenuous to cite the Hadza, the society with the highest reported carbohydrate intake, as evidence that we need carbohydrates to thrive. There are several known indigenous peoples or other groups that subsisted on very low levels of exogenous carbohydrate before being introduced to wheat and sugar, including Mongolians, Plains Indians, Brazilian gauchos, Arctic peoples, and the Maasai. The very existence of these societies contradicts the thesis. However, none of them, high carbohydrate or low, should be used as a demonstration of a particular evolved way of eating. Each shows a way of eating that is viable in its given environment. Carbohydrate can be used as a primary energy source, and so can fat. But this is not enough to show which, if any, was primary during the time our brains evolved to make us anatomically modern.
Why do we have more copies of amylase than other primates?
On average we have more copies of AMY1 than other primates. Brand-Miller claims that our number of copies of salivary amylase genes has changed because of our dietary intake of carbohydrates. This is a hypothesis. Fernández and Wiley recently discussed several problematic inconsistencies in this hypothesis, including the high variability within every population, the fact that starch digestion isn’t materially affected by salivary amylase, and the existence of alternative possible functions of the gene. I have touched on the apparent relationship to stress in a previous post (Science Fiction).
Some populations have extended lactase production
I think this supports greater use of animal based nutrition more than it does a need for sugar in particular.
Hard wired to love sweetness
The hard-wired response to the taste of sweet is important and interesting, but I think it shows that sugar was rare, not a staple.
We crave sweets. Craving is different from hunger.
Craving indicates a different kind of reward mechanism than one based on need. It is intensified by intermittent availability and scarcity.
Relatedly, there seems to be no satiety mechanism for sweet.
This is in stark contrast to protein and fat, both of which induce satiety. To me this indicates precisely that the environmental availability of sugar and starch was limited. If it were unlimited, we would have had to develop internal responses to maintain homeostasis. As it is, it argues for seasonal gorge opportunities at best.
The body has limited ability to store glucose. If glucose were the fuel of choice, it seems likely that we would have benefitted from expanded glycogen storage. Instead what we have is fat storage. We can afford to overeat sugar if we store it as fat, and then use the fat as fuel over time. The only way we can use fat as fuel is if we have stopped eating glucose for a significant period.
Evidence supporting endogenous glucose rather than exogenous as our evolved default
In contrast to other species we have studied, humans stay in ketosis even when they have substantially more protein than basic needs require. Like many species, in the absence of significant dietary carbohydrate, when our protein or caloric needs are not met, we produce ketones to spare protein and provide non-glucose energy, and our metabolisms change to require less glucose. Once protein is sufficient, though, other species go back to a glucose based metabolism. Humans do not. Humans, apparently uniquely, continue in ketogenic mode until and unless so much protein is ingested that its metabolism yields more glucose than can be used, forcing storage. This suggests that humans had an evolutionary timespan in which access to protein and fat was consistently high and carbohydrate was low for at least long periods. I review the evidence for this here: Ketosis Without Starvation: the human advantage
Brand-Miller’s evolutionary arguments that dietary carbohydrate was the fuel that allowed us to grow our large brains
- Ignores our capacity for endogenous glucose and ketone body supply
- Relies on doubtful claims about the ubiquitous availability of sufficient starch and sugar as well as widespread use of fire at the time in question
- Presumes that the animal fat we are known to have procured was not a sufficient source of energy
- Makes incorrect assumptions about fuel sources in other primates
- Cherry picks from modern hunter-gatherer societies only those that support, but not those that refute the claim that dietary glucose is necessary
- Conflates “glucose is sufficient” with “glucose is necessary” for supplying brain energy
- Evinces poor understanding about fetal and perinatal fuel supply
The available evidence supports at best a seasonally alternating system of glucose and animal fat reliance for brain energy, and does not refute a long-term evolutionary adaptation for little and infrequent dietary glucose.
From the ape’s dilemma to the weanling’s dilemma: early weaning and its evolutionary context. Kennedy GE. J Hum Evol. 2005 Feb;48(2):123-45. Epub 2005 Jan 18.
“Although some researchers have claimed that plant foods (e.g., roots and tubers) may have played an important role in human evolution (e.g., O’Connell et al., 1999; Wrangham et al., 1999; Conklin-Brittain et al., 2002), the low protein content of ‘‘starchy’’ plants, generally calculated as 2% of dry weight (see Kaplan et al., 2000: table 2), low calorie and fat content, yet high content of (largely) indigestible fiber (Schoeninger et al., 2001: 182) would render them far less than ideal weaning foods. Some plant species, moreover, would require cooking to improve their digestibility and, despite claims to the contrary (Wrangham et al., 1999), evidence of controlled fire has not yet been found at Plio-Pleistocene sites. Other plant foods, such as the nut of the baobab (Adansonia digitata), are high in protein, calories, and lipids and may have been exploited by hominoids in more open habitats (Schoeninger et al., 2001). However, such foods would be too seasonal or too rare on any particular landscape to have contributed significantly and consistently to the diet of early hominins. Moreover, while young baobab seeds are relatively soft and may be chewed, the hard, mature seeds require more processing. The Hadza pound these into flour (Schoeninger et al., 2001), which requires the use of both grinding stones and receptacles, equipment that may not have been known to early hominins. Meat, on the other hand, is relatively abundant and requires processing that was demonstrably within the technological capabilities of Plio-Pleistocene hominins. Meat, particularly organ tissues, as Bogin (1988, 1997) pointed out, would provide the ideal weaning food.”
Toward a Long Prehistory of Fire. Michael Chazan Current Anthropology 2017 58:S16, S351-S359
“This article explores a conception of the origins of fire as a process of shifting human interactions with fire, a process that, in a sense, still continues today. This is a counterpoint to the dominant narrative that envisions a point of “discovery” or “invention” for fire. Following a discussion about what fire is and how it articulates with human society, I propose a potential scenario for the prehistory of fire, consisting of three major stages of development. From this perspective, obligate cooking developed gradually in the course of human evolution, with full obligate cooking emerging subsequent to modern humans rather than synchronous with the appearance of Homo erectus as envisioned by the cooking hypothesis.”
“Wrangham and his collaborators work from the observation “that present-day humans cannot extract sufficient energy from uncooked wild diets” (Carmody et al. 2016:1091). From this observation, the logical inference is that obligate cooking must have a point of origin in hominin phylogeny. This point of origin is then mapped onto the increase in hominin brain and body size ca. 2 million years ago, leading to the proposition that obligate cooking began with Homo erectus and was a characteristic of subsequent taxa on the hominin lineage. The power of the approach taken by the cooking hypothesis is that it is at least partially testable based on experimental studies on the physiological and molecular correlates of consumption of cooked food (see literature cited in Carmody et al. 2016; Wrangham 2017). However, this approach also has a number of shortcomings. First, while obligate cooking necessarily must have a point of phylogenetic origin, the same is not true for cooking that might become integrated into hominin adaptation through a process rather than as the result of a single event. Second, while many aspects of human obligate cooking can be experimentally tested, there is no current method for testing whether H. erectus required regular cooked food. In fact, the placement of the onset of obligate cooking at 2 million years ago is not directly testable without recourse to the archaeological record (including recovery of residues from fossils; see Hardy et al. 2017).”
Linking Top-down Forces to the Pleistocene Megafaunal Extinctions William J. Ripple and Blaire Van Valkenburgh BioScience (July/August 2010) 60 (7): 516-526.
“Humans are well-documented optimal foragers, and in general, large prey (ungulates) are highly ranked because of the greater return for a given foraging effort. A survey of the association between mammal body size and the current threat of human hunting showed that large-bodied mammals are hunted significantly more than small-bodied species (Lyons et al. 2004). Studies of Amazonian Indians (Alvard 1993) and Holocene Native American populations in California (Broughton 2002, Grayson 2001) show a clear preference for large prey that is not mitigated by declines in their abundance. After studying California archaeological sites spanning the last 3.5 thousand years, Grayson (2001) reported a change in relative abundance of large mammals consistent with optimal foraging theory: The human hunters switched from large mammal prey (highly ranked prey) to small mammal prey (lower-ranked prey) over this time period (figure 7). Grayson (2001) stated that there were no changes in climate that correlate with the nearly unilinear decline in the abundance of large mammals. Looking further back in time, Stiner and colleagues (1999) described a shift from slow-moving, easily caught prey (e.g., tortoises) to more agile, difficult-to-catch prey (e.g., birds) in Mediterranean Pleistocene archaeological sites, presumably as a result of declines in the availability of preferred prey.”
Rethinking the starch digestion hypothesis for AMY1 copy number variation in humans. Fernández CI, Wiley AS. Am J Phys Anthropol. 2017 Aug;163(4):645-657. doi: 10.1002/ajpa.23237.
“Although it certainly seems that a-amylase and increased AMY1/AMY2 CN are related to starchy diets in some primates and domestic dogs, we conclude that at present there is insufficient evidence supporting enhanced starch digestion as the primary adaptive function for high AMY1 CN in humans. Existing claims for this function are based on the assumption that salivary a-amylase plays a crucial role in extracting glucose from plant foods. Alpha-amylase’ s role in starch digestion is limited to the first (and nonessential) step, while other digestive enzymes and transport molecules are critical for starch digestion and glucose absorption. Specifically, it is assumed that glucose is a major product of a-amylase action, and disregard the essential rate-limiting action of maltase-glucoamylase and sucrase-isomaltase enzymes in whole starch hydrolysis. It may be that the early phase of starch digestion is important in some other way, or interacts in complex ways with these other brush border enzymes. Thus, far there is no evidence that there has been selection on the genes for these other enzymes in humans. We show that a-amylase has alternative potential roles in humans, but find that there is insufficient evidence to fully evaluate their adaptive significance at present. AMY1 and AMY2 are widely distributed across diverse life forms; their ancient origin and conservation suggest that they play crucial roles in organismal fitness, but these are not well described, especially among mammals. It could also be that higher AMY1 copies play no adaptive role, but have waxed and waned in copy number without strong selection favoring or disfavoring them. In this regard, Iskow and colleagues (2012) indicate that although several examples of CNV at coding regions show signals of positive selection, it remains unclear whether these examples represent a pattern for CNV in humans. 
Alternatively, it is suggested that most CNVs across the human genome may have evolved under neutrality due to the existence of “hotspots” for CNV (Cooper, Nickerson, & Eichler, 2007; Perry et al., 2006) and the fact that CNV in the genome of healthy individuals contain thousands of these variants with weak or no phenotypically significant effect (Cooper et al., 2007). More research is necessary to understand the evolutionary significance of high CN and CNV in AMY1 in human populations.”
Ketosis Without Starvation: the human advantage
I recently had the honour to speak at Low Carb Breckenridge 2018. You can find the video here. Below are my slides, notes, and references.
On a high carb diet, you might need to fast to attain an enlightened brain state. On a ketogenic diet, as a human, that doesn’t appear to be necessary.
The only disclosure I have to declare is that I have some generous supporters on Patreon for my writing. Thank you! The supported content is free, so these are donations.
The foundation of our biochemical understanding of ketosis came from experiments in fasted humans and other animals, for example the groundbreaking work of George Cahill. I recommend his publication Fuel Metabolism in Starvation ([Cah2006]), which reviews many of his findings.
We continue to learn about mechanisms for how ketosis may increase health in a variety of ways. However, these origins carry with them an implicit cautionary note, since starvation is generally not recommended, for obvious reasons. It’s not sustainable indefinitely. It’s stressful to the body. And it can do real harm, sometimes with lasting detrimental consequences. Even fasting for short periods is surrounded by controversy among experts at this very conference, because of its potential to do damage to lean mass, and all the potential problems of protein and calorie malnutrition.
If ketosis is like fasting, we had better use it carefully, judiciously, and sparingly.
Many researchers conceptualise metabolism as operating in two complementary phases. The act of eating or not eating sets off a cascade of hormonal and molecular signals that result in one phase or the other, sometimes called the fed and fasting states. In this paper [Mat2018] they are called the glucose and ketone phases.
Important things happen in both phases. The fed state is credited with generating and synthesising things like tissue, mitochondria, and neurons, but the fasted state is credited with clearing broken structures for renewal and repair, and with providing the stimuli to direct the synthesis phase.
Ketosis is normally indicative of the fasting state
Many believe that staying in either phase for prolonged periods leads to disease. And so you will hear people talk about metabolic switching, metabolic flexibility, insulin pulsatility, and so on.
Ketosis is normally an indication and a signal of the fasting state, so reason tells us that chronic long-term ketosis is unhealthy.
Further, it’s been shown in longevity research that many animals use signals of fed and fasting state to determine whether to reproduce, because it’s a time of plenty, or to slow aging and shut down reproductive ability until more favorable conditions arise.
So again, the comparison leads us to fear that ketosis may have benefits, but that it comes with a severe cost.
[Graphic from: Insulin Signaling in the Central Nervous System. Daniel Porte, Denis G. Baskin, Michael W. Schwartz. Diabetes May 2005, 54 (5)]
But hold on. It turns out that starvation is not the only condition where ketosis naturally arises. Fetuses use ketones in the womb [Sha1985], [Ada1975], [Cun2016]. The placenta is full of BOHB [Mun2016]. Some mammals, humans included, have “ketosis of suckling”. Breastfed infants are in mild ketosis [Per1966], [Kra1974], [Bou1986]. In fact humans of all ages easily attain ketosis without protein or calorie deprivation, so long as they aren’t eating carbohydrates.
This graph shows how quickly the concentration of BOHB goes up in humans when they stop eating. It’s inversely related to age.
One of the stunning things about it is the orders of magnitude involved. Look for example at the 6-8 year old children. If the 4-hour mark is about 0.1 – 0.2, then in a day, it’s increased by a factor of 20 or 40.
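The fold increase can be made explicit. A minimal sketch, using illustrative values read off the graph rather than measured data:

```python
# Illustrative values only: approximate BOHB concentrations (mmol/L)
# for 6-8 year old children, as read off the fasting graph.
baseline_low, baseline_high = 0.1, 0.2  # at the 4-hour mark
after_one_day = 4.0                     # roughly 24 hours without food

fold_min = after_one_day / baseline_high  # ~20-fold increase
fold_max = after_one_day / baseline_low   # ~40-fold increase
print(round(fold_min), round(fold_max))
```

The exact numbers matter less than the scale: a concentration change of one to two orders of magnitude within a single day.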
Newborns, who of course typically aren’t yet eating cereal, don’t start that low. Also notice that children don’t even need to miss a day of food to get above the 0.5 mmol level of ketosis, which has been considered the threshold of nutritional ketosis by Phinney and others. In his presentation here, he even said that benefits likely begin below that level.
But they don’t have to abstain from eating for ketosis to happen. For example, we have results in epileptic children. The previous standard had been a tightly protein-restricted ketogenic diet. We now know that most children don’t need that for seizure control. Eating a modified Atkins diet, which mostly just means they stay in the induction phase instead of adding back carbs, they are typically in ketosis, even though they eat ad libitum [Kos2013]. These are growing children and adolescents. Unlike the protein-restricted versions of ketogenic diets for epilepsy, which in some cases have impacted growth, when protein isn’t restricted, neither is growth [Nat2014].
Even adults have this ability. To know whether adults are able to stay in ketosis when protein needs are exceeded, we have to know what our protein needs are. It depends who you ask.
I don’t know of a study with the express purpose of finding the upper bound of protein for ketosis, but we can look at studies that recorded it. Notice that the figures in this chart use current weight, not ideal weight, and many of the studies are in overweight people, so the g/kg estimates look lower than they would if ideal weight were used.
I’d love to see this question approached systematically, but the survey does at least suggest that protein levels above our minimum needs based on positive nitrogen balance still support ketosis.
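To see why the current-weight figures understate intake, consider a sketch with hypothetical numbers (the intake and weights below are invented for illustration, not taken from any of the studies):

```python
# Hypothetical subject: same absolute protein intake, two denominators.
protein_g = 120.0          # grams of protein per day
current_weight_kg = 100.0  # an overweight subject's actual weight
ideal_weight_kg = 70.0     # an assumed ideal weight

per_kg_current = protein_g / current_weight_kg  # 1.2 g/kg
per_kg_ideal = protein_g / ideal_weight_kg      # ~1.71 g/kg
print(per_kg_current, per_kg_ideal)
```

The same absolute intake looks roughly 40% lower when divided by current weight, which is the direction of bias described above.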
When you compare adult humans with other species instead of with children, It’s even more impressive.
Dogs are in many ways similar to humans. Our digestive anatomy and physiology is very similar.
Dogs can reach ketosis from fasting, but it takes longer, and never attains the same level [Cra1941]. With adequate protein in the diet, it doesn’t happen to any significant degree at all [Rom1981], [Kro1973], [San2015]. I have spoken with staff at KetoPet Sanctuary, who treat dogs with cancer using ketosis. They tell me that it is challenging to keep dogs in ketosis. They have to use a combination of protein restriction, calorie restriction, and MCT oils. It takes constant monitoring and adjustment.
Rodents are often used in experimental conditions, and I do think they are very useful models, but it takes more protein or calorie restriction to achieve an appropriate degree of ketosis than it would with humans. The line between adequate protein and too much for ketosis is almost vanishingly small [Stephen Phinney Q&A Low Carb Cruise 2017], and the levels they achieve are again much less spectacular [Benjamin Bikman, personal communication]. Similarly, almost any level of dietary carbohydrate is enough to shut down ketosis [Richard David Feinman, personal communication]. Some researchers believe this has to do with their relatively small brains, since ketosis has been thought of as a way to spare glucose for the brain. But ketosis isn’t the only solution for that.
Obligate carnivores are always on very low carb diets, so you might think they are always in ketosis, but that’s not at all the case. In fact they are specialised at gluconeogenesis, that is, getting all their energy needs met by converting protein into glucose. Protein needs tend to be high.
Cats have much higher protein needs than omnivores, and surprisingly, they don’t adapt well to reduced protein or fasting [Cen2002]. They don’t seem to have good mechanisms to compensate for the various amino acid and vitamin deficiencies that develop, so they suffer from ammonia toxicity, methylation problems, and oxidative stress. They do produce ketones when fasted, but they don’t seem to use them in a productive way, and they actually accumulate fatty acids in the liver when fasted, the opposite of what humans do. Because they are still producing glucose, they become like human type 2 diabetics.
Dolphins are particularly interesting because they have really large brains, and they eat a diet that would be expected to be ketogenic if fed to humans. However, they don’t seem to generate ketones at all, not even when fasting. Instead, they ramp up gluconeogenesis [Rid2013].
They keep their bodies and their brains going by increased glucose.
When faced with this observation that humans use ketosis even when they don’t have to for glucose production, one obviously wonders how this happens from a mechanistic standpoint. I have never seen the question raised in the literature, let alone answered. If I were to take a guess, I’d say it probably happens somewhere in this process.
CPT1A is a kind of gatekeeper, transporting fatty acids into the mitochondria for oxidation. This is normally a necessary step in the creation of ketone bodies. The coenzyme malonyl-CoA inhibits CPT1A [Fos2004]. The functional reason it does that is because malonyl-CoA is a direct result of glucose oxidation and is on the path to de novo lipogenesis. It could be inefficient to be both generating fat and oxidizing it. So this is a convenient signal to slow entry of fat into the mitochondria.
However, its action is not strictly linear. It uses hysteresis. Hysteresis is a way of preventing thrashing back and forth between two states at the threshold of their switch. For example, if you set your thermostat to 20°C, you would not want the heater to turn on when the temperature drops to 19.999 and off again at 20. This would result in constant switching. Instead, a thermostat waits until the temperature drops a little lower before activating the heater, and heats a little past the set point before deactivating it.
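The thermostat behaviour described above can be sketched in code. This is just a minimal illustration of hysteresis as a control strategy; the 20°C set point comes from the example, and the 0.5° band is an arbitrary illustrative number, not a physiological value:

```python
class Thermostat:
    """A thermostat with hysteresis: separate on/off thresholds prevent
    rapid switching ("thrashing") around a single set point."""

    def __init__(self, set_point=20.0, band=0.5):
        self.on_below = set_point - band   # heater turns on only below this
        self.off_above = set_point + band  # ...and turns off only above this
        self.heating = False

    def update(self, temperature):
        # Between the two thresholds, keep doing whatever we were doing.
        if temperature < self.on_below:
            self.heating = True
        elif temperature > self.off_above:
            self.heating = False
        return self.heating
```

A naive thermostat switching exactly at 20° would toggle on every tiny fluctuation; here, readings between 19.5 and 20.5 leave the heater’s state unchanged. Analogously, once CPT1A is very active, small rises in malonyl-CoA do not immediately shut it down.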
Hysteresis is implemented in CPT1A by its becoming insensitive to malonyl-CoA when malonyl-CoA levels are low [Ont1980], [Bre1981], [Gra1988], [Gre2009], [Akk2009]. That means that once CPT1A becomes very active in transporting fatty acids, it takes time before the presence of malonyl-CoA will inhibit CPT1A at full strength again. In other words, fluctuations in glucose oxidation, or small, transient increases in glucose oxidation, don’t disturb the burning of fatty acids or the production of ketones.
It could be the case that humans develop more insensitivity to malonyl-CoA under ketosis than other species do, allowing them to metabolise more protein without disturbing ketosis. Among humans, this is the case in populations such as some Inuit with the Arctic variant of CPT1A. That mutation slows down CPT1A activity immensely. This was permitted by their diet, which was very high in polyunsaturated fats from sea mammals. Polyunsaturated fats upregulate fatty acid oxidation by a large proportion compared to saturated fats [Cun2002], [Fra2003], [Fue2004], so this mutation would not necessarily have been disruptive of ketosis in that population when eating their natural diet [Lem2012]. But a second effect of the same gene further decreases the sensitivity of CPT1A to inhibition by malonyl-CoA. That means they are less likely to be knocked out of ketosis by high protein intake. I will go into this in much greater detail in my upcoming talk at AHS18.
The second question that comes to mind is: what does this difference imply about our evolutionary environment? I would suggest that for humans to have developed the ability to stay in ketosis even with more than sufficient protein intake, we must at least have spent frequent long periods in a condition of very low carbohydrate, high fat availability, whether exogenous or endogenous, with more than adequate protein as a dietary norm.
Why do we stay in ketosis even when we have enough protein to feed the brain glucose without compromising lean mass? To put it another way: other animals continue to burn through lean mass, with or without ketosis, until they have enough protein to fuel everything with glucose. Why don’t we?
I suspect it has something to do with our brains. I’ll suggest a few hypotheses along these lines.
The next few slides summarise topics I’ve spoken and written about before. Please see
for more details and links about brain growth and our acquired reliance on meat during evolution.
Our brains are big. Primates are already big brained for mammals, and from that starting point our brains tripled in size over the course of a couple million years.
Brains take a lot of energy to run. To accommodate that, we made a trade.
Herbivores get most of their energy from fibre by fermenting it in the gut. But this isn’t very efficient, because intestines also take a lot of energy. So we switched to a strategy of eating fat directly, giving up colon size for brain size. To get enough fat directly, we had to eat meat.
[Graphic from: Milton, Katharine. “Nutritional Characteristics of Wild Primate Foods: Do the Diets of Our Closest Living Relatives Have Lessons for Us?” Nutrition 15, no. 6 (June 1999): 488–98. https://doi.org/10.1016/S0899-9007(99)00078-7. version enhanced with colour by http://roarofwolverine.com/archives/219 ]
Energy is one reason we might want to stay in ketosis.
Human brains use an extraordinary amount of energy: at least 20% of the body’s total in adults. Some 40 g/day of that has to come from glucose, because the brain houses some of the few types of cells that are glucose-bound. But the rest can be met by ketones.
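A rough back-of-the-envelope check makes the proportions concrete. This is only a sketch: the 2000 kcal/day intake is an assumed round number; the 20% share and 40 g of glucose come from the text above.

```python
# Rough brain energy budget on a ketogenic diet (illustrative numbers).
DAILY_INTAKE_KCAL = 2000     # assumed typical adult daily energy use
BRAIN_SHARE = 0.20           # brain uses at least ~20% of total energy
GLUCOSE_G_PER_DAY = 40       # obligate glucose requirement from the text
KCAL_PER_G_GLUCOSE = 4       # standard carbohydrate energy density

brain_kcal = DAILY_INTAKE_KCAL * BRAIN_SHARE            # 400 kcal/day
glucose_kcal = GLUCOSE_G_PER_DAY * KCAL_PER_G_GLUCOSE   # 160 kcal/day
ketone_share = 1 - glucose_kcal / brain_kcal            # fraction coverable by ketones

print(f"Brain budget: {brain_kcal:.0f} kcal/day")
print(f"Obligate glucose: {glucose_kcal:.0f} kcal/day "
      f"({glucose_kcal / brain_kcal:.0%} of the brain's budget)")
print(f"Potentially met by ketones: {ketone_share:.0%}")
```

On these assumptions, the obligate 40 g of glucose covers only about 40% of the brain’s budget, leaving roughly 60% that ketones can supply.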
Our brains use ketones preferentially when they are available. Though in the modern context, that’s not very often.
As if adult brains weren’t large and expensive enough to run, consider how much bigger the brain of a child is relative to the body.
This may explain why human babies are so fat. These graphs are from a paper exploring different hypotheses about baby fat [Kuz1998], one of them being to supply the brain energy in the form of ketones.
The one on the left shows % body fat at birth in different species. Newborn humans come in at 15% fat, and that actually gets higher in the first several months of life, peaking at about 25%. The only other primate in that graph is the baboon, whose infants come in at 4%.
The one on the right shows what percent of the oxygen metabolised by the whole body goes to the brain: humans at birth, 60%; human adults, 20%; the adult chimpanzee comes in at about 9%.
Another consideration is building materials, since our brains are made mostly of fat and cholesterol and we know that ketones are used to synthesize those in situ. The diagram here [Cot2013] shows pathways of how ketones can be generated, oxidized, or used to make fat and cholesterol.
Fetuses and newborns use ketone bodies extensively, as I mentioned previously. But the point here is that it’s not just because they’re using them for fuel. They are also a source of structural components.
In light of that, it seems a reasonable hypothesis that ketogenic capacity in humans is so pronounced in childhood because the brain is developing, and ketones are for some reason the preferred material.
Other species tend to wean at the time when brain growth stops. That means that for them ketogenesis stops at the same time brain growth stops. In humans brain growth doesn’t stop at weaning [Ken2005], [Mar1982], [Dob1973], [Dek1978]. Even after it reaches about full size in adolescence, it continues to change structurally well into adulthood.
However, quantitatively, this structural cost is very small compared to energy considerations [Kuz1998], and so that hypothesis seems relatively weak on its own.
Another set of ideas comes from the metabolic effects we see in the lab and clinic. Some of the strongest, most consistent therapeutic effects we’ve seen from ketogenic diets take place in the brain.
These are just a few metabolic changes relative to a high carb diet. Each can have profound effects on the workings of the brain.
I do want to draw attention to the last one, about availability of arachidonic acid and DHA. These are important for the brain, as they make up its phospholipids, and they are subject to a lot of turnover.
Each of these effects has been proposed as a solution to the mystery of why a ketogenic diet treats epilepsy so effectively [Bou2007], [Nyl2009], [Mas2012], [deL2014].
But it’s not just epilepsy that ketosis is good for. Epilepsy is just the condition with the most research, and the widest acknowledgment.
Other conditions for which at least some evidence supports improvement via a ketogenic diet include neurological disabilities in cognition and motor control [Sta2012]; the benefit here may have to do with the proper maintenance of brain structures such as myelination (Recall phases: tear down damage, rebuild)
Survival after brain damage, such as the hypoxia of stroke or blows to the head, is improved in animal models [Sta2012]. There is even animal evidence that brain damage due to nerve gas is largely mitigated by being in a state of ketosis during the insult [Lan2011]. Again, this suggests a structural support and resilience provided by a ketogenic metabolism. Resilience comes in part from not being as susceptible to damage in the first place, and that could come from reduced oxidative stress when using ketones for fuel.
Ketogenic diets as a treatment for cancer are controversial, but some of the best evidence in support of it comes from glioblastomas. See e.g. [Zuc2010], [Sch2012]. This could be due mostly to the hypoglycemia stalling the rate of tumour development.
And to venture into an area less well studied, but of critical importance given the epidemic that would be more apparent were it less taboo, there is preliminary evidence in the form of case studies that ketogenic diets may be promising treatments for many psychiatric illnesses too; see, for example, [Kra2009], [Phe2012]. Given that anticonvulsants are also used to treat bipolar disorder, and the solid results of ketogenic diets on epilepsy, this may not be surprising. Additionally, the enhanced availability of AA and DHA may play a crucial role, because these fatty acids are critical for the brain, and dysregulation in their flux has been associated with bipolar disorder and schizophrenia. See e.g. [McN2008] and [Pee1996].
I would almost like to call a ketogenic diet a brain-growth mimicking diet.
The question of how and why humans are so ketosis prone may lead to interesting new insights about us as a species. We seem to avoid giving up ketosis as long as possible, only halting it when we take in so much glucose exogenously that we have to store it.
It seems likely that it facilitated the evolution of our brains, that organ that makes us so different from other animals that we sometimes forget we are animals.
Returning to the importance of metabolic switching between glucose and ketone mode, there seems to be a false dichotomy. There is a stage that doesn’t usually come up in discussions of fed and fasted: the “postabsorptive” phase.
The absorptive phase on a high carb diet lasts about 4 hours; that’s how long it takes to clear away the exogenous glucose. Only after that can you start the postabsorptive phase, marked by using glycogen as your source of blood sugar. Other than overnight, SAD dieters typically don’t go more than 4 hours without eating, and so we don’t get very far into it.
But if you are on a protein- and calorie-sufficient very low carb diet, then even after eating, your glycogen stores don’t get that full in the first place. I don’t know how long it takes to get from the meal to maximum glycogen storage, but essentially, we should expect to reach a SAD dieter’s postabsorptive state almost immediately after a meal, and to get easily into the ketogenic zone every day.
You can accentuate this by demanding more energy between meals (exercise) or eating less frequently, for example only once or twice a day. Interestingly, this often naturally happens to ketogenic dieters.
On a high carb diet, you might need to fast to attain an enlightened brain state. On a ketogenic diet, as a human, that doesn’t appear to be necessary.
In the interest of time, I did not do my usual practice of end-to-end citations. I will probably return to fix that later!