I'm researching the effect of adherence to the Mediterranean diet on levels of inflammation in the body (C-reactive protein & a couple of other markers). I want to stratify participants over levels of exercise (which I believe is an effect modifier) and need to come up with a biological/physiological mechanism whereby the Mediterranean diet would have a different effect on CRP in people who exercise versus those who don't. I'd really appreciate any pointers here.
Autophagy: The Real Way to Cleanse Your Body
Forget juice cleanses and detox diets. While there’s probably nothing wrong with drinking your weight in liquid kale, it won’t flush out toxins any faster than if you were eating, you know, actual food.
The good news: There’s a little-known way your body does cleanse itself, and it’s a process you can optimize.
All you need to do is practice a little self-cannibalism.
Yes, you can actually train your body to eat itself — and, believe it or not, you want it to.
It’s a natural process called autophagy (the word literally means “self-eating”).
It’s one way your body cleans house. In this process, your cells create membranes that hunt down scraps of dead, diseased, or worn-out cells, gobble them up, strip ’em for parts, and use the resulting molecules for energy or to make new cell parts. Glick D, et al. (2012). Autophagy: cellular and molecular mechanisms. DOI: 10.1002/path.2697
“Think of it as our body’s innate recycling program,” says Colin Champ, MD, an assistant professor at the University of Pittsburgh Medical Center.
Champ is also the author of “Misguided Medicine,” a book that questions many traditional health recommendations and provides evidence-based advice on diet and lifestyle.
There’s some evidence to suggest that autophagy (“ah-TAH-fah-gee”) plays a role in controlling inflammation and boosting immunity, among other benefits. In one 2012 study on mice, researchers found that autophagy protected against: He C, et al. (2012). Exercise-induced BCL2-regulated autophagy is required for muscle glucose homeostasis. DOI: 10.1038/nature10758
- neurodegenerative disorders
- inflammatory diseases
- insulin resistance
Another study from that year showed how a lack of autophagy can be harmful. Researchers found that removing the autophagy gene in mice caused weight gain, lethargy, higher cholesterol, and impaired brain function. Coupé B, et al. (2012). Loss of autophagy in pro-opiomelanocortin neurons perturbs axon growth and causes metabolic dysregulation. DOI: 10.1016/j.cmet.2011.12.016
“Autophagy makes us more efficient machines to get rid of faulty parts, stop cancerous growths, and stop metabolic dysfunction like obesity and diabetes,” Champ says.
“So how do I eat myself?” is a question you probably have never asked, but we’re about to tell you how. Autophagy is a response to stress, so you’re going to want to put your body through some hardship to drum up a little extra self-cannibalism.
(We know this article keeps getting weirder, but trust us.)
As is often the case, short-term discomfort can bring long-term benefits.
“It’s our ancestral and evolutionary response to dealing with feast and famine in times of stress,” Champ says. “Since a lot of these things would kill us, like starvation and exercise, it only makes sense that after millions of years we adapted those mechanisms to make them positive.”
Here are the three main ways to boost autophagy in your body.
There’s a great way to activate autophagy without forgoing your favorite rib eye — though you’ll probably need to quit candy.
It’s called ketosis. The idea is to reduce carbohydrates to such low levels that the body has no choice but to use fat as a fuel source. This is the magic behind the wildly popular ketogenic diet.
Keto diets are high in fat and low in carbs (steak, bacon, and peanut butter shakes are a bonus for the keto crowd). Between 60 and 70 percent of your overall calories come from fat.
Protein makes up 20 to 30 percent of calories, while only 5 percent comes from carbs.
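If you want to turn those percentages into gram targets, the math is simple. Here's a rough sketch, assuming a hypothetical 2,000-calorie day and the standard conversion factors (9 calories per gram of fat, 4 per gram of protein or carbs); the function and the exact split are illustrative, not an official formula:

```python
# Rough keto macro targets for a hypothetical 2,000-kcal day.
# Conversion factors: fat = 9 kcal/g; protein and carbs = 4 kcal/g.

def keto_grams(total_kcal, fat_pct=0.65, protein_pct=0.30, carb_pct=0.05):
    """Convert a daily calorie budget into approximate gram targets."""
    fat_g = total_kcal * fat_pct / 9
    protein_g = total_kcal * protein_pct / 4
    carb_g = total_kcal * carb_pct / 4
    return round(fat_g), round(protein_g), round(carb_g)

fat, protein, carbs = keto_grams(2000)
print(fat, protein, carbs)  # 144 150 25
```

So a 2,000-calorie keto day works out to roughly 144 grams of fat, 150 grams of protein, and only about 25 grams of carbs.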
Being in ketosis can help people lose body fat while retaining muscle. There’s some evidence that it also may help the body fight cancerous tumors, lower the risk of diabetes, and protect against brain disorders, particularly epilepsy. Paoli A, et al. (2013). Beyond weight loss: A review of the therapeutic uses of very-low-carbohydrate (ketogenic) diets. DOI: 10.1038/ejcn.2013.116
In fact, in a 2018 study, rats fed a keto diet had less brain damage during seizures. Wang B-H, et al. (2018). Ketogenic diet attenuates neuronal injury via autophagy and mitochondrial pathways in pentylenetetrazol-kindled seizures. DOI: 10.1016/j.brainres.2017.10.009
“Ketosis is like an autophagy hack,” Champ says. “You get a lot of the same metabolic changes and benefits of fasting without actually fasting.”
If staying in ketosis sounds too hard, take heart. A 2012 study noted similar benefits in people who followed a diet in which no more than 30 percent of their overall calories came from carbs, Champ says. Draznin B, et al. (2012). Effect of dietary macronutrient composition on AMPK and SIRT1 expression and activity in human skeletal muscle. DOI: 10.1055/s-0032-1312656
Note: Anyone with health issues, especially kidney or liver problems, should talk to a doctor before beginning a keto diet.
Skipping meals is another stressful act that the body may not immediately love but ultimately benefits from. Research has shown there are loads of positives to an occasional fast.
One research review found that intermittent fasting and autophagy can make cancer treatments more effective while protecting normal cells and reducing side effects. Antunes F, et al. (2018). Autophagy and intermittent fasting: The connection for cancer therapy? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6257056/
In another mouse study, intermittent fasting was shown to improve cognitive function, brain structure, and neuroplasticity, which is fancy-speak for the brain’s ability to reorganize and rebuild itself. Li L, et al. (2013). Chronic intermittent fasting improves cognitive functions and brain structures in mice. DOI: 10.1371/journal.pone.0066069
That said, it wasn’t totally clear if autophagy was the cause. Plus, the study was done on mice. You may have heard about a certain Twitter account that has a problem with people talking big about mouse studies.
In the meantime, give fasting a shot. While Champ fasts for 18 hours per day a couple of times per week, he knows that can be a tough routine for most of us.
Different variations of intermittent fasting seem to show pretty awesome health benefits. A review of the research concluded that it may have an array of positive effects, ranging from a healthier body weight and lower risk of diseases to an increased lifespan. Stockman M-C, et al. (2018). Intermittent fasting: Is the wait worth the weight? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5959807/
Keep in mind that fasting is generally not recommended for children, for some people with diabetes or other issues with blood sugar, or for pregnant women.
In case the sweating, grunting, and post-workout pain didn’t tip you off, here’s the deal: Exercise puts stress on your body.
Working out actually damages your muscles, causing microscopic tears that your body then rushes to heal. This makes your muscles stronger and more resistant to any further “damage” you might cause them.
Regular exercise is the most popular way people unintentionally help their bodies cleanse themselves. (So there’s actually something to that fresh, renewed feeling you get after working out.)
A 2012 study looked at autophagosomes, structures that form around pieces of cells the body has decided to recycle. After engineering mice to have glowing green autophagosomes (as one does), scientists found something interesting. He C, et al. (2012). Exercise-induced BCL2-regulated autophagy is required for muscle glucose homeostasis. DOI: 10.1038/nature10758
The rate at which the mice were healthily demolishing their own cells drastically increased after they ran for 30 minutes on a treadmill. The rate continued to increase until the little guys had been running for 80 minutes.
It’s hard to figure out the amount of exercise required to switch on the autophagy boost.
“[These] are hard questions to answer at the moment,” says Daniel Klionsky, PhD, a cellular biologist at the University of Michigan who specializes in autophagy. “Clearly exercise has many benefits, aside from the possible role of autophagy.”
Not yet. But there’s a lot of money to be made if researchers can distill the benefits of autophagy into a pill, so you can be sure they’re trying.
“Of course people are looking for ways to induce autophagy through chemicals, because it would be easier than dieting,” Klionsky says, but he warns that we’re a long way off.
Champ notes that anti-epileptic drugs that mimic ketosis already exist.
In 2018, for instance, the FDA approved stiripentol, which can imitate the effects of a ketogenic diet. It’s used for the treatment of seizures associated with Dravet syndrome, a rare form of epilepsy.
Still, don’t get your hopes up. “There are so many metabolic changes that take place during ketosis that mimicking all of them with a pill might not be possible,” Champ says. “The bodily stress that comes with entering ketosis might be necessary for the benefits.”
Just remember: You don’t have to stay in ketosis, fast, or exercise intensely all day, every day to experience these benefits. Even a few hours here and there can help.
Klionsky notes that there’s still a lot we don’t know about autophagy, and it’s too early to definitively say that the process will cure cancer, make you a genius, or stave off aging.
“One fundamental problem is that it is still difficult to monitor autophagy in a living organism, especially a human,” Klionsky says. Still, there’s a pretty strong case to be made that some stress on the body is a good thing.
The takeaway? Occasional carbohydrate restriction, fasting, and regular exercise all carry mountains of benefits in addition to their impact on autophagy. The worst that could happen is a stronger, leaner, and cleaner body.
One more thing: Drink plenty of nature’s own best liquid cleanser — pure, clean water.
What is the dawn effect?
The dawn effect refers to an unexpected increase in fasting blood sugar, usually upon waking. Doctors first noted it in patients with type 1 diabetes in the 1980s. They defined the dawn effect as rising blood sugar without the usual compensatory rise in insulin. 1
As morning approaches, the body naturally increases glucose production. However, the insulin the patients took the night before was insufficient to control the glucose rise. The mismatch led to an increase in blood glucose.
Researchers determined that the early morning glucose rise was caused by an increase in the so-called “counterregulatory hormones” cortisol, epinephrine, and norepinephrine. 2 They are called counterregulatory hormones primarily because they “counter” the effects of insulin.
These counterregulatory hormones stimulate the liver to release glucose into the bloodstream. If an individual has a normal insulin response, their insulin level rises to keep their blood glucose level stable.
For individuals experiencing the dawn effect, the extra glucose circulates until it is taken up by the cells and used for energy. You can think of this as the body preparing itself for the increased energy demands needed to wake up and ensure enough glucose is ready for use as a person becomes active each morning.
Studies in people without diabetes show the body increases insulin secretion between 4 am and 8 am. This extra boost of insulin acts to counteract the rise in blood glucose caused by the counterregulatory hormones.
Therefore, for decades, the dawn effect was assumed to be a problem only for those with type 1 or type 2 diabetes. But now, that assumption may be changing.
How much protein are most people eating now?
Because “high” is a relative term, discussions about whether you should eat a “high” protein diet are based upon a reference point. The typical reference point is the Recommended Dietary Allowance (RDA) for daily protein, which is set at 0.8 grams per kilogram of body weight. 6 For an average 154-pound person (70 kilos), that equates to 56 grams of protein per day — about six ounces of steak. For women, the RDA is even less, around 46 grams.
However, the RDA recommendation addresses the minimum amount required to prevent protein deficiency. The minimum to prevent protein deficiency is not the same as the recommended amount to improve health — a distinction that many people misunderstand. 7
American men average 88 grams of protein per day, and women average 66 grams, which equates to only 14 to 16% of total calories. 8 Of those protein calories, approximately 30% come from plant sources. 9
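These figures are easy to sanity-check. Below is a minimal sketch, assuming the standard 4 calories per gram of protein and an illustrative 2,400-kcal daily intake for men; the helper names are ours, for illustration:

```python
# Sanity-check the protein figures quoted above.
# Assumptions: 4 kcal per gram of protein; 2,400 kcal/day is illustrative.

def rda_protein_g(weight_kg, rda_g_per_kg=0.8):
    """RDA protein in grams per day for a given body weight."""
    return weight_kg * rda_g_per_kg

def protein_pct_of_calories(protein_g, total_kcal):
    """Share of total calories supplied by protein (4 kcal/g)."""
    return protein_g * 4 / total_kcal * 100

print(round(rda_protein_g(70), 1))               # 56.0 g/day for a 70-kg person
print(round(protein_pct_of_calories(88, 2400)))  # ~15% of calories for men
```

At 88 grams per day against a 2,400-kcal intake, protein lands right in the quoted 14 to 16% range.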
The main take-home message is that most people are likely not eating enough protein for weight loss, metabolic health, and improving lean muscle mass.
Biological Influences on NEAT
At the simplest level, there is ample evidence that components of NEAT have mechanistic drives. For example, picture the schoolgirl shivering while waiting for the bus. Cancer cachexia patients and starving individuals with food intakes of <500 kcal (64) have very low physical activity levels, whereas patients with hyperthyroidism are tremulous and easily startled (76, 91). What is more challenging is to derive the role of NEAT in human energy balance and to understand whether and how the central integration of this process occurs.
Biological factors and the thermic efficiency of physical activities.
A determinant of NEAT is the energetic efficiency with which nonexercise activities are performed (Fig. 4) (67). It is recognized that even trivial movement is associated with substantial deviation in energy expenditure above resting values. For example, mastication is associated with deviations in energy expenditure of 20% above resting (62). Very low levels of physical activity, such as fidgeting, can increase energy expenditure above resting levels by 20–40% (67). It is not surprising, then, that ambulation, whereby body weight is supported and translocated, is associated with substantial excursions in energy expenditure (40). Even ambling or browsing in a store (walking at 1 mph) doubles energy expenditure, and purposeful walking (2–3 mph) is associated with doubling or tripling of energy expenditure. When body translocation was logged with a triaxial accelerometer, the output from this unit correlated with nonresting energy expenditure (11). This implies that ambulation may be a key component of NEAT. The energy costs of a multitude of occupational and nonoccupational physical activities have been charted and tabulated (1, 2). What is noteworthy is the manifold variance in the energy costs of occupation-dictated activities, ranging from <1 multiple of resting energy expenditure (MET), such as typing, to 5–10 MET, such as wood cutting, harvesting, or physical construction work.
Fig. 4. Energy expenditure of varied low-level activities.
Several factors affect the energetic efficiency of physical activity.
BODY WEIGHT AND THERMIC EFFICIENCY.
It requires more energy to move a larger body than a smaller one. Several investigators have demonstrated that the energy expended during weight-bearing physical activity increases with increasing body weight (12, 79). It is less clear whether work efficiency varies with body composition, independent of body weight. Some studies (12, 38, 82, 105) have found no differences in weight-corrected work efficiency between obese and nonobese subjects, whereas others (23, 73) have found a greater work efficiency in the obese.
EFFECTS OF CHANGES IN BODY WEIGHT ON THERMIC EFFICIENCY.
There is controversy as to whether work efficiency changes with weight loss. Several studies have reported that energetic efficiency is reduced after weight reduction. Foster et al. (31) measured the energy cost of walking in 11 obese women before weight loss and at 9 and 22 wk after weight loss. They determined that the energy cost of walking (after control for loss of body weight) decreased substantially by 22 wk after weight loss. They estimated that with a 20% loss of body weight, subjects would expend about 427 kJ/h less during walking than before weight loss. Geissler et al. (34) compared energy expenditure during different physical activities and found that it was ∼15% lower in the postobese than in control subjects. DeBoer et al. (22) found that sleeping metabolic rate declined appropriately for the decline in fat-free mass when obese subjects lost weight, but that total energy expenditure declined more than expected for the change in fat-free mass. Similar results were obtained by Leibel et al. (60), who speculated that increased work efficiency may be partially responsible for weight regain after weight loss.
Alternatively, Froidevaux (33) measured the energy cost of walking in 10 moderately obese women before and after weight loss and during refeeding. Total energy expended during treadmill walking declined with weight loss but was entirely explained by the decline in body mass. Net efficiency of walking did not change. Poole and Henson (82) also found no change in efficiency of cycling after caloric restriction in moderately obese women. Weigle and Brunzell (101) demonstrated that ∼50% of the decline in energy expenditure with weight loss was eliminated when they replaced weight lost by energy restriction with external weight worn in a specially constructed vest.
Thus, although it is clear that total energy expenditure declines with weight loss, the extent to which changes in work efficiency contribute to this decline is controversial. It is an important question, because if work efficiency truly changes, it implies that a mechanism may exist to define the work efficiency of NEAT activities and impact energy balance.
ROLE OF SKELETAL MUSCLE METABOLISM IN DETERMINING WORK EFFICIENCY.
Differences in skeletal muscle morphology and/or metabolism may play a role in differences in work efficiency. Henriksson (41) suggested that changes in muscle morphology in response to energy restriction lead to changes in the relative proportion of type I vs. type II fibers in human subjects. Some studies suggest that type II fibers have a greater fuel economy than type I fibers (19, 103). Because type II fibers appear to be better preserved during starvation than type I fibers (41), overall fuel economy and work efficiency may increase after energy restriction and loss of body mass. However, a recent study on muscle fiber type before and after an 11-kg weight loss in obese females did not show any changes in the fiber type distribution (97).
The potential contribution of skeletal muscle differences to differences in work efficiency between weight-stable lean and obese subjects is more controversial. Data suggest that obese subjects oxidize proportionally more carbohydrate and less fat than lean subjects in response to perturbations in energy balance (94, 109, 110), and that differences in morphology/metabolism of skeletal muscle and sympathetic nervous system activity (6) may underlie some of the whole body differences (16, 110). However, it is not clear to what extent such differences contribute to differences in work efficiency. Furthermore, such differences may arise from genetic and environmental causes.
GENETIC CONTRIBUTIONS TO WORK EFFICIENCY.
Very little information is available to allow estimation of the genetic contribution to differences in work efficiency. When the energy costs associated with common body postures (sitting, standing) and low-intensity activities (walking, stair climbing, and the like) were measured in 22 pairs of dizygotic and 31 pairs of monozygotic sedentary twins, there was a genetic effect for energy expenditure for low-intensity activities (from 50 to 150 W), even after correction for differences in body weight (10). No genetic effect was seen for activities requiring energy expenditure greater than six METs. These observations hint at an intriguing possibility, namely, that the efficiency of NEAT activities may be genetically programmed.
AGE AND WORK EFFICIENCY.
Work efficiency for NEAT activities may vary with age. For example, children are ∼10% more energy efficient during squatting exercises than adults (98). However, there is little information available to evaluate the effects of aging on work efficiency. Skeletal muscle mass is often lost as a subject ages, and if the loss involves a greater proportion of type I vs. type II fibers, work efficiency could increase with age.
EXERCISE TRAINING AND WORK EFFICIENCY.
If the work efficiency of NEAT activities varies as a function of muscle morphology, exercise-induced effects in skeletal muscle could be important for NEAT. Alterations in exercise can alter the fiber type proportions of skeletal muscle as well as induce changes in enzyme activities. Aerobic exercise training results primarily in the transformation of type IIb into type IIa fibers, whereas transformation of type II fibers into type I fibers is not common unless the exercise training has been extremely intense over a long period of time. Type I fibers have a greater mitochondrial density and are more oxidative and more fatigue resistant than type IIb fibers. Type IIb fibers are glycolytic in nature, with lower mitochondrial content, and are more prone to fatigue. Type IIa fibers are intermediate in their mitochondrial content and, in humans, closely resemble type I fibers in oxidative capacity. However, an overlap of oxidative capacity exists between fiber types. Type I and type IIa fibers are more energy efficient than type IIb fibers, and the proportions of these fiber types will vary according to the type of exercise training performed. It has been shown that, even independent of fiber type alterations, the activities of important enzymes in oxidative and glycolytic pathways can be modified as a result of exercise training and can lead to improvements in metabolic efficiency. Training may increase work efficiency whereby elite runners and cyclists average lower energy expenditures (15% for running and ≤50% for swimming) at specified velocities compared with untrained individuals (45, 46, 89). Here, the concept is introduced whereby exercise directly affects NEAT through change in work efficiency.
GENDER AND WORK EFFICIENCY.
There are several reports that female athletes, unlike male athletes, are more energy efficient than their sedentary counterparts: increased energy efficiency has been reported in female runners (77), dancers (20), and swimmers (50) compared with sedentary females. Most reports base their conclusions about energetic efficiency on indirect rather than direct measurements of energy intake and/or expenditure. For example, Mulligan and Butterfield (77) concluded that female runners had increased energy efficiency because their self-reported energy intake was less than their estimated energy expenditure. However, in the few studies in which both intake and expenditure were measured directly, no evidence of increased energy efficiency was seen in female runners (88) or cyclists (47). Thus the question of whether female athletes show a different energy efficiency than sedentary females is controversial. Whether there are inherent gender differences in the efficiency of nonexercise activities is open to speculation but could readily be studied.
Several independent lines of evidence point toward a genetic role for NEAT (8). Animal data demonstrate that NEAT clusters for murine strain (4). On the basis of twin and family studies, the heritability for physical activity level is estimated to be between 29 and 62%. Analysis of self-reports of physical activity from the Finnish Twin Registry, consisting of 1,537 monozygotic and 3,057 dizygotic twins, estimated a 62% heritability level for age-adjusted physical activity (51). Analyses of self-reported physical activity from the Quebec Family Study, consisting of 1,610 members of 375 families, showed a heritability level of 29% for habitual physical activity (81). When 12 pairs of twins were overfed by an estimated 84,000 kcal over a period of 100 days, weight and fat gain clustered for “twinness”; this could only have been through concordant changes in energy expenditure. This in turn was likely to be through NEAT, because changes in BMR and TEF could not account for the twin-associated relationship (9). Nonetheless, it is intriguing to speculate that genetics directly impact NEAT. Perhaps the twin of a laborer is predisposed to becoming a lumberjack rather than an office worker.
Studies have consistently shown a decline in physical activity with aging in men and women (5, 14, 108). Some data suggest that the “aging-gap” is closing. During the period from 1986 to 1990, activity levels increased more in elderly subjects than in young adults (108). One wonders whether the sarcopenia of aging contributes to NEAT.
We overfed 16 healthy subjects by 1,000 kcal/day for 8 wk; four of these subjects were women. There was a 10-fold variance in fat gain (0.4–4.2 kg). The four persons who gained the most fat were women [3.4 ± 0.7 (SD) kg] compared with fat gain for men (2.1 ± 1.1 kg). Women did not increase their NEAT with overfeeding on average (ΔNEAT = −2.1 ± 102 kcal/day), whereas men did, by 438 ± 184 kcal/day. It would be intriguing if women modulate NEAT differently from men. Could this be a means to preserve fat stores with increased workloads? Could this have impacted the allocation of work tasks in labor-intensive environments?
There are substantial data to suggest that overweight individuals show lower NEAT levels than their lean counterparts (78, 95). This appears to be true across all ages, for both genders, and for all ethnic groups. It is not possible to ascertain whether effects of body composition on nonexercise activities occur independently of weight.
What is fascinating to speculate is that a person with a high “programmed NEAT” might select a more active job (e.g., car washing or ambulatory mail delivery) than a person with a lower biological drive for NEAT (e.g., civil service). The mechanism of the volition for selection of occupation has not been defined.
Total NEAT in energy homeostasis.
There is evidence that NEAT is important in human energy homeostasis. NEAT is the key predictor of non-BMR energy expenditure, and BMR is largely predicted by body size or lean body mass. NEAT then becomes the crucial component of energy expenditure that is most variable and least predictable. Consider the energy expenditure of a person who works as a road layer but then becomes a secretary. For this example, it is self-evident that variations in NEAT can result in severalfold variations in total energy expenditure independent of body size. What is not self-evident is whether changes in NEAT contribute to the mechanism by which adipose tissue accumulates.
Further insight into total NEAT comes from Westerterp's observation (104) that, in free-living individuals, the cumulative impact of low-intensity activities over greater duration is of greater energetic impact than short bursts of high-intensity physical activities. Thus, for a given individual, his/her NEAT is defined by the total energetic cost of occupational plus nonoccupational activities, which in turn are influenced by the sedentary conditions of the individual's microenvironment (e.g., workplace) and macroenvironment (e.g., country).
Changes in NEAT with positive energy balance.
Several studies have employed an overfeeding paradigm to determine whether energy expenditure changes during forcible overfeeding. On balance, these studies have demonstrated that, as overfeeding occurs, NEAT increases (87). In one such study, twelve pairs of twins were overfed by 1,000 kcal/day above estimated resting needs. There was a fourfold variation in weight gain, which had to represent substantial variance in energy expenditure modulation, because food intake was clamped. This variance in energy expenditure response could not be accounted for by changes in resting energy expenditure alone, and so, indirectly, NEAT is implicated. What was also fascinating was that twinness accounted for some of the interindividual variance in weight gain, suggesting that the NEAT response to overfeeding is in part genetically determined.
NEAT was directly implicated in the physiology of weight gain when 16 sedentary, lean individuals were carefully overfed by 1,000 kcal/day (63). All components of energy expenditure and body composition were carefully determined. There was a 10-fold variation in fat gain and an 8-fold variation in changes in NEAT. Those individuals who increased their NEAT the most gained the least fat with overfeeding, and those individuals who failed to increase their NEAT with overfeeding gained the most fat (Fig. 5) (63). Studies are too sparse to define how changes in the amount of nonexercise activity interplay with changes in energy efficiency; the bulk of evidence suggests that increases in the amount of physical activity predominate. Changes in BMR and TEF were not predictive of changes in fat gain. These data strongly imply that NEAT may counterbalance fat gain with positive energy balance, when appetite is clamped.
Fig. 5. Changes in nonexercise activity thermogenesis (NEAT) with overfeeding. Healthy subjects (n = 16) were overfed by 1,000 kcal/day over baseline energy needs. Fat gain, on the x-axis, was determined from dual X-ray absorptiometry. Change in NEAT was calculated from NEAT values measured before and after overfeeding: NEAT = TDEE − (BMR + TEF), where TDEE is total daily energy expenditure, BMR is basal metabolic rate, and TEF is the thermic effect of food.
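The definition in the legend can be expressed directly: NEAT is the residual of total daily energy expenditure after basal metabolism and the thermic effect of food are subtracted. The following minimal sketch computes it that way; the kcal/day values are illustrative, not the study's measurements:

```python
# NEAT = TDEE - (BMR + TEF), per the definition in the figure legend.
# All kcal/day values below are illustrative, not data from the study.

def neat(tdee_kcal, bmr_kcal, tef_kcal):
    """Nonexercise activity thermogenesis as the residual of TDEE."""
    return tdee_kcal - (bmr_kcal + tef_kcal)

# Hypothetical subject before and after overfeeding:
before = neat(tdee_kcal=2700, bmr_kcal=1600, tef_kcal=270)
after = neat(tdee_kcal=3500, bmr_kcal=1650, tef_kcal=370)
print(before, after, after - before)  # 830 1480 650
```

A change in NEAT of this size (several hundred kcal/day) is within the range observed for the high responders in the overfeeding study.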
Changes of NEAT with negative energy balance.
With underfeeding, physical activity and NEAT decrease. Chronic starvation is known to be associated with decreased physical activity (49, 55, 70). Whether those individuals who are susceptible to ready fat loss are those who fail to decrease NEAT with underfeeding has not been established. However, let us argue that, with a prolonged energy deficit of 500–600 kcal/day, BMR decreases by ∼10% (i.e., ∼200 kcal/day). This assumes a sustained decrease in lean body mass (LBM) that may not actually occur (13, 15, 25, 30, 32, 37, 75, 99), and TEF decreases by ∼0–50 kcal/day (26, 102). Hence, NEAT has to decrease by ∼200–300 kcal/day once fat loss reaches a plateau. In one study with severe energy reduction (800 kcal/day) (60), decreases in NEAT likely accounted for 33% of the decrease in total daily energy expenditure (TDEE) in lean subjects, 46% in obese subjects with 10% weight loss, and 51% in obese subjects with 20% weight loss (60). If NEAT decreases with negative energy balance, is it because the quantity of physical activity decreases, because there are decreases in energetic efficiency, or both? Studies to date have not definitively answered this question. With severe energy reduction (420 kcal/day) in obesity, maximal O2 consumption (V̇O2 max) and energy expenditure at submaximal loads may decrease (54); however, with less severe energy restriction, V̇O2 max appears unchanged (52). Thus the balance of information suggests that NEAT decreases with negative energy balance. It is unclear whether the effect is through decreased amounts of activity, altered energetic efficiency, or both.
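The bookkeeping in the argument above can be made explicit: once fat loss plateaus, expenditure must fall by the full deficit, and whatever BMR and TEF do not absorb must come from NEAT. A minimal sketch using the review's round numbers (the function name is ours, for illustration):

```python
# Energy-balance bookkeeping at the weight-loss plateau.
# Numbers are the review's approximations, not measurements.

def required_neat_drop(deficit_kcal, bmr_drop_kcal, tef_drop_kcal):
    """Residual decrease NEAT must supply once expenditure re-equilibrates."""
    return deficit_kcal - bmr_drop_kcal - tef_drop_kcal

# A 500 kcal/day deficit, ~200 kcal/day off BMR, ~50 kcal/day off TEF:
print(required_neat_drop(500, 200, 50))  # 250 kcal/day
```

That residual of ∼250 kcal/day falls within the ∼200–300 kcal/day decrease in NEAT estimated above.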
Overall, there are a multitude of biological effectors of NEAT. It appears that with weight gain, NEAT increases, and with weight loss, NEAT decreases. This creates an intriguing scenario whereby NEAT might act to counterbalance shifts in energy balance. It could be that these changes in NEAT, along with those that affect BMR and TEF, are small and swamped by changes in energy intake. However, consider that some subjects who were overfed by 1,000 kcal/day increased NEAT by >600 kcal/day. This argues that changes in energy expenditure and NEAT may be quantitatively important in the physiology of body weight regulation. Let us now consider the mechanism by which NEAT is modulated.
CORONARY ARTERY DISEASE
Atherosclerotic disease is the leading cause of mortality in developed countries, with CAD being the number one killer of both men and women. In fact, every year since 1919, cardiovascular diseases have ranked as the no. 1 killer in the United States. In 2001, cardiovascular diseases accounted for ∼39% of all deaths (931,108 deaths) (9, 15). Despite estimates that death rates from cardiovascular diseases declined 17% from 1990 to 2000, secondary to improvements in disease diagnostics, surgical procedures, and drug therapy, the actual number of deaths increased 2.5% during this period (390).
The association between lifestyle, diet, and CAD has been investigated since the early 1900s. In the latter half of the 20th century, with feeding studies demonstrating that saturated fat and dietary cholesterol increased serum cholesterol (247), dietary fat emerged as a determinant of serum cholesterol (251). Epidemiological and clinical studies established a link between dietary saturated fat, dietary cholesterol, serum cholesterol, and CAD mortality (192, 193). Keys’ Seven Countries Study examined risk factors for CAD in over 12,000 men; both the average population intake of saturated fat (192) and changes in average serum cholesterol levels (252) were strongly related to CAD mortality rates. Interestingly, intakes of flavonols (antioxidant polyphenols) were also independent contributors in explaining population differences in CAD mortality rates (147), suggesting that low-density lipoprotein (LDL) modification may also be critical to the progression of atherosclerosis. The Framingham Heart Study and MRFIT Study emphasized the relationship between serum cholesterol, especially LDL-cholesterol (LDL-C), and CAD (68, 311, 358). Cross-culturally, in rural China, for example, fat intake was less than half that in the United States, and fiber intake was three times higher. Animal protein intake was low, at ∼10% of the US intake. Mean serum total cholesterol (Total-C) was 127 mg/dl in rural China vs. 203 mg/dl for adults aged 20–74 yr in the United States, and CAD mortality was 16.7-fold greater for US men and 5.6-fold greater for US women than for their Chinese counterparts. Importantly, there was no evidence of a threshold beyond which further benefits did not accrue with increasing proportions of plant-based foods in the diet (63). Migration studies have also provided compelling evidence for the relation between saturated fat intake and CAD (184). These early data have been confirmed by more recent cohort studies.
The Nurses’ Health Study reported that saturated and trans-fatty acids are associated with increased risk for CAD (162). It is now well established that LDL-C levels are increased by saturated fatty acids, especially those with 12–16 carbon atoms, and by trans-fatty acids (193).
In addition, carbohydrate type affects CAD risk. Refined carbohydrates are highly processed, resulting in removal of fiber, vitamins, minerals, phytonutrients, and essential fatty acids. Consumption of refined carbohydrates compared with whole grains increases the risk of CAD (173, 232), resulting, in part, from the increased glycemic load of these types of carbohydrates (233). Furthermore, increased fiber consumption is inversely related to both CAD (228, 413) and all-cause mortality (174). High-fiber foods lower LDL-C levels and improve insulin sensitivity (58). The large Women’s Health Study showed an inverse relation between dietary fiber intake and the risk of CAD events (228). This may be attributed in part to increased consumption of fruits and vegetables, which have been documented in numerous studies to decrease CAD risk (38, 179, 230). Additionally, moderate consumption of protein is associated with a reduced risk of CAD (163), whereas substitution of red meat with poultry and fish also decreases risk (161). As a consequence of this research, diet has moved to the forefront as a modulator of CAD progression.
Physical activity also plays a critical role in the pathogenesis of CAD. The Adult Treatment Panel III summary concluded that physical inactivity is a major risk factor for CAD (98). Total physical activity and vigorous activities associate inversely and strongly with CAD risk (349), and Blair et al. (46) documented an inverse association between cardiorespiratory fitness and both all-cause and CAD mortality in over 13,000 individuals. The relative risk of CAD has been estimated to be about twofold higher for inactive subjects compared with physically active persons (314). In the Women’s Health Initiative Observational Study (239) and the Nurses’ Health Study (240), 30–40% of CAD was prevented by simply walking briskly >2.5 h/wk, compared with less than this amount of physical activity. Additionally, in the Harvard alumni study, mortality risk, primarily from cardiovascular diseases, varied inversely with calories expended (286). In a study of 4,276 men, the relative risk of death from CAD was about threefold higher for unfit individuals independent of conventional coronary risk factors (91), and several additional studies have documented that physical activity is comparable to conventional risk factors in the ability to predict risk (44, 406). Laukkanen et al. (214) noted an inverse relation between maximal oxygen consumption (V̇O2max) and relative risk of cardiovascular death, and high fitness was associated with slower progression of carotid atherosclerosis as measured by B-mode ultrasonography (212). Additionally, in the Health Professionals Follow-up Study, men who trained with weights for at least 30 min/wk had a 25% reduction in CAD risk (370).
Several cohort studies have assessed the combined effects of a healthy lifestyle on CAD. In the Nurses’ Health Study cohort, in which 84,129 women aged 30–55 yr were enrolled and followed up for 14 yr (359), a healthy lifestyle was defined as not smoking, consuming at least half a drink of alcoholic beverage per day, engaging in moderate to vigorous physical activity for >30 min/day, and a BMI <25 kg/m 2 . A healthy diet included components such as cereal fiber, marine n-3 fatty acids, folate, low trans-fatty acids, and glycemic load. Adherence to these factors correlated inversely with 14-year CAD incidence. Stampfer et al. (359) noted that 82% of CAD events could be prevented by a combination of physical activity and diet, providing additional evidence for a combined effect. When comparing dietary intake, consumption of vegetables, fruit, legumes, whole grains, fish, and poultry was associated with a decreased risk of CAD, whereas typical Westernized diet patterns high in red and processed meats, refined grains, sweets/desserts, and high-fat dairy products were associated with increased risk independent of other lifestyle factors (114, 159).
Intervention studies and mechanisms.
Despite the abundance of evidence that lifestyle modification can mitigate the burden of cardiovascular diseases, they are still the major cause of death in developed nations. Several clinical trials and intervention studies have been conducted, unequivocally documenting the benefits of regular physical activity and diet for CAD risk reduction, mediated by changes in plasma lipids, blood pressure, inflammation, insulin sensitivity, coronary blood flow, endothelial function, and oxidative stress, among others. One of the earliest intervention trials was the Oslo Diet-Heart Study, in which 412 men were randomized to either a cholesterol-lowering diet or a control diet 1 to 2 yr after their first myocardial infarction (220). Men consuming a diet lower in saturated fat and cholesterol had a 17.6% reduction in Total-C compared with 3.7% in the control group over 5 yr and, after 11 yr, significantly fewer CAD-related deaths. Schuler et al. (346) investigated progression of coronary atherosclerotic lesions in patients with stable angina pectoris. Intervention patients consumed <20% fat calories and exercised for >3 h/wk. Significant regression of coronary atherosclerotic lesions by angiography was noted in 7 of the 18 patients; no change or progression was present in 11 patients, whereas in patients receiving usual care, regression was detected in only 1, with no change or progression in 11 patients. In addition, there was a significant reduction in stress-induced myocardial ischemia, indicative of improvement of myocardial perfusion, which was not limited to patients with regression of coronary atherosclerotic lesions, suggesting that not only does lifestyle modification retard progression of CAD, but improvement of myocardial perfusion may be achieved independently from lesion regression. In a larger group of patients, this group noted that CAD progressed more slowly with daily activity and diet modification (347).
In the Stanford Coronary Risk Intervention Project, 300 patients with angiographically defined coronary atherosclerosis were randomly assigned to usual care or multifactor risk reduction. Patients assigned to risk reduction were instructed to consume <20% fat (<6% from saturated fat) and <75 mg of cholesterol per day. Physical activity was recommended, consisting of an increase in daily activities such as walking, climbing stairs, and household chores and a specific endurance exercise training program. Intensive risk reduction resulted in improvements in LDL-C, ApoB (both ∼22%), high-density lipoprotein cholesterol (HDL-C) (+12%), triglycerides (TG) (−20%), body weight (−4%), and exercise capacity (+20%) compared with the usual-care group. The intervention group also exhibited a 47% reduced rate of narrowing of diseased coronary artery segments, with some showing regression (136). In the Lifestyle Heart Trial, 48 patients were randomized to either intensive dietary and lifestyle changes, including a whole-food vegetarian diet with 10% of energy from fat, aerobic exercise, stress management training, smoking cessation, and group social support, or usual care, consisting of an NCEP Step I diet (282). After 1 yr, the experimental group showed more favorable changes in angina frequency and quantitative coronary arteriography. After 5 yr of follow-up, the experimental group exhibited a relative reduction in diameter stenosis of 7.9% compared with a 27.7% progression in the control group (281). The risk ratio for a cardiac event in the control group compared with the experimental group was 2.47.
One intervention that has been studied extensively is the Pritikin residential lifestyle intervention, designed to achieve very extensive lifestyle changes in each subject. Participants undergo a complete medical history and physical examination before a 26-day (more recently 21-day or 11-day) physical activity and diet intervention. Meals are served buffet style, and all participants are allowed unrestricted eating except for the meals when 3 oz. of fish or fowl are provided. Prepared meals contain 10–15% of calories from fat, 15–20% of calories from protein, and 65–75% of calories from carbohydrates, primarily unrefined, as determined by computerized dietary analysis. Carbohydrates are in the form of high-fiber whole grains (≥5 servings/day), vegetables (≥4 servings/day), and fruits (≥3 servings/day). Protein is primarily derived from plant sources with small amounts of nonfat dairy (up to 2 servings/day) and fish or chicken. The diet contains <100 mg of cholesterol, and alcohol, tobacco, and caffeinated beverages are not served during the program. Before starting the exercise training, subjects undergo a graded treadmill stress test according to a modified Bruce protocol to determine the appropriate individual level of exercise intensity. On the basis of the results, the subjects are provided with an appropriate training heart rate value and given an individualized aerobic exercise program. The exercise regimen consists of daily treadmill walking at the training heart rate for 45–60 min. The training heart rate is defined as 70–85% of the maximal heart rate attained during the treadmill test. Additionally, the subjects perform flexibility and resistance exercise.
Early studies documented that this combined physical activity and diet intervention decreased all serum lipids and angina in patients, the majority of whom had a prior myocardial infarction and/or multiple vessel disease and all of whom had been recommended for bypass surgery. The majority were taken off cardiac and/or blood pressure-lowering drug therapy. The durability of the changes was evidenced by a 5-yr follow-up, which documented that adherence to the program resulted in maintenance of the changes and dramatically reduced the need for bypass surgery (25). The 4,587 men and women who completed the 26-day physical activity and diet intervention from 1977 to 1988 revealed an average Total-C reduction of 23%, from 234 to 180 mg/dl. LDL-C decreased by 23%, from 151 to 116 mg/dl, with male subjects exhibiting a greater reduction in Total-C (24 vs. 21%) and LDL-C (25 vs. 19%) compared with female subjects. HDL-C was reduced by 16%, but the ratio of Total-C to HDL-C was reduced by 11%. Serum TG decreased 33%, from 200 to 135 mg/dl, with male subjects showing a greater reduction than female subjects (38% vs. 23%) (21). Figure 1 indicates the effect of combined lifestyle modification vs. diet modification, as tested by using an NCEP Step I or Step II diet, and suggests that more intensive dietary changes and the addition of exercise increase lipid reductions. Body weight was also reduced, 5.5% for male subjects and 4.4% for female subjects. Follow-up studies for 18 mo on a subgroup documented that continued compliance with the program led to maintained Total-C values, documenting that reductions were not transient. The drop in HDL-C is consistent with the findings of Brinton et al. (56) using a low-saturated-fat, low-cholesterol diet, who suggested that a diet-induced reduction in HDL-C on changing from a high-fat to a low-fat diet does not carry the same risk as a low HDL-C within a given diet.
In the context of absolute lipid levels, one with a lower Total-C, LDL-C, HDL-C, and Total-C-to-HDL-C ratio would be at lower risk (249) than one with elevated levels, and given that diet affects numerous other cardiovascular variables (see below), a high-fiber, low-fat diet would be more appropriate. Additionally, it is well established that polyunsaturated fats decrease heart disease risk; however, this has led some to suggest that polyunsaturated fats should replace carbohydrate in the diet, citing increases in TG (85, 324), an effect that does not occur when high-fiber-containing carbohydrates are consumed (11). Furthermore, the beneficial effects of polyunsaturates can be largely attributed to omega-3 fatty acids in nuts (2) and fish (1, 156).
Fig. 1. Analysis of lipid reductions with National Cholesterol Education Program (NCEP) diet interventions vs. the Pritikin combined lifestyle intervention. LDL, low-density lipoprotein; HDL, high-density lipoprotein. (Data from Refs. 21, 420.)
This lifestyle intervention has also been documented to improve coronary flow reserve. In 1995, Czernin et al. (73) measured myocardial blood flow at rest and after dipyridamole-induced hyperemia, quantified with [13N]ammonia and positron emission tomography, in 13 individuals undergoing 6-wk outpatient lifestyle modification. Resting rate-pressure product decreased (8,859 ± 2,128 vs. 7,450 ± 1,496), and the metabolic equivalent (METs) during an exercise task improved from 10.0 ± 3.0 to 14.4 ± 3.6 METs. Coronary resting blood flow decreased (0.78 ± 0.18 vs. 0.69 ± 0.14 ml·g−1·min−1), whereas hyperemic blood flow increased (2.06 ± 0.35 vs. 2.25 ± 0.40 ml·g−1·min−1), resulting in an improved myocardial flow reserve (2.82 ± 1.07 vs. 3.39 ± 0.91-fold).
It is now clear that, in addition to the level of a given lipoprotein, its properties (HDL-inflammatory/anti-inflammatory properties, LDL size, and susceptibility to oxidation) may be critical to the atherogenic process. During an acute phase response, HDL is proinflammatory, independent of the level of HDL-C (271, 272, 393). In a study of 27 patients with normal levels of plasma HDL and yet with angiographically documented coronary atherosclerosis, who were not diabetic, who did not smoke, and who were not taking hypolipidemic medications, Navab and coworkers (271, 272, 393) studied the ability of the patients’ HDL to inhibit LDL oxidation. This assay is performed using cocultures of human aortic endothelial cells and smooth muscle cells treated with native LDL and patient HDL. After an incubation period, the supernatant is removed and tested for monocyte chemotactic activity as a result of stimulation by the oxidized LDL. These investigators observed that the HDL from the patients was not protective against LDL oxidation (270). This group went on to document the same effect in patients with very high HDL-C (mean HDL-C 85 mg/dl) (12). Roberts et al. (unpublished data) documented in subjects at risk for CAD that, despite a lifestyle modification-induced reduction in HDL-C concentration, the ability of HDL to protect against LDL oxidation improved, supporting the contention of a complex relationship between HDL, diet, and physical activity. Although at a population level higher plasma HDL-C levels are associated with lower risk for CAD, at an individual level, HDL function may well be more important than plasma HDL-C levels.
Increasing evidence indicates that oxidative stress, for example the oxidation of apolipoprotein-B-containing lipoproteins, may play an integral role in lipoprotein atherogenicity (235). For example, 8-isoprostane PGF2α (8-iso-PGF2α) has been shown to be elevated in individuals at risk for cardiovascular events (295). Beard et al. (39) investigated the effects of physical activity and diet on LDL quality as well as its susceptibility to in vitro oxidation in men and women. The mean particle diameter of LDL increased, correlated with the reduction in serum TG, and LDL oxidation decreased 21%. Parks et al. (294) also addressed the issue of whether physical activity and diet can affect LDL oxidation. Twenty-five patients with documented CAD underwent a 3-mo treatment, and although two indexes of oxidizability, LDL particle size and fatty acid composition, were not affected by the treatment, it did increase the vitamin E and β-carotene contents of LDL and reduced the in vitro oxidizability of LDL. These data were corroborated in a group of postmenopausal women (27) and suggest that lifestyle modification may reduce LDL oxidizability. Others have noted decreases in LDL size on lower fat diets (85), which would increase LDL susceptibility to oxidation and may be related to the use of refined carbohydrates, which affects hepatic lipid metabolism.
Lifestyle modification has also produced significant improvement in plasma lipids in patients undergoing cholesterol-lowering drug therapy. In a group of 93 subjects, before drug therapy, mean Total-C was 276 ± 5 mg/dl and was reduced by 20% to 220 ± 4 mg/dl (24). Total-C dropped an additional 19% to 178 ± 4 mg/dl with the diet and exercise intervention. LDL-C decreased an additional 20% (126 ± 4 to 101 ± 3) and TG were reduced by 29% (195 ± 10 to 139 ± 6) with combined drug therapy and lifestyle modification. Patient query revealed that 51% of the primary care physicians had not used diet therapy before initiating drug therapy and 29% did not use diet therapy along with the drugs as recommended by the NCEP. Benefits have also been noted in postmenopausal women on hormone replacement therapy (27). More recently, Jenkins et al. (176, 177) used the whole-diet approach in hyperlipidemic patients to compare the effects of diet to those of lipid-lowering therapy. The diet, which was low in saturated fat and included viscous fibers, almonds, soy protein, and plant sterols, induced reductions in lipids that were comparable to statin therapy, independent of changes in body weight. Total-C decreased from 268 to 209 mg/dl on the diet vs. 256 to 197 mg/dl on lovastatin, LDL-C 178 to 126 mg/dl vs. 172 to 117 mg/dl on the statin, HDL-C from 45.9 to 42.8 mg/dl vs. 45.5 to 44.0 mg/dl on the statin, and TG from 219 to 202 mg/dl vs. 196 to 180 mg/dl on the statin. In the Dietary Approaches to Stop Hypertension (DASH) trial, the effect of a diet alone on plasma lipids was tested in 436 participants, who increased consumption of fruits, vegetables, and low-fat dairy products and reduced saturated fat, total fat, and cholesterol. Relative to the control diet, the DASH diet decreased Total-C (13.7 mg/dl), LDL-C (10.7 mg/dl), and HDL-C (3.7 mg/dl) with no change in TG or body weight (280).
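As a quick arithmetic check on the stepwise reductions reported for the 93-subject cohort (a minimal sketch; the numbers are the study values quoted above, and percent reduction is simply the drop divided by the baseline):

```python
# Recompute the percent reductions from the cited plasma Total-C values.
def pct_reduction(baseline_mg_dl, followup_mg_dl):
    """Percent reduction relative to baseline, rounded to the nearest whole %."""
    return round(100 * (baseline_mg_dl - followup_mg_dl) / baseline_mg_dl)

# Drug therapy alone: Total-C 276 -> 220 mg/dl
print(pct_reduction(276, 220))   # 20 (% reduction)
# Added diet and exercise intervention: 220 -> 178 mg/dl
print(pct_reduction(220, 178))   # 19 (% further reduction)
```

Note that each percent reduction is computed against the immediately preceding value, so the two steps do not simply add: relative to the original 276 mg/dl baseline, the combined reduction is ∼36%.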
Physical activity and/or dietary intervention can also reduce the risk for CAD by other mechanisms, and attention has recently focused on the involvement of inflammation in CAD, with multiple prospective studies suggesting that elevated C-reactive protein (CRP) is a sensitive predictor of myocardial infarction, stroke, peripheral arterial disease, and sudden cardiac death. When considered in conjunction with plasma Total-C, CRP serves as a better predictor of CAD risk than Total-C alone (328), may be a stronger predictor of cardiovascular events than LDL-C, and adds prognostic information at all levels of the metabolic syndrome (326). These data suggest that atherothrombosis, in addition to being a disease of lipid accumulation, also represents a chronic inflammatory process. Wegge et al. (404) demonstrated that the Pritikin combined physical activity and diet intervention decreased CRP by 45%, serum amyloid A by 37%, and soluble ICAM-1 in postmenopausal women on hormone replacement therapy with risk factors for CAD. Heilbronn et al. (143) reported that CRP decreased when obese women underwent a 12-wk low-fat, 1,400 kcal diet (14% fat, 61% carbohydrate, 23% protein), whereas Ziccardi et al. (421) noted decreases in P-selectin and ICAM-1 and improvement in vascular response to l-arginine after 1 yr of a 1,300 kcal diet (23% fat, 55% carbohydrate, 22% protein) and exercise (encouraged to walk 1 h 3 days/wk) intervention in obese women. In the aforementioned study by Jenkins et al. (177), after 1 mo CRP decreased 28% in the diet group and 33% in the lovastatin group, suggesting that the ability of diet to reduce CRP was comparable to statin therapy. Elevated CRP is associated with decreased nitric oxide (NO) bioavailability in human endothelial cells (395, 396), induces plasminogen activator inhibitor (82), and is independently related to insulin (316). Along these lines, Barnard et al. (26) documented reduced platelet aggregation and thromboxane formation, and Mehrabian et al. (250) noted a reduction in plasminogen activator inhibitor after the Pritikin physical activity and diet intervention. The mechanisms responsible for the observed reductions in inflammation may be related, in part, to attenuation of oxidative stress, as flavonoids and other antioxidants present in fruits and vegetables have been demonstrated to possess anti-inflammatory activities (255). The addition of vegetables to the diet has been shown to reverse the increase in sICAM-1 and sVCAM-1 induced by high-fat meal consumption (121). Consumption of an array of phytonutrients may be optimal, as the effect of individual supplements on inflammatory markers is not consistent (389). Liu et al. (229) have shown that glycemic load is associated with increased plasma CRP concentration, and epidemiological studies indicate that regular physical activity can also reduce inflammation (108), suggesting that both physical activity and diet may contribute to reduced inflammation.
The mechanisms for the benefits of physical activity in reducing CAD risk are numerous and include effects on plasma lipids (202), endothelial function (127), insulin sensitivity (172), inflammation (108), and blood pressure (100). For example, Oscai et al. (285) reported a normalization of elevated TG levels and very-low-density lipoprotein patterns after 7 days of 45-min daily walking or jogging. Exercise enhances levels of the antioxidant enzymes superoxide dismutase and glutathione peroxidase (315). Smith et al. (355) documented that mononuclear cell production of atherogenic cytokines [interleukin (IL)-1α, tumor necrosis factor-α, and interferon-γ] and CRP fell by 58 and 35%, respectively, after the exercise program, whereas the production of atheroprotective cytokines (IL-4, IL-10, and transforming growth factor-β1) rose by 36%. Changes in transforming growth factor-β1 and cytokine production after the exercise program were proportionate to the time subjects spent performing lower body exercise. In a series of studies, Hambrecht and colleagues (128, 130) established that exercise training improves endothelium-dependent vasodilation in patients with CAD and chronic heart failure and provided evidence that improvement of endothelial function is associated with increased endothelial NO synthase (NOS), Akt phosphorylation, and endothelial NOS Ser1177 phosphorylation by Akt (127). Recently, this group provided evidence that event-free survival was superior with 1 yr of exercise training compared with percutaneous coronary angioplasty, and this occurred at a lower financial cost secondary to reduced rehospitalizations and repeat revascularizations (129).
Additionally, it is important to point out that, although obesity contributes to atherosclerosis progression, mediated by effects related to visceral obesity and inflammatory cytokines produced in adipocytes, among other potential causes, the benefits of physical activity and diet modification may be accrued independent of significant weight loss. Evidence comes from Ehnholm et al. (90), who placed 54 subjects on a low-fat (∼24% of total calories) diet for 6 wk. Total-C decreased from 263 to 201 mg/dl in men and from 239 to 188 mg/dl in women; however, body weight decreased only 2 lb. When the subjects resumed their usual diet (including ∼39% calories from fat), Total-C increased back to baseline levels, despite no change in body weight. Just as risk factors for heart disease can be affected by changes in lifestyle independent of changes in body weight, the actual disease itself can be as well. Applegate et al. (14) evaluated coronary angiograms of more than 4,500 men and women and noted that the risk of atherosclerosis actually decreased as body weight increased. The large-scale International Atherosclerosis Project analyzed over 23,000 sets of coronary arteries obtained at autopsy and found no relation between body weight or body fat and degree of CAD (115). In the Cholesterol Lowering Atherosclerosis Study, 82 moderately overweight men with CAD underwent a 2-yr program. Men who improved their diets showed no new fatty deposits in their coronary vessels, determined by coronary angiography. However, men who failed to make significant dietary changes all showed evidence of new lesions. Neither group lost any weight during the 2-yr study, suggesting that the appearance of new lesions can be influenced without weight change (47).
Physical inactivity and dietary factors both contribute vitally to atherosclerosis and consequent CAD. Studies indicate that inactivity may be as predictive of CAD risk as conventional risk factors; exercise training may improve endothelial function and is superior to percutaneous angioplasty for short-term survival. Additionally, several dietary factors such as fiber, fat (amount and type), glycemic load, and fruit and vegetable consumption appear to significantly modulate CAD risk. Combined exercise and diet interventions mitigate atherosclerosis progression and may in fact induce plaque regression and/or improve myocardial flow reserve. These benefits are, at least in part, due to reductions in plasma lipids, lipid oxidation, and inflammation. Improvements in risk factors with diet may, in some instances, be as great as with statin therapy, and lifestyle interventions combined with statin therapy possess additive effects on lipid lowering. Moreover, although obesity contributes to CAD, risk can be modified independent of large changes in weight.
Experiencing childhood trauma makes body and brain age faster
Children who suffer trauma from abuse or violence early in life show biological signs of aging faster than children who have never experienced adversity, according to research published by the American Psychological Association. The study examined three different signs of biological aging -- early puberty, cellular aging and changes in brain structure -- and found that trauma exposure was associated with all three.
"Exposure to adversity in childhood is a powerful predictor of health outcomes later in life -- not only mental health outcomes like depression and anxiety, but also physical health outcomes like cardiovascular disease, diabetes and cancer," said Katie McLaughlin, PhD, an associate professor of psychology at Harvard University and senior author of the study published in the journal Psychological Bulletin. "Our study suggests that experiencing violence can make the body age more quickly at a biological level, which may help to explain that connection."
Previous research found mixed evidence on whether childhood adversity is always linked to accelerated aging. However, those studies looked at many different types of adversity -- abuse, neglect, poverty and more -- and at several different measures of biological aging. To disentangle the results, McLaughlin and her colleagues decided to look separately at two categories of adversity: threat-related adversity, such as abuse and violence, and deprivation-related adversity, such as physical or emotional neglect or poverty.
The researchers performed a meta-analysis of almost 80 studies, with more than 116,000 total participants. They found that children who suffered threat-related trauma such as violence or abuse were more likely to enter puberty early and also showed signs of accelerated aging on a cellular level, including shortened telomeres, the protective caps at the ends of our strands of DNA that wear down as we age. However, children who experienced poverty or neglect did not show either of those signs of early aging.
In a second analysis, McLaughlin and her colleagues systematically reviewed 25 studies with more than 3,253 participants that examined how early-life adversity affects brain development. They found that adversity was associated with reduced cortical thickness -- a sign of aging because the cortex thins as people age. However, different types of adversity were associated with cortical thinning in different parts of the brain. Trauma and violence were associated with thinning in the ventromedial prefrontal cortex, which is involved in social and emotional processing, while deprivation was more often associated with thinning in the frontoparietal, default mode and visual networks, which are involved in sensory and cognitive processing.
These types of accelerated aging might originally have descended from useful evolutionary adaptations, according to McLaughlin. In a violent and threat-filled environment, for example, reaching puberty earlier could make people more likely to be able to reproduce before they die. And faster development of brain regions that play a role in emotion processing could help children identify and respond to threats, keeping them safer in dangerous environments. But these once-useful adaptations may have grave health and mental health consequences in adulthood.
The new research underscores the need for early interventions to help avoid those consequences. All of the studies looked at accelerated aging in children and adolescents under age 18. "The fact that we see such consistent evidence for faster aging at such a young age suggests that the biological mechanisms that contribute to health disparities are set in motion very early in life. This means that efforts to prevent these health disparities must also begin during childhood," McLaughlin said.
There are numerous evidence-based treatments that can improve mental health in children who have experienced trauma, McLaughlin said. "A critical next step is determining whether these psychosocial interventions might also be able to slow down this pattern of accelerated biological aging. If this is possible, we may be able to prevent many of the long-term health consequences of early-life adversity," she said.
Goal F objectives:
F-1: Identify and understand environmental, social, cultural, behavioral, and biological factors that create and sustain health disparities among older adults.
Many complex and interacting factors can affect the health and quality of life of older adults. For example:
Environmental factors related to income, education, occupation, retirement, and wealth may have a serious impact on key determinants of health over the life course and ultimately the health and well-being of older adults.
Social factors such as individual and structural forms of discrimination and bias can shape the everyday experience of individuals from minority or vulnerable populations.
Cultural factors can have a tremendous influence on approaches for managing stress, diet and food preferences, attitudes toward physical activity, and other critical health/coping behaviors.
Behavioral factors and psychological processes represent major pathways by which environmental and social factors affect health. Optimism, pessimism, and sense of control serve as risk or resilience factors for health, while chronic stress exposure can increase vulnerability.
Biological factors that are influenced by environmental and sociocultural factors — and transduced through behavioral processes — may alter the course, severity and acceleration of disease and disability.
All these factors and their interconnections must be understood to develop and implement effective interventions to address health disparities among various population groups. NIA will support and conduct research across diverse population groups to:
- Gather data to further distinguish patterns of health disparities and causes.
- Gather and analyze data on burdens and costs of illness, healthy life expectancy, longevity, and mortality trajectories. Determining the health burden and other costs of specific illnesses has always been difficult due to the lack of adequate data on incidence and prevalence as well as inconsistencies in calculating health and monetary costs. These difficulties are compounded across populations by differences in use of formal medical care and informal family caregiving. Projections of future healthy life expectancy, longevity, and mortality depend on assumptions about how groups of individuals will change over time, particularly as recent immigrants become culturally assimilated. This research will be archived for the benefit of all populations and will provide valuable information for projecting the specific needs for health care services within various population groups.
- Support the development and wide sharing of data resources that are needed to conduct health disparities research related to aging. Research to understand health disparities requires that data from multiple sources be accessible in standard formats to researchers on a national level. NIA will continue to support and expand surveys of health disparity populations in order to provide the data needed by researchers and public policy makers, including cross-national, comparative, and historic research. We will provide access to these and related data for use in health disparities research and to inform policy development.
- Develop comparable databases — including cross-national databases — on health outcomes, risk factors, and determinants of health disparities. Although many of the disparities in adult health and life expectancy across national, racial/ethnic, and social class boundaries are well documented, causal mechanisms are less well understood. Research to understand these differences will be critical to the development of behavioral and public health interventions.
- Use ongoing data collection programs to oversample health disparities populations. These data will provide important information on socioeconomic factors, health care needs, collective cultural responses, social network characteristics, perceptions of stress and resilience, risk/coping behaviors, genetic stability, and other important factors.
Identify the determinants of disparities in the prevalence of diseases and conditions such as heart disease, obesity, hypertension, frailty, diabetes, comorbidities, and certain types of cancer. Researchers will explore the influence of contextual factors such as residential segregation, stress, education, language, and access to health care and how these may link with genetic, molecular, and cellular mechanisms to sustain differences across populations.
Determine the reasons for variation in the prevalence of cognitive decline and AD/ADRD across population groups. NIA will support research to better understand the differences in the prevalence of AD and related dementias among African Americans, Asians, and Hispanics compared to non-Hispanic whites. We will continue to examine a range of possible causes of these disparities, including the impact of comorbidities such as hypertension, cardiovascular disease, and diabetes; health behaviors; and disease processes. This research will draw on culturally appropriate and standardized measures to better understand these differences and to suggest culturally appropriate interventions.
Understand differences in aging processes across diverse populations. We will characterize normal and accelerated processes of aging in diverse populations to increase our understanding of the course of disease and disability and to identify similarities and differences.
Understand how environmental, sociocultural, behavioral, and biological factors lead to disparities in health at older ages and develop interventions to reduce those disparities. Health disparities persist within and across diverse racial, ethnic, and socioeconomic groups. Research is needed to understand the causes of these disparities and how they relate to relevant factors. Examination of cross-national research opportunities has the potential to provide increased knowledge of natural experiments in divergent aging experiences and aging policy developments that would inform a more general understanding in aging societies.
Explore mechanisms through which the effects of environmental and sociocultural factors manifest themselves, as well as critical periods for reversing such effects and/or the optimal timing of intervention. Specific groups of the U.S. population experience chronic socioeconomic disadvantage throughout their lives or for extended periods in life that generate persistent, chronic stress. The patterns of stress reactivity appear to hasten the progression of disease. It is therefore important to invest in research on the effects of discrimination, bias, stigma, and stereotypes, particularly the mechanisms through which these environmental and sociocultural factors become biologically embedded to influence health disparities.
Determine how environmental, sociocultural, behavioral, and biological determinants interact to increase risk of disease and disability. Environment, socioeconomic factors, and risk behaviors can all interact to shape biological processes and accelerate aging as well as the development, progression, and outcome of disease in population groups. NIA will support research to learn more about risk factors for disease and preventive factors contributing to good health by researching these influences individually and in concert. We will place a special emphasis on longitudinal data to untangle the multitude of factors that affect health and well-being.
Determine the effects of early-life factors on health disparities among older adults. Differences in childhood socioeconomic status, stress exposure, risk/coping behaviors, disease incidence, environmental exposure, and health care in fetal development and early life can affect disease and disability in later life. NIA will support research to identify these early-life factors, as well as the mechanisms through which they influence health in later life. These findings can then be used to inform clinical and even policy interventions to reverse the effects of childhood disadvantage among older adults.
F-2: Develop strategies to promote active life expectancy and improve the health status of older adults in diverse populations.
Life expectancy has increased among all population groups; however, notable disparities remain. For example, African American men have the lowest life expectancy of all racial/gender population groups in the U.S. In addition, more adults are living with one or multiple chronic conditions that may not affect length of life but may dramatically affect quality of life, and significant disparities have been observed in this area, as well. For example, African Americans suffer disproportionately from hypertension and prostate cancer, and Hispanics suffer more from diabetes. NIA will continue to:
F-3: Develop and implement strategies to increase inclusion of underrepresented populations in aging research.
The ability to recruit and retain research participants that are representative of the total U.S. population is essential to the conduct of rigorous health disparities research related to aging. However, specific racial, ethnic and socioeconomic population groups have been underrepresented in health-related research, including clinical trials and population-based research. NIA will:
F-4: Support research on women’s health, including studies of how sex and gender influence aging processes and outcomes.
Older women outnumber older men in the U.S., and the proportion of the population that is female increases with age. In 2014, women accounted for 56% of the population ages 65 and older and for 66% of the population ages 85 and older. Despite living longer, however, older women are more likely to report depressive symptoms or limitations in physical function, are more likely to live alone (a potential indicator or risk factor for isolation, lack of caregivers, or lack of support), and live in poverty at a disproportionately high rate. American women also lag significantly behind their counterparts in other higher-income nations in terms of longevity, and since 1980, the pace of gains in life expectancy of older U.S. women has slowed markedly compared to that in other industrialized countries.
NIA supports a diverse portfolio of research on older women’s health, including studies on sex differences in the basic biology of aging; hormonal influences on cognitive health; women’s health across the life course, with a particular emphasis on the menopausal transition; sex- and gender-related demographic disparities in older age; economic implications of sex and gender at older ages; and age-related diseases and conditions that are unique to or more common in women, such as osteoporosis, breast and ovarian cancer, and urinary tract dysfunction. In addition, we support initiatives to ensure that women are fully represented in NIH-supported research, including the Sex as a Biological Variable (SABV) and Inclusion Across the Lifespan policies. As part of our commitment to supporting research on women’s health, NIA will:
Support research to better understand effective strategies for communicating health messages that are appropriate in diverse populations. Because of language, educational, and cultural differences, disproportionately affected populations do not always receive important information about healthy behaviors. Research on communication with specific audiences will assist the development of appropriate health messages and dissemination channels. We will continue to communicate with diverse audiences in various ways.
Develop appropriate strategies for disease, illness, and disability prevention and healthy aging among the underserved. Aging Americans need understandable, culturally appropriate tools they can use to maintain and improve their well-being. For example, diet and physical activity recommendations may need to be adjusted to take into account religious, ethnic, and cultural sensitivities. To address these concerns, researchers will:
- Develop and promote culturally appropriate interventions to improve healthy behaviors along with strategies to increase the likelihood that these interventions will be initiated and maintained.
- Design and promote interventions appropriate for older adults in diverse populations to more effectively prevent, diagnose, or reduce the effects of disease.
- Design and promote evidence-based and culturally appropriate strategies for self-management of chronic diseases.
- Investigate the factors affecting medication misuse and culturally appropriate strategies for enhancing proper use and compliance with medication regimens.
- Develop interventions that build long-term and meaningful relationships among community leaders and members to create trust and to understand the cultural limitations of interventions.
- Develop interventions to reduce health disparities and inequities associated with poor provider-patient interactions. Recent studies have revealed that how older adults are diagnosed and treated is as much a function of who they are, who is treating them, and where care is provided as it is a function of the symptoms they present. NIA will investigate ways to ensure that each individual is treated with appropriate evidence-based interventions regardless of race, ethnicity, sexual orientation/gender identity, place of birth, or cultural background.
Develop training programs to prepare culturally proficient researchers. We will facilitate training of researchers in the biomedical, behavioral, and social sciences working with older adults to help them better understand the medical implications of the growing diversity of our population. Training programs will help prepare the next generation of health professionals by incorporating new materials sensitive to these issues and preparing a cadre of culturally competent health care providers prepared to assist with patient decision making.
Continue to support training for clinical and research staff in message development, recruitment strategies, and community and media outreach. NIA will explore effective ways to mitigate the difficulties associated with enrollment of health disparities populations in research studies and clinical trials. For example, Community Based Participatory Research methods may be used to address cultural and language barriers and encourage effective communication about the potential benefits of studies and trials that seek to address health disparities and improve public health in priority communities.
Investigate novel approaches for increasing recruitment and retention of underrepresented researchers pursuing careers in science, particularly health disparities research. NIA will work to identify the best strategies for training and attracting a diverse workforce of new, midcareer, and senior researchers. This may include evaluating strategies — including those that account for cultural and geographic factors — to enhance the recruitment of underrepresented groups into aging research. We will continue programs to train high-quality researchers through flexible mechanisms that reflect the rapidly changing needs of science and provide cross-disciplinary training. NIA will also work to tap the talents of all groups of society by encouraging degree-granting institutions to establish and improve programs for identifying, recruiting, and training diverse groups of individuals for careers in biomedical science.
Engage broad segments of the U.S. population in research on Alzheimer’s disease and related dementias. As funding for AD/ADRD has increased, the need for more people to participate in relevant research has grown. In particular, an urgent need exists to engage underrepresented communities. Today’s participants in AD/ADRD research are mostly white, non-Hispanic, well-educated, heterosexual, and married, with a spouse study partner. However, studies point to significant differences between rates of AD in specific populations, for whom factors like diet, culture, genetic influences, geography, and medical conditions may play a role. Broadly diverse participation in both observational and clinical studies will help us to better define and address racial, ethnic, gender, and other differences so that interventions can be better tailored to communities and individuals. We will continue to provide resources and support to facilitate widespread engagement in our research studies.
Encourage research to understand sex and gender differences in health and disease at older ages. Sex differences in health, longevity, and response to various preventive and treatment interventions are well documented. For example, many of the compounds tested through the Interventions Testing Program demonstrate differential effects on male and female mice. We will accelerate research on the basic biology driving health differences between sexes. In addition, recent demographic and economic trends have gender-specific implications for health and well-being at older ages. Unmarried women, for example, are less likely than unmarried men to have accumulated assets and pension wealth for use in older age, and older men are less likely to form and maintain supportive social networks. We will support research to explain how these and other factors may contribute to the differences in life expectancy and disability rates among men and women at older ages.
Support research on sex and gender differences in cognitive decline and AD/ADRD etiology, presentation, prevention, and treatment. Recent estimates suggest that nearly two-thirds of individuals diagnosed with AD are female. At the same time, most studies conducted in the U.S. have not observed sex differences in the incidence of Alzheimer’s disease — that is, in the rate of developing the disease. This may be in part because women, on average, live longer than men. Other potential reasons for this are complex and may include differences in brain structure; possible differential effects of the APOE ε4 genotype, which is the most common genetic risk factor for late-onset disease; differences in education between men and women in the age cohorts currently at greatest risk; and effects of sex steroid hormones on the brain. NIA will continue to study possible AD/ADRD risk and protective factors in both men and women, the mechanisms through which estrogen and other sex hormones work on the brain, and the effects of different forms of menopausal hormone therapy on cognition.
Solicit and support research on topics that are uniquely relevant to the health of older women. Some age-related health issues — for example, menopause and certain types of cancer — are unique to women. Others, such as osteoporosis, are significantly more common in women than in men. We will support research designed to understand and address these conditions, with an additional focus, where appropriate, on how common diseases manifest and respond differently to treatment in women and men.
Support initiatives designed to ensure that women are fully represented in basic, translational, and clinical research. Data from the NIH Office of Research on Women’s Health suggests that women now account for roughly half of all participants in NIH-supported clinical research. However, basic and preclinical biomedical research frequently focuses on male animals and cells, which may obscure understanding of key sex influences on health processes and outcomes. NIH has adopted a stringent “Sex as a Biological Variable” policy stating that the organism’s sex will be factored into research designs, analyses, and reporting in vertebrate animal and human studies. NIA will continue to support this and other policies designed to ensure full representation of women in all levels of research.
Track, monitor, and report on participation of women in NIA-supported research, including adherence to the NIH SABV policy. We will continue to report on progress in this domain through programs currently active across the NIH.
Psychological Stress and Cancer
Psychological stress describes what people feel when they are under mental, physical, or emotional pressure. Although it is normal to experience some psychological stress from time to time, people who experience high levels of psychological stress or who experience it repeatedly over a long period of time may develop health problems (mental and/or physical).
Stress can be caused both by daily responsibilities and routine events, as well as by more unusual events, such as a trauma or illness in oneself or a close family member. When people feel that they are unable to manage or control changes caused by cancer or normal life activities, they are in distress. Distress has become increasingly recognized as a factor that can reduce the quality of life of cancer patients. There is even some evidence that extreme distress is associated with poorer clinical outcomes. Clinical guidelines are available to help doctors and nurses assess levels of distress and help patients manage it.
This fact sheet provides a general introduction to the stress that people may experience as they cope with cancer. More detailed information about specific psychological conditions related to stress can be found in the Related Resources and Selected References at the end of this fact sheet.
How does the body respond during stress?
The body responds to physical, mental, or emotional pressure by releasing stress hormones (such as epinephrine and norepinephrine) that increase blood pressure, speed heart rate, and raise blood sugar levels. These changes help a person act with greater strength and speed to escape a perceived threat.
Research has shown that people who experience intense and long-term (i.e., chronic) stress can have digestive problems, fertility problems, urinary problems, and a weakened immune system. People who experience chronic stress are also more prone to viral infections, such as the flu or the common cold, and to headaches, sleep trouble, depression, and anxiety.
Can psychological stress cause cancer?
Although stress can cause a number of physical health problems, the evidence that it can cause cancer is weak. Some studies have indicated a link between various psychological factors and an increased risk of developing cancer, but others have not.
Apparent links between psychological stress and cancer could arise in several ways. For example, people under stress may develop certain behaviors, such as smoking, overeating, or drinking alcohol, which increase a person’s risk for cancer. Or someone who has a relative with cancer may have a higher risk for cancer because of a shared inherited risk factor, not because of the stress induced by the family member’s diagnosis.
How does psychological stress affect people who have cancer?
People who have cancer may find the physical, emotional, and social effects of the disease to be stressful. Those who attempt to manage their stress with risky behaviors such as smoking or drinking alcohol or who become more sedentary may have a poorer quality of life after cancer treatment. In contrast, people who are able to use effective coping strategies to deal with stress, such as relaxation and stress management techniques, have been shown to have lower levels of depression, anxiety, and symptoms related to the cancer and its treatment. However, there is no evidence that successful management of psychological stress improves cancer survival.
Evidence from experimental studies does suggest that psychological stress can affect a tumor’s ability to grow and spread. For example, some studies have shown that when mice bearing human tumors were kept confined or isolated from other mice—conditions that increase stress—their tumors were more likely to grow and spread (metastasize). In one set of experiments, tumors transplanted into the mammary fat pads of mice had much higher rates of spread to the lungs and lymph nodes if the mice were chronically stressed than if the mice were not stressed. Studies in mice and in human cancer cells grown in the laboratory have found that the stress hormone norepinephrine, part of the body’s fight-or-flight response system, may promote angiogenesis and metastasis.
In another study, women with triple-negative breast cancer who had been treated with neoadjuvant chemotherapy were asked about their use of beta blockers, which are medications that interfere with certain stress hormones, before and during chemotherapy. Women who reported using beta blockers had a better chance of surviving their cancer treatment without a relapse than women who did not report beta blocker use. There was no difference between the groups, however, in terms of overall survival.
Although there is still no strong evidence that stress directly affects cancer outcomes, some data do suggest that patients can develop a sense of helplessness or hopelessness when stress becomes overwhelming. This response is associated with higher rates of death, although the mechanism for this outcome is unclear. It may be that people who feel helpless or hopeless do not seek treatment when they become ill, give up prematurely on or fail to adhere to potentially helpful therapy, engage in risky behaviors such as drug use, or do not maintain a healthy lifestyle, resulting in premature death.
How can people who have cancer learn to cope with psychological stress?
Emotional and social support can help patients learn to cope with psychological stress. Such support can reduce levels of depression, anxiety, and disease- and treatment-related symptoms among patients. Approaches can include the following:
- Training in relaxation, meditation, or stress management or talk therapy
- Cancer education sessions
- Social support in a group setting
- Medications for depression or anxiety
More information about how cancer patients can cope with stress can be found in the PDQ® summaries listed in the Related Resources section at the end of this fact sheet.
Some expert organizations recommend that all cancer patients be screened for distress early in the course of treatment. A number also recommend re-screening at critical points along the course of care. Health care providers can use a variety of screening tools, such as a distress scale or questionnaire, to gauge whether cancer patients need help managing their emotions or with other practical concerns. Patients who show moderate to severe distress are typically referred to appropriate resources, such as a clinical health psychologist, social worker, chaplain, or psychiatrist.
Artherholt SB, Fann JR. Psychosocial care in cancer. Current Psychiatry Reports 2012;14(1):23-29.
Fashoyin-Aje LA, Martinez KA, Dy SM. New patient-centered care standards from the Commission on Cancer: opportunities and challenges. Journal of Supportive Oncology 2012; e-pub ahead of print March 20, 2012.
Lutgendorf SK, DeGeest K, Dahmoush L, et al. Social isolation is associated with elevated tumor norepinephrine in ovarian carcinoma patients. Brain, Behavior, and Immunity 2011;25(2):250-255.
Lutgendorf SK, Sood AK, Anderson B, et al. Social support, psychological distress, and natural killer cell activity in ovarian cancer. Journal of Clinical Oncology 2005;23(28):7105-7113.
Lutgendorf SK, Sood AK, Antoni MH. Host factors and cancer progression: biobehavioral signaling pathways and interventions. Journal of Clinical Oncology 2010;28(26):4094-4099.
McDonald PG, Antoni MH, Lutgendorf SK, et al. A biobehavioral perspective of tumor biology. Discovery Medicine 2005;5(30):520-526.
Melhem-Bertrandt A, Chavez-Macgregor M, Lei X, et al. Beta-blocker use is associated with improved relapse-free survival in patients with triple-negative breast cancer. Journal of Clinical Oncology 2011;29(19):2645-2652.
Moreno-Smith M, Lutgendorf SK, Sood AK. Impact of stress on cancer metastasis. Future Oncology 2010;6(12):1863-1881.
Segerstrom SC, Miller GE. Psychological stress and the human immune system: a meta-analytic study of 30 years of inquiry. Psychological Bulletin 2004;130(4):601-630.
Sloan EK, Priceman SJ, Cox BF, et al. The sympathetic nervous system induces a metastatic switch in primary breast cancer. Cancer Research 2010;70(18):7042-7052.