Tuesday, September 24, 2013

Telomeres: size may matter



I guess just about everyone in the world has heard of DNA.  I presume everyone knows that DNA is what genes are made of.  And I suppose everyone knows what genes are.  I suspect, however, that many people don’t know what a “chromosome” is. 

Simply put, a chromosome is a combination of DNA and protein, and under a microscope it looks like a rod.  Humans have 46.  Chimpanzees have 48.  Corn has 20.  And so on.  You might think of a chromosome as a “carrier” for DNA. 

When I studied genetics about 40 years ago, all we knew about telomeres was that they are the tips found on either end of each rod-like chromosome, but we didn’t know WHY they are there.  Scientists now know a whole lot more, of course, and in 2009 there was even a Nobel Prize awarded for a discovery having to do with telomeres.

Today it is known that telomeres serve essentially the same function as an “aglet”, that little plastic sleeve at the end of a shoelace.  Both telomeres and aglets protect the ends of things from degradation, and what telomeres protect are chromosomes.  What happens is that every time a cell divides, the chromosomes divide too—along with the DNA that makes up the chromosomes.  Because of the way DNA is replicated, the ends of each DNA strand get chopped off with each cell division, which results in the loss of some DNA.  So a telomere is essentially some “disposable” DNA that is added as a buffer to the end of each DNA strand so that non-essential DNA gets chopped off rather than the critical DNA that makes up the main part of the strand.  And then, to complete the process, there is an enzyme called telomerase that adds this buffer-DNA back onto the end of the telomere.

But unfortunately this system isn’t perfect, and eventually telomeres get shortened to the point that critical DNA is no longer protected.  When this happens, the critical DNA is said to be “exposed,” and it can combine randomly with DNA from other chromosomes.  This leads to chromosomal abnormalities and is the reason ALL cells eventually die:  they can no longer divide properly.  In fact, current theories about the aging process assume there is a finite limit to the number of times a cell can divide.
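To make that “finite limit” concrete, here is a toy simulation of my own devising (just an illustration, not a biological model; the starting length and the per-division loss are made-up round numbers in the range commonly quoted for human cells):

```python
import random

# Toy model (illustration only): each cell division trims some base pairs
# off the telomere; once the buffer is exhausted, the cell can no longer
# divide safely.  Both constants below are assumptions, not measured values.
TELOMERE_START = 10_000          # starting telomere length, in base pairs
LOSS_PER_DIVISION = (50, 200)    # base pairs lost per division (assumed range)

def divisions_until_senescence(seed=None):
    """Count divisions until the telomere buffer runs out."""
    rng = random.Random(seed)
    length = TELOMERE_START
    divisions = 0
    while length > 0:
        length -= rng.randint(*LOSS_PER_DIVISION)
        divisions += 1
    return divisions

print(divisions_until_senescence(seed=1))
```

Whatever the random losses turn out to be, the cell hits its limit somewhere between 50 and 200 divisions—a finite budget, just as the aging theories assume.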

The vast majority of cancers (85-90%) have the ability to activate telomerase and thus add DNA to the ends of their chromosomes.  This continual “rejuvenation” can make cancerous cells immortal—that is, they do not die.  Cell aging, therefore, represents a critical balance between keeping cells dividing properly, using telomeres, but NOT allowing them to continue dividing if they become defective (cancerous).

So basically, a lot of human-telomere biology can be summed up as follows:  long telomeres are good, short telomeres are bad.  This is because long telomeres allow cells to divide more times than cells with short telomeres. 

It is not surprising, therefore, that short telomere length has been correlated with many age-related disease conditions: cardiovascular disease, hypertension, atherosclerosis, Alzheimer’s, Parkinson’s, dementia, diabetes, osteoporosis, and cancer.  In fact, the literature supporting these associations is very extensive.

Further, some pretty incredible results have been obtained by lengthening telomeres in mice, actually reversing some of the characteristics associated with the aging process.  The mice used in these studies are strains that don’t produce telomerase, so their telomeres are very short.  They show tissue atrophy, reduced testis size, deteriorated spleens and intestines, reduced fertility, diminished sense of smell, smaller brain size, and a lifespan only about half as long as that of normal mice.  But guess what—a 2011 study showed that after these mice were injected with 4-OHT (a drug that induces production of telomerase, which in turn results in elongated telomeres), indications of aging actually REVERSED:  testes grew, fertility increased, brains became larger. 

It seemed to be the fountain of youth . . .  at least until it was reported in 2012 that mice receiving the 4-OHT injections had higher rates of cancer.  This result is consistent with observations in many other studies correlating longer telomeres with increased cancer.  Apparently the researchers had reversed aging, but in the process, they had raised cancer rates.  (Some scientists think that inhibiting telomerase is a viable way to treat cancer—the theory being that if you can shorten telomeres, the cancer cells will die.)

And then (of course), there are telomere studies with results that are truly puzzling.  For example, 1,091 Scots born in 1936 (548 men and 543 women) were examined in 1947 at age 11, and again in 2006 at age 70.  The subjects were given cognitive tests and a physical examination, and their telomeres were measured.  As reported in 2012, telomeres were statistically longer in men than in women, and longer telomeres were associated (in women only) with higher cognitive scores and lower levels of C-reactive protein, a molecule associated with inflammation.  Telomere length was not associated with any other measures of aging.

And then in 2011, a paper that reported on 60 mammalian species showed an INVERSE relationship between telomere length and lifespan—in other words, species with the longest telomeres had the shortest lifespans.  Moreover, the longest lifespans were not correlated with telomerase activity.

So what’s going on?

Well, there have been over 16,000 publications relating to telomeres and aging, and there are bound to be inconsistencies.  And, of course, what’s going on with mice may not relate to what’s going on with humans.  And what’s going on with most mammalian species may not have anything to do with humans.  But even so, these studies do seem to contradict everything we think we know about the relationship between telomeres and aging.

In spite of these findings, at least one of the three winners of the 2009 Nobel Prize, Elizabeth Blackburn, thinks that telomeres “are an integrative indicator of health.” She has been involved in the formation of Telome Health in San Francisco, which will measure the telomere length of your white blood cells (if requested by a researcher).  The Telome Health website (http://www.telomehealth.com/index.html) asserts that telomere length is an indicator of aging and health and suggests that telomeres can serve as biomarkers useful to pharmaceutical companies in assessing drug response.  The company’s website lists 155 publications supporting their claims.  (And, near and dear to my heart, their foundational patents were licensed from the University of Utah.  A great example of University technology leading to the formation of a new company!)

However, as is often the case with emerging scientific disciplines, not everybody agrees that measuring one’s telomere length is useful.  Carol Greider, who shared the Nobel Prize with Blackburn in 2009, was quoted in 2011 as saying that telomere length is not particularly helpful in assessing health and well-being because telomere length is so variable.  “Do I think it is useful to have a bunch of companies offering to measure telomere length so people can find out how old they are?  No.”

In 2010, another telomere researcher, Maria Blasco of the Spanish National Cancer Centre in Madrid, Spain, founded a company called Life Length that also measures telomeres.  Well, actually, it reports the percentage of short telomeres, a metric that Blasco apparently thinks is a better indicator of health than “average telomere length,” which is the metric used by Telome Health.  Life Length’s website (http://www.lifelength.com/index-eng2.html) is exceedingly thorough, and like Telome Health, it makes a compelling case for telomere analysis.

As an indication of how small the world of telomere science really is, Calvin Hartley from Telome Health has collaborated with Maria Blasco of Life Length.  In a 2011 paper, they reported on a telomerase activator called TA-65 that is purified from the root of Astragalus membranaceus.  They showed that in mice with short telomeres, injections of TA-65 resulted in increased telomere length and improved glucose tolerance, bone density, and skin fitness—and best of all, the rate of cancer did not increase.   A similar study using human cells that was published in 2013 also reported that TA-65 increased telomere length.

It turns out you can actually buy TA-65—it is sold, for example, by Telomerase Activation Sciences, Rev Genetics, and Life Meds, to name just a few.  Apparently TA-65 was discovered by Geron Corporation, a California company that was an early pioneer in stem cell commercialization.  It is sold as a “nutraceutical” and thus has NOT been subjected to the kind of testing that is mandated by the FDA for pharmaceuticals.  In other words, there have been no large-scale clinical trials, efficacy has not been proven, and no one has determined the optimal dose.

Interestingly, a class-action lawsuit was filed in 2012 against Telomerase Activation Sciences by one of its former employees, who claims that he got prostate cancer while taking TA-65.  (Unfortunately, I haven’t been able to find out anything else about this case.)

Although the current research results are intriguing, at this point I’d look at telomere testing the same way you might look at getting your cholesterol or your blood pressure checked.  It is just another test that MAY be an important indicator of your health.  But I’d shy away from chemicals purporting to make your telomeres longer until we have results from large-scale clinical trials indicating that these compounds are safe and effective—and won’t give you cancer.

All this is making me want to talk about nutraceuticals, natural product claims, the role of the FDA, statistical tests, clinical trials, and how we know if something is “true,” but I’ll leave that for another time.


Useful references:

http://www.nature.com/nature/journal/v469/n7328/full/nature09603.html
http://www.mdpi.com/2073-4409/2/1/57           
http://www.nature.com/news/lawsuit-challenges-anti-ageing-claims-1.11090



Wednesday, September 18, 2013

Can you Patent a Steak?




In September 2011, Oklahoma State University filed a patent application on a method of producing a new kind of steak, as well as the resulting steak itself: the so-called Vegas Strip steak.  Believe it or not, the Vegas Strip has never existed before.  Now, it may have been made accidentally sometime in the last 50,000 years, but if so, there is no proof.  Nowhere has it been written down, and nowhere has it been offered for sale.  It comes from a part of the cow (the subscapularis) that is normally converted into hamburger or left on the carcass to become part of another cut of meat known as “chuck.”

This METHOD for producing the Vegas Strip, as well as the steak that results, actually satisfies the three criteria for patentability:  novelty, unobviousness, and utility.   Novel because no one has done it before, at least as far as we know; unobvious because it is not an obvious extension of what the world already knows; and useful, because, well, it is a steak, it tastes good, and conservatively it adds perhaps an additional $2 to the value of a beef carcass in an industry that is scrambling for pennies.  Seems useful to me.

Now, applying for patents is something we do all the time at OSU, about 15 per year.  And normally they are not controversial—new vaccines, machines, etc.  But applying for this steak patent caused a real stir in the blogosphere.  You’d think we had violated a law of the universe.  The vast majority of the comments/objections were variations on the theme of, “You can’t patent a steak!”

Really?  Well, why not?  There are many patents on meat processing.  For example, U.S. Patent No. 8,105,137, issued in 2012, covers a method of cutting up a chuck roll.  U.S. Patent No. 7,214,403, issued in 2007, covers a method of boning hams.  And proving that the concept of patenting cuts of meat is nothing new, U.S. Patent No. 1,381,526, issued in 1921, covers a method of cutting up a tenderloin. 

So clearly we can, theoretically, patent a steak—if it is novel, unobvious, and useful.

But perhaps the objection is not so much based on whether you can technically patent a steak, but whether you should.  Some people seem to think such a patent is a tad unethical—like maybe, well, steaks belong to everyone.  Or as one commentator wrote, “You mean you are going to sue me when I eat my steak?”  Or maybe, because the subscapularis muscle is produced by a cow, the steak cut from that muscle is thought to be a “natural product,” and patenting a “natural product” bothers folks at some deep intuitive level.  This is akin to the fuss back in the 1980’s about patenting plants and animals (“But you can’t patent life!”).  Or the disputes in the 1990’s and 2000’s about patenting DNA.

Actually, the history surrounding the patentability of “natural products” is very VERY interesting.  And the soundness of the logic ebbs and flows over time.  The basic objection to patenting a natural product is often framed as a question:  “How can you obtain a monopoly on something produced by nature?”  And the answer most commonly given is, “You didn’t invent it because it was already there!”

On the surface this seems to make sense—until you start looking at particular examples.  Then it seems to make somewhat less sense.  And that is because finding something in nature “that is already there” constitutes a “discovery.”  And discoveries are by their nature very difficult to come by, and by their very definition are novel and unobvious.  And, of course, they may be useful. 

Take DNA from humans, for example.  I recently blogged about the Supreme Court decision in Association for Molecular Pathology v. Myriad Genetics, Inc., so I won’t go through it in detail again, but in fact the Myriad decision and OSU’s patent application on producing a new kind of steak are closely related.

And that is because they both revolve around the issue of “natural product” patenting.   Over the past 120 years the courts have consistently agreed that “natural products” are not patentable.  However, the definition of what is and is not a “natural product” has varied considerably over the years, and its history consists of a series of reversals by the courts. 

Arguably the most famous case in this area concerned a patent that issued in 1903 covering adrenaline, which had been isolated from the suprarenal glands of cattle, sheep, etc. (U.S. Patent 730,176).  Adrenaline was marketed by Parke, Davis & Co. as a drug to treat asthma and to stop bleeding from minor surgeries, so it was clearly useful.  But obtaining the patent was difficult nevertheless.  The patent examiner at first argued that isolated adrenaline was the same as naturally occurring adrenaline—in other words, it was a natural product and therefore not patentable.  The inventor was eventually able to overcome these objections by arguing that, as a purified product, the isolated adrenaline was in fact different from adrenaline as it existed in the body. 

I cite this patent because its wording—that an isolated or purified product is different from the product as it appears in the body—became the basis for the patenting of many natural products (such as insulin, claimed in U.S. Patent 1,469,994, which issued in 1923).  The same logic was used right up through the biotechnology revolution of the 1980’s and continues to be used today.  Well, almost.

The recent Supreme Court decision in Association for Molecular Pathology v. Myriad Genetics, Inc. has muddied the waters considerably, essentially ruling that isolated DNA is NOT patentable—and requiring that DNA be isolated and CHANGED in order to be patentable.  This decision has caused much consternation and uncertainty in the patent world, especially with respect to “natural products” other than DNA.

And so, finally, we have come back around to the opening question of this blog—can you patent a steak?

I would argue that the answer is “yes,” if the isolated steak satisfies the criteria of novelty, unobviousness, and utility and its final form is different from that in which it exists in the animal.  This result is consistent not only with meat processing patents in the past, but also with the recent ruling in Myriad.

And that is because the Vegas Strip steak has actually been “abstracted” (cut out of) the subscapularis muscle, and looks nothing like that muscle does when it is still inside the animal—much as Michelangelo’s statue of David looks nothing like the chunk of Carrara marble that it came from.

So there.  You can patent a steak.  I think.  We will know for sure in 2-3 years, which is the amount of time that the U.S. Patent and Trademark Office generally takes to decide such things.




Friday, September 13, 2013

Genetically Modified Organism (GMO) Scare Tactics




Let’s talk about rats.  That’s right, rats.  Specifically, Sprague-Dawley rats.

It turns out that Sprague-Dawley rats are very susceptible to getting spontaneous cancers.  All kinds of cancers—pituitary, mammary, pancreatic, adrenal, liver, thyroid, ovarian, uterine, prostate, skin, kidney, bladder, stomach, brain . . . you get the idea.   This is the reason that these rats are commonly used for carcinogenicity studies.  Makes sense—if you want to find out if a compound is carcinogenic, you might as well start out with an animal that is prone to getting cancer.  Even if you do absolutely nothing to these rats, between 50% and 70% of them will develop some kind of neoplasm (any abnormal growth of tissue, whether benign or malignant).

But if you knew that 50% would develop a neoplasm of some type, how many cancers above 50% would you need in order to “prove” that your chemical caused cancer?  Well, this is the kind of statistical problem that scientists face all the time:  how do you show that the effect of the chemical is not due to chance alone, like flipping a coin?

First of all, you need LOTS of rats.  I think that’s intuitively obvious, especially if you’ve ever flipped coins.  If you flip only 10 times, you might get 10 heads in a row, but if you flip 1,000 times, you’ll get a lot closer to the statistically-expected 50/50 split between heads and tails.
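The coin-flip intuition is easy to check with a few lines of Python (a little sketch I've added for illustration; the seed is arbitrary):

```python
import random

def heads_fraction(n_flips, seed=0):
    """Flip a fair coin n_flips times and return the fraction of heads."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.5 for _ in range(n_flips)) / n_flips

# Small samples wander far from 50/50; big samples settle near it.
for n in (10, 100, 1_000, 10_000):
    print(n, heads_fraction(n))
```

Run it a few times with different seeds: the 10-flip result bounces all over the place, while the 10,000-flip result hugs 0.5. That is exactly why 10 rats per group is a problem and 50 is the accepted minimum.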

So how many rats do you need?   Well, the scientific world has decided that 50 rats per treatment is a minimum for these kinds of carcinogenic studies.  The Organization for Economic Co-operation and Development recommends “at least” 50 animals per group.  The Environmental Protection Agency states: “Current standardized carcinogenicity studies in rodents test at least 50 animals per sex per dose group in each of three treatment groups and in a concurrent control group, usually for 18 to 24 months, depending on the rodent species tested . . . .”

By now, you are probably wondering why I’m going through all of this.   The answer is that a 2012* paper published in a respected scientific journal DID NOT follow these guidelines. 

They didn’t even follow the guidelines for a well-designed high school science project.

And yet their paper is being used by the press to “prove” that genetically engineered crops (“genetically modified organisms” or “GMOs”) are undesirable.  This has caused hysteria around the world—and some of that hysteria is on the part of scientists who want the paper retracted.

Before we get into the nitty-gritty of the experiment, we need to know what the term “genetic engineering” really means.  There are some people who would say that ANY plant breeding constitutes genetic engineering, even the old-fashioned kind that involves transferring pollen by hand from one plant to another.  But in the context of the GMO debate, most people would say genetic engineering consists of moving individual genes from one organism to another using molecular biology techniques developed since the 1970’s.   And for our purposes here, all we need to know is that DNA, the chemical that makes up genes, can be extracted from any known organism and transferred to nearly any other known organism.  Genes can be moved from bacteria to humans, for example—with a high degree of precision.  They can be “engineered.”

And with that little bit of background, here’s the experiment that caused all the furor:

The test subjects were Sprague-Dawley rats that were fed a diet including various percentages of corn.  The “normal” (control) group received 33% non-GMO corn, and the remaining rats received either 11%, 22%, or 33% GMO corn.  The experiment ran for two years, during which time the researchers kept track of the number of rats that died and those that got one or more cancerous growths. 

There were 10 rats in each treatment group for each sex.  When you do the math, it turns out that with an expected 50% baseline Sprague-Dawley mortality rate, it takes 9 rats in a single group to die (or survive) in order to “prove” with statistical significance that the cause was something other than chance.

Here is the mortality data for each treatment group in the study—

Number of deaths (male rats)

  Non-GMO Corn    11% GMO    22% GMO    33% GMO
       3             5          1          1


Number of deaths (female rats)

  Non-GMO Corn    11% GMO    22% GMO    33% GMO
       2             3          7          4

So, in six of the eight treatment groups, the number of deaths was no different than what you would expect from a coin toss—remember that because these rats are bred to get cancer, an average of 5 in each group would die regardless of their diet.  And what about the other two groups, the ones that had only one death apiece?  Statistically, those results ARE significant because fewer rats in these groups died than predicted by chance alone.  The conclusion?  Since these rats were fed the two highest rates of GMO corn, it must have been good for them.  HA!

Now, the baseline mortality rate I used for these calculations was 50%, but the mortality rate for Sprague-Dawley rats over the course of two years can be as high as 70%.  If we do the same calculations again but with a 70% baseline mortality rate, it turns out that ALL the rats in a given treatment group would have to die or survive in order for the results to have any statistical significance.
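If you want to check this arithmetic for yourself, the tail probabilities of a binomial distribution are all you need.  Here is a short sketch (my own; it assumes a one-sided test at the conventional 0.05 cutoff, which is one reasonable reading of “statistical significance”):

```python
from math import comb

def tail_prob(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance that k or more of n
    rats die when each dies independently with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for p in (0.5, 0.7):  # the two baseline mortality rates discussed above
    # smallest number of deaths (out of 10) that beats chance at the 0.05 level
    k = min(k for k in range(11) if tail_prob(10, k, p) < 0.05)
    print(f"baseline {p:.0%}: {k} of 10 deaths needed "
          f"(P = {tail_prob(10, k, p):.4f})")
```

With a 50% baseline, 9 of 10 deaths is the first result that crosses the 0.05 line (P ≈ 0.011); with a 70% baseline, nothing short of all 10 does (P ≈ 0.028).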

The point here is that the number of rats assigned to each treatment group in this study is entirely too small for the results to be at all meaningful.  This is highlighted by the fact that the data show no relationship between the mortality rate for a given treatment group and the “dose” of GMO corn in that group’s diet.  After all, if a particular substance actually caused the rats to die, you would expect the rats receiving the most of that substance to have the highest mortality rate—but just the opposite was observed in this study. 

So what this means is that all of the study data is apparently just random statistical noise. 

Which raises an interesting question—why did the researchers use such small numbers of rats?  Were they really incapable of designing a robust, meaningful study? Or did they just WANT to generate mortality in treatment groups as cheaply as possible in order to raise an alarm?

I must say that upon a quick glance, the original paper is scary.  The raw data is arresting (35% of the rats in the GMO groups DIED), the accompanying photographs of gigantic tumors are lurid, and reports of pituitary cancer are enough to frighten anybody.  It is only after doing some homework that you begin to realize that these same results are found in a high percentage of ALL Sprague-Dawley rats. 

Furthermore, a 2012 review paper that looked at GMOs in 24 studies for maize, soybeans, potato, rice, and triticale found NO papers showing that GMO crops have any negative impact on health.  Perhaps we shouldn’t be surprised that the authors of the Sprague-Dawley rat study do not cite any of these papers.

A related issue has to do with the “composition” of GMOs.  There has been a concern ever since the introduction of GMO crops that their protein and/or carbohydrates and/or fats had somehow been altered by the introduction of new genes.  This has never made any sense to me—I mean, is it really likely that an herbicide-resistance gene would affect the fat content of soybeans?  But under the theory that the nutritional content of GMOs might somehow be different from that of varieties produced by traditional means, the “compositional equivalence” of GMOs has been examined since 1993 (thus adding about $1 million to the cost of producing a new GMO variety).

In fact, in Europe at least eight field sites must be used, with each site to include both GMO and non-GMO lines for comparison purposes. 

Let’s be clear here: “traditional” plant varieties, produced by conventional techniques, do NOT require any type of compositional testing—even though traditional plant breeding has the potential to cause radical changes in the genome.  For example, if ancestral parents are crossed with modern varieties, very strange progeny can result; a virtual “earthquake” of random genetic effects can be induced, including the reintroduction of genes that have not been present in the modern varieties for thousands of years.

In my opinion, GMOs are subjected to additional requirements not because of rational scientific concerns, but rather because of fear on the part of the general public—these additional tests are just roadblocks designed to prevent or delay the introduction of new GMO crops.

This is borne out by a 2013 paper that reviewed 20 years’ worth of GMO studies covering corn, soybeans, cotton, canola, wheat, potato, alfalfa, rice, papaya, tomato, cabbage, pepper, raspberry, and mushrooms.  Guess what they found?  GMOs are compositionally indistinguishable from non-GMOs.  In fact, plants produced by “traditional methods” show more variation in nutritional composition than GMOs do.  Which actually makes sense given the various ways genetic variation is introduced using conventional techniques (radiation, chemical mutagenesis, somaclonal variation, wide crosses). 

And in the same vein as the Sprague-Dawley rat study, there is yet another 2013 paper that is being used by some parties to show that GMO crops do not have a yield benefit.  Now, if this is in fact the case, why are farmers growing GMOs?  I mean, millions of acres of GMO crops are planted worldwide, including 90% of the corn, cotton and soybean acreage in the United States alone.  If there is no benefit in terms of yield, I would have to conclude that all of these farmers are just plain dumb.  Since that seems unlikely, one has to wonder if perhaps there is another explanation.

So, once again, we need to look at the paper to see what the data actually says. 

For this study, 4,748 hybrid corn varieties were grown across Wisconsin from 1990 to 2010.  2,653 of these varieties were conventional hybrids (non-GMO), and 2,095 were GMOs.  (My first thought is to question how 4,748 hybrids could possibly be developed for a minor corn-producing state like Wisconsin and consequently, how well any of these hybrids were adapted to growing conditions there.  I would rather have seen this study performed in Ohio or Illinois.)

The GMO varieties were divided into 12 groups: one group was tolerant to Roundup herbicide; another was tolerant to glufosinate herbicides; another produced Bacillus thuringiensis (Bt) toxin against European corn borer; another produced Bt toxin against corn rootworm.  Some hybrids had two-way gene combinations, such as resistance to both Roundup and corn rootworm, and others had three-way combinations.

The study data show that three of the GMO groups had yields that were statistically lower than the conventional hybrids, three of the GMO groups were statistically superior to the conventional hybrids, and the remaining six groups were statistically the same as the conventional hybrids.  So on average, it appears that the GMO yields are equivalent to the non-GMO yields, and this is what is being touted by the GMO naysayers.  But what they ignore is that ALL of the GMO crops had greater STABILITY than the conventional hybrids. 

Why is this important?   Because when a crop has “stability”, it consistently gives the same yield from one year to the next and from one environment to another.  This means less risk for the farmers because they can plan on getting a certain yield from their crop every year, even if it is grown in a different place or under different conditions.  The risk-reduction benefits of crop stability are so predictable that agricultural economists attach an actual value to it (called a “risk premium”).  In other words, they can predict the added economic benefit that will accrue to the farmer from growing a particular high-stability, low-risk variety.  That benefit is expressed in terms of bushels/acre because it is equivalent to the increase in profit that would result if yields increased by a certain amount.
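As a rough illustration of how economists turn stability into bushels per acre, here is the standard Arrow-Pratt mean-variance approximation of a risk premium.  (The yield numbers and the risk-aversion coefficient below are hypothetical values of my own, chosen only to show the mechanics—they are not from the Wisconsin study.)

```python
from statistics import pvariance

def risk_premium(yields, risk_aversion=0.01):
    """Arrow-Pratt mean-variance approximation: premium ≈ ½ · λ · variance.
    risk_aversion (λ) is a hypothetical coefficient with units 1/(bu/acre),
    so the result comes out in bu/acre."""
    return 0.5 * risk_aversion * pvariance(yields)

stable  = [160, 162, 158, 161, 159]   # hypothetical stable hybrid (bu/acre)
erratic = [150, 185, 140, 180, 155]   # hypothetical erratic hybrid (bu/acre)

print(risk_premium(stable))   # tiny premium: yields barely vary
print(risk_premium(erratic))  # ~1.5 bu/acre: the cost of year-to-year risk
```

Even though the two hypothetical hybrids have nearly the same average yield, the erratic one carries a risk premium of about 1.5 bu/acre—squarely in the range the Wisconsin researchers reported for the stability advantage of the GMO varieties.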

In the Wisconsin corn study, the researchers showed that the reduced risk due to the stability of the GMO varieties was equivalent to an increase of 0.78 - 4.19 bushels/acre over the actual yield.

So even the studies that were apparently designed to show genetically-engineered crops in a bad light were unable to do so.  Perhaps that is because genetic engineering is closely aligned with a phenomenon known as “horizontal gene transfer,” which is the transfer of genetic information between different species in the absence of mating—something that nature has been doing for, oh, about 500 million years.  And horizontal gene transfer is not limited to the transfer of genes between closely related organisms.  A 2012* paper showed that a particular moss had acquired 57 different families of nuclear genes from bacteria, fungi, and viruses.  These genes are related to vascular development, cuticle and epidermis, hormones, stomata patterning, herbivore resistance, plastid development, and pathogen resistance.

Now, that is genetic engineering on a grand scale!

My point is that plants/animals/bacteria/viruses/fungi have been mixing it up for millions of years, and genetic engineering by the hand of man is not inherently different from genetic engineering by Mother Nature.

Useful references:

**http://research.sustainablefoodtrust.org/wp-content/uploads/2012/09/Final-Paper.pdf


http://pubs.acs.org/doi/full/10.1021/jf400135r


Thursday, September 5, 2013

Exercise, part 2: Moderation in all things?




In part 1 of this series I made the following points:

1. Barring extenuating circumstances, older folks can sustain muscle strength and muscle mass well into “old age.”

2. Protein synthesis appears to be sustained throughout life, perhaps to the same extent as found in young people.

3. This is counter to the prevailing paradigm, which holds that muscle wasting is an inevitable consequence of aging.

4. Resistance exercise is rarely recommended for old folks; generally they are urged to “go walk in the mall for 30 minutes,” as if that is all they are capable of.

5. Muscle wasting in older adults may occur primarily because of their sedentary lifestyle.

Pretty exciting stuff, no?

But there is emerging evidence that “too much” exercise, at least exercise that gets the heart rate up to a “high” level for extended periods, may not be good for you—whether you are young OR old.  Further, data indicates that there are “optimal” levels of exercise.  Scientists have not, however, zeroed in on how often and how hard we should exercise to achieve maximum health benefits.

In a medical-screening study published in 2011*, researchers surveyed 416,175 Taiwanese individuals about their level of exercise upon enrollment in the study and then repeated this survey each year for the next 8 years.  Based on their answers to the survey questions, each subject was placed into one of five exercise categories:  “inactive,” “low volume,” “medium volume,” “high volume,” and “very high volume.”  During the course of the study, the researchers recorded deaths as well as the incidence of cancer, diabetes, cardiovascular disease, heart attack, and stroke.

Simply put, here is what they found:  As the participants’ level of activity increased, the rate of overall mortality decreased, as well as rates of death from cancer, diabetes, and cardiovascular disease.  All statistically significant.  The same results were found for every tested category, whether male or female, young or old.  Surprisingly, the results held true even for individuals with chronic kidney disease, metabolic syndrome, hypercholesterolemia, obesity, diabetes, or high blood pressure.   The conclusion is that exercise, even in very moderate amounts, helps EVERYONE at any age, regardless of the state of their health.

In fact, exercising for only 92 minutes PER WEEK decreased all-cause mortality and resulted in an increased life expectancy of 3 years.  Amazingly, all-cause mortality was further reduced by each additional 15 minutes of exercise beyond that minimal 15 minutes per day.

BUT the benefit maxed out at 90 minutes of exercise per day.  So the authors concluded that exercising beyond this amount has no additional benefit as far as mortality is concerned.  And for those in the “vigorous” group, the maximum benefit was achieved after about 45 minutes.

Similar results were found in a 15-year study of 52,000 people, 14,000 of whom were runners.  Overall, the runners had a 19% lower rate of mortality than non-runners, but the benefit was NOT seen in those who ran the fastest or the farthest.  For example, those who ran at 7 mph had the least mortality (17% reduction compared with non-runners), but those who ran faster than 8 mph had the same mortality as those who ran 1-5 mph (~10% reduction).  Similarly, those who ran the greatest distances (over 25 miles per week) had mortality reductions of 5-10%, while those who ran 0.1-19.9 miles per week had reductions of about 25%.

So this study indicates that there is an “optimal” level of running for fitness—and too much may be, well, too much.

Finally, the really big news in this area is the recent evidence that athletes who do EXTREME amounts of running may be damaging their cardiovascular systems.  The issue at hand is reminiscent of the legend of Pheidippides, the famous Greek runner who ran 150 miles in 48 hours to deliver the message “Victory is ours!” after the Battle of Marathon in 490 BC—and then dropped dead.  Read on.

Dr. James O’Keefe, professor of medicine at the University of Missouri-Kansas City, has published several articles concerning the effects of extreme running on heart health.  His publications have been pretty radical and have sent a tremor through exercise physiology circles.

Here are some examples:

1.         A 2012 study reports on sudden cardiac death among marathoners.  Between 2000 and 2010, 11 million people ran in full and half marathons, and 59 of them experienced cardiac arrest, an overall rate of 0.54 per 100,000.  For half marathons (13.1 miles) the rate of sudden cardiac death was 0.27 per 100,000, and for full marathons (26.2 miles) it was 1 per 100,000.  I could not find statistics for the expected rate of sudden cardiac death among non-marathoners, but although these numbers are really low, they do suggest that something is going on.
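If you want to check that arithmetic yourself, it is a one-liner.  This little sketch uses only the numbers quoted above (11 million participants, 59 cardiac arrests):

```python
# Sanity-check the overall cardiac-arrest rate quoted above.
runners = 11_000_000       # full- and half-marathon participants, 2000-2010
cardiac_arrests = 59

# Convert the raw fraction into a rate per 100,000 participants.
rate_per_100k = cardiac_arrests / runners * 100_000
print(round(rate_per_100k, 2))  # 0.54
```

The same conversion (events divided by population, scaled to 100,000) is how all the per-100,000 rates in these studies are computed.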

2.         In a 2010 study, 60 male patients with cardiovascular disease were divided into two groups.  One group exercised for 30 minutes, and the other for 60 minutes.  The researchers took blood pressure measurements and performed an EKG (electrocardiogram) on each of the participants.  They found that the two groups did not differ with regard to blood pressure (rather counter to the idea that exercise decreases blood pressure).  More importantly, they also found that the 30-minute group had MORE favorable EKG results than the 60-minute group.

3.         A 2010 study looked at a particular chemical that is generally considered to be associated with cardiac damage—troponin.  (Troponins are molecules that help with muscle contraction, and when they leak out of muscle fibers, it may be an indication of muscle damage.  So, finding cardiac troponins in blood plasma may be diagnostic of several types of heart damage, including heart attacks.)  The authors reviewed 18 studies involving various types of exercise:  walking (18 to 30 miles), running (full and half marathons), cycling (124 miles), and one iron man triathlon (swim 2.2 miles, cycle 112 miles, run 26.2 miles).  Although 0% to 100% of the participants in a given event showed elevated cardiac troponin, the shorter the duration of the event, the HIGHER the troponin levels.  This suggests that shorter events, which are more intense and require greater cardiac output, result in the production of more troponin—and possibly more heart damage.

The hearts of athletes are different from those of normal people.  Overall an athlete’s heart is larger—which makes sense, since a larger heart can do more work.  The concern is that the hearts of some athletes involved in endurance events may show signs of “strain,” such as scarring (fibrosis), diastolic dysfunction, large-artery wall stiffening, and coronary artery calcification (plaque buildup).  And in particular, the right ventricle may have decreased functionality.

Here are some more studies comparing the cardiovascular systems of endurance athletes to those of “normal” people:

1.         A 2008 study looked at forty athletes who participated in marathons (7), triathlons (11), ultra-triathlons (13), or alpine cycling events (9).  90% were males, their average age was 37, they had an average of 10 years in training, and they exercised an average of 16 hours per week.   Each athlete was examined before a race, immediately after a race, and one week later.  What the researchers found is that the function of an athlete’s right ventricle immediately following a race was reduced in comparison to its function before the race, and after one week it was almost back to baseline.  However, 5 of the 40 athletes showed areas of tissue damage in the septum  (tissue separating left and right ventricles), and those athletes also had hearts that pumped less blood.   The authors concluded that (a) intense endurance exercise caused dysfunction of the right ventricle (but not the left), (b) eventual recovery was nearly total, and (c) reduced right ventricle function was most evident in some of the most “practiced” athletes.

2.         A 2009 study looked at 102 runners, age 50 or older, who had completed at least five full marathons in the last three years and had no history of heart disease.  It showed that 12% of them had heart tissue damage; this compared to 4% of a “normal” population.

3.         A 2010 study looked at 49 marathon runners who were, on average, 38 years old.  It showed that these athletes had significantly higher blood pressure than a group of “normal” people did.

I could go on and on, but it seems that a consistent story is emerging:  very intense aerobic exercise such as running, cycling, and rowing over a long period of time may lead to damaged heart and arterial tissues. 

These studies also suggest that, at least in some individuals, there is an optimal level of exercise and exceeding it may be harmful, or at least provide no benefit.  This may be genetic, and it may be true in only a “small” percentage of the population.  So unfortunately there are no rules here—and few recommendations, except that some is good and too much may be bad.

It may behoove us, as we age, to have our cardiovascular system checked out more thoroughly than is possible with a family doctor’s stethoscope.  As we enter our “golden” years, and especially if we are beating ourselves up with lots of exercise, perhaps it is worth having an echocardiogram every 5-10 years, just as a status check.

Finally, I’d like to point out that the “intensity” of exercise is measured by one’s heart rate, regardless of the type of exercise.  (After all, the heart does not know if it is beating fast because we are running or lifting weights.  I personally find my highest heart rates occur when wall climbing, and weight lifting gets my rate as high as if I were running.)  There is growing interest in high-intensity exercise of short duration—this is the regimen advocated by the increasingly popular “CrossFit” program, which is designed to maximize heart rate through running, weight lifting, and various body-weight exercises.  In light of recent studies, is it possible that there are negative consequences to a lifetime of causing our hearts to beat wildly, even if only for 10 minutes at a time?  No one knows.


Useful References:

*http://vivafit.eu/pdf/Pang_Wen_minimum_amount_PA_reduced_mortality_Lancet_2011.pdf

http://eurheartj.oxfordjournals.org/content/early/2011/12/05/eurheartj.ehr397.full

http://ajh.oxfordjournals.org/content/23/9/974.long