Monday, April 15, 2013

PART 1---WHAT IF PETROLEUM IS NOT A FOSSIL FUEL AFTER ALL?


What if petroleum is not a “fossil fuel” after all?

Everyone knows that petroleum comes from plants and animals, converted by millions of years of heat and pressure into a “fossil fuel.” Heck, this is considered standard knowledge by any 6th grader.  Right?

In fact, this idea is so dominant we don’t even question it.  It originated with the Russian scientist Mikhail Vasilyevich Lomonosov, and dates back to at least 1757. 

Wikipedia flatly states the following about petroleum:  “A fossil fuel, it is formed when large quantities of dead organisms, usually zooplankton and algae, are buried underneath sedimentary rock and undergo intense heat and pressure.” NO other possible mechanism is given for the formation of petroleum. 

Let me state this right up front:  This is the DOMINANT theory for fossil fuel formation, and without question, the vast majority of geologists and scientists support it.  But let me also say, this is only a THEORY.

Other fossil fuels besides oil include coal, oil shale, natural gas, and peat.  In fact, peat clearly represents “coal in the making,” right? I mean you can saw it out of a bog somewhere in Great Britain and burn it in a stove.  Peat can have plants growing at the top, and decayed plants below, grading into a black mass.  It certainly looks like an intermediate form of coal.  And that is exactly how coal is thought to be formed—peat accumulates at the rate of about 1 millimeter/year and is changed into lignite coal, then bituminous coal, and then (with appropriate time, pressure and temperature) into anthracite, the highest grade of coal.  And, of course, we also know that methane can be formed by decaying vegetable matter—in fact you can, for example, put manure in a reactor in your back yard and produce your very own methane.  There are bacteria that will do this for you just fine.  You can probably buy some on the internet.

Finally, there is a huge body of literature supporting the idea that petroleum comes from organic matter—for example, there are “bio markers” in petroleum that give an indication of its biological past, including isoprenoids (thought to have come from marine organisms) and oleanes (found in ferns).  And oil is often associated with “sedimentary deposits” which rest above deeper crustal material such as granite and oceanic basalt—supporting the idea that oil was formed from organic material that collected in these deep basins.   Further, oil is found in relatively “young” rocks that are thought to have been created just about the same time that a vast amount of organic matter was being deposited at the bottoms of ancient oceans or terrestrial swamps. The list, and the evidence, goes on and on.

Oh, and interestingly, the scientific field that specializes in the study of carbon compounds is “organic chemistry.”  This is a term that dates back to the early 1800s, when it was thought that the compounds making up living things (organic matter) were fundamentally different from other chemicals and could only be produced by life itself.  As the field evolved, scientists learned that these “organic” compounds are simply carbon compounds, and that life as we know it is totally dependent on chains of carbon atoms. But by that time, the name “organic chemistry” was already entrenched and “carbon chemistry” never caught on.  The very word “carbon” comes from the Latin “carbo” for coal, which automatically signals our belief that fossil fuels have an organic origin.

And so there is a vast amount of  literature supporting the idea that fossil fuels result from the conversion of biological materials.   But, there are niggling little facts that can make you wonder.  For example, it is estimated that carbon is the fourth most common element in the Milky Way, exceeded only by oxygen, helium, and hydrogen.  And methane, CH4, is found on Venus, in moon soils, in the atmospheres of Mars, Jupiter, Saturn, Saturn’s moon Titan, Uranus, and Pluto, and in Halley’s comet.  But where did all this carbon come from?  Carbon found in outer space is not likely to have a biological origin.  After all, you certainly won’t find decayed plant matter on Jupiter.

And it is not just simple carbon molecules that we are finding in these unexpected places.  Propane (three carbon atoms linked together) has been found on Titan, and even more complex “hydrocarbons” (compounds of hydrogen and carbon that are also the chief components of petroleum and natural gas) have been found in meteorites.   In fact, there was a lot of excitement when VERY complex “polycyclic aromatic hydrocarbons” were first discovered in meteorites, because some scientists argued that their very existence was evidence for extraterrestrial life—until it was found that these molecules can also be formed by non-biological processes.

So, if hydrocarbons can be formed by non-biological processes, one can’t help but wonder how much of the earth’s “fossil” fuels were actually produced by non-biological processes too.

Wait, what?  Let me say that again, but in an expanded way:  there are some scientists who postulate that oil and gas on earth were formed by NON-biological processes—that is, they are NOT fossil fuels—and that these same processes are still going on today.  Think about the consequences of that for a second.  Could it mean that our so-called fossil fuels are not being depleted, but are in fact still being made by non-biological processes deep within the earth?   And could it mean that mankind is not really facing an “end of the oil age” apocalypse?

Radical stuff.  But what is the evidence for fossil fuels NOT being made solely from decayed animals and plants?

The theory that petroleum has “abiogenic” origins was very popular in the past, although very few people today have ever heard of it.  First proposed in the 16th century, the idea was revived in the 19th century by none other than Dmitri Mendeleev, the Russian chemist who created the periodic table, and Alexander von Humboldt, a Prussian bio-geologist who gave his name to the “Humboldt Current” and was an early proponent of the theory that Africa and South America were once joined.  Then in 1890, the Russian geologist N. V. Sokoloff hypothesized that all coal has cosmic origins, basing his theory on the existence in meteorites of hydrocarbons with, presumably, non-biologic origins.

So it seems that the abiogenic theory for the formation of fossil fuels first found support among Russian chemists and geologists, who have continued to advocate it in their scientific publications since the 1950’s.  But the idea never really caught on in the western world.  Possibly this is because much of the scientific literature advocating abiogenesis was written in the Russian language.  Or maybe the biologic theory for the origin of fossil fuels just explained the data better—and predictably led to the discovery of oil reserves in sedimentary rocks, just where you would expect to find them if fossil fuels really do have a biologic origin.

Anyway, in 1982 none other than Sir Fred Hoyle (the famous astronomer, author, and physicist who coined the term “Big Bang” for the origin of the universe) said:

"The suggestion that petroleum might have arisen from some transformation of squashed fish or biological detritus is surely the silliest notion to have been entertained by substantial numbers of persons over an extended period of time."


Now Hoyle had marginal (fringe) thoughts about a lot of things—he rejected the Big Bang theory even though he coined the term; he thought that the correlation of flu outbreaks with sunspots indicated that flu viruses came to earth through solar winds.  But it turns out that Sir Fred was not alone in his support for the abiogenic theory of oil formation.


An eminent U.S. scientist by the name of Thomas Gold became enamored with the abiogenesis of petroleum in the 1950’s.  A member of the National Academy of Sciences as well as a Fellow of the Royal Society in London and a teacher and mentor of Carl Sagan at Cornell University, Gold first grabbed hold of the idea when it was found that thriving communities of heat-loving microbes live around hydrothermal vents at the bottom of the ocean.  The discovery that these microbes subsist solely on METHANE and hydrogen led him to postulate that they may live inside rocks deep in the earth’s crust in such huge numbers that their combined mass could be equivalent to that of all surface life.  That is a pretty astounding idea all by itself, but for purposes of the abiogenic theory of petroleum formation, the important thing about these microbes is that they would contaminate any oil that surfaced through fractures in the earth’s crust.


So, Gold argued, the biological markers in petroleum are only contaminants from microbes living “down under”, and their presence cannot be used as proof that the oil has biological origins.  Get it?  He said you can’t use “biomarkers” to prove that oil is made through biological processes.  And indeed, we now know that the oceanic crust is filled with microbial colonies happily subsisting on chemical “food” and employing a vast array of metabolic pathways that don’t require sunlight for energy—these microbes are now known as “chemolithotrophs.” (In the 1980’s, I worked at the Prudhoe Bay oilfield and actually isolated microbes living at 150F in oil that had come directly out of the ground from about 9,000 feet below the surface).


Okay, but how do oil and gas form abiogenically?   In other words, what chemical processes could convert “rock” into oil?  Gold conjectured that carbon-containing rock (from meteorites, for example) was part of the formation of the earth itself, and under the high pressures and temperatures found deep underground, was converted into hydrocarbons like methane and petroleum.  He relied in part on theoretical work done in 1976 by a Ukrainian engineer indicating that this was at least possible.  That work calculated, for example, that the complex hydrocarbons making up crude oil (paraffins, naphthenes, and aromatics) would be stable at temperatures of more than 1,832F under the pressures found 15 miles or more below the surface.  Likewise, 95% of the methane would survive at temperatures of 1,832F or less all the way up to the surface.
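To get a feel for what “15 miles below the surface” means in terms of pressure, here is a quick back-of-the-envelope sketch of my own (not a number from Gold or the 1976 work).  It assumes an average density for the overlying rock of about 3,000 kg per cubic meter, which is only a ballpark figure:

```python
# Rough lithostatic (rock overburden) pressure at the depths mentioned in this post.
# Assumes an average rock density of ~3,000 kg/m^3, which is only a ballpark value.

MILES_TO_METERS = 1609.34
RHO_ROCK = 3000.0   # kg/m^3, assumed average density of the overlying rock
G = 9.81            # m/s^2, gravitational acceleration

def lithostatic_pressure_pa(depth_miles):
    """Pressure (in pascals) exerted by a column of rock depth_miles deep."""
    return RHO_ROCK * G * depth_miles * MILES_TO_METERS

for depth in (15, 60):   # depths that come up in this post
    p = lithostatic_pressure_pa(depth)
    print(f"{depth:>2} miles down: ~{p/1e9:.1f} gigapascals (~{p/101325:,.0f} atmospheres)")
```

Call it very roughly 7,000 atmospheres at 15 miles and nearly 30,000 at 60 miles.  That is the kind of squeeze these calculations (and the experiments below) are talking about.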


And so, assuming that crude oil was in fact formed spontaneously under the high temperatures and pressures of deep earth, how could it survive a migration to the surface without being oxidized to carbon dioxide?  It was theorized that cracks in the earth, such as those resulting from earthquakes, would provide a pathway for methane and other hydrocarbons to move upward, and that this mixture (plus the many metals and other elements that would be present) would be sufficiently stable to avoid oxidation. 


I just know that you are thinking to yourself: “Yeah, so why doesn’t someone try to duplicate this in the laboratory?”  Well, good news!  Somebody HAS.


In 2002 scientists from Russian universities as well as the Gas Resources Corporation in Houston, Texas, published an article concerning the origin of petroleum in the Proceedings of the National Academy of Sciences (a top-ranked journal).  They reported that when marble (a form of limestone), iron oxide (FeO, found in the earth), and water are pressurized in a system equivalent to what you’d find 60 miles below the surface of the earth at a temperature of 2,192F, the result is a full range of hydrocarbons in “distributions characteristic of natural petroleum”—all within one to two hours.  Holy Cow!  From my viewpoint (which is not that of a petroleum scientist), this seems like a major experiment validating the abiogenic theory.


Then in 2009, scientists at the Carnegie Institution showed in the laboratory that when methane (CH4) is subjected to temperatures of 1,300F to 2,240F and pressures equivalent to those found 40-95 miles below the surface, it is converted to ethane (C2H6), propane (C3H8), butane (C4H10), and hydrogen.  Then if the ethane is subjected to the same conditions, it reverts to methane.  According to the researchers, this reversibility implies that the synthesis of these hydrocarbons is a function of thermodynamics “and does not require organic matter.”
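Just to make the bookkeeping concrete, here is the overall atom balance for turning methane into those heavier molecules.  This is my own simplified summary of the mass balance, not the reaction scheme from the Carnegie paper:

\[
\begin{aligned}
2\,\mathrm{CH_4} &\rightarrow \mathrm{C_2H_6} + \mathrm{H_2}\\
3\,\mathrm{CH_4} &\rightarrow \mathrm{C_3H_8} + 2\,\mathrm{H_2}\\
4\,\mathrm{CH_4} &\rightarrow \mathrm{C_4H_{10}} + 3\,\mathrm{H_2}
\end{aligned}
\]

Notice that hydrogen has to show up on the right-hand side for the carbon and hydrogen atoms to balance, which is exactly the extra product the researchers reported.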


So let’s sum up.  Microbial contamination can possibly account for the biomarkers that we find in petroleum, which makes their presence unreliable as evidence for the biological origins of “fossil fuel.”  There is theoretical evidence (based on mathematical calculations) demonstrating that petroleum can be produced non-biologically.  There is also empirical evidence (based on laboratory experiments) demonstrating that real “rocks” can be abiogenically converted into the hydrocarbons found in petroleum.  And finally, given the presence of earthquakes and the very deep sinking (subduction) of the earth’s crust along the margins of tectonic plates, there are potential pathways for abiogenic oil produced deep in the earth to rise toward the surface and fill, or contaminate, reservoirs.


Well, this blog post is already long enough—and we still need to discuss the actual field evidence.  That is, the evidence found in petroleum reservoirs that either supports or contradicts the abiogenic theory.  And I just have to talk about an important chemical reaction, made famous by the Nazis’ synthetic fuel program, that is possibly going on under our feet at this very moment.


So I will stop here, knowing that you will be intrigued enough to tune in next week for the conclusion of the abiogenic oil story.

Thursday, April 11, 2013

THE STEM CELL CONTROVERSY - OR YET ANOTHER WAY TO MAKE THE U.S. LESS COMPETITIVE AND HARM MANKIND

Written 8 January, 2013

Stroke.  Baldness.  Blindness.  Deafness.  Amyotrophic lateral sclerosis.  Myocardial infarction.  Muscular dystrophy.  Diabetes.  Cancer.  Brain injuries.  Learning disorders.  Alzheimer’s disease.  Parkinson’s disease.  Missing teeth.  Wound healing.  Bone marrow disease.  Spinal cord injury.  Osteoarthritis.  Rheumatoid arthritis.  Celiac disease.  Huntington’s disease.  Birth defects.

The above is a long list of conditions that many of us have or may have. For many of these conditions there is no cure.

But what if there were a promising new technology with the potential for treating these conditions?  Anything that could possibly provide such a huge benefit to humanity would be pursued vigorously around the world, right?

Wrong.  Amazingly, a number of countries actually prohibit research that could result in effective treatments for the conditions on this list.  Including the United States.

We are talking about stem cell research.

First, what are stem cells?  They are cells that can either change into other types of cells, or continue propagating as the same cell type.  So, for example, a stem cell from amniotic fluid could possibly be induced to make a new liver.  Pretty simple concept.  Stem cells can potentially be derived from many sources to treat every single condition on the list—stem cells placed into gums could cause new teeth to grow; stem cells placed in the scalp could possibly grow hair, and on and on.  Sounds almost too good to be true.

However, there IS a controversy surrounding the use of stem cells.  And that is because the most useful stem cells are derived from human embryos.  That’s right—if you take a human embryo at the stage where it has between 50 and 150 cells (around 4-5 days old) and pluck out some cells from the inside of this cell mass, those cells are stem cells.  As you might imagine, they have the potential to make all the different tissues in the human body.  After all, that’s what an embryo is programmed to do.

And so stem cells can be categorized as being either non-embryonic stem cells (NESC) or embryonic stem cells (ESC).  NESCs can be derived from many, many types of tissues—skin, amniotic fluid, umbilical cord blood, bone marrow, fat, etc.  Even urine!  A very recent paper (November 2012) demonstrated that cells shed from the kidney can be isolated from as little as 30 ml of urine and induced to form stem cells that have the potential to become many other types of cells and, therefore, tissues.

Scientific experiments on NESCs have been conducted since 1908, when the term “stem cell” was coined.  Research on the use of NESCs for the treatment of disease has been going on since the 1960s, when a bone marrow transplant (in effect, a stem cell transplant) was first used to treat severe combined immunodeficiency, or “SCID”, an extremely rare disease popularized by the “bubble boy”.  In addition to bone marrow transplants, NESCs are being used around the world today in the treatment of cancer and various immunologic conditions.   In fact a survey conducted in 2006 indicated that there were 50,000 uses of NESCs in 71 countries.  However, these treatments are not without hazards—for example, NESCs have the potential to differentiate into the wrong type of tissue—themselves forming tumors.  And even if all goes well at the tissue-generation stage, the treatment may ultimately fail because the recipient’s body sees the NESC as “foreign” and rejects it.

Unlike NESCs, ESCs have NOT been used in any clinical treatments in the United States.   This is because research on human ESCs has been stopped/delayed/handicapped by federal law.  I guess you can see the source of the controversy: deriving stem cells from human embryos kills the embryo.  But wait—human ESCs are derived from embryos that are slated for destruction.  They are essentially already dead, or soon will be.

All human ESCs are derived from embryos that have been created by in vitro fertilization (IVF).  In IVF, eggs are removed from the woman and placed in a Petri dish, where they are flooded with sperm.  A healthy-looking embryo is then plucked off the Petri dish and implanted back into the woman.  This procedure is widely used around the world to treat infertility—in the U.S. there are 126 IVF procedures per 1 million couples each year, resulting in about 60,000 live births.  The numbers are even higher in other countries—899 procedures for each million couples in Iceland and over 1,600 for Israel.   What a wonderful thing!  And, by the way, children born by this procedure are just fine.  The world’s first IVF baby, Louise Brown, was born in 1978, and she gave birth to her first child in 2006.  I guess I’d ask those who oppose IVF if they’d rather that Louise Brown had never been born.  Well, that is a topic for another day.

Back to the controversy surrounding human ESCs, but first a tiny bit of history.  The first human ESC lines were developed in the 1990s at the University of Wisconsin.   Of importance here is that these ESCs were developed with private funds—no federal money was involved.  This is important because United States law prohibited federal funds from being used for research on human embryos, and the ESCs were extracted from human embryos—embryos derived from IVF.  When the paper describing these experiments was published in 1998, much controversy ensued.  Suffice it to say that no federal money was made available to do research for making NEW human ESCs.  In 2001, the Bush administration announced a policy that permitted the use of federal funding for research on EXISTING human ESC lines—the cell lines already made by the University of Wisconsin—but prohibited the use of federal funding to make any new human ESC lines.

Private companies, however, continued doing this kind of research and making products for sale.  It is also interesting to note that in 2004 the state of California, believing that research on human ESCs was vitally important to mankind, passed Proposition 71, which authorized $3 billion in bonds to fund research on human ESCs.  When funding subsequently stalled, in 2006 the Terminator (Gov. Schwarzenegger) authorized $150 million in loans to jump-start the process.  Many other laws during this time were debated, passed, or vetoed.  Congress dithered.  States that were not as forward-thinking as California either restricted or completely banned research on human ESCs.  These include Arkansas, Iowa, Kansas, Louisiana, Nebraska, North Dakota, South Dakota, and Virginia.  I presume it would be fair to prohibit the use of any future therapies based on human ESCs in these states?

Finally, in 2009 the US Food and Drug Administration approved clinical trials using human ESCs—gee, that was nice of them.  So glad they are on our side.

Also in 2009, President Obama removed the restriction on the use of federal money to support the production of new human ESCs.  But in August 2010 a Federal District Court ruled that Obama’s executive order was invalid.   In 2011 the injunction was lifted, and in September of 2012 the U.S. Court of Appeals for the District of Columbia Circuit unanimously ruled  that the federal government can fund research on human ESCs. 

(Caveat:  The above has been garnered from many sources, and the spin may not be quite right as I have seen different dates for certain events.  But one thing is clear:  no new human ESCs have yet been produced using federal funds.)

Just imagine if you were a scientist wanting to do work in this area.  You would never know if you’d get funded, and even if you were, another law might be passed making it illegal for you to do the research.  Right in the middle of your experiments!

Certainly one effect of the ban on research developing/using human ESCs is that it has forced researchers to find ways of getting around the use of human ESCs, such as the urine-derived stem cells mentioned above.  But clearly stem cell research worldwide is not benefiting fully from U.S. genius.  In fact, Finland, Greece, Italy, and the Netherlands allow for the production of human ESCs.  As does China.

Now, I guess one can argue that hey, if Finland or China is doing this work, then why be bothered about it?  Won’t they share their results with the U.S.?  So what’s the big deal?  Actually there are several “big deals.”  First, if U.S. universities are not involved in this research, then U.S. universities will not be making any inventions in this area.  If those inventions don’t exist, U.S. companies will not be getting preferential access to them, and entrepreneurial startup companies will not be formed to commercialize them.  Second, it means that U.S. scientists are not being trained in this field, putting the U.S. behind in cutting-edge science.  And it also follows that students are not learning “at the bench,” so the next generation is not being trained.  Third, it means that the United States’ considerable power in science is not being applied to this area, at least not to its full potential.  And so the world’s collective effort to develop therapies for a long list of diseases is not as effective as it could be.

So tell me, how is this to the benefit of mankind?

And what if researchers in Hong Kong make a huge future breakthrough—would we be denied access to therapies because they were derived from human ESCs?  That should give pause to those of us who are getting long in the tooth.  Maybe someday we will have to fly to China to get treatment.  My bet is that that’s where the cutting edge research will take place.

There is no historic example of the world benefiting from reduced science and technology.

COMPRESSED NATURAL GAS (CNG), U.S. ENERGY NEEDS, AND THE OSU WORLD-RECORD RACING CAR

One of the few bright spots in the United States energy picture is our production of natural gas, as well as our exporting of refined petroleum products.  Did you get that—EXPORTING.  But that is a subject for later; today I’d like to talk about natural gas and CNG.

Swamp gas.  Cow flatulence.  Biogas.  Nasty odors from refuse dumps.  Yup, natural gas is all around us!  Natural gas is methane (a molecule with a single carbon atom), often mixed with other gases such as carbon dioxide and hydrogen sulfide (think rotten eggs).   It is not propane (three carbon atoms), also known as LPG (liquefied petroleum gas).  Nor is it butane (four carbons).  CNG is just compressed natural gas.

Through the application of new technologies developed by oil companies, our usage of home-grown natural gas has increased from 5% to 20%, which means we have decreased our use of imported natural gas from 95% to 80%.  Imports are projected to be down to 40% by 2020.  Electrical utilities are converting from coal to natural gas at such a rapid rate that railroads are suffering, coal being one of the major products they transport.  In 2003 coal generated 51% of our electricity, and by 2011 that figure was down to 43%.  It continues to fall as coal-burning utilities either convert or close down, and the decreasing price of natural gas will continue to drive this switch.  Natural gas currently offers the cheapest way to produce electricity—about 20% cheaper than coal, nuclear, or most renewables.

North America has massive supplies of natural gas.  We are able to produce it for sale at prices ranging from $2.50 to $4.00 per thousand cubic feet, which is very competitive in comparison with Europe ($10) and Asia ($15).   Part of the reason we can sell it so cheaply is that the U.S. is a leader in developing and deploying a new technology called “fracking,” or induced hydraulic fracturing.  This consists of injecting water, sand, and other chemicals under high pressure DEEP underground (as much as 4 miles!) into shale, coal, limestone or sandstone formations.  The pressure fractures the rock, allowing gas trapped at low concentrations to migrate into pipes that then bring it up to the surface.

Since there are huge natural gas reserves around the world, natural gas may very well be the fossil fuel that helps transition us to other energy resources in an oil-depleted world.  In addition to land reserves, natural gas exists at the bottom of the ocean and under permafrost in the form of “hydrates” (combinations of methane plus water in a ratio of 1 methane molecule to approximately 6 water molecules).  These form crystals that look like ice.  Worldwide supplies of these hydrates are estimated to be twice those of all other fossil fuels combined. You can actually ignite hydrate crystals and water will flow out!

I’m sure you’ve seen those signs along the highways advertising gas stations supplying CNG, but they are few and far between—there are 920 CNG stations in the U.S., compared with 120,000 stations dispensing gasoline.  If you look at a map of CNG stations in the United States, it is clear that Oklahoma and California predominate.  South Dakota has none.  The great state of Montana has two.  So be very careful when planning a family cross-country trip, and stay off the side roads!

So who is buying CNG and why?

For vehicle use, CNG is dispensed on a “gasoline gallon equivalent” (gge) basis.  This means that one “gge” has the same energy as one gallon of gasoline.   The price of 1 gge averages about $1.00 less than the price of a gallon of gasoline, though current prices of 1 gge range from $1.00 to around $3.00.  Still a lot less than gasoline.

So what does this mean in terms of driving?  Take a car driven 15,000 miles a year at about 28 miles per gge; that works out to roughly 536 gge of fuel.  If you can buy CNG near the low end of the price range above (say, $2.00 per gge less than a gallon of gasoline), the annual fuel bill for the CNG car comes out about $1,071 lower than for the same car running on gasoline.  Even at the average $1.00-per-gge discount, you would still save over $500 a year.
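If you want to plug in your own numbers, the comparison is simple enough to sketch out.  The figures below are just the illustrative ones from the paragraph above; the $3.50 gasoline price is an assumption, and the CNG price is simply $2.00 less:

```python
# Rough annual fuel-cost comparison between a CNG car and a gasoline car.
# All of the default numbers are illustrative assumptions, not measured data.

def annual_fuel_cost(miles_per_year, miles_per_gallon, price_per_gallon):
    """Annual fuel cost, treating a gge of CNG just like a gallon of gasoline."""
    gallons = miles_per_year / miles_per_gallon
    return gallons * price_per_gallon

miles = 15_000
gasoline_cost = annual_fuel_cost(miles, miles_per_gallon=28, price_per_gallon=3.50)
cng_cost      = annual_fuel_cost(miles, miles_per_gallon=28, price_per_gallon=1.50)

print(f"Gasoline: ${gasoline_cost:,.0f}/yr")
print(f"CNG:      ${cng_cost:,.0f}/yr")
print(f"Savings:  ${gasoline_cost - cng_cost:,.0f}/yr")
```

Swap in your own mileage and local prices and you get your own payback estimate; the savings scale directly with miles driven and with the per-gallon price gap.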

The U.S. has 114,270 CNG vehicles, mostly buses.  The buses here in Stillwater are CNG, and 37 of the cars in the OSU motor pool run on either CNG or a mixture of CNG and gasoline.  FedEx and UPS both have CNG vehicles in their fleets.  With the savings in fuel, it’s not surprising that private industry is experimenting with CNG.  If the long-term savings is “real,” then we can expect to see industries with high fuel costs accelerate the conversion from gasoline or diesel to CNG.  And many other countries in the world are far ahead of the U.S. in using CNG—Iran has more CNG vehicles than any other country, followed by Pakistan, Argentina, Brazil, and then India.  Canada is even converting a locomotive over to CNG.  How much of this is due to politics rather than economics, I can’t tell.

The Honda Civic Natural Gas is apparently the only car sold today that runs on CNG right out of the factory, but you can convert your existing car to CNG for between $8,000 and $16,000 using an EPA-approved conversion kit.  Yes, that’s a lot of money to save just a thousand dollars a year, but would you expect a CNG car to be as cheap as one fueled by gasoline?  CNG is new technology, after all, and we can expect the conversion cost to drop considerably.  In the likely event that gasoline gets even more expensive, and CNG gets even cheaper, the economics of using CNG will improve quite a bit.

70% of US imported oil is used for transportation, and 38% of that is used for heavy-duty trucks, buses, and municipal vehicles.  If just those vehicles were converted to CNG, we would cut imported oil use by roughly a quarter (0.70 × 0.38 ≈ 27%).  Would it be worth it for the federal government to subsidize the installation of CNG stations at truck stops across the country?  Surprisingly, it might pay for itself due to a reduction in military expenditures alone, if greater reliance on CNG lessens the need to police the flow of oil around the world.

Although CNG has a number of advantages in addition to cost, such as reduced greenhouse gases and a higher octane rating than gasoline, there are geologic, mechanical, political, and aesthetic concerns about natural gas that we are just beginning to explore.  For example, does fracking cause earthquakes?  Does it contaminate water aquifers?  Can we reduce the cost of making gasoline-to-CNG conversions?  Does CNG cause wear and tear on a car engine?

As usual, many of these questions are being worked on at universities.  To decrease the need for CNG stations, Colorado State University and Oregon State University are developing a new type of fuel system that will accept uncompressed natural gas from sources such as the gas pipeline at your house—and then compress it inside the car.  Texas A&M University is working on a super-adsorbent lining for CNG tanks that would permit fueling at lower pressures.

And as part of an international design competition sponsored by the Society of Mechanical Engineers, a student club at Oklahoma State University called OKracing has built the world’s only Formula racing car that runs on CNG.  (A Formula car, by definition, is an open-wheeled single-seater.)  OSU’s car, which can accelerate from 0 to 60 mph in four seconds, recently traveled 770 miles in 24 hours, breaking its previous record of 590 miles over the same time period.  The rules of the competition require the entire design team, including drivers, to be made up of college students.  What an awesome learning experience—and a tremendous opportunity for further research into the impact of CNG on engine performance.  The students’ team leader, professor-in-residence Jim Beckstrom, says it best:

"The more Universities we can get to compete, the more innovation they'll drive, and I think it will be a great, great next step for natural gas as a fuel,"

Conversion of state fleets to CNG is gaining momentum.  Oklahoma Governor Mary Fallin and Colorado Governor John Hickenlooper have led a national, bi-partisan initiative that just this last week resulted in the selection of 22 car dealerships through which Chrysler, Ford, GM, and Honda will deliver compressed natural gas cargo, utility, and passenger vehicles for use in their state fleets.

I myself have learned how to “tank up” a CNG car from the OSU fleet.  It took some instruction, but overall it was not unlike filling up with gasoline.  The “whooshes” of gas moving from the pump to the car were kind of exciting (and scary!), but the best part was thinking, “Wow, here I am pumping CNG.”  It was a futuristic thrill.  Honest.

CONCRETE: IMPROVEMENTS TO AN OLD TECHNOLOGY

Quiz time.

Question:  What is the largest commodity in the world?   Answer:  water. 

Question:  What is the second largest commodity in the world?  Answer:  concrete.

Over 6 billion cubic meters of the stuff are poured every year.  That is nearly a cubic meter (about 35 cubic feet) for every person on earth.  Per year.

And did you know that the concrete in the Hoover Dam is still getting stronger, even though it was completed in 1936?

OK, now I know I have your attention.  First, for the layperson like me, we need to deal with the terminology: concrete and cement are not the same thing.  Concrete is what you drive on.  Cement is the grey powdery stuff you pour out of a bag, add gravel and sand to, and mix with water.  After several hours, it hardens to become concrete.  Then you can drive on it.  Pretty simple, right?  Actually, concrete is a widely studied material with billions of dollars devoted to improving its cost, service life, and ease of construction. 

Where does cement come from?

Well, first just a little chemistry.  To really appreciate cement’s ancient history, you’ll have to bear with me. 

It turns out that all cements have common ingredients.  That is, they all have atoms of calcium (Ca), oxygen (O), silicon (Si), iron (Fe) and/or aluminum (Al).  With lots of heat, these individual atoms are combined into four basic molecules, which are mostly calcium in combination with the others (calcium silicates and calcium aluminates).  When you add water, these molecules dissolve and re-form in a “new” combination.  The primary material formed is a calcium silicate hydrate gel, (CaO)·(SiO2)·(H2O), known in the trade as “C-S-H,” plus calcium hydroxide, Ca(OH)2, plus heat.  If you include gravel and sand as inert filler and wait awhile, then you can walk on it.  You have concrete.  If you didn’t add a source of silicon, you’d have “lime plaster,” the most common kind of plaster.  The problem with plaster is that it is not stable in water.  That means that without a source of silicon, your roads would wash away when it rains!
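For the chemically inclined, the main hydration reaction can be written out a bit more formally.  This is the standard textbook simplification for the principal clinker mineral (alite), not something specific to any one cement:

\[
2\,(3\,\mathrm{CaO}\cdot\mathrm{SiO_2}) + 6\,\mathrm{H_2O} \rightarrow 3\,\mathrm{CaO}\cdot 2\,\mathrm{SiO_2}\cdot 3\,\mathrm{H_2O}\ (\text{C-S-H gel}) + 3\,\mathrm{Ca(OH)_2} + \text{heat}
\]

That calcium silicate hydrate gel is the glue that actually binds the sand and gravel together.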

What is historically interesting is how these ingredients were found in nature and mixed together.  CaO, also known as “lime” or “burnt lime,” is made by heating limestone, which is naturally occurring and found almost everywhere.  Limestone is calcium carbonate, or CaCO3.  (Limestone is fun because you can dribble acid on it and it will “fizz” as the limestone releases carbon dioxide, or CO2.  I used to carry acid with me all the time when I went rock hunting, so I could tell when I had limestone.)  Anyway, if limestone is heated to about 1,500F, it releases CO2 gas and leaves solid CaO behind.  When CaO mixes with water, it releases a lot of heat and forms calcium hydroxide, or Ca(OH)2.  (In the 1200s, the English apparently won a sea battle by throwing a bunch of CaO into the air so that it drifted into the eyes of the French sailors on the other side, which then, since eyes have water, caused heat—and, well, the English won.)  Another source of CaCO3, besides limestone, is sea shells.
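Written out, the two reactions in that paragraph are the standard lime-cycle chemistry:

\[
\begin{aligned}
\mathrm{CaCO_3} &\xrightarrow{\ \text{heat}\ } \mathrm{CaO} + \mathrm{CO_2}\\
\mathrm{CaO} + \mathrm{H_2O} &\rightarrow \mathrm{Ca(OH)_2} + \text{heat}
\end{aligned}
\]

The first is why lime kilns give off so much CO2; the second is why quicklime and water (or eyes, apparently) make such a violent combination.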

The source of silicon or aluminum is varied.  The most common source for making cement is from “clays.”  Clays primarily contain various combinations of silicon and oxygen, or aluminum and oxygen, or silicon and aluminum and oxygen, or all of these and magnesium, calcium, sodium, iron, and potassium.  There is such a blizzard of combinations, each one of which is considered a different mineral, that I’m sure introductory geology students pass or fail based on their ability to keep them all straight.  Of course clays occur all around the world, so are very accessible.

So there you have the chemistry to appreciate the very long history of concrete.

Plaster, which is like cement but without the silicon or aluminum, was used as a building material as far back as 8,000 B.C. in the Middle East.  It has been found in pyramids 4,000 years old, and the Aztecs paved streets with it.  Note that these are all places where it almost never rained!

Apparently the first use of cement dates back to 800 B.C. with the Macedonians, but the Romans are the ones who gave cement its name and showed its real potential as a building material.  These groups were the first to realize that in order to keep cement from dissolving in water, you’ve got to add silicon.  This was a critical finding, one that has enabled ancient concrete aqueducts, roads, and cathedrals to remain standing today.   Good examples are the Colosseum, completed in 80 A.D., and the Pantheon’s dome, built in 128 A.D.   Roman cement consisted of lime (CaO) plus pozzolana (a source of silicon and aluminum), a volcanic ash named after the Italian town of Pozzuoli, near Naples, where it was quarried.  Another source of silica and aluminum was powdered ceramic pots, since they were made of clay.  Amazingly, Roman concrete is said to have the same compressive strength as modern concrete.  That is pretty impressive all by itself, but even more so in light of the fact that it took modern engineers over a hundred years of continued testing and improvement to get to that point.  Moreover, some surviving Roman concrete is even more resistant to corrosion from salt than modern concrete.

After the Romans, concrete technology virtually disappeared during the Dark Ages.  It next shows up in Finland in the 15th century, and in the 17th century we have the incredible Canal des Deux Mers (also called the Canal du Midi).  Completed in 1681, it stretches across southern France to the Mediterranean, allowing ships to skip the month-long trip around Spain.  And the canal is still functional!  You can still row its 150-mile length today (how awesome would that be?!).

The use of concrete got a really big boost in the 1700s when British engineer John Smeaton discovered that the best concrete was made of native limestone that had a particular clay content.  He closely studied the work of the great Roman builders and essentially rediscovered concrete technology.   There then followed a series of improvements that culminated in the still-famous “Portland cement.”  First, this cement was NOT made in Portland, Oregon (which is what I used to think).  It was so named because the color of the cement made it look like the highly-sought-after British Portland limestone.  The basic improvement in cement manufacture consisted of grinding the limestone, heating it, mixing it with clay and water, and letting the mixture set.  The resulting material was then fired in a furnace (kiln) and re-ground to a fine powder, which was called “Portland cement” in Joseph Aspdin’s 1824 patent.  Around this time, several methods for making cement were evolving, but Portland cement is still the most commonly used.  As you might imagine, however, modern manufacturing methods are very different from the process described in the Aspdin patent.

So by the 1800s, concrete production was coming on strong.  And in the late 1800s, the greatest inventor of all time even got into the “mix”: Thomas Edison invented concrete houses that you can still see in New Jersey today.  I understand some may leak, but hey, so did homes made by Frank Lloyd Wright!  Although Edison’s concrete business was ultimately not successful, along the way he invented the 150-foot-long rotating kiln, which was almost twice the size of the standard kiln in use at the time and resulted in economies of scale that led to significant cost savings.

And that brings us to the construction of the Hoover Dam, 1931-1936, which was at that time the largest concrete project the world had ever seen.  The dam was not poured as a continuous structure, but was actually composed of huge blocks of concrete that were poured in place.  If the builders had made a continuous pour, it would have taken 125 years to cool, and cracks would have formed.  Amazingly, the concrete is still “curing,” and a core sample taken in 1995 shows that as this process continues, the dam is still gaining in strength.  All told, there is enough concrete in the Hoover Dam to make a two-lane road stretching from San Francisco to New York.  And just to set the record straight, there are no human bodies entombed inside the dam, even though that has been rumored from time to time.

And now for roads.  We all know that a lot of roads are made of concrete, but others are made of asphalt.  Which is best?

That turns out to be an unexpectedly complicated question, which requires us to answer another question first:  What is asphalt?  Asphalt is derived from oil.  It is called the “bottoms,” since it is the approximately 6% that remains after other “lighter” components have been removed from “normal” crude oil through distillation.   Asphalt can also occur naturally in deposits, such as the Canadian tar sands.  When mixed with gravel and rocks, it has been used for making roads since before Roman times.  Today it is sometimes called “asphalt concrete,” not because it contains any cement, but because the engineering definition of concrete is broad enough to include a mixture of asphalt and aggregate, such as rocks and gravel.  And many of us use the name “tarmac” to refer to airplane runways, but tarmac (abbreviated from “tar macadam”) is simply another word for asphalt concrete, or just plain “asphalt.”

Since asphalt is made from oil, the cost of asphalt is absolutely connected to the price of oil.  This is a negative.  However, asphalt makes for less noisy roads when compared to concrete.  This is a positive.  Asphalt also becomes deformed when it gets hot, and heavy traffic can cause troughs or ruts; it is also very sensitive to moisture and subsequent cracking.  On the other hand, concrete requires steel reinforcement, which is more sensitive to salt.  All told, asphalt roads have a life expectancy of 10-15 years, while concrete roads have an expectancy of 30-40 years and sometimes much higher.  Several concrete roads in Texas have been in place for over 70 years.  However, around 95% of all concrete roads in the U.S. have an asphalt “topping,” so even if you think you’re driving on an asphalt road, it may really be concrete underneath.

Another innovation regarding concrete and roads is “rebar”—you know, those steel bars crisscrossing the base of concrete highways.  Or bridges.  Or any other concrete structure that needs to “flex.”  These materials are a great match!  While concrete is weak under tension, steel is not.  So in the areas where the concrete will bend, they add rebar to help keep the inevitable cracks very small.  In addition, the universe has conferred an amazing physical property on steel rebar—it has the same expansion/contraction properties as concrete.  So when a road freezes, for example, there is no separation/cracking around the rebar because it shrinks just as much as the concrete does.  However, when salt seeps down inside concrete, it can cause corrosion when it makes contact with the rebar.  Fortunately the steel in concrete takes much longer to corrode than steel in the open air because the chemicals in the concrete form a protective layer around the rebar.  For all of these reasons, rebar and concrete are intertwined in modern construction.
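Here is a little sketch of what that thermal match means in practice.  The expansion coefficients are typical textbook values (they vary from mix to mix), so treat this as an illustration rather than a design calculation:

```python
# How much do steel rebar and concrete shrink when a slab cools down?
# Expansion coefficients are typical textbook values (per degree C); real materials vary.

ALPHA_STEEL    = 12e-6   # assumed coefficient of thermal expansion for steel
ALPHA_CONCRETE = 10e-6   # assumed coefficient for ordinary concrete

span_m      = 10.0       # a 10-meter stretch of slab, just for illustration
temp_drop_c = 40.0       # cooling from a hot summer pour to a hard winter freeze

shrink_steel_mm    = ALPHA_STEEL * temp_drop_c * span_m * 1000
shrink_concrete_mm = ALPHA_CONCRETE * temp_drop_c * span_m * 1000

print(f"Steel shrinks by:    {shrink_steel_mm:.1f} mm")
print(f"Concrete shrinks by: {shrink_concrete_mm:.1f} mm")
print(f"Mismatch:            {abs(shrink_steel_mm - shrink_concrete_mm):.1f} mm over {span_m:.0f} m")
```

Under these assumptions the mismatch is well under a millimeter over ten meters, small enough for the bond between bar and concrete to absorb, which is presumably why the pairing works so well.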

Now you might think that with a building material that is thousands of years old, the chances of making any new improvements would be slim.  You’d assume, reasonably enough, that there is not much room for innovation.  But you would be wrong.

Here at OSU, improvements in concrete technology are coming fast and furious.  Our concrete inventor here is Dr. Tyler Ley, Associate Professor in Engineering.  He had a thing or two to say about this blog.  All good, of course.

Take corrosion.  As mentioned above, the rebar in concrete can corrode, and losing the strength of rebar in concrete is not a good thing.  Think collapsing bridges and crumbling buildings.  Inventors at OSU have developed a silver-dollar sized corrosion detector made with a low powered circuit that mimics an RFID chip (the same kind of chip Wal-Mart embeds in clothing to detect if a garment has been removed from the store illegally).  If salts enter the concrete, then they will corrode the sensor’s external detection wires, and the sensor will change its response.  A remote detector can be used to read the sensor and thus detect the salts.  All of this can be done without any wires or batteries, and for about $30 per sensor.  OSU has actually embedded these sensors in several bridges in Oklahoma, two of them on I-35 near Stillwater.  Pretty cool, no?  Of course we have filed for a patent on this.

Then there is concrete “curing.”  When a highway is poured, especially under the brutally hot conditions here in the south, the concrete can lose water too rapidly.  This can cause cracking and prevent the concrete from gaining strength.  What we want to do in this situation is slow down the evaporation and control the temperatures in the concrete, which allows more time for the various chemical reactions to take place and results in stronger concrete.  A common practice is to spread wet burlap bags over the concrete to slow the evaporation.  In looking for a better solution to this problem, inventors at OSU have developed a product that is 95% recycled materials, the largest ingredient being recycled paper pulp.  When sprayed on new, wet concrete, this mixture will hold moisture three times longer than conventional burlap bags.  Further, “wetness” dyes can be added to the pulp to indicate the moisture of the covering.  If it dries too quickly, then the material can be re-wetted.   As soon as the concrete has reacted enough, the pulp can simply be blown off and recycled.  This system was tested with great success last summer on a bridge in Woodward, Oklahoma.  We’ve filed for a patent on this technology, too.

Freezing and thawing are hard on concrete, but the problems can be alleviated if small pores are allowed to form within concrete by adding soaps while mixing.  In fact there are standards dictating the size and number of these “void” spaces.  The function of the void is to provide an escape path for water that soaks into the concrete, so that upon freezing, the water can expand into the voids and not crack the concrete.  The problem is that currently these small void spaces can only be detected AFTER the concrete has cured, at which time a core of concrete is removed and examined under a microscope.  If it turns out that the voids are not the right size and number—you guessed it, the concrete has to be broken out and re-poured.  Ouch.  So inventors at OSU have developed a device that measures the size and number of the voids in WET concrete by using the response of the material to air pressure.  If the concrete doesn’t have the correct voids, then more soap can be added and mixed into the concrete while it is still on the truck.  Do you think we’ve filed for a patent on this one?  Absolutely.

A point I continually make is that the universe seems to be infinitely amenable to more and more innovation.  Even in the oldest systems.  And so it is with concrete.

Tuesday, April 9, 2013

The Cholesterol Connection



Although I didn’t mention it by name in my recent 4-part blog series on dietary fat, the so-called “Lipid Hypothesis” is central to any discussion of the relationship between fat and heart disease.  The idea behind the Lipid Hypothesis is a pretty simple one:  eating saturated fat leads to increased cholesterol in the blood, which in turn causes cardiovascular disease.  The Lipid Hypothesis is behind all, or nearly all, dietary recommendations, and almost everything you read about diet in the popular press toes this party line. 

I ended the last blog post with a statement that flies in the face of the Lipid Hypothesis, namely that eating saturated fat does not cause cardiovascular disease.   But what about cholesterol—does eating fat, any fat, cause an increase in cholesterol?   And do elevated cholesterol levels cause heart disease?  Part of the reason that there is so much confusion about a healthy diet in general is that the story is highly nuanced, so an answer to any question concerning nutrition requires more than a flat yes or no.

But in answer to the first question, here is what the data says:  eating some kinds of fat does result in higher cholesterol levels, but the increase is negligible and it depends on the type of fat.  The answer to the second question is what this blog post is all about.

Cutting right to the chase, I can say yes, the data supports the hypothesis that increased cholesterol in blood plasma is correlated with increased cardiovascular disease.  The science here is very good and as a result, there is near unanimity (or “consensus”) of opinion that the connection between cholesterol and heart disease is more than a mere correlation—there is causation as well.  To add fuel to the fire, high cholesterol is also correlated with increased mortality. 

But there are always nonbelievers, in this case “The International Network of Cholesterol Skeptics.”  So, I, of course, went to their website, expecting to find scientific literature dispelling, or at least chipping away at, the “cholesterol to cardiovascular disease” connection.  And what did I find?  A bunch of YouTube videos and press releases ranting about—my last four blogs.  Well, they hadn’t read my blogs, but the points they made were essentially the same.  That is, they were attacking the anti-fat dogma.  They also railed about the negative health effects of statin drugs, but I didn’t find anything on their website about cholesterol and cardiovascular disease per se except the following:
For decades, enormous human and financial resources have been wasted on the cholesterol campaign, more promising research areas have been neglected, producers and manufacturers of animal food all over the world have suffered economically, and millions of healthy people have been frightened and badgered into eating a tedious and flavorless diet or into taking potentially dangerous drugs for the rest of their lives. As the scientific evidence in support of the cholesterol campaign is non-existent, we consider it important to stop it as soon as possible.
   The International Network of Cholesterol Skeptics (THINCS) is a steadily growing group of scientists, physicians, other academicians and science writers from various countries. Members of this group represent different views about the causation of atherosclerosis and cardiovascular disease, some of them are in conflict with others, but this is a normal part of science. What we all oppose is that animal fat and high cholesterol play a role. The aim with this website is to inform our colleagues and the public that this idea is not supported by scientific evidence; in fact, for many years a huge number of scientific studies have directly contradicted it.   

Now, the majority of this diatribe is most certainly false.  Except for the part about the public being badgered into eating flavorless food.  I mean, who would want skim milk in their latte rather than Half & Half?  On the contrary, the scientific evidence for the connection between blood cholesterol levels and cardiovascular disease is overwhelming, especially in people whose cholesterol is very high.  (And although I have no poll data, I suspect the group of folks supporting this Network is not growing very fast.  I count 98 people on their “member” list.)  Certainly the dietary fat/saturated fat connection is starting to fall apart, and I’ll grudgingly concede that they may have a point concerning the negatives associated with a carbohydrate-based diet.  But on balance, the rest of their assertions are not supported by the most recent evidence.

So let’s look at some of the data.  The cholesterol story is very interesting, and some elements are surprising. 

First, there is a genetic disease called “familial hypercholesterolemia” (note the prefix “hyper,” meaning “more”), or “FH” for short.  People with a gene causing this rare condition have cholesterol levels in excess of 350 mg/dL (“normal” is currently defined as less than 200 mg/dL).  Now, geneticists love genes that cause medical conditions, because it enables them to target specific causes and biochemical pathways.  And in this case the most common gene implicated in FH (there are at least three) is one that causes a defect in the liver’s removal of LDL (the “bad” cholesterol) from the blood.  So LDL builds up, the patients develop plaque, and left untreated, a lot of them die prematurely from cardiovascular disease.

In a study published in 2008, researchers surveyed 3,382 British people with FH in their 40’s.  Before starting treatment, 28% of the men and 20% of the women had already had a heart attack, bypass surgery, or angioplasty, which is well above the norm for any population.  In 1992, the test subjects started treatment with a class of drugs called “statins” that interferes with the formation of cholesterol.   In those without any signs of cardiovascular disease prior to treatment, mortality from the disease dropped 48%, and in those who already had cardiovascular disease, mortality dropped 25%.

So let’s pause here and consider the facts.  It is clear that persons with highly-elevated cholesterol have above-normal rates of cardiovascular disease.  It is also clear that when you interfere with cholesterol production in these persons, their rate of cardiovascular disease is greatly diminished.  And in fact, this is all pretty much scientific dogma today—I don’t think any scientists are disputing it.

But what about people without pre-existing cardiovascular disease?  

A large meta-analysis published in 2009 surveyed 68 long-term studies (2.79 million person years) in Europe and North America involving 302,430 people who DID NOT have cardiovascular disease at the start of the study.  The authors looked at the relationship among total cholesterol, LDL cholesterol (“bad” cholesterol), HDL (“good” cholesterol), and triglycerides. What they found was that high total cholesterol, high LDL, and low HDL were significantly associated with cardiovascular disease.  Interestingly, triglycerides (a measure of “fat” in your blood) showed a poor association.  Let me say that another way:  there was no relationship between triglyceride levels and cardiovascular disease.  Further, the incidence of ischemic stroke (the interruption of blood flow to the brain caused, for example, by a clot blocking an artery), showed a modest association with these blood factors, but the incidence of hemorrhagic stroke (rupture of a blood vessel) did not.
(And surprisingly, fasting before a cholesterol test was irrelevant.  In other words, it didn’t matter whether or not the patients had eaten prior to the test.)

I also looked at an earlier 2007 meta-analysis based on 61 studies, mostly in western Europe and North  America but also a few from China and Japan.  These studies involved almost 900,000 adult patients who DID NOT have previous cardiovascular disease.  The meta-analysis showed that total cholesterol, HDL, and LDL were linearly associated with heart disease mortality.  High HDL was “good” (low association with mortality) and high LDL was “bad” (high association with mortality), but the ratio of total cholesterol to HDL was the best predictor.  And as in the 2009 study, there was only a modest relationship between levels of these blood factors and ischemic stroke.

Okay, so these studies give us evidence for correlation when what we really need is evidence for causation.   Clearly, the best evidence for a causal relationship between high cholesterol and death would be a finding of decreased mortality after specifically lowering cholesterol levels—without changing anything else.  Fortunately we have a way of doing just that:  by administering “statins.”

In 1971 statins were discovered in Penicillium sp. by the Japanese biochemist Akira Endo (and they have since been found in other fungi as well).  At that time it was already known that cholesterol is manufactured in the liver through a very complicated series of biochemical steps called the “Mevalonate Pathway,” which is central to the formation of hormones and their precursors as well as such diverse chemicals as rubber (in the rubber tree) and the latex in the sap of milkweeds.  What’s important here is that all biochemistry students had to memorize the Mevalonate Pathway, including Mr. Endo (and me).

Anyway, Akira Endo worked with fungi, and with his knowledge of the Mevalonate Pathway, he hypothesized that fungi produce a chemical that inhibits cholesterol formation in the parasitic organisms that prey on fungi.  The chemical that he found was compactin, now known as the very first “statin.”  In 1987, sixteen years after Endo’s discovery, Merck started marketing lovastatin, the first of this new class of drugs to reach the marketplace.  (What an interesting story this is!   I wish I knew more about it, as it is an excellent example of an obscure area of science leading to one of the most significant breakthroughs in history.) 

So statins have been used commercially since 1987.  What do we know about their efficacy and safety?

A meta-analysis published in 2011 reviewed 76 studies that compared statin use with placebo in 170,255 patients who were, on average, 60 years old.  The studies, which lasted an average of 2.7 years, included both men (74%) and women (26%), some of whom already had coronary heart disease.  Here is what they found: statins reduced all-cause mortality by 10%, deaths from cardiovascular causes by 20%, and deaths from heart attack by 18%.  (There was also a reduction in deaths from stroke, but it was not statistically significant.)  Statin use resulted in a 26% reduction in non-fatal heart attacks and had no impact on cancer, but there was a TINY BUT STATISTICALLY SIGNIFICANT RELATIONSHIP BETWEEN STATIN USE AND THE INCIDENCE OF DIABETES—an incidence of 3.8% among statin users compared to 3.5% among controls.  But for purposes of the cholesterol hypothesis, the most relevant finding was that for every 10% reduction in LDL, there was a 2% reduction in cardiovascular mortality.  The bottom line is that using statins to reduce LDL cholesterol results in fewer deaths from heart disease.  
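
To put that diabetes finding in perspective, here is a quick back-of-the-envelope calculation, a Python sketch of my own using only the percentages reported above, of the absolute difference, the relative difference, and the rough “number needed to harm.”  It is illustrative arithmetic, not anything computed by the meta-analysis itself.

    # Diabetes incidence reported in the 2011 meta-analysis (as fractions)
    statin_rate  = 3.8 / 100   # statin users
    control_rate = 3.5 / 100   # placebo/control

    absolute_increase     = statin_rate - control_rate      # 0.003, i.e. 0.3 percentage points
    relative_increase     = statin_rate / control_rate - 1  # ~0.086, i.e. about 8.6%
    number_needed_to_harm = 1 / absolute_increase            # ~333 people treated per extra case

    print(f"Absolute increase: {absolute_increase:.1%}")
    print(f"Relative increase: {relative_increase:.1%}")
    print(f"Number needed to harm: {number_needed_to_harm:.0f}")

In other words, “tiny but statistically significant” works out to roughly one extra case of diabetes for every three hundred or so people treated, which is the kind of trade-off the mortality benefits have to be weighed against.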

But here is the kicker:  because statins have shown such dramatic effects in cardiovascular disease over the past few decades, physicians have started prescribing statins to people at low risk.  And in Great Britain statins are available over-the-counter—at low dosages to be sure, but nevertheless available without a prescription.  Even if you can get them only in 10 mg pills, what’s to prevent someone from taking 10 pills, or more?

As a result, good scientists have started to wonder about the effect of statins on people at low risk.  Several studies have looked at this issue, but here are two that reach interesting, though OPPOSITE, conclusions.

A 2010 meta-analysis analyzed 11 studies and 65,229 people, ages 51-75, without cardiovascular disease but with LDL levels considered borderline high (an average of 138 mg/dL).  The subjects were divided into two groups—one received statin treatment and the other did not.  After 3.7 years, the statin group’s LDL averaged 94 mg/dL, while the placebo control group averaged 134 mg/dL.  HOWEVER, there was no difference between the two groups with regard to mortality.
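
Just to make those numbers concrete, here is the same sort of back-of-the-envelope Python sketch applied to the 2010 analysis.  The “expected” figure simply applies the 2%-per-10%-LDL rule of thumb from the 2011 meta-analysis described above; that extrapolation is mine, not the 2010 authors’.

    baseline_ldl = 138   # average LDL at the start, mg/dL
    statin_ldl   = 94    # statin group after 3.7 years
    placebo_ldl  = 134   # placebo group after 3.7 years

    statin_reduction  = (baseline_ldl - statin_ldl) / baseline_ldl    # ~32%
    placebo_reduction = (baseline_ldl - placebo_ldl) / baseline_ldl   # ~3%

    # Rule of thumb from the 2011 meta-analysis: ~2% fewer cardiovascular deaths
    # per 10% reduction in LDL.  (My extrapolation, for illustration only.)
    expected_cv_mortality_drop = (statin_reduction * 100 / 10) * 2    # ~6%

    print(f"LDL reduction on statins: {statin_reduction:.0%}")
    print(f"LDL reduction on placebo: {placebo_reduction:.0%}")
    print(f"Expected drop in cardiovascular mortality: ~{expected_cv_mortality_drop:.0f}%")

So the statin group got a very real LDL reduction, and the rule of thumb would have predicted a cardiovascular mortality benefit of roughly six percent, yet no mortality difference showed up.  That is exactly the puzzle I return to below.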

A 2011 meta-analysis analyzed 29 studies, 8 of which were also covered by the 2010 meta-analysis discussed in the previous paragraph.   The 80,711 participants were 62% male and 38% female, with an average age of 58 years, and had a LOW risk of cardiovascular disease—less than a 10% probability.  (Interestingly, 47% of the study participants had high blood pressure, indicating that high blood pressure does not necessarily place people at high risk.) The patients were divided into two groups, with one group receiving a placebo and the other a statin.  Those who were given statins had significantly lower rates of all-cause mortality, non-fatal heart attacks, and non-fatal strokes.  There was NO DIFFERENCE IN RATES OF CANCER OR DIABETES.

And guess what?  That last study didn’t even report on cholesterol.  So why did I even bring it up when cholesterol is central to any discussion of the Lipid Hypothesis?  Because I think one can conclude that as far as many scientists are concerned, the relationship between statin use and cholesterol reduction is so strong and so well-known that it no longer needs to be pointed out.  Since the authors of that meta-analysis didn’t need to re-invent that wheel, they were able to ignore cholesterol altogether and proceed directly to the bottom line—the effect of statins on cardiovascular disease, cancer, and diabetes. 

But did you notice that the 2010 study found that statins had no effect on all-cause mortality, while the 2011 study found that they had a significant effect?  I don’t know how to interpret these results, except to say that is just how science goes.  It may be that the first study did not last long enough for the effects of statin use to become apparent, but it is impossible to tell.  Unfortunately the 2010 study did not report on events other than death (such as non-fatal strokes or heart attacks), and neither analysis reported the extent to which the test subjects had significant plaque build-up.

I should also point out that another 2012 meta-analysis showed that statin use did NOT result in fewer deaths from deep-vein thrombosis, or “DVT” (an obstructive blood clot in a deep vein).  This particular analysis was conducted because of an earlier study indicating that statins can affect blood clotting, but it is not immediately clear why this should be the case.   The classic risk factors for DVT are numerous and varied (obesity, leg trauma, pregnancy, pancreatic cancer, old age, etc.), but elevated cholesterol is not one of them.  Even so, the relationship between statin use and a reduction in DVT mortality came very close to statistical significance (off by only 0.03%).

So what does this all say about the Lipid Hypothesis?  It seems that the first half is false, and the second half is true.  In other words, ingesting saturated fat has no significant effect on cholesterol levels, but cholesterol levels do have a significant effect on cardiovascular disease.  (At least as of 2012, when the most recent meta-analysis was published.)

And, finally, statins overall seem to be efficacious and safe.  They reduce deaths from cardiovascular disease, and they apparently do not cause cancer.  Nevertheless, there are those who continue to rail about the danger of statins in spite of all the evidence to the contrary.  In fact there are some new (2012) books with titles such as How Statin Drugs Really Lower Cholesterol: And Kill You One Cell at a Time; The Truth About Statins:  Risks and Alternatives to Cholesterol-Lowering Drugs; and Poisoned!:  Recovery from Statin “Side Effects”.  I have not read these books, so I don’t know where the authors are getting their information.   However, I have no doubt that if you look hard enough, you can find data to support any position you might want to take.  This is evident from the meta-analyses that I scrutinized as a part of the research for my blog series on dietary fat—I occasionally found individual studies that reached the opposite conclusion from that of the overall analysis.  By focusing on those few outlier studies to the exclusion of the many studies that support the majority conclusion, it would be possible to cite legitimate data in support of a position that runs counter to the weight of the evidence.  But why would you want to do that?  Unless maybe you thought you could make some money as a whistle-blower by spreading biased or unfounded theories, it really makes no sense. 

It seems to me that one should strive to look at ALL studies, whether they support your position or not.  That is why meta-analyses are so valuable.  Unlike most books, they are peer-reviewed, which means that prior to publication, other scientists were asked for their opinions as to the quality of the underlying research.  You can bet that any scientist who thinks a paper is not scientifically sound will not hesitate to make that opinion known.   That’s the beauty of peer review, which is central to all scientific publishing but NOT a part of the publishing process for most books intended for a general audience.

I should also make it clear that I do NOT believe everyone should be taking statins.  Why take even the small risk that they pose unless you are trying to avoid the much greater risk of cardiovascular disease?  And the modest increase in diabetes suggests that if you are taking statins, you should make sure you get tested for that, too.  (And along those lines, I should also point out that among the participants in these statin studies, liver enzymes were elevated in some (a bad thing) and depressed in others (a good thing).  So if you’re taking statins, it would be a good idea to get your liver enzymes tested per your doctor’s orders in case you’re one of the people who has a negative reaction.  Ain’t no free lunch.)

The verdict on cholesterol and cardiovascular disease?  Case closed.  I’m going to enjoy my saturated fats.