Friday, May 31, 2013

CLIMATE CHANGE (Part 1)


Unfortunately, graphs do not seem to post on this blog site, so I included weblinks where the graphs were supposed to be.

I have avoided blogging on climate change, as it has gotten politically hot and laden with "correct" views.  However, I've tried to stick with the "facts" as I see them, with a minimum of snide comments of my own.....

++++++++++++++++++++++++


I have a photo hanging in my office of  four smiling children with looks on their faces that seem to be saying “Dad, aren’t you done yet?”  They are standing next to a marker that established the extent of Jasper National Park’s Athabasca Glacier in 1982.  Since the photo was taken in 1994 and the glacier can be seen about 100 yards behind the marker, it provides concrete evidence that the glacier had receded about 100 yards in only 12 years.  In fact, it has retreated about 0.92 miles since observations started about 125 years ago.  It’s hard to argue that glaciers are not receding when you have this one glacier’s history staring you in the face.

In the early 2000’s I became intrigued by how long the ice remained on Lake Mendota in the beautiful town of Madison, Wisconsin.  I had a bird’s-eye view of the lake from my 12th-floor office for many years and felt privileged to be able to watch its magnificent changes with the seasons.  I started to keep track of how long the lake was frozen and subsequently learned that the North Temperate Lakes program at U.W. Madison had ice duration records for this lake going back to 1855—the longest such record for any lake in the United States.  I took this nearly 150-year history and plotted a graph of how long the ice had remained on the lake each year from 1855 to 2002.  It was obvious that in the mid- to late-1850’s there was ice on the lake for as long as five months each year, which contrasted sharply with some of my own observations in which the ice lasted only two to three months.  (Happily, the data set is continually updated, and ice data is available online through 2009*.  You can plot it yourself!)  In briefly scanning the raw data, it is clear that the trend I observed continues through 2009, with ice duration being not more than 90 days.  What a change—during the lives of my great-grandparents, southern Wisconsin winters lasted up to six months, but during my children’s lives, winter has lasted only about three months.  Sometimes data is so obvious you don’t need a statistical analysis to tell an apple from an orange.
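For anyone who wants to take up the "plot it yourself" invitation, here is a minimal Python sketch.  The year/duration values below are invented placeholders, not the actual record (which is downloadable from the North Temperate Lakes site); the point is just the mechanics of fitting a trend line to ice-duration data:

```python
# Sketch of quantifying an ice-duration trend.  The numbers below are
# made-up placeholders; the real Lake Mendota record (1855-present) is
# available from the North Temperate Lakes LTER program.
import numpy as np

years = np.array([1855, 1875, 1900, 1925, 1950, 1975, 2000, 2009])
ice_days = np.array([140, 130, 118, 112, 105, 100, 90, 85])  # illustrative only

# Fit a straight line: the slope is the change in ice duration per year
slope, intercept = np.polyfit(years, ice_days, 1)
print(f"Trend: {slope:.2f} days of ice per year "
      f"(~{abs(slope) * 100:.0f} fewer days per century)")
```

With real data you would also want to look at the scatter around the line, since ice duration bounces around a lot from year to year.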

These personal and anecdotal observations prompted me to write an article in 2002 titled “Grab your shorts and sandals, it’s getting warmer,” which was published in the now-defunct Wisconsin Outdoor Journal (amazing that my efforts did not reverse the Journal’s apparent decline, don’t you think?).  Writing the article was great fun, as it forced me to read a lot about climate change, starting with the Medieval Warm Period, which began around 1,000 years ago and ended about 1350 with the onset of the Little Ice Age, a bitterly cold period that lasted until about 1850, when the climate started to get warmer—a trend that continues to the present day.  I found that the Lake Mendota ice duration data was entirely consistent with this warming.  But interestingly, the ice duration has not changed much in the past 20 years—it seems to have plateaued at about 60–90 days (more on this later).

Well, it’s always more fun if you have a theoretical model in mind when you watch the world unfold.  And to this day I still ask some contacts in Madison to let me know what the ice is doing on Lake Mendota.  And my brother keeps track of ice duration for Hultman Lake in northern Wisconsin, continuing a tradition begun in the 1930s by friends and family.

So partly due to my interest in Wisconsin ice data, I’ve kept a casual eye on the ebbing and flowing of the climate change debate.  As I see it, the controversy has three main thrusts:

1. Is the planet getting warmer?

2. If so, what is the cause—is it the so-called “greenhouse gases” such as CO2 and methane?

3. What role has Homo sapiens played?

The first question, is the planet getting warmer, seems to be resolving into a consensus of “yes,” at least up until very recently.   The pros and cons of the argument have been interesting to watch, as they illustrate science in action: the good, the bad, and the ugly, complete with pronounced political overtones. 

When discussing climate change or global warming, as it used to be called, it is hard to avoid the “impact of humans” controversy.  However, in this blog I’m going to stick to the first and second questions—is the planet getting warmer and is CO2 involved—and postpone the issue of “anthropogenic forcing” (the impact of humans on climate change through the release of “greenhouse gases”) until a later blog post in this series.

So, what is the scientific evidence that the planet is getting warmer?

Whether we look at glacial retreat in both the northern and southern hemispheres or at the amount of time that ice remains on lakes, there seems to be pretty good observational data to support the thesis that the climate is getting warmer.  I mean, how could you explain it any other way?  Glacier ice and lake ice melt when it gets warm and freeze when it cools, and data tracking this cycle integrate both the intensity of the heat or cold and the length of the warming or cooling period.

And indeed there are many, many worldwide reports of glacial retreat.  On all continents.  In all 19 glaciated regions of the world, including the Rocky Mountains, the Cascade Range, the Himalayas, the Alps, the southern Andes, and Mount Kilimanjaro.  Even the high mountains of Papua New Guinea, north of Australia, have lost most of their ice caps since the 1900’s.  There are also examples of glacial advance, but it seems the “retreats” have it by a long shot.  

Further, the huge ice sheets of Greenland and Antarctica are diminishing, and the glacier data and ice sheet data are concordant (based on information from a May 17, 2013 article in Science magazine).

So this would be pretty much a “case closed” situation in favor of global warming, right?

Well, not exactly—because glacial retreat can be explained not only by warming but also by reduced precipitation.  Without precipitation in the form of rain, snow, and ice, a glacier can’t grow.  But what is the relative importance of reduced precipitation vs. warming?  Some studies indicate that reduced precipitation can account for about half of the retreat of at least SOME glaciers—like those on Mt. Kilimanjaro in Tanzania and the Great Aletsch Glacier in Switzerland.  In fact, some scientists have argued that humidity, cloudiness, air temperature, and precipitation together have a greater impact on tropical glaciers (such as those on Mt. Kilimanjaro, in New Guinea, and in the South American Andes) than does air temperature alone.

As in all areas of science, parsing out the truth is difficult.

In a perfect world we’d have surface temperature data over, say, 25,000 years.  The problem is that the thermometer, as we know it today, was not invented until 1724, and thus has been in use for less than 300 years.  Initially, at least, thermometers were not distributed uniformly around the world, and who knows when they became standardized?  So instrumental data has its limitations.

So how DO scientists get a long-term perspective on global temperature if instruments like thermometers have only been around for a few hundred years?

They use what are called “proxies.”  That is, substitutes for actual temperature data.   Glacial data is a “proxy.”  Ice duration data is a “proxy.”  But they are only good for observational data going back, say, 150 years.  Fortunately, there are other proxies that allow us to gather temperature data going back hundreds of thousands of years.  Such as “layer-counted proxies.”

Say what—layer-counted proxies? Yes, scientists examine actual layers of something as a stand-in for thousands of years of temperature data (think of the poor graduate students doing this work!).  And what kinds of things do they count?  Layers of tree rings.  Or layers of sediments, carefully taken from lakes, oceans, caves, etc., and examined for changes from one to another.  (For example, the type of pollen in a particular sediment layer indicates the type of plant living in that place at that time, and therefore the type of environment, and therefore the type of climate).  And ice cores.  Layer upon layer of ice—because each one of them includes tiny air pockets that are, in effect, “samples” of the air that existed at the time the ice formed (more on this below).  Or even the layers in the core of a cave stalagmite or stalactite.

It appears, however, that the most powerful and widely used temperature proxy is 18O, an isotope of oxygen.  Normal oxygen has 8 protons and 8 neutrons, so it is called oxygen-16 or 16O.  Another form of oxygen has 8 protons and 10 neutrons, so it is called oxygen-18 or 18O.  Both are isotopes of the same element: 18O is “heavy” oxygen and 16O is “light” oxygen.

Here’s the deal:  “heavy” oxygen is actually heavier than “light” oxygen.  So water (H2O) made with heavy oxygen weighs more than water made with light oxygen.  This means that when the ocean warms up, the water molecules with light oxygen evaporate first, leaving the water molecules with heavy oxygen behind and making the ocean water richer in heavy oxygen.  Conversely, when temperatures fall and water vapor condenses into droplets to form precipitation, the water molecules with heavy oxygen condense first, leaving behind the water molecules with light oxygen and thus making the remaining water vapor in the atmosphere richer in light oxygen than it was before.  So, evaporation and condensation are two ways that the ratio of these two isotopes of oxygen can become altered.  Is that cool or what?

Consequently, when water evaporates from the ocean and moves to mountaintops as snow that then gets compacted into a glacier in Alaska or an ice cap in Greenland, it becomes richer in light oxygen as the temperature gets colder—because the heavy oxygen has dropped out along the way.  As a result, snow in the interior of Antarctica, for example, has about 5 percent less 18O than ocean water does.

By looking at the ratio of heavy to light oxygen preserved in individual layers of ice, scientists have been able to build up a record of temperature change over many thousands of years.  More heavy oxygen means it is getting colder.  More light oxygen means it is getting warmer.  Further, by examining the bubbles of trapped air inside each layer, scientists can determine the atmospheric concentration of other gases such as CO2 and methane at the time that particular ice was formed.  Pretty powerful stuff.
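For the technically inclined, the standard bookkeeping here is “delta notation”: a sample’s 18O/16O ratio is compared with the ratio in a reference ocean water (the VSMOW standard) and expressed in parts per thousand.  A minimal sketch, using the accepted VSMOW ratio and an invented sample value chosen to resemble the ~5 percent depletion mentioned above:

```python
# Minimal sketch of delta-18O notation.  VSMOW (Vienna Standard Mean
# Ocean Water) defines the reference 18O/16O ratio (~0.0020052); the
# sample ratio below is invented for illustration.
VSMOW_RATIO = 0.0020052  # 18O/16O in standard mean ocean water

def delta_18O(sample_ratio):
    """Return delta-18O in per mil (parts per thousand) relative to VSMOW."""
    return (sample_ratio / VSMOW_RATIO - 1.0) * 1000.0

print(delta_18O(0.0020052))  # ocean water: 0.0 per mil by definition
print(delta_18O(0.00190))    # depleted, glacial-ice-like: strongly negative
```

A value around -50 per mil corresponds to the roughly 5 percent depletion seen in Antarctic interior snow; more negative means colder.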


Ice cores have been drilled at many sites around the world, and reconciling them is a huge challenge, as you can imagine.  The two most famous ice core sites are in Greenland and Antarctica, and one data set from the Vostok site in Antarctica caused a bit of a shock wave in the climate community.

Here is the graph of a 400,000 year history of temperature, CO2, and dust concentrations:


http://en.wikipedia.org/wiki/File:Vostok_Petit_data.svg

The first graph shows how temperature has fluctuated (using 18O analysis), with the  valleys depicting ice ages and the peaks depicting non-ice ages.  You can see, for example, that about 25,000 years ago, we were in a deep freeze (with Wisconsin buried under a glacier that was one mile thick!).  And you can see that the time from one ice age to another is about 100,000 years.  And finally, you can see that this cyclic pattern predicts that it’s about time for another ice age.

From the second graph, it is clear that CO2 levels have fluctuated along with temperatures—when CO2 is high, it is warm, and when CO2 is low, it is cold.  It looks very much like CO2 and temperature are in “phase.”  Interestingly, the third graph plots “dust” (whatever that is), and it seems to show that as dust increases it gets colder.  Which makes sense, I guess, as increased dust would block out more sunlight.

What makes less sense is that a very careful analysis of CO2 concentrations and temperature (hard to see on this graph) indicates that temperature increased BEFORE the CO2 concentration did—about 800 years before, according to scientists who have studied this data in minute detail.  This data set was thus rather inconvenient for those who believe that increased CO2 CAUSED the temperature increases—how can a cause occur after the effect?  This is a CO2 anomaly that has definitely caused some head scratching.

It has been partially explained very recently (March 2013) as follows:  if CO2 bubbles are mobile (rising upward over time), CO2 that was originally trapped in an older (lower) layer can eventually end up in a younger (higher) layer.  This migration distorts the apparent timing: a younger layer formed after temperatures started increasing would contain more CO2 than it should, while the older layer formed when the temperature actually rose would contain less—creating an artificial lag in the temperature/CO2 record, and even producing layers whose CO2 runs opposite to what you would expect if higher temperatures were caused by increased CO2.

The problem is that when all the ice core data at four different locations is corrected for this effect, there is STILL a 200-year anomaly—that is, the increase in CO2 lags behind the increased temperature by 200 years.  This throws a monkey wrench into the argument that CO2 elevation was what CAUSED temperature increases in the first place.  
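For the curious, here is a toy illustration of how such a lag can be estimated: build two series in which CO2 trails temperature by exactly 800 years, then slide one against the other until the correlation peaks.  Real ice-core analyses are far messier (uneven sampling, gas-age vs. ice-age dating), but the underlying idea is the same:

```python
# Toy lag estimation: CO2 is constructed to trail temperature by 800
# years, and we recover that lag by maximizing correlation over shifts.
import numpy as np

STEP = 100                                        # years per sample
years = np.arange(0, 20000, STEP)
temperature = np.sin(2 * np.pi * years / 10000)   # toy 10,000-year cycle
LAG = 800                                         # CO2 lags temperature (years)
co2 = np.sin(2 * np.pi * (years - LAG) / 10000)

def best_lag(a, b, max_lag_years=2000):
    """Return the shift of b (in years) that best lines it up with a."""
    best_lag_yr, best_corr = 0, -2.0
    for k in range(max_lag_years // STEP):
        a_part = a[:len(a) - k] if k else a
        b_part = b[k:]
        corr = np.corrcoef(a_part, b_part)[0, 1]
        if corr > best_corr:
            best_lag_yr, best_corr = k * STEP, corr
    return best_lag_yr

print(f"Estimated lag: {best_lag(temperature, co2)} years")  # recovers 800
```

With noisy, unevenly dated real data, the uncertainty on such a lag estimate is large—which is part of why the 800-year figure has been debated.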

In fact, some scientists believe that past ice ages were caused by changes in the shape of the earth’s orbit, the tilt of its axis, and the wobble (precession) of its spin, which amazingly occur at predictable intervals.  These Milankovitch cycles, as they are called, correlate very well with ice ages, and they predict that we should now be in a cooling period.
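For reference, the three Milankovitch periods are roughly 100,000 years (orbital eccentricity), 41,000 years (axial tilt), and 23,000 years (precession).  The sketch below simply superimposes three sinusoids at those periods to show how they combine into an irregular-looking curve—it is a toy, not a real insolation calculation:

```python
# Toy superposition of the three Milankovitch periods.  This is NOT a
# real insolation model; it only shows how three regular cycles combine
# into an irregular-looking forcing curve.
import math

PERIODS = {"eccentricity": 100_000, "obliquity": 41_000, "precession": 23_000}

def toy_forcing(years_bp):
    """Sum of three unit-amplitude cosines at the Milankovitch periods."""
    return sum(math.cos(2 * math.pi * years_bp / p) for p in PERIODS.values())

# Sample the combined curve every 10,000 years over the last 400,000:
curve = [toy_forcing(t) for t in range(0, 400_001, 10_000)]
print(f"max={max(curve):.2f}, min={min(curve):.2f}")
```

Because the three periods are incommensurate, the peaks only line up occasionally—one reason the glacial/interglacial pattern in the Vostok record looks quasi-periodic rather than perfectly regular.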

But it is clear from the above graph that we are instead in a WARMING period.  And that long, long ago the earth was even warmer than it is right now.  And that CO2 levels in the distant past were greater than they are today—if you go back millions of years, they were apparently five times higher than they are at present.  However, just this week the CO2 level in the atmosphere exceeded 400 parts per million (on a volume basis) for the first time in at least 400,000 years.  As you can see from the graph, CO2 never rose much above about 300 ppm at any point in the 400,000-year ice core record.

From all the evidence, it seems clear that yes, the earth has been warmer in the distant past than it is now.  But just how good is the evidence (besides glacial retreat) that the planet is continuing to warm TODAY?

A January 2013 article  in Geophysical Research Letters reported on 170 different “proxies” for temperature.  These included not only ice core data, but also data from coral and lake and ocean sediments—40 biological proxies (like tree rings) in all.  So in other words, this paper pretty much examined all of the proxy data to date, from 1730 to the present.  And ALL of the data shows that warming continues today.

So it seems pretty clear to me that the preponderance of the evidence shows the planet is warming. The birds and the bees and the trees and the ice say so.  As do historic temperature readings.  In the face of all this proxy data, I don’t see how anyone can argue that the temperatures are “unchanged.”

I’d say the case is closed—the planet is getting warmer.  It also seems reasonable that a warming planet causes CO2 to de-gas from the earth’s surface, both water and land.  And if you held a gun to my head, I would say that the periodic orbital changes of the Milankovitch cycles have resulted in warming and cooling over time and that this accounts for the fact that CO2 concentrations have lagged behind temperature increases in the past.

I should also point out, however, that although the overall trend seems to be one of warming, cooling periods (cycles?) still occur.  There was the Little Ice Age mentioned above, when the ice on Wisconsin’s Lake Mendota lasted perhaps 6 months and when the Thames River in Great Britain froze over.  The last time the Thames froze was in 1814—almost two hundred years ago—but before that it had frozen 25 times, starting in 1408.  All the ice-skating stories and cheery oil paintings showing canals freezing in the Netherlands are from this period.  And much more recently (from about 1950 until 1990), arctic temperatures were below those observed during the historic 100-year warming trend from 1850 to 1950 (the recent arctic warming didn’t start until about 1990).

So what the future holds remains to be seen.  Clearly the impact of human activity on greenhouse gas emissions is a new variable in earth’s geological history.  And how increased greenhouse gases such as CO2 and methane might interact with, foil, exacerbate, and complicate underlying mechanisms, such as the cooling predicted by the Milankovitch cycle, has yet to be determined.  Not to mention other phenomena that could affect the temperature of the earth, such as volcanic activity, sunspots, and ocean currents. 

But if science teaches anything, it’s that, as Darwin said, “nature guards her secrets well,” so we will probably be surprised.

I should also point out that in the course of researching this blog, it has been interesting to note the “tone” of the articles discussing the CO2/temperature anomaly.   Many scientists, as well as members of the press, appear almost apologetic about the fact that the anomaly even exists.  It is almost as if some scientists WANT there to be a cause-and-effect relationship between CO2 and the historic warming that we have seen in recent years. 

And that is because, as we all know, there is the widely-held theory that human activity has caused the current warming trend.  One should keep in mind, however, that just in the past 2,000 years, there has been:  a cold period from 2,000 years before present (“BP”) until 1,000 years BP, a Medieval Warm Period from 1,000 BP until 650 BP, a Little Ice Age from 650 BP to 150 BP, and another warming trend from 150 BP to the present (all times approximate).  Review the following graph and tell me there is not a cyclic pattern to our past.

 

http://en.wikipedia.org/wiki/File:2000_Year_Temperature_Comparison.png

Of course the spike in 2004 makes one wonder if the up-tick in this historic record will be with us forever, or whether it will, in time, start to fall back.  Will greenhouse gases exacerbate any natural cycling?  Are they now?

I will conclude with an article that just came out on January 15, 2013, written by James Hansen, the climatologist whose 1988 congressional testimony first brought global warming to broad public attention, and who remains its most prominent advocate.  I mention this because if a scientist of such high stature publishes data that seems to be an exception to the position he has taken for most of his professional life, then you can pretty much believe what he says.  In this most recent report he says:  (1) the global surface temperature is 1.0 degrees Fahrenheit warmer than the 1951-1980 base average and (2) ….wait for it, wait for it….. “The 5-year mean global temperature has been flat for a decade….”
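A “5-year mean” is just a running average, which smooths out year-to-year noise such as El Niño/La Niña swings.  The anomaly numbers below are invented for illustration (they are NOT the GISS data), but they show how a noisy series can nonetheless have an essentially flat 5-year mean:

```python
# Running (moving) average over a noisy series of invented temperature
# anomalies.  A trailing window is used here for simplicity; centered
# windows are also common in climate work.
def running_mean(values, window=5):
    """Trailing mean of each consecutive `window`-long slice."""
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

anomalies = [0.40, 0.55, 0.45, 0.60, 0.50, 0.58, 0.48, 0.62, 0.52, 0.55]
smoothed = running_mean(anomalies)
print([round(x, 3) for x in smoothed])  # bounces far less than the raw data
```

The raw values swing by more than 0.2 degrees, while the smoothed series stays within a few hundredths—“flat,” in Hansen’s phrasing.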

Further, and then I promise to stop, a more recent article published in March 2013 reviewed 20 different climate prediction models.  That is, the authors looked at 20 mathematical models and asked how well they predicted the temperature TODAY based on data from 1950.  What they concluded is that the temperatures we are observing today are at the LOW END of what the models predict.  Or, to say it differently, the temperatures we are seeing today are lower than what was forecast.

And if global temperatures have not changed in ten years even though CO2 concentrations in the atmosphere have continued to increase, there is obviously something OTHER than greenhouse gases causing global warming.  Right?

From the current data, it appears that this almost has to be the case, and there are certainly a great many factors influencing climate that could be the culprit.  The problem is that they interact in complex ways that make it difficult to sort out what’s really going on.

This is the way science progresses, folks.  Observation, theory, new observations, new theories.  Eventually the “truth” emerges, but probably long after the public is totally confused.  Kind of sounds like the saturated fat story I addressed in previous blogs.

Oh, and I haven’t said anything yet about sunspots.  Or Atlantic ocean currents.  And there is certainly a lot more that should be said about climate modeling….

Next up:  more climate confusion.

Useful references:

*http://nsidc.org/data/lake_river_ice/freezethaw.html








Wednesday, May 22, 2013

COMPANIES GETTING UNFAIRLY SLIMED BY BAD SCIENCE




Here is something that drives me nuts:  watching the media and courts trash companies for selling a “dangerous” product when there is no scientific evidence to back up their allegations.  I suppose the most egregious example is the decimation of Dow Corning due to the silicone breast implant debacle that began in the mid 1980’s and lasted through the 1990’s.  Now we have the “pink slime” fiasco, which is ongoing.

You may recall that in 1984 Dow Corning lost a suit on the basis that silicone breast implants caused immunological problems for the recipient.  The initial $1.7 million award of damages was just the beginning—lawsuits poured in and money poured out.  In December of 1990, the “Face to Face” TV show hosted by Connie Chung aired a special on “the dangers of silicone breast implants.”  In 1991 Ralph Nader’s Public Citizen Health Research Group sent out warnings that silicone breast implants caused cancer.  By December 1991, 137 individual lawsuits had been filed against Dow Corning.  By December 1993, the number was 12,359.  By December 1994, it was 19,092.

So in May 1995, a beleaguered Dow Corning filed for bankruptcy protection, and in November 1998 it applied for bankruptcy reorganization, which included $3.2 billion in previously agreed-to settlements.  The bankruptcy was settled in 1999, but it took several more years for the litigation over the settlement to wind down.  Dow Corning finally managed to extricate itself from the bankruptcy courts in 2004.

What makes this story so outrageous is that the scientific community knew by 1991 that there was no scientific basis for the claims.  And throughout the remainder of the 1990’s, report after report showed no relationship between silicone implants and systemic disease. Too bad for Dow, though.  The damage to the company had already been done. “Sorry guys!  We just wrecked your company.  Oops!  Oh, and by the way, you don’t get your money back.”

But amazingly, Dow Corning survived, and it continues today as a multinational corporation that provides over 7,000 products and services, most of which are based on silicon:  semi-conductors, solar panels, cookware, sound absorbents, etc., etc.

But it seems that Dow Corning will never be put out of its misery completely insofar as breast implants are concerned: a court in Korea just ruled against Dow in a class action suit that is apparently based on implants that have burst.  Of course, these claims are different in that they don’t seem to involve allegations that the implants cause disease.  I  don’t know what the company’s position is regarding the stability of the implants they sell, but this seems like an aesthetic issue rather than a health issue to me.

And I just have to hit one of my own hot-buttons: science illiteracy in a nation absolutely dependent on high technology and related industries.   Scary.   But since I’ve already talked about pseudo-science in my blog about the alleged link between vaccines and autism, I’ll leave it there.

But there is a more recent public “scandal” that has resulted in another company’s near-bankruptcy:   pink slime. 

Here is the deal:   meat processors routinely remove fatty areas from beef carcasses, and these trimmings often include some residual meat.   Basically, you can do three things with these trimmings:  throw them out, feed them to animals, or feed them to humans.  The slaughter business, like all industries associated with feed and food production, has become incredibly competitive—with very low profit margins.  And when you are slaughtering thousands of animals, an improvement to the process can make the difference between having a profitable or an unprofitable day, even if that improvement only amounts to an increased profit of pennies per carcass.

Now, I don’t know how many of you have actually butchered an animal—maybe you think meat comes from the supermarket, where it was spontaneously produced on the spot.  But, having done it myself many, many times, and having visited slaughterhouses, I can tell you that it is a gross, messy, nasty, unappetizing, and rather sad business.  I mean, no normal person could possibly believe that seeing beautiful animals converted into hunks of meat is a pretty sight.

And no matter who does the butchering, the process is rife with possibilities for contamination.  In particular, E. coli contamination from excrement.  That’s right—if you’ve ever seen a cow, then you know it is smeared with feces.  And when you butcher that animal, it is inevitable that some of the E. coli in the feces will come in contact with the resulting meat.  Think about it—when a knife goes in to start a slit in the hide, bacteria may be dragged in along with the blade.  It is just unavoidable.  And there are some strains of E. coli that are particularly nasty and potentially lethal to humans—the bad actor being the famous strain called O157:H7.  Most of the news articles reporting contaminated beef products, as well as vegetables and fruits, involve this one bug.

So the world has expended a lot of resources in trying to kill E. coli O157:H7.  The process we are concerned with, which has been around since 1990, consists of taking meat trimmings, warming them to between 107F and 109F, and spinning them in a centrifuge, which separates the fat from the meat.  This meat is flash frozen at 15F for 90 seconds and then exposed to extreme pressure that forms it into blocks or tubes.  Apparently the combination of low temperature and high pressure ruptures cell walls, thus killing the bacteria.

By the mid 1990’s, the public was becoming increasingly concerned about E. coli, and so American inventor Eldon Roth, founder of a company called Beef Products Incorporated (BPI), started experimenting.  To date he has at least 68 patents that cover various ways of sterilizing meat.  The patents that are critical to this story seem to have issued in 2002, claiming a sterilization method that involves the injection of gaseous ammonia (NH3) into meat.   Upon contact with water in the meat, NH3 forms ammonium hydroxide, a very “basic” compound (the opposite of acidic) that kills the bacteria.   The USDA’s Food Safety and Inspection Service approved this disinfection procedure in 2001.

The final product is apparently 94% to 97% meat.  Its formal name is “lean finely textured beef,” or LFTB.  I should point out that LFTB is also produced with citric acid as a replacement for ammonia gas (a version sold by Cargill).  Tyson Foods also sells LFTB, apparently buying it from other producers.

As of early March 2012, LFTB was reportedly present in 70% of the ground beef sold in the U.S.  According to USDA restrictions, ground beef that is more than 15% LFTB must be labeled as containing LFTB, but if it contains less than 15%, you can’t tell just by looking at the package whether there is any at all.  LFTB cannot be sold directly to consumers—it is mixed in with ground beef at, for example, grocery stores.

In 2007 there was such high confidence in LFTB disinfected with ammonia gas that the USDA exempted it from routine testing.

So, up until December 30, 2009, the public knew nothing about LFTB and had happily consumed millions of pounds of the stuff.  Then the New York Times published an article* that disparaged Eldon Roth (he wasn’t a “scientist”) and questioned the safety of the product.  It  reported that some of the LFTB produced by Beef Products Inc. had higher levels of bacterial contamination than the USDA allowed and mentioned that people had objected to the smell of ammonia in LFTB, apparently before it was mixed with ground beef.  The article also stated that two 27,000-pound batches of LFTB had been recalled:  “The meat was caught before reaching lunch-rooms [sic] trays.” Snatched from the mouths of babes!

Oops.  Then on January 12, 2010, the New York Times published a correction stating that the two “recalled” batches mentioned above had actually been discovered by Beef Products Inc. before being shipped.   The NY Times also admitted that “[n]o meat produced by Beef Products Inc. has been linked to any illnesses or outbreaks.”  I bet the editors were sorry about having to make that confession.  No story there.

The 2009 article also mentioned that LFTB was also known as “pink slime,” a name apparently coined in 2002 by an employee within the USDA.  Although “pink slime” was used as a pejorative, it really didn’t catch on with the public until March 2012, when ABC News used the term to hype a series of stories about LFTB.

What exactly the public’s concern was/is with LFTB, I can’t really fathom.  LFTB is 97% beef—so what’s the beef?  It’s ALL beef!  Was it because people actually had to think about what can go into hamburger (“trimmings”)?  Did they really believe that hamburger was just ground-up sirloin?  Was it concern about the safety of the ammonia gas used to sterilize the final product?  (There was some ridiculous media coverage involving a cook pouring a bottle of ammonia cleaner onto “trimmings” and then putting the whole mess into a washing machine to make “pink slime.”  As if ammonia is something unusual—our own bodies produce it constantly as a natural byproduct of protein metabolism.)  Was it because the labels on their meat failed to state that two different forms of hamburger had been mixed together?  Was it because of LFTB’s industrial production?  (What’s wrong with that—how do they think it ought to be produced?)

Although the media tried very hard to make the case that LFTB was “contaminated,” that never really stuck, except in the minds of some particularly squeamish consumers.  The ammonia process works.  It is clean.  It is harmless to humans.  It remains USDA approved to this day.

In addition, ammonium hydroxide is used as a direct food ingredient in almost every processed food you buy in the grocery store—baked goods, cheeses, chocolates, and pastries.  Ammonia in other formulations—ammonium sulfate, ammonium alginate—is used in condiments, relishes, soy protein, snack foods, jams, jellies, and beverages.  There are hundreds of food products listed by the World Health Organization that use ammonium hydroxide, including dairy products, fruits, vegetables, cereals, eggs, fish, and BEER (and how many times have you seen a beer-drinker take a good long swig and say, “Gross, this smells like ammonia”?)

In any event, all of the negative publicity resulted in BPI closing down three of its four plants.  Safeway, SUPERVALU, and Food Lion stopped selling LFTB hamburger.  McDonald’s, Burger King, and Taco Bell announced they would discontinue using BPI’s LFTB in their products.  Wendy’s reported they never used LFTB in the first place.  Five Guys stated that they don’t use “ammoniated procedures.”  (I guess they don’t sell products made by animals, since all animals use “ammoniated procedures.”) Further, various school systems announced they would stop serving LFTB hamburger.  I guess outlawing beef is next.

And so it went through 2012, one absurd news report after another, and a public apparently unable to comprehend where their food comes from.

Anyway, BPI may just have the last word.  On September 13, 2012, BPI filed a $1.2 billion lawsuit against ABC News and three reporters—Diane Sawyer, Jim Avila, and David Kerley.  BPI claimed damages as a result of ABC’s reports on “pink slime.”  This will be an interesting case to watch.

In looking at the Dow Corning case as well as the BPI case, it seems to me what they have in common is a sometimes-ignorant press featuring sensationalized stories, an impressionable public, and a generalized antipathy towards industry.

This combination, which is so lethal to industry, may slowly be rectified by the development of laws that offer some protection from scientific fraud. 

A good place to begin is with “expert” witnesses.  Clearly Dow suffered from expert witnesses testifying that in their “expert” opinion, silicone implants caused health problems.  The problem is that not all experts are expert.

In recognition of this problem, the Supreme Court developed guidelines, called the “Daubert standard,” for the admissibility of expert witness testimony in federal courts.  This allows judges to keep out evidence that they deem unqualified.  The Daubert standard was articulated by the Supreme Court as a result of three cases heard by the Court in the 1990’s, and by the year 2000, it was codified in the Federal Rules of Evidence.  After some tinkering with the wording, Rule 702 (Testimony of Experts) now reads as follows:

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if:
(a)         The expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;
(b)         The testimony is based on sufficient facts or data;
(c)         The testimony is the product of reliable principles and methods; and
(d)         The expert has reliably applied the principles and methods to the facts of the case.

Isn’t it amazing that it took until the year 2000 for these rules to be adopted?  I mean, they just represent good scientific procedure.  But even at that, the Daubert standard is followed only in the federal courts and in something over half of the state courts.  Florida has passed a bill adopting the Daubert standard that is awaiting signature by the Governor.  Canada has used the Daubert standard in at least two cases, and in the United Kingdom a recommendation has been made to formulate a test that “builds” on the Daubert standard. 

In all fairness, the courts were not entirely without guidance with regard to the admissibility of expert testimony prior to Daubert.  The standard previously in use was called the Frye test, and the critical difference between the two was that Frye permitted only testimony that was “generally accepted by experts.”  That was okay, but it did not allow for new scientific findings that had not yet been widely accepted.  So Daubert substitutes a “reliability” test based on scientific principles and methods for the Frye “general acceptance” test.  This means that, under Daubert, if expert testimony is deemed reliable, the judge can allow it to come in, even if it is so new that it is not yet generally accepted by the majority of experts in the field.

The Daubert standard is often criticized for requiring that judges be scientifically literate.  I don’t know why this is a problem—I expect that judges SHOULD be more scientifically literate than jury members.  They are judges, after all.  Another criticism is that Daubert may set an evidentiary standard that is too high for many plaintiffs to reach—well, duh, that is the general idea, to get rid of spurious “evidence.”  As an indication that the new standard is working, a 2002 RAND study found that the percentage of proposed scientific testimony that was excluded by the courts actually ROSE after Daubert.  I presume that is because junk science got rejected.  That can’t be a bad thing.

I don’t know whether the Daubert standard will get used in BPI’s suit against ABC, as that case seems to primarily revolve around the issue of libel.  That is, I don’t know whether experts will testify in that case as to the safety of ammoniated LFTB since that does not seem to be the primary issue.  But ABC may call experts to prove that what it said about LFTB was true, and BPI may call experts to prove that it wasn’t.  And in that case, the Daubert standard will apply.

In any event, if the Daubert standard had been part of the Federal Rules of Evidence a little earlier, Dow Corning might still be happily making silicone breast implants since much of the so-called “expert” testimony introduced by the plaintiffs in those cases would almost certainly have been thrown out. 



Tuesday, May 14, 2013

The Golden FLEECE Award and the Golden GOOSE Award




The Golden Fleece Award was created by Senator William Proxmire to publicize federal spending that he considered wasteful.  He gave out these awards from 1975 until 1988, and receiving one was not an honor.


I can still remember the first of the awards in 1975.  I was a graduate student at the time, and they made me mad because they were obviously “anti-science” and seemed to have no purpose other than to ridicule projects that had funny-sounding names.  I remember thinking, “How would Senator Proxmire know whether a science project had merit or not?”


When I decided to write about the Golden Fleece Awards, I had long forgotten the details of these projects, but through the wonders of the internet I found two 1975 Golden Fleece Awards that were given to science projects.  These were:


$84,000 from the National Science Foundation to “find out why people fall in love.”


$500,000 from the National Science Foundation, NASA, and the Office of Naval Research to “determine under what conditions rats, monkeys, and humans bite and clench their jaws.”


Now, I’m sure when the first award was made, there were a lot of guys who said things along the lines of “Hey, man, like, let ME tell you why people fall in love . . . .”  And many people thought that, yeah, this was a huge waste of federal money.  


And in justification for selecting this particular project to receive a Golden Fleece Award, Proxmire said:


“I object to this not only because no one—not even the National Science Foundation—can argue that falling in love is a science; not only because I'm sure that even if they spend $84 million or $84 billion they wouldn't get an answer that anyone would believe.  I'm also against it because I don't want the answer.  I believe that 200 million other Americans want to leave some things in life a mystery, and right on top of the things we don't want to know is why a man falls in love with a woman and vice versa.”


Now, why anyone wants any aspect of the universe to remain a mystery is beyond me.  It is completely antithetical to science.  It is the opposite of wanting to know “why.” And it is a shame that people with this viewpoint can have an influence on science funding. 


Why people fall in love has in fact become a HUGE area of research, spanning chemistry, psychology, sociology, drug development, and mental health.  The cocktail of hormones involved includes testosterone, estrogen, vasopressin, oxytocin, dopamine, serotonin, and adrenalin, not to mention pheromones.  Any study of the chemistry behind human behavior takes all or a subset of these hormones into consideration, in addition to others.  And they are central to the vast enterprise concerned with pharmaceutical developments to address dozens if not hundreds of mental health conditions, now affecting perhaps 25% of the population or more.


Of course, what I’m getting at here is that studies in one area lead to discoveries in others.  That is the nature of basic research, and what new avenues will be opened cannot be predicted at the start of any research program.


The second project, to “determine under what conditions rats, monkeys, and humans bite and clench their jaws,” has an interesting history.


The researcher involved with this project, Ronald R. Hutchinson, sued Proxmire for defamation in 1976, and the case went up to the Supreme Court.  The Court ruled 8 to 1 against Proxmire, and the parties settled out of court, with Proxmire paying Hutchinson $10,000 and all of his court costs.  The issue, in part, was that Proxmire used Hutchinson’s name, and the Supreme Court ruled that a recipient of a National Science Foundation Grant was not a public figure.  Therefore, Hutchinson did not have to show “actual malice” on Proxmire’s part.


Hutchinson’s research interest at the time was the study of emotional behavior, and he used jaw-clenching as an objective measure of aggression.  NASA and the Navy were interested in this work because they wanted to resolve problems that might arise when humans were confined in close quarters for extended periods of time.


Of course, aggression is another human behavior that has become a large area of research, and Hutchinson seems to have ended up as CEO of The Foundation for Behavioral Resources, an organization that strives to enhance the employability of unemployed workers who are receiving public assistance.


I think that the public’s dismay with the federal government’s funding of research projects with “funny-sounding names” is that there is a misunderstanding of what basic research really is.  Basic research is not necessarily focused on solving a societal problem.  It is research directed to understanding natural phenomena, and often there is no practical application for the results—at least not one that is known at the time the work is undertaken.


And that’s why the Golden Fleece Awards were so popular—because the research seemed silly and useless and a huge waste of money.  But since then, the world has learned that “blue sky” research does indeed have unexpected societal value.


Consequently, as a reaction to the Golden Fleece Award and the fact that the media loves to play the game of mocking funny-sounding science projects, another award was established in 2012 called the Golden Goose Award.


The Golden Goose Award is the brainchild of Jim Cooper, a U.S. Representative from Tennessee.  It is given to science projects that started out with “funny” titles and ended up making staggering contributions to mankind.


For example, the Golden Goose Award was given to a 1961 project that proposed to study why jellyfish are fluorescent.  Now, I can imagine that this would have been a prime candidate for a Golden Fleece award.  I mean, who could possibly care why jellyfish shine in the dark?


In the years after the original research was conducted, the protein that was responsible for the fluorescence was identified, and then the gene that produced the protein was discovered.  Another scientist figured out that he could tell if certain genes were switched on or off by connecting fluorescence to genetic activity.  If the genes were active, then the protein was produced and the cells or entire organisms just lit up.  If the genes were silent, then there was no protein and the organism did not fluoresce.  Curiously, he tried this first with genes in the roundworm Caenorhabditis elegans.  What a waste of federal money!  Roundworms—who cares?


It turns out that green fluorescent protein is probably the single most useful indicator of gene activity ever discovered.  Today it is used in nearly every genetic laboratory in the world.


And the researchers who figured out how to use the green fluorescent protein from jellyfish won the Nobel Prize for Chemistry in 2008.  


Another Golden Goose Award was given to a researcher who started out just using a microscope to look at the fine structure of coral.  Funded by NIH.  But, like, who could possibly care what corals look like close up?


It turns out that another scientist learned of the work and noticed that the fine structure of coral looked like bone, and wondered if coral could be used to make an improved artificial bone (because blood vessels and nerves would grow into it).  But the coral, which is made of calcium carbonate, broke down.  So then ANOTHER scientist wondered if a naturally occurring compound called hydroxyapatite could be made to grow into the coral and replace the calcium carbonate—thus preserving coral’s fine structure.  Bingo!  The new hydroxyapatite with a microstructure like coral and bone went on to become widely used for bone grafts.  All in all, this discovery involved four scientists in completely unrelated fields building off each other’s work.  And who knows how hydroxyapatite itself was discovered—but I guarantee it had its roots in basic science too.


A few words should probably be said about how our federal agencies such as the National Science Foundation make awards.  They don’t just line up all the applicants and throw darts.  First, only about 25% of proposals sent to NSF are funded.  Second, each proposal is reviewed by at least four scientists.  The final decision is made by a program manager, with each proposal being rated for both its scientific merit as well as its potential societal impact.  And the scientific review is done anonymously so the scientists on the review panel can really take their gloves off and be as critical as they want—without the potential hazard of being found out by a colleague who made the proposal.


And this really is the key—anonymous peer review.  Curiously, not all countries in the world make funding choices using this method—in some places, decisions are made by the “good old boy” system.  But not in the U.S., at least to the extent that such things can be avoided.


Even though I have been making fun of the Golden Fleece Awards, I do think they have actually made our federal grant system better.  Note that the second criterion—societal impact—is actually new to the process, having been added only in the last decade or so.  Federal agencies have come “under the gun” to show that the expenditure of federal monies will benefit the public.  Now, on the one hand, this forces scientists to come up with a potential use for their research.  This is probably a struggle for many good and fine basic scientists, because in truth, a lot of them probably couldn’t care less.  They are interested in doing science.  But the societal benefit requirement at least forces them to use their imaginations to articulate a use, which could in turn make them more articulate when speaking with the public.  In this age of public accountability, that can’t be a bad thing.


But like almost everything else, scoring research based on “societal impact” has a downside.  What if a scientist just wants to study coral reef micro-structure and can’t think of a way the public might benefit, except aesthetically?  Just because it’s cool?  After all, when this research was first funded several decades ago, nobody knew what would come of it.  In many cases, that is how it happens—the basic science comes first and the use of the resulting knowledge comes many years later.


Why not support science because whatever we learn increases our appreciation of nature?  Demystifying nature should be one of the greatest societal impacts of all.  I guess this was possible back when we had more money to spend on research, but in these days of limited budgets, the public has the right to know how it will benefit.  


Aesthetics just isn't enough.


And speaking of aesthetics, I just have to tell you about a news item I saw this morning.  Totally not useful, but awesome nevertheless, and it shows that “blue sky” research does indeed continue.  Scientists have reported that DNA sequences in the genome of the tulip tree (Liriodendron tulipifera) have remained basically unchanged over millions of years.  Thus, it has been evolutionarily “conserved” since perhaps the age of the dinosaurs.  Now, this has absolutely no practical benefit that I can think of, except, of course, in the unlikely event that some extract derived from the tree has pharmaceutical value—then maybe this research will be “practical” in some way.  But I wouldn’t count on it.  This research was done for curiosity value alone, and it contributes a little bit to our understanding of plant evolution.


Going a small way toward demystifying nature.




  

Wednesday, May 1, 2013

What if petroleum is not a “fossil fuel”? (Part 2)





In the first blog of this two-part series, I made the following points:  (1) There is a lot of carbon in the earth, probably left over from the formation of our planet; (2) theoretical calculations indicate that hydrocarbons can be stably formed from methane and CO2 at geologically-realistic temperatures and pressures; (3) laboratory experiments at realistic temperatures and pressures show that a range of hydrocarbons can be formed from marble—within an HOUR.


Before I go on, though, there is one additional piece of laboratory evidence I just have to share.  There is a very famous chemical reaction called the Fischer–Tropsch process.  At high pressure and high temperatures (300°F to 500°F), it produces methane and higher hydrocarbons from carbon monoxide (CO) and/or carbon dioxide (CO2).  Varying the temperature and the pressure results in a different mix of end products.  Additionally, a number of different metals, including iron, nickel, cobalt, and ruthenium, can be used as catalysts.  (A catalyst is something that speeds up a chemical reaction.)
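For reference, the overall reactions can be written in a standard textbook form (n is the number of carbons in the product; the actual product mix depends on temperature, pressure, and catalyst, as noted above):

```latex
% Fischer–Tropsch synthesis, CO route (n = 1 gives methane):
(2n+1)\,\mathrm{H_2} + n\,\mathrm{CO} \;\longrightarrow\; \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O}

% CO2 route (n = 1 is the Sabatier reaction):
(3n+1)\,\mathrm{H_2} + n\,\mathrm{CO_2} \;\longrightarrow\; \mathrm{C}_n\mathrm{H}_{2n+2} + 2n\,\mathrm{H_2O}
```

Note that both routes consume hydrogen, which is why the question of a deep hydrogen source comes up below.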


Now, the Fischer-Tropsch reaction was discovered and patented in the 1920’s by German scientists.  The Nazis used it in World War II to convert coal into fuel, since they had more coal than petroleum.  For this process to work, they first had to convert the coal into CO2 and/or CO through a process known as gasification.  The Fischer-Tropsch reaction is still used today at several refineries around the world for the purpose of making petroleum products.


So, what does the Fischer-Tropsch reaction have to do with abiogenic oil?  Maybe plenty.  That is because it shows that you can take CO2 and/or CO and make methane and higher hydrocarbons as long as you have hydrogen and suitable catalysts.  But to fit Fischer-Tropsch into our abiogenic oil theory, all the necessary ingredients have to be present deep in the earth’s mantle.  So are they?


First, let’s just assume that quantities of CO2 and/or CO are available to fuel the reaction, as I haven’t seen the presence of these gases disputed anywhere.  Next, how do we get hydrogen?  It is well known that hydrogen is produced when the mineral fayalite (an iron silicate) is exposed to water.  Fayalite is common in the kind of rocks found deep in the earth (igneous rocks), and we can assume the presence of water.  And last, the necessary catalysts are minerals such as olivine (one of the most common minerals by volume) and magnetite (found in almost all igneous and metamorphic rocks).  Put these ingredients in the pressure cooker that is the earth’s mantle and perhaps we now have all the conditions necessary to make hydrocarbons from simple starting materials.
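The fayalite-plus-water reaction is part of what geologists call serpentinization, and one standard way to balance it shows that magnetite (one of the catalyst minerals just mentioned) is itself a product of the same reaction:

```latex
% Oxidation of fayalite by water, yielding magnetite, silica, and
% free hydrogen:
3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}
```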


Results of laboratory experiments indicate that the abiogenic production of hydrocarbons is possible, and we discussed a couple of them in Part 1 of this blog series.  However, questions do remain concerning the ability of naturally-occurring catalysts to produce these hydrocarbons and the stability of the resulting compounds.

So is there “real life” evidence outside the laboratory that oil and gas are derived from non-biological (that is, non-fossil fuel) sources?  First, the two processes—abiogenic and biogenic—are not mutually exclusive.  Both could be going on at the same time, with each contaminating the other.  Microbes living in a petroleum reservoir whose oil was originally abiogenically produced could introduce contaminants that give the oil a biological signature.  And oil can move around—certainly fractures must exist throughout even the deepest regions of the earth’s interior—so whether or not oil is found near visible fractures would not seem to be determinative.  What we really need is some way to tell whether oil was made biogenically or abiogenically just by analyzing it.

Fortunately, there are “signatures” within the oil itself that just might do the trick.  One of them results from the ratio between two specific types (isotopes) of carbon that are present in petroleum:  12C and 13C.  To understand how these two carbon isotopes fit into the big petroleum picture, we need to know how they got in there in the first place.  It starts with plants, which extract carbon from the atmosphere in the form of CO2 (carbon dioxide) and use it to build plant tissues.  It turns out that even though plants can, and do, use either 12C or 13C for this purpose, most of them prefer 12C (with the exception of C4 plants, which don’t discriminate between the two—but were probably NOT associated with oil production anyway*).  Therefore, you would predict that petroleum originating from plants would have a particular carbon signature, reflecting the fact that when the plants were still alive, they used more 12C as a building material than 13C.  That is, you would expect petroleum with plant origins to have a 12C/13C ratio similar to that of plants.  And in general, it does.  So what is all the fuss about then?  In order to identify plant-based oil, all we have to do is determine whether its carbon signature is consistent with the oil having started out as a 12C–loving plant.  And, in fact, all oil reserves found so far DO have a 12C/13C ratio consistent with plant origins.  Case closed . . . right?
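Geochemists typically report this ratio not as a raw number but as a “δ13C” value, in parts per thousand relative to a reference standard.  Here is a minimal sketch of that calculation (the standard ratio is the commonly cited Pee Dee Belemnite value; the sample ratios are made up purely for illustration):

```python
# Sketch: expressing a 13C/12C ratio as a delta-13C value in per mil
# (parts per thousand), the way geochemists usually report it.

R_PDB = 0.0112372  # 13C/12C of the Pee Dee Belemnite reference standard

def delta13C(r_sample: float) -> float:
    """delta-13C in per mil relative to the PDB standard."""
    return (r_sample / R_PDB - 1.0) * 1000.0

# A hypothetical plant-derived oil: depleted in 13C relative to the standard.
print(round(delta13C(0.01091), 1))   # clearly negative (13C-depleted)

# A hypothetical mantle-derived methane: much closer to the standard.
print(round(delta13C(0.01120), 1))   # near zero (not 13C-depleted)
```

Plant-derived material typically lands in the roughly −20‰ to −35‰ range on this scale, which is the quantitative version of the “plant signature” discussed above.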


Wait just a minute—what about limestone, marble, and other rocks high in calcium carbonate that were formed primarily from the skeletons of marine animals?  Since animals can’t extract carbon from CO2 in the air, they get the carbon used in their own bodies from plants, either by eating the plants themselves or by consuming other animals that in turn got their carbon from plants.  As a result, the carbon signature of all life, on earth at least, tends to have a 12C/13C ratio reflecting the preference for 12C exhibited by most plants.  And therefore, so does limestone and marble and any other rock whose carbon originally came from something that was once alive.  For this reason, any oil made from these rocks will also tend to have a plant-based carbon signature—even if it was produced abiogenically from a chunk of marble in a laboratory.  So the bottom line is that we can’t use the 12C/13C ratio in oil to differentiate biogenic petroleum from abiogenic petroleum if the oil originated from limestone or similar rock.  Or at least so it seems to me.


Further, some Fischer–Tropsch reactions show that if you start with a material that is depleted in 13C (such as rock made from plants or animals), you can unexpectedly end up with higher hydrocarbons that have INCREASED amounts of 13C—or, as you would predict, you could end up with higher hydrocarbons that are DEPLETED in 13C.  The amount of 13C found in hydrocarbons can depend on the reaction conditions and the type of catalysts that are used rather than the amount of 13C in the starting material.  This is important because researchers have used the presence of depleted 13C as evidence that the hydrocarbons are “fossil fuels,” or, said more properly, that they came about by “thermogenesis”—the scientific term for the process that converts plants etc. into oil by the “normal” biogenic route.  So I’m not so sure that 13C depletion is the “smoking gun” that proponents of the fossil fuel theory would like to claim that it is.


But what about methane?  Unlike oil, methane is known to have a very wide range of 13C, meaning that some sources of methane are rich in 13C and others are not.   Since almost all terrestrial life is lower in 13C than 12C, one would think that it’s safe to assume that any methane without a lot of 13C has biological origins, especially considering the many bacteria that produce methane from once-living things.  But what about methane that is high in 13C?


A meteorite fell to earth on September 28, 1969 in the vicinity of Murchison, Australia, that is both interesting and gratifying.  Interesting because it contains at least 10,000 different organic compounds, and gratifying because the methane produced from those compounds is richer in 13C than 12C.  This is exactly what we would expect to find in methane from meteorites, since the extraterrestrial carbon that produced that methane almost certainly didn’t have biological origins.  So the Murchison meteorite is a nice check on the 13C theory.  If scientists had instead found higher amounts of 12C than 13C in the Murchison methane, they’d have some explaining to do—and maybe we’d have to throw out 13C as evidence.


Knowing that methane without biological origins is high in 13C, one would posit that carbon deep in the earth (such as in graphite, and any of its degradation products such as methane) would also be richer in 13C.


So, has anyone found sources of methane produced on earth that are rich in 13C?  They have.  It comes bubbling up from a rip in the sea floor on the side of the Atlantis Massif, 2000 feet below the surface.  These rips are called “hydrothermal vents,” and they emit high concentrations of methane and hydrogen. This particular vent is called the “Lost City” vent (lost city = Atlantis, get it?), and in 2008 it was reported that its methane was rich in 13C.   Scientists have concluded it is NOT of biological origin, but that it possibly originated from source rocks that date back to the beginnings of the earth.


An earlier 2002 study of methane from the Kidd Creek formation in Canada also shows an abiogenic signature, as does the Potato Hills gas field in southeastern Oklahoma.  Similar reports come from China.  More recently, a 2010 report concluded that gases emitted from a Socorro Island volcano (in the South Pacific about 400 miles west of central Mexico) were abiogenic.


Another element that might possibly help differentiate biogenic from abiogenic oil is the 3He isotope of helium.  3He occurs in space at concentrations 200 times higher than it does here on earth, and scientists believe it was trapped deep underground when the earth was first formed.  It turns out that some oil fields have high concentrations of 3He, suggesting an abiogenic origin.  A 2009 report on the Songliao Basin in China looked at 3He and 13C methane, and concluded that the source of the oil was from deep crustal “kitchens.”


And so the debate goes.  Given that the non-fossil theory of oil formation is supported by theoretical results, experimental evidence, and field observations, the real question is how common it is.  If the majority of our “fossil” fuels in fact have a non-fossil origin, there might be a whole lot more oil and gas available than we now believe.  And if abiogenic oil is being produced deep in the earth at this very moment, it could in fact be a “sustainable” resource.  That would certainly change our view of the world—and it would have far-reaching political and practical ramifications.


But right now, if you were to advocate an abiogenic theory for oil and gas, you’d be in the minority.  And in general, those who take a minority position have the burden of proof—that is, it is up to the advocates of the abiogenic theory to prove that there really are hydrocarbons with abiogenic origins.  But scientists who support the “normal” fossil fuel theory don’t seem to be too interested in engaging in a debate on this question.  I guess that, since theirs is the dominant theory and they are in the majority, the game is already over as far as they are concerned.


But, speaking as a scientist with no specific expertise in this particular field, I can say that after looking long and hard, I haven’t run across any definitive test that can conclusively tell us whether a petroleum sample has biogenic or abiogenic origins.  The 13C work seems to be the most promising because finding an abundance of 13C hydrocarbons appears to constitute evidence for non-biological origins.  The problem is that the reverse is not true.  In other words, finding a paucity of 13C does not seem to constitute evidence for biological origins.  Even the existence of biosignatures in oil consisting of compounds made by plants does not prove that the oil actually came from plants since bacteria may have the chemical pathways to make many plant compounds and bacterial contamination of oil is ubiquitous.


So what is the critical evidence that supports the fossil fuel theory? I don’t know, but almost everyone seems to believe in it.


Just ask any 6th grader.



*  C4 metabolism is found only in terrestrial plants, but the majority of those who support the fossil fuel theory believe that most of our oil came from marine algae.  Further, C4 metabolism evolved 25 to 30 million years ago, and the majority of oil is found in rock formations that are 65 to 500 million years old.


References: