University of Pennsylvania Law School

ILE INSTITUTE FOR LAW AND ECONOMICS

A Joint Research Center of the Law School, the Wharton School, and the Department of Economics in the School of Arts and Sciences at the University of Pennsylvania

RESEARCH PAPER NO. 10-08 Global Warming Advocacy Science: a Cross Examination

Jason Scott Johnston

UNIVERSITY OF PENNSYLVANIA

May 2010

This paper can be downloaded without charge from the Social Science Research Network Electronic Paper Collection: http://ssrn.com/abstract=1612851

GLOBAL WARMING ADVOCACY SCIENCE: A CROSS EXAMINATION

© Jason Scott Johnston*
Robert G. Fuller, Jr. Professor and Director, Program on Law, Environment and Economy, University of Pennsylvania Law School
Initial Draft: September, 2008. This Version: May, 2010.

TABLE OF CONTENTS

I. Introduction

II. Key Issues in Climate Science: Uncovering the rhetorical strategy of the IPCC and the Climate Change Establishment

A. What do we Really Know about Global Mean Surface Temperatures, and Can we Really be So Sure about the Purported Warming Trend?

1. Questions about the Measurement of Land Surface Temperature Trends

2. Long-Term Temperature Trends: Basic Questions about Global Warming Scientific Methodology Raised by The “Hockey-Stick” Affair

3. The Missing Signature: Ongoing Data Disputes and the Failure to Consistently Find Differential Tropospheric Warming

B. Crucial Shrouded Assumptions and Limitations of Climate Model Projections

1. Concealed Complexity: The Positive Feedbacks Presumed by Climate Models Account Entirely for Very High Projected Future Temperature Increases

2. Obscuring Fundamental Disagreement Across Climate Models in both Explanations of Past Climate and Predictions of Future Climate

C. Distracting Attention from Empirical Studies Tending to Disconfirm Key Predictions of Climate Models and their Preferred Interpretation of Paleoclimatic Evidence

1. The Ambiguous Paleoclimatic Evidence on the Direction of Causality between CO2 and Temperature

2. What Happens to the Water? Recent Findings that Atmospheric Water Vapor and Precipitation are not Responding to a Warming Atmosphere in the Way that Climate Models Predict

3. Climate Feedbacks: Are Clouds and Rain Really Irrelevant?

4. Direct Evidence on Feedback Effects

D. Compared to What? The Failure to Rigorously Test the CO2 Primacy Hypothesis Against Alternative Explanations for Late Twentieth Century Warming

1. Atmospheric Circulation and Climate Change Detection, Attribution and Regional Climate Change Predictions

2. Internal Variability, or Chaotic Climate

3. Solar Variation

E. Glossing over Serious and Deepening Controversies Over Methodological Validity: The Example of Projected Species Loss

F. Exaggerate in the Name of Caution: Sea Level Scare Stories versus the Accumulating Evidence

G. A Theory That Cannot be Disconfirmed: Sea Level Scare Stories and The Continuing Off-Model Private Prognostications of Climate Change Scientist/Advocates

III. Behind the Rhetoric: Apparent Uncertainties and Questions in Climate Science and their Policy Significance

A. Climate Model Projections: It’s all about the feedbacks

B. The Ability of Climate Models to Explain Past Climate

C. The Existence of Significant Alternative Explanations for Twentieth Century Warming

D. Questionable methodology underlying highly publicized projected impacts of global warming

IV. Conclusion: Questioning the Established Science, and Developing a Suitably Skeptical Rather than Faith-based Climate Policy

GLOBAL WARMING ADVOCACY SCIENCE: A CROSS EXAMINATION

By Jason Scott Johnston*
Robert G. Fuller, Jr. Professor and Director, Program on Law, Environment and Economy, University of Pennsylvania Law School

First Draft: September, 2008. This Revision: May, 2010.

Abstract

Legal scholarship has come to accept as true the various pronouncements of the Intergovernmental Panel on Climate Change (IPCC) and other scientists who have been active in the movement for greenhouse gas (ghg) emission reductions to combat global warming. The only criticism that legal scholars have had of the story told by this group of activist scientists – what may be called the climate establishment – is that it is too conservative in not paying enough attention to possible catastrophic harm from potentially very high temperature increases.

This paper departs from such faith in the climate establishment by comparing the picture of climate science presented by the Intergovernmental Panel on Climate Change (IPCC) and other global warming scientist advocates with the peer-edited scientific literature on climate change. A review of the peer-edited literature reveals a systematic tendency of the climate establishment to engage in a variety of stylized rhetorical techniques that seem to oversell what is actually known about climate change while concealing fundamental uncertainties and open questions regarding many of the key processes involved in climate change. Fundamental open questions include not only the size but the direction of feedback effects that are responsible for the bulk of the temperature increase predicted to result from atmospheric greenhouse gas increases: while climate models all presume that such feedback effects are on balance strongly positive, more and more peer-edited scientific papers seem to suggest that feedback effects may be small or even negative. The cross-examination conducted in this paper reveals many additional areas where the peer-edited literature seems to conflict with the picture painted by establishment climate science: the magnitude of 20th century surface temperature increases and their relation to past temperatures; the possibility that inherent variability in the earth’s non-linear climate system, and not increases in CO2, may explain observed late 20th century warming; the ability of climate models to actually explain past temperatures; and, finally, the methodological validity of models used to make highly publicized predictions of global warming impacts such as species loss.

* I am grateful to Cary Coglianese for extensive conversations and comments on an early draft, and to the participants in the September, 2008 Penn Law Faculty Retreat for very helpful discussion about this project. Especially helpful comments from David Henderson, Julia Mahoney, Ross McKitrick, Richard Lindzen, and Roger Pielke, Sr. have allowed me to correct errors in earlier drafts, but it is important to stress that no one except myself has any responsibility for the views expressed herein.


Insofar as establishment climate science has glossed over and minimized such fundamental questions and uncertainties in climate science, it has created widespread misimpressions that have serious consequences for optimal policy design. Such misimpressions uniformly tend to support the case for rapid and costly decarbonization of the American economy, yet they characterize the work of even the most rigorous legal scholars. A more balanced and nuanced view of the existing state of climate science supports much more gradual and easily reversible policies regarding greenhouse gas emission reduction, and also urges a redirection in public funding of climate science away from the continued subsidization of refinements of computer models and toward increased spending on the development of standardized observational datasets against which existing climate models can be tested.


I. Introduction

In recent Congressional hearings, Senator John Kerry of Massachusetts stated that not a single peer-reviewed scientific paper contradicts the “consensus” view that increasing greenhouse gas emissions will lead to a “catastrophic” two degree Celsius increase in global mean temperatures.1 Senator Kerry is hardly alone in this belief. Virtually all environmental law scholars seem to believe that there is now a “scientific consensus” that anthropogenic greenhouse gas (ghg) emissions have caused late twentieth century global warming and that if dramatic steps are not immediately taken to reduce those emissions, then the warming trend will continue, with catastrophic consequences for the world.2 Indeed, even those scholars – such as Cass Sunstein, now serving as head of the bureau responsible for regulatory cost-benefit analysis within the White House -- who are somewhat leery of dramatic and hugely costly reductions in ghg emissions, emphasize the “strong consensus” that the world as a whole will benefit from “significant” steps to reduce ghg emissions.3

As the most authoritative and reliable evidence for the scientific “consensus” about human responsibility for and the likely future consequences of global warming, economists,4 legal scholars, 5 legislators6 and regulators7 – not to mention the more

1 See Kenneth P. Green, Countering Kerry’s Catastrophic Climate Claims, available at www.aei.org/outlook/100096 (December, 2009). 2 See, for example, Amy Sinden, Climate Change and Human Rights 27 J. Land Res. & Envtl. L. 255 (2007)(“The scientific consensus is now clear... permafrost is melting in the arctic. Glaciers around the world are receding...ecosystems across the globe are changing. ...Scientists estimate that human-induced climate change will drive a quarter of the species on earth to extinction by mid-century...we are witnessing ‘the end of Nature’”); Jody Freeman and Andrew Guzman, Climate Change and U.S. Interests 109 Colum. L. Rev. 1531, 1544 (2009) (“We take the current scientific consensus – that global warming is occurring, that its rapid acceleration in the last hundred and fifty years has been caused primarily by human behavior...and that it poses significant risks of substantial harm from a variety of impacts – as a starting point.”) 3 Eric A. Posner and Cass R. Sunstein, Climate Change Justice, University of Chicago Law School, Olin Law and Economics Working Paper No. 354, at 7 (August, 2007). 4 See the numerous examples of how prominent climate change economists have uncritically endorsed the IPCC as authoritative provided by David Henderson, Economists and Climate Science: A Critique, 10 World Econ. 59 (2009). 5 See Daniel A. Farber, Adapting to Climate Change: Who Should Pay? 3-5 (2008); Freeman and Guzman, supra note __ at 1544 nn 53-55; Lisa Heinzerling, Climate Change, Human Health, and the Post-Cautionary Principle, 96 Geo. L. J. 445, 447-448 (2008); Richard J. Lazarus, Super Wicked Problems and Climate Change: Restraining the Present to Liberate the Future, 94 Cornell L. Rev. 1153, 1189-1190 (2009)(“The IPCC 2007 Report has removed any serious doubt from the political arena whether both significant reduction in greenhouse gas emissions from human activities and concrete plans to adapt to climate change are now necessary. The long-awaited, and much-debated, scientific consensus regarding climate change cause and effect is now at hand.”); Christopher H. Schroeder, Global Warming and the Problem of Policy


popular media8 -- typically look to the most recent Assessment Reports of the U.N.’s Intergovernmental Panel on Climate Change (IPCC).

According to Clause 2 of the IPCC's Governing Principles, first approved in 1998, “[t]he role of the IPCC is to assess on a comprehensive, objective, open and transparent basis the scientific, technical and socio-economic information relevant to understanding the scientific basis of human-induced climate change...”9 The claim made by Clause 2 – that the IPCC Assessment Reports are neutral and objective assessments of what is known about "human-induced" climate change and its impacts -- has been reiterated recently by its longstanding Chairman, the energy economist R.K. Pachauri. He has said that the IPCC:

“...mobilizes the best experts from all over the world...The Third Assessment Report (TAR) of the IPCC was released in 2001 through the collective efforts of around 2000 experts from a diverse range of countries and disciplines. All of the IPCC’s reports go through a careful two stage review process by governments and experts and acceptance by the member governments composing the panel.”10

Scientists who have been leaders in the process of producing these Assessment Reports (“AR’s”) argue that they provide a “balanced perspective” on the “state of the art” in climate science,11 with the IPCC acting as a rigorous and “objective assessor” of what is known and unknown in climate science.12

Legal scholars have accepted this characterization, trusting that the IPCC AR’s are the product of an “exhaustive review process” – involving hundreds of outside reviewers and thousands of comments.13 Within mainstream environmental law

Innovation: Lessons from the Early Environmental Movement, 108 Envt’l L. 285, 303 (2009); Sinden, Climate Change and Human Rights, supra note __ at 255 n1; 6 See, for example, the Democratic Policy Committee, Authoritative IPCC Report Confirms Existence, Consequences of Global Climate Change (May 17, 2007), available at http://www.democrats.senate.gov/dpc/dpc-new.cfm?doc_name=fs-110-1-79.

7 In its recent finding that greenhouse gas emissions may reasonably be anticipated to endanger public health, the United States Environmental Protection Agency relied heavily on conclusions in IPCC Assessment Reports. See EPA, Endangerment and Cause or Contribute Findings for Greenhouse Gases Under Section 202(a) of the Clean Air Act; Final Rule, 74 Fed. Reg. 66497, 66510-66512 (December 15, 2009).

8 Perhaps most famously, the movie An Inconvenient Truth dramatizes the IPCC as such an authoritative and indeed prophetic institution. 9 Principles Governing IPCC Work Approved at the 14th Session (Vienna, 1-3 October, 1998), amended at the 21st Session (Vienna, 3 and 6-7 November, 2003) and at the 25th Session (Mauritius, 26-28 April 2006), available at http://www.ipcc.ch/about/princ.pdf.

10 Quoted by David Henderson, Governments and Climate Change Issues: The Case for Rethinking, 8 World Econ. 183, 195 (2007). 11 Richard Wolfson and Stephen H. Schneider, Understanding Climate Science, in Climate Change Policy: A Survey 3, 43 (Stephen H. Schneider, Armin Rosencranz and John O. Niles, eds. 2002). 12 Susan Solomon and Martin Manning, The IPCC Must Maintain its Rigor, 319 Science 1457 (March 14, 2008). 13 See Farber, Adapting to Climate Change: Who Should Pay?, supra note __ at 4.


scholarship, the only concern expressed about the IPCC and “consensus” climate change science is that the IPCC’s process has allowed for too much government influence (especially from China and the U.S.), pressure that has caused the IPCC’s future projections to be too cautious – too hesitant to confidently project truly catastrophic climate change.14 Indeed, in a recent article, Jody Freeman – now serving as a White House Counselor on climate change – and Andrew Guzman argue that such political pressures have generated conservative IPCC Assessment Reports that “ignore” positive feedback effects such as water vapor, downplay the risk of abrupt and irreversible change in the climate system, and fly in the face of the “empirical record to date” which “shows that every surprise about climate change thus far has been in the ‘wrong direction.’”15

Thus politicians, environmental law scholars and policymakers have clearly come to have extreme confidence in the opinion of a group of scientists – many of whom play a leading role on the IPCC – who hold that the late twentieth century warming trend in average global surface temperature was caused by the buildup of anthropogenic ghg’s, and that if ghg emissions are not reduced soon, then the 21st century may witness truly catastrophic changes in the earth’s climate. In the legal and the policy literature on global warming, this view – which may be called the opinion of the climate establishment – is taken as a fixed, unalterable truth. It is virtually impossible to find anywhere in the legal or the policy literature on global warming anything like a sustained discussion of the actual state of the scientific literature on ghg emissions and climate change. Instead, legal and policy scholars simply defer to a very general statement of the climate establishment’s opinion (except when it seems too conservative), generally failing even to mention work questioning the establishment climate story, unless to dismiss it with the ad hominem argument that such work is the product of untrustworthy, industry-funded “skeptics” and “deniers.”16

Given, however, that the most significant ghg emission reduction policies are intended to completely alter the basic fuel sources upon which industrial economies and societies are based, with the costs uncertain but potentially in the many trillions of dollars, one would suppose that before such policies are undertaken, it would be worthwhile to verify that the climate establishment’s view really does reflect an unbiased and objective assessment of the current state of climate science. Insofar as the established view is that promulgated by IPCC AR’s, such verification means comparing what the IPCC has to say about climate science with what one finds in the peer-reviewed climate science literature, and then questioning apparent inconsistencies between what is said in the literature and what is said by the IPCC and other carriers of the establishment climate story. This is essentially to undertake precisely the kind of cross-examination to which American attorneys routinely subject hostile expert witnesses.

14 See Farber, supra note __ at 4 nn. 13-14; Freeman and Guzman, supra note __ at 1549-1550. 15 Freeman and Guzman, supra note __ at 1548. 16 See, e.g., Matthew F. Pawa, Global Warming: The Ultimate Public Nuisance, 39 ELR 10230, 10234-10235 (2009)(discussing a smattering of such work under the heading “Deception and Denial of global warming by industry”).


This paper constitutes such a cross-examination. As anyone who has served as an expert witness in American litigation can attest, even though an opposing attorney may not have the expert’s scientific training, a well prepared and highly motivated trial attorney who has learned something about the technical literature can ask very tough questions, questions that force the expert to clarify the basis for his or her opinion, to explain her interpretation of the literature, and to account for any apparently conflicting literature that is not discussed in the expert report. My strategy in this paper is to adopt the approach that would be taken by a non-scientist attorney deposing global warming scientists serving as experts for the position that anthropogenic ghg emissions have caused recent global warming and must be halted if serious and seriously harmful future warming is to be prevented – what I have called above the established climate story. The established story has emerged not only from IPCC AR’s themselves, but from other work intended for general public consumption produced by scientists who are closely affiliated with and leaders in the IPCC process. Hence the cross-examination presented below compares what is said in IPCC publications and other similar work by leading climate establishment scientists with what is found in the peer-edited climate science literature.

The point of this exercise in cross-examination is twofold. The first is just to run a relatively simple check, as it were, on the claimed objectivity and unbiasedness of the IPCC AR’s and other work underlying the established climate story. Do IPCC AR’s, summaries and other work by leading climate establishment scientists seem to frankly and openly acknowledge key assumptions, unknowns and uncertainties underlying the establishment projections, or does work supporting the established story tend instead to ignore, hide, minimize and downplay such key assumptions, uncertainties and unknowns? To use legal terms, is the work by the IPCC and establishment story lead scientists a legal brief – intended to persuade – or a legal memo – intended to objectively assess both sides? The second and related objective of this Article is to use the cross examination to identify what seem to be the key, policy-relevant areas of remaining uncertainty in climate science, and to then at least begin to sketch the concrete implications of such remaining uncertainty for the design of legal rules and institutions adopted to respond to perceived climate change risks.

Far from turning up empty, my cross examination has (initially, to my surprise) revealed that on virtually every major issue in climate change science, the IPCC AR’s and other summarizing work by leading climate establishment scientists have adopted various rhetorical strategies that seem to systematically conceal or minimize what appear to be fundamental scientific uncertainties or even disagreements. The bulk of this paper proceeds by cataloguing, and illustrating with concrete climate science examples, the various rhetorical techniques employed by the IPCC and other climate change scientist/advocates in an attempt to bolster their position, and to minimize or ignore conflicting scientific evidence.

From the cross examination that constitutes Part II of this Article, it appears clear that the establishment story has presented climate science so as to support two prior beliefs: concerning the climate system, that anthropogenic ghg emissions have been responsible for significant late 20th century warming and that one can confidently predict


even more serious future warming from continued emissions; and, concerning policy, that the U.S. and other developed countries should rapidly reduce their ghg emissions and decarbonize their economies. There are, to be sure, many chapters in the IPCC Assessment Reports whose authors have chosen to quite fully disclose both what is known as well as what is unknown, and subject to fundamental uncertainty, in their particular field of climate science. Still, the climate establishment story -- comprising all of the IPCC Assessment Reports, plus the IPCC’s “Policymaker Summaries,” plus the freelance advocacy efforts of activist climate scientists (exemplified by James Hansen of NASA) – seems overall to comprise an effort to marshal evidence in favor of a predetermined policy preference, rather than to objectively assess both what is known and unknown about climatic variation and its causes.

To conclude that establishment global warming science is not an objective or unbiased assessment, but instead is an attempt to support the prior belief that human ghg emissions are causing global warming and that such emissions must be dramatically and quickly reduced, is not to say that the established view will ultimately be disconfirmed. The problem is not that global warming advocacy science is wrong – something that in any event I lack the expertise to determine – but that by overselling models and evidence, global warming advocacy science has created some very serious misimpressions among many people about what is known and understood about global climate, and has directed media and policy attention to greenhouse gas emissions as the sole cause of climate change. For example, in their forthcoming article,17 Jody Freeman and Andrew Guzman -- the first of whom is now managing America’s climate negotiations as the President’s Climate Counselor -- argue that climate models ignore many important positive feedback effects. As I discuss in more detail below, however, it is only because they presume that there are so many positive feedback effects that climate models get their large projected temperature increases – indeed without such positive feedbacks, climate models predict that a doubling of CO2 relative to the standard pre-industrial baseline would lead to only about a 1 degree centigrade increase in global temperature. And when one looks closely at the scientific literature, it turns out that some of the most crucial (and actually testable) predictions or assumptions underlying predictions of dangerous climate change are not in fact being confirmed by observations. Newspapers are full of stories about melting ice sheets; what they neglect to report is that recent work shows that changes in clouds and precipitation – crucial to the predictions of climate models – are not what those climate models have assumed.18
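For concreteness, the roughly one degree figure can be recovered from a standard back-of-the-envelope calculation. The numbers below are my own illustration, not drawn from the sources discussed in this Article; they use the commonly cited simplified expression for CO2 radiative forcing and a no-feedback (Planck) response of roughly 3.2 W m^-2 per degree:

```latex
% Illustrative no-feedback warming from a doubling of CO2 (assumed textbook values)
\[
\Delta F_{2\times \mathrm{CO_2}} \approx 5.35\,\ln\!\frac{2C_0}{C_0}\ \mathrm{W\,m^{-2}} \approx 3.7\ \mathrm{W\,m^{-2}},
\qquad
\Delta T_{\mathrm{no\ feedback}} \approx \frac{\Delta F_{2\times \mathrm{CO_2}}}{\lambda_0}
\approx \frac{3.7\ \mathrm{W\,m^{-2}}}{3.2\ \mathrm{W\,m^{-2}\,K^{-1}}} \approx 1.2\ \mathrm{K}.
\]
```

Everything in the models' projections beyond this baseline comes from the feedbacks they presume, which is precisely the point at issue in the text.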

The reader should be warned that the cross-examination presented in Part II does entail actually discussing the substance of climate science. It is of course possible that despite efforts to ensure accuracy, there remain mistakes in my interpretation of the climate science literature, so that some of the questions I believe to be raised by that literature are actually not well put. It is true that the possibility of such error is a primary justification for what seems to be the dominant position in legal scholarship, that legal scholars and policymakers more generally should simply “ask the scientists,” and then

17 Freeman and Guzman, Seawalls are Not Enough, supra note __ at __. 18 Such evidence is explained in a very concise and accessible way by Roy Spencer, Climate Confusion (2008).


accept on faith whatever conclusions are presented by such scientists. In the climate science area, such a position – which becomes more specifically the recommendation that policymakers simply follow the IPCC – has been defended on the ground that the alternative, of “juxtaposing” IPCC conclusions against a “few contrarian opinions” risks “muddying” the “public and political process...because few understand the very different relative credibility of these various claimants to state-of-the-art knowledge.”19 This position uses the inability of laypeople to fully assess the “relative credibility” of various expert claims to accord extreme deference to the conclusions of institutions for the assessment of regulatory science such as the IPCC.

The present article responds to this “relative credibility” problem by drawing only from work published in top, peer-edited journals in the climate-related sciences, and/or more popular work by scientists at the very best universities who routinely publish in such peer-edited journals. Virtually all of the climate scientists whose work I discuss would be characterized as part of the climate science mainstream and of unimpeachable credibility, rather than “deniers” of questionable qualifications. When one looks at this decidedly mainstream literature, one discovers a number of facts and findings that seem not to be well understood and which are rarely if ever even mentioned in the climate change law and policy literature:

- There seem to be significant problems with the measurement of global surface temperatures over both the relatively short run – late 20th century – and longer run – past millennium – problems that systematically tend to cause an overestimation of late 20th century temperature increases relative to the past;

- Continuing scientific dispute exists over whether observations are confirming or disconfirming key short-run predictions of climate models – such as an increase in tropospheric water vapor and an increase in tropical tropospheric temperatures relative to tropical surface temperatures;

- Climate model projections of increases of global average surface temperature (due to a doubling of atmospheric CO2) above about 1 degree centigrade arise only because of positive feedback effects presumed by climate models;

- Yet there is evidence that both particular feedbacks -- such as that from clouds – and feedbacks in total may be negative, not positive;

- Confidence in climate models based on their ability to causally relate 20th century temperature trends to trends in CO2 may well be misplaced, because such models do not agree on the sensitivity of global climate to increases in CO2 and are able to explain 20th century temperature trends only by making arbitrary and widely varying assumptions about the net cooling impact of atmospheric aerosols;

- Similar reason for questioning climate models is provided by continuing scientific dispute over whether late 20th century warming may have been simply a natural climate cycle, or caused by solar variation, rather than by anthropogenic increases in atmospheric CO2;

- The scientific ability to predict what are perhaps the most widely publicized adverse impacts of global warming – sea level rise and species loss – is much

19 Wolfson and Schneider, Understanding Climate Science, supra note __ at 43-44.

less than generally perceived, and in the case of species loss, predictions are based on a methodology that a large number of biologists have severely criticized as invalid and as almost certain to lead to an overestimate of species loss due to global warming;

- Finally, many of the ongoing disputes in climate science boil down to disputes over the relative validity and reliability of different observational datasets, suggesting that the very new field of climate science does not yet have standardized observational datasets that would allow for definitive testing of theories and models against observations.

The cross examination conducted below not only uncovers these findings in the climate science literature, but shows that they are almost completely ignored by the IPCC in its communications to policymakers and the media (Summaries for Policymakers and Technical Summaries that accompany its full Assessment Reports) and by other members of the climate establishment in their popularly accessible reviews and assessments of the state of climate science. Rather than laying out contrasting positions that one finds in the literature, the IPCC and other leading establishment climate scientists either simply ignore or tersely dismiss scientific work that disputes or casts doubt upon the assumptions underlying or projections made by climate models and establishment climate science more generally. My cross examination clearly reveals a rhetoric of persuasion, of advocacy that prevails throughout establishment climate science.

Perhaps the most straightforward justification for this rhetorical stance is that the IPCC’s job is to assess the science – to adjudicate whatever disputes or disagreements may exist in the literature -- and to then make a decision as to which side is most likely correct. Having made such a decision about which is the “best” science currently available, and in particular decided that there is “unequivocal” evidence that anthropogenic ghg emissions have caused recent global warming, the IPCC’s job is then to present that science in as persuasive a way as possible. Especially when potentially planet-saving policy responses are on the line, to present the science in a way that instead highlights questions and uncertainties would be to encourage doubt and potentially harmful delay in adopting policies to reduce ghg emissions.

The problem with this justification is that the optimal policy to adopt with respect to reductions in anthropogenic ghg emissions itself depends upon a fine, rather than coarse-grained understanding of the state of scientific understanding. The more certain and immediate is the threatened harm from continuing increases in anthropogenic ghg emissions, the more will the cost-benefit policy calculus tip in favor of very expensive, immediate and irreversible policy commitments to ghg emission reduction (and also, although often overlooked, to adaptation investments). The more questionable is the magnitude, timing and even existence of harm from continuing increases in human ghg emissions, the greater the case for policies toward ghg emission reduction that are less costly in the short run and more easily reversible in the long run. If policymakers are to craft the correct policy, then they must understand the nature of the threat posed. The rhetorical strategy that has come to dominate establishment climate science is not designed to promote such fine-grained understanding; it is designed instead to convince the public of what some, but by no means all, climate scientists have come to believe by conveying a very scary and also very simple picture of the state of the science. Such coarse understanding leads to a very coarse policy prescription: “Do something, anything,


now!” Such a policy prescription justifies virtually any policy, however costly or inefficient, that can plausibly be argued to lead to ghg emission reductions at some point in the future. The cross examination undertaken in this Article clearly reveals important questions, uncertainties and disputes in climate science. It is hard to imagine that any policymaker who becomes aware of these and of the overall complexity of climate science could rationally advocate the “Do something, anything, now!” policy prescription so easily drawn from the alternative picture painted by the climate change establishment.

The bulk of this article, consisting of Part II, undertakes the substantive cross-examination of establishment climate science. In Part III, I summarize what the Part II cross examination has revealed about the state of knowledge regarding the key, policy relevant issues to be answered by climate science. Then I provide a few examples of prominent legal scholarship that constructs policy arguments on the basis of a very partial or incomplete understanding of the climate science literature. Finally, I conclude by advocating a redirection in public funding for climate science.

II. Key Issues in Climate Science: Uncovering the rhetorical strategy of the IPCC and the Climate Change Establishment

There are now a large number of books that give readers a basic introduction to climate change science, and I have neither the expertise nor the ambition to provide such an introduction here.20 What is important for the present purpose is that the reader recall that the basic story told by climate change advocates is that an increase in the atmospheric (tropospheric) concentration of human-generated greenhouse gases – primarily carbon dioxide, CO221 – has already begun causing an increase in average global surface temperature (as well as in the temperature of higher levels of the troposphere). This story – which I call the “CO2 primacy” story – predicts also that these temperature increases will continue and even accelerate if the global emission of greenhouse gases (and atmospheric concentration of such gases) continues to increase. These predictions are based primarily on computer models of the earth’s coupled oceanic and atmospheric circulation system (such models are therefore called Coupled Ocean Atmosphere General Circulation Models, or COAGCM’s or just GCM’s for short). Such model predictions are often said to be supported by paleoclimatic data on what the earth’s

20 A very clear and entirely non-mathematical explanation of the key mechanisms of both weather and climate from the point of view of a meteorologist who believes that weather mechanisms are of fundamental significance in predicting climate change is provided by Roy W. Spencer, Climate Confusion 45-84 (2008). For a short and relatively non-technical introduction to some of the key physical mechanisms of climate, see John E. Frederick, Principles of Atmospheric Science (2008), and for a more comprehensive treatment see Dennis L. Hartmann, Global Physical Climatology (1994), while the physics is succinctly covered by F.W. Taylor, Elementary Climate Physics (2005). For a comprehensive introduction to three-dimensional climate models, see Warren M. Washington and Claire L. Parkinson, An Introduction to Three-Dimensional Climate Modeling (2d ed. 2005).

21 Other strong greenhouse gases include methane, nitrous oxide and chlorofluorocarbons (CFCs). Some of these are much stronger greenhouse gases than CO2 but also much less concentrated in the atmosphere: CFCs, for example, are 20,000 times more powerful in absorbing infrared radiation than is CO2, but also a million times less concentrated in the atmosphere. See William R. Cotton and Roger A. Pielke, Sr., Human Impacts on Weather and Climate 158 (2d ed. 2007).


climate looked like in the near and far distant past when CO2 and temperature were different than today. Such paleoclimatic data are recovered by taking ice cores from the polar regions and sediment cores from the oceans (and elsewhere), using radiocarbon and other dating methods to assign ages to different levels of the core, and then using the relative proportion of different oxygen isotopes as a proxy for surface temperature and various other proxies for atmospheric carbon dioxide levels. The models yield increasingly concrete predictions regarding the impact of rising levels of greenhouse gases not only on global mean surface temperature but also on many dimensions of regional climate. The IPCC Assessment Reports then go on to prognosticate about the possible impact of such a changing climate not only on humans but also on non-human species.
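For readers unfamiliar with the isotope proxy just mentioned, the quantity typically reported is the standardized ratio of heavy to light oxygen. The definition below is the standard one from the paleoclimate literature rather than anything specific to the sources discussed in this Article:

```latex
% Standard oxygen-isotope ratio used as a temperature proxy (reported in per mil)
\[
\delta^{18}\mathrm{O} \;=\; \left(\frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{standard}}} - 1\right)\times 1000,
\]
```

with more negative values in polar ice generally indicating colder conditions at the time the snow was deposited.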

There is much more detail to the story than this. That detail will emerge as I question systematically this seemingly straightforward story.

A. What do we Really Know about Global Mean Surface Temperatures, and Can we Really be So Sure about the Purported Warming Trend?

In AR4, the IPCC makes a number of unambiguous factual assertions regarding global average temperature trends. The core assertions, which constitute the basic “story” about global warming past and present told by the climate change establishment, include the following:

i) global mean temperatures have risen about .74 degrees C over the last 100 years, with the warming rate doubling over the last 50 years; the impacts of urbanization and land use change on the land-based temperature record are “negligible;”

ii) both the troposphere and the surface have warmed;

iii) in the vast majority of land regions, there has been an increase in the number of extremely warm days and a reduction in the number of extremely cold days, a dramatic illustration of the former being the European heat wave of 2003;

iv) since about 1970, there has been an increase in the number of intense tropical storms, “culminating” in the North Atlantic in the “record-breaking” 2005 hurricane season, and this increase is correlated with an increase in sea surface temperatures (SST’s);

v) Now that many measurement errors have been corrected, satellite observations of temperatures in the lower troposphere are “broadly consistent” with the surface measurement trend, although a cooling bias likely persists in the tropics;22

vi) Research since the 2001 IPCC Assessment Report has “strengthened the conclusion of ‘exceptional warmth of the late 20th century, relative to the past 1000 years,...’”23 and “[a]verage Northern Hemisphere temperatures during the second half of

22 Assertions (i) through (iv), IPCC, Climate Change 2007: The Physical Science Basis 237-239. 23 IPCC, Climate Change 2007: The Physical Science Basis 436.


the 20th century were very likely higher than during any other 50-year period in the last 500 years and likely the highest in at least the last 1,300 years.”24

Every one of these assertions – all made with enormous confidence -- seems to mask substantial and increasing scientific disagreement and uncertainty. Leaving for another time the discussion of the seemingly enormous gap between what the IPCC continues to assert or imply about global warming and hurricanes and what the hurricane science community is actually saying, I focus here on the core assertions about temperature made by the IPCC in its 2007 Report.

1. Questions about the Measurement of Land Surface Temperature Trends

The global mean temperature data that the IPCC reports come from a particular temperature dataset put together and jointly maintained by the Climatic Research Unit (CRU) at the University of East Anglia and the United Kingdom Meteorological Office’s Hadley Centre, a dataset known by the acronym HadCRUT. This dataset extends back to 1850.25 The data are published as monthly averages, which in turn are formed from daily averages. A daily average is computed as the midpoint between the nighttime minimum and daytime maximum temperature. From the mid 1950’s to the mid 1990’s, nighttime minimum daily surface temperatures in this dataset have risen about twice as fast as daytime maximum daily surface temperatures.26 What is important to see is that most of the increase in global daily average surface temperature that is reported by the IPCC is due to an increase in nighttime minimum temperatures27 -- that is, daytime maximums have not increased by much over the period reported on by the IPCC.
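The arithmetic behind that observation is straightforward; the two-to-one ratio used below is simply the rough figure reported above, stated here only for illustration:

```latex
% Decomposing the trend in the daily mean into minimum and maximum components
\[
\bar{T}_{\mathrm{daily}} = \frac{T_{\min} + T_{\max}}{2}
\;\Longrightarrow\;
\Delta\bar{T} = \frac{\Delta T_{\min} + \Delta T_{\max}}{2};
\qquad
\text{if } \Delta T_{\min} \approx 2\,\Delta T_{\max},\ \text{then } \Delta T_{\min} \text{ supplies about two-thirds of } \Delta\bar{T}.
\]
```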

Over land, the surface temperatures in this dataset are measured at between 1.5 and 2 meters (between about 5 and 7 feet) above ground at official weather stations, sites run for various scientific purposes, and by volunteer observers.28 Now typically, the daily maximum temperature occurs during the daytime. During the daytime, when there is lots of vertical mixing of air, temperatures at even the relatively low 1.5-2 meter height are representative of temperatures higher up in the troposphere and are thus a good proxy for the “content of a substantial mass of the lower atmosphere.”29 Thus the daily maximum temperature proxies the temperature of a large quantity of air. This is not true of daily minimum temperatures, which typically occur during the nighttime. At night, the air is typically much less turbulent and there is a large temperature change (gradient) as one increases in height above the surface. For this reason, a minimum temperature

24 IPCC, Climate Change 2007: The Physical Science Basis 9. 25 Climate Change 2007, the Physical Science Basis, at 241-242. 26 Philip J. Klotzbach et al., An Alternative Explanation for Differential Temperature Trends at the Surface and in the Lower Troposphere, forthcoming J. Geo. Res. (2009). 27 Roger Pielke Sr. et al., Unresolved issues with the assessment of multidecadal global land surface temperature trends, 112 J. Geo. Res. D24S08, doi:10.1029/2006JD008229 (2007)(hereafter referred to as “Unresolved issues with global land surface temperature trends”). 28 Roger Pielke Sr. et al., Unresolved issues with the assessment of multidecadal global land surface temperature trends, 112 J. Geo. Res. D24S08, doi:10.1029/2006JD008229 (2007)(hereafter referred to as “Unresolved issues with global land surface temperature trends”). 29 John R. Christy et al., Surface Temperature Variations in East Africa and Possible Causes, 22 J. Climate 3342 (2009).


measured at only 1.5-2 meters above the ground represents the temperature of only a very shallow layer of air and is extremely sensitive to land surface properties that affect turbulence near the surface. At the typical 1.5-2 m observational height, anything that impacts mixing between very low and higher layers of air may have an enormous impact on such nighttime temperatures.30 For instance, at this height (compared with even a few meters higher), nighttime minimums are much lower on calm nights than on windy nights. Importantly, anything that causes increased turbulence (and more mixing with higher layers) in this bottom nighttime layer of air will tend to increase minimum nighttime temperatures.31 Among the things that could cause such increased turbulence are trees or buildings (by increasing surface roughness),32 things leading to increased surface heat capacity, such as irrigation and pavement,33 and anything that causes enhanced downward longwave radiation and hence more downward nighttime mixing of warm air, such as increases in water vapor and thermally active aerosols.34

Now if one were certain that over the period from the 1950’s to the 1990’s, there had been no systematic trend in the vicinity of surface temperature observation stations in any of these very basic land use and atmospheric variables, then one could confidently say, as does the IPCC, that there would be no need to worry that things like the urban heat island effect have caused an overestimate (upward bias) in the supposed increase in temperatures over this period. Global warming scientists have attempted to correct the observed surface temperature dataset for changes that might be expected to have caused an upward bias in temperature trend data, such as the urban heat island effect (this is referred to as ensuring that the temperature trend data are homogeneous).35 They apply a set of adjustments to observations deemed not to be spatially representative, drawing on other observations that are temporally and geographically proximate.36 Such adjustments typically involve estimating a linear regression equation with many regional temperature observation points, and avoiding using observations that are outliers in the sense that they are far from the regression line. However, such homogeneity correction techniques cannot offset a bias in temperature measurements caused by a shared upward trend in the key factors that might be expected to introduce a warm bias into the minimum nighttime

30 See B.J.H. Van de Wiel et al., Intermittent turbulence and oscillations in the stable boundary layer over land, Part II: A system dynamics approach, 59 J. Atmos. Sci. 2567 (2002); J.T. Walters et al., Positive surface temperature feedback in the stable nocturnal boundary layer, 34 Geo. Res. Lett. L12709, doi:10.1029/2007/GL029505

31 Klotzbach et al., An Alternative Explanation of Differential Temperature Trends at the Surface and in the Lower Troposphere, supra note __. 32 R.T. McNider et al., On the predictability of the stable atmospheric boundary layer, 52 J. Atmos. Sci. 1602 (1995).

33 X. Shi, On the behavior of the stable boundary layer and role of initial conditions, 162 Pure Appl. Geophysics 1811 (2005). 34 Mark Z. Jacobsen, Development and applications of a new air pollution modeling system – Part III. Aerosol-phase simulations, 31 Atmos. Environ. 587 (1997).

35 The adjustment factors and methodology used to homogenize temperature data have changed over time. For a description of the method used from the late 1980’s until recently, see Roger A. Pielke, Sr. et al., Problems in Evaluating regional and local trends in temperature: An example from eastern Colorado, USA, 22 Int. J. Climatol. 421, 423-424 (2002); for the most up-to-date methods, see Matthew J. Menne et al., The United States Historical Climatology Monthly Temperature Data – Version 2, Bull. Amer. Meteor. Soc. (in press).

36 See Pielke et al., Unresolved Temperature Trend Issues, supra note __.

1.5 – 2 meter temperature trend -- such as increased irrigation and development, and the resultant changes in the vertical mixing of heat by turbulence. To be more concrete, if over the period 1950-1990, an entire area – consisting of a number of temperature observation stations – had been subject to increasing development including irrigation, plus increasing forcing due to increases in atmospheric water vapor content and/or aerosols from pollution, then it would not be possible to adequately eliminate “inhomogeneities” in the temperature dataset by these adjustments. Such observational trends would be tainted.
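A minimal sketch may help fix ideas about the kind of neighbor-comparison screening just described. Everything here is hypothetical and illustrative -- the function name, the simple least-squares fit and the 2.5-sigma threshold are my own choices -- and the homogenization algorithms actually applied to the major surface datasets are considerably more elaborate:

```python
import numpy as np

def flag_outlier_years(target, neighbors, z_threshold=2.5):
    """Illustrative screen: regress a station's annual mean temperatures on the
    average of nearby stations and flag years far from the regression line."""
    reference = neighbors.mean(axis=0)
    slope, intercept = np.polyfit(reference, target, 1)  # ordinary least squares
    residuals = target - (slope * reference + intercept)
    z = (residuals - residuals.mean()) / residuals.std()
    return np.abs(z) > z_threshold

# Toy data: five neighboring stations and one target station, 1950-1999,
# all sharing the same gradual regional trend.
rng = np.random.default_rng(0)
years = np.arange(1950, 2000)
shared_trend = 0.01 * (years - years[0])             # 0.01 C per year, common to every station
neighbors = 14.0 + shared_trend + rng.normal(0.0, 0.25, (5, years.size))
target = 14.0 + shared_trend + rng.normal(0.0, 0.25, years.size)
target[30] += 2.0                                    # one-off local anomaly, e.g. an instrument fault

print(years[flag_outlier_years(target, neighbors)])  # the isolated anomaly is flagged
# A spurious warming influence shared by the target AND its neighbors (regional
# irrigation, development, aerosols) leaves no large residuals, so screening of
# this kind cannot remove it -- the limitation described in the text above.
```

The closing comment restates the structural limitation identified in the text: a bias common to a station and its reference neighbors is invisible to this kind of adjustment.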

There is substantial evidence for long term differences in the time trend of daily minimum and maximum temperatures. In a number of places, ranging from north Alabama37 and central California38 to East Africa,39 Christy et al. have found pronounced differences in the time trend of daily minimum and daily maximum temperatures, with no significant increase in daily maximum temperatures over this period (and various sub-periods) but a large and statistically significant increase in daily minimum temperatures. These results contrast sharply with recent work40 by teams including IPCC lead authors showing that over the period 1979-2004, daily maximum and minimum surface temperatures both warmed at nearly the same rate. However, for East Africa, it appears that the increase in daily minimum temperatures found by the IPCC teams was derived entirely from a single temperature observation station located at the airport in Nairobi, Kenya.41

There seems to be more and more evidence that there has indeed been a systematic trend upward since 1950 in the kind of variables that would have caused nighttime minimum temperatures to overstate actual surface warming. In California, Christy et al. found that while there was a significant increase in daily minimum temperatures over their study period in the Central Valley, where large land use changes occurred, there was no such increase in the foothills and the Sierras, where there was much less land development. Similarly, Mahmood et al.42 found an increase in daily minimum temperatures in irrigated areas of the Great Plains but no significant increase in non-irrigated areas in that region. Over an even larger region of the U.S., looking at stations representing areas for which various temperature inhomogeneities had been removed using the same methods employed by IPCC lead authors, significant warming trends were found at over 90% of the observation stations after periods of change in the

37 J.R. Christy, When was the hottest summer? A state climatologist struggles for an answer, 83 Bull. Amer. Meteor. Soc. 723 (2002). 38 J.R. Christy et al., Methodology and Results of calculating central California surface temperature trends: Evidence of human-induced climate change?, 19 J. Climate 548 (2006). 39 J.R. Christy et al., Surface Temperature Variations in East Africa and Possible Causes, supra note __. As Christy et al. note, their findings contrast sharply with recent work by teams including IPCC lead authors, R.S. Vose et al., Positive Surface Temperature feedback in the stable nocturnal boundary layer, 34 Geo. Res. Lett. L12709, doi:10.1029/2007/GL029505, finding that over the period 1979-2004, daily maximum and minimum surface temperatures both warmed at nearly the same rate. 40 R.S. Vose et al., Positive Surface Temperature feedback in the stable nocturnal boundary layer, 34 Geo. Res. Lett. L12709, doi:10.1029/2007/GL029505. 41 J.R. Christy et al., Surface Temperature Variations in East Africa, supra note __. 42 R. Mahmood et al., Impacts of irrigation on 20th century temperature in the northern Great Plains, 54 Glob. Plan. Change 1 (2006).


dominant type of land cover. 43 Further evidence for potential bias in the increase in land surface temperatures comes from the finding of Klotzbach et al. of a significantly greater increase in minimum than maximum temperatures in high latitude areas, where boundary layers are shallower and have a proportionally larger response due to changes in the vertical turbulent mixing of heat.

2. Long-Term Temperature Trends: Basic Questions about Global Warming Scientific Methodology Raised by The “Hockey-Stick” Affair

Especially prominent in the presentation style and structure of the IPCC’s 2001 Assessment Report was the so-called “hockey stick,” a graph depicting global mean surface temperatures back to the year 1000 which dramatically showed that mean global surface temperature had remained roughly stable until the twentieth century, when it rapidly began ascending. The hockey stick graph appeared six times at various places in the IPCC’s 2001 Assessment Report, repeatedly and prominently displayed to support the IPCC’s 2001 assertion that in the Northern Hemisphere “the 1990’s has been the warmest decade and 1998 the warmest year of the millennium.”44

In the IPCC’s 2007 Assessment Report, there is no hockey stick graph. Instead, in the 2007 Report one finds a relatively complex time series consisting of a large number of attempted reconstructions of past climate that tend to show that 20th century temperatures were high, but no higher than temperatures during the medieval warm period of roughly 1000 years ago.45 Still, in both the Summary for Policymakers and Technical Summary, the IPCC asserts that “[a]verage Northern Hemisphere temperatures during the second half of the 20th century were very likely higher than during any other 50-year period in the last 500 years and likely the highest in at least the last 1,300 years. Some recent studies indicate...that cooler periods existed in the 12th to 14th, 17th and 19th centuries.”46

Why would the IPCC both delete the famous (or infamous) hockey-stick graph and yet continue to assert (albeit with lessened confidence) that 20th century temperatures were the highest in the last 1,300 years? According to the IPCC, after the 2001 Assessment Report had come out, McIntyre and McKitrick reported that they were unable to replicate the hockey stick results found by Mann et al.47, and had “raised further concerns” about the technique (principal components analysis) that Mann et al. had used to extract the “dominant modes of variability present in a network of western North American tree ring chronologies.”48 Despite this, the IPCC says, other researchers had

43 R.C. Hale et al., Land use/land cover change effects on temperature trends at U.S. Climate Normals stations, 33 Geo. Res. Lett. L11703, doi:10.1029/2006GL026358. 44 See Ross McKitrick, What is the “Hockey Stick” Debate About?, APEC Study Group Presentation, April, 2005. 45 IPCC, Climate Change 2007, the Physical Science Basis 467-468 (2007). 46 IPCC, Climate Change 2007, the Physical Science Basis 9, 54 (2007). 47 Michael E. Mann, et al., Global-Scale Temperature Patterns and Climate Forcing Over the Past Six Centuries, 392 Nature 779 (1998); Michael E. Mann et al., Northern Hemisphere Temperatures During the Past Millennium: Inferences, Uncertainties and Limitations, 26 Geo. Res. Lett. 759 (1999). 48 IPCC, Climate Change 2007: the Physical Science Basis 466.


shown that by using the correct “implementation,” they could replicate the original hockey stick data; that even if Mann et al.’s methods were flawed, they had only a minor (.05 degrees C) impact on reconstructed temperatures; and, finally, that there have been a large number of new proxy temperature reconstructions using regional averaging methods and statistical “transfer functions,” methods that “preserve multi-decadal and centennial time scale variability” in tree-ring and bore hole data used to reconstruct past temperatures.49 While the IPCC now admits that their proxy temperature reconstructions are too uncertain to “gauge the significance, or precedence, of the extreme warm years observed in the recent instrumental record, such as 1998 and 2005, in the context of the last millennium,” it believes that the “weight of the current multi-proxy evidence” supports its basic conclusion that the 20th century was “likely” the warmest in the last 1,300 years.50

Were one to take all of this at face value, one would think that while McIntyre and McKitrick had pointed to a few problems with the original “hockey-stick” studies by Mann et al., lots of new studies have been done and while there may be more uncertainty, the basic story -- a “hockey stick” type relationship with unprecedentedly high and rising temperatures in the 20th century – remains. A closer look at the “hockey stick” controversy reveals instead some fundamental questions about the methodology underlying long-term (paleoclimatic) temperature reconstructions and about the kind of scientific process that the IPCC relies upon in reaching its conclusions. Since actual temperature measurements extend only from the 19th century, when climate scientists attempt to measure temperatures many centuries ago, they use proxies for temperature, such as tree ring data. Tree ring widths are influenced by local temperature, and so if one can appropriately control for other things that influence annual tree growth, tree ring data can be used as a proxy for past temperatures. What McIntyre and McKitrick showed was that the “hockey stick” temperature reconstruction derived by Mann et al. was due entirely to using a statistical method for trend identification that essentially gave enormous weight to a very few tree ring datasets that exhibited a 20th century hockey stick – that is, growth spurts in the 20th century that could have been caused by warming.51 More precisely, when they reanalyzed the data used by Mann et al., McIntyre and McKitrick found that the upward 20th century temperature trend reported by Mann et al. was due entirely to the application of a trend identification program to one particular tree ring data series, a bristlecone pine tree ring dataset developed as part of a research project undertaken in the 1980’s. In published articles, the researchers who developed the bristlecone pine dataset themselves had said that the 20th century growth spurt found in that data did not match local temperature trends and probably was instead due to CO2 fertilization.52

49 IPCC, Climate Change 2007: The Physical Science Basis 466-474, 472. 50 IPCC, Climate Change 2007: The Physical Science Basis 474. 51 See Stephen McIntyre and Ross McKitrick, Hockey Sticks, Principal Components, and Spurious Significance, 32 Geo. Res. Lett. L03710, doi:10.1029/2004GL021750 (2005); McIntyre and McKitrick, Reply to comment by von Storch and Zorita on “Hockey Sticks, Principal Components, and Spurious Significance,” 32 Geo. Res. Lett. L20714, doi:10.1029/2005GL023089 (2005). 52 See the discussion in Stephen McIntyre and Ross McKitrick, The Hockey Stick Debate: Lessons in Disclosure and Due Diligence 10 (George Marshall Institute, May 2005).


McIntyre and McKitrick’s was not the only critique of the Mann et al. “hockey stick” studies. An unrelated team of researchers led by von Storch used climate models to generate estimates of temperatures over the past millennium (so-called “pseudo-proxies”) and found that the methods employed by Mann et al. failed to take into account the long term persistence of temperatures (generating what is called statistical “red noise”) and so severely underestimated the long term variability of climate.53 This failure to properly take into account long term persistence of climate trends while using regression methods on 20th century data to “calibrate” the proxy measures can be shown to itself generate a “hockey stick” type graph for long-term temperatures.54
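
The kind of statistical artifact identified by these critics can be illustrated with a simple numerical experiment. The sketch below uses purely illustrative parameters and is not the procedure used in any of the studies discussed in this section; it generates trendless red-noise “proxies,” weights them by how well they happen to track a rising twentieth-century calibration target, and averages them. The composite that results has the characteristic hockey-stick shape, with a flat, low-variance “handle” before the calibration period and a pronounced “blade” within it.

```python
import numpy as np

# A minimal sketch (illustrative parameters, not the procedure used in any of the
# studies discussed above) of the artifact: red-noise series carry no climate
# signal, but weighting them by their correlation with a rising 20th-century
# calibration target yields a composite with a flat "handle" and a rising "blade".
rng = np.random.default_rng(42)
n_years, n_proxies, cal = 1000, 50, 100          # 1,000-year series; last 100 years = calibration period
target = np.linspace(0.0, 1.0, cal)              # rising "instrumental" temperature target

def red_noise(n, phi=0.9):
    """AR(1) series with strong persistence ('red noise')."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

proxies = np.array([red_noise(n_years) for _ in range(n_proxies)])

# "Calibration": weight each proxy by its correlation with the recent warming
# target, so series that happen to drift upward in the 20th century dominate.
weights = np.array([np.corrcoef(p[-cal:], target)[0, 1] for p in proxies])
reconstruction = weights @ proxies / np.abs(weights).sum()

handle = reconstruction[:-cal]
blade = reconstruction[-cal:]
print("variance of the pre-calibration 'handle':", round(handle.var(), 3))
print("rise over the calibration 'blade'       :", round(blade[-10:].mean() - handle.mean(), 3))
```

The composite typically shows sharply reduced variability outside the calibration window, which is the kind of suppression of long-term variability that the von Storch group identifies.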

Mann et al. have responded to the criticism of their work and presented what they say are improved, statistically robust proxy reconstructions of past temperatures,55 and the debate between Mann et al. and their critics in fact continues today.56 While one’s view as to which side has the better of the debate turns significantly on issues of statistical methodology, the debate itself brings to light several important and disturbing features of global warming science. The first and perhaps most important thing is simply to recognize that there is a debate about where 20th century temperatures fall in relation to temperatures over the past millennium. As with any debate, there are clearly sides, with scientists allied with the IPCC, such as Mann, on one side, and other climate scientists, such as von Storch, on the other. For example, Stefan Rahmstorf, an IPCC lead author, argued in a short “Letter” published in Science magazine that von Storch et al. were wrong to think that their results actually tended to cast doubt on the Mann et al. hockey stick.57 Von Storch et al. replied that Rahmstorf’s letter itself “illustrates a common confusion in our field...We showed that the [Mann et al.] method implemented in the simulations leads to pseudoreconstructed temperatures being too warm and with differences from the target temperature larger than our calibration uncertainty ranges.”58 What this means is that Mann et al.’s method tended to exaggerate the pronounced upward trend in twentieth century temperatures that gives the “hockey stick” graph its name.

The second general lesson from the hockey-stick debate is that it reveals a seeming tendency by establishment climate science to systematically underestimate uncertainty, a tendency that may be related partly to a relative lack of knowledge of and expertise with statistical methodology, and partly to a bias in favor of reporting results that will prove useful in future IPCC assessments. For example, at Congressional urging, the National Academy of Sciences convened an inquiry into the

53 Hans von Storch et al., Reconstructing Past Climate from Noisy Data, 306 Science 679 (2004). 54 This point is demonstrated elegantly and simply by David R.B. Stockwell, Reconstruction of Past Climate Using Series with Red Noise, 8 AIG News 314 (2006). 55 Michael E. Mann et al., Proxy-based reconstructions of hemispheric and global surface temperature over the past two millennia, 105 Proc. Nat’l Acad. Sci. USA 13252 (2008). 56 Compare, for instance, Stephen McIntyre and Ross McKitrick, Proxy Inconsistency and other problems in millennial paleoclimate reconstructions, 106 Proc. Nat’l Acad. Sci. USA E10 (2009) with Michael E. Mann et al., Reply to McIntyre and McKitrick: Proxy-Based Temperature Reconstructions are Robust, 106 Proc. Nat’l Acad. Sci. USA E11 (2009). 57 Stefan Rahmstorf, Testing Climate Reconstructions, 312 Science 1872 (2006). 58 Hans von Storch et al., Response, 312 Science 1872 (2006).


Mann et al. “hockey stick.” The National Research Council (NRC) report concluded that while it could be said with a “high level of confidence” that global mean temperatures “were higher during the last few decades of the 20th century than during any comparable period during the preceding four centuries,...less confidence can be placed in large-scale surface temperature reconstructions for the period from A.D. 900 to 1600.”59 According to a Science news story, when questioned, members of the NRC committee that authored the report opined that they had concluded that Mann et al. “had underestimated the uncertainty” in distant temperature reconstructions, that “[i]n fact, the uncertainties aren’t fully quantified,” but the committee thought it was “more at the level of 2:1 odds” that 20th century temperatures were the warmest over the last 1,000 years.60 With particular reference to the criticisms raised by McIntyre and McKitrick, the National Research Council report concluded that “taken together, they are an important aspect of a more general finding of this committee, which is that uncertainties of the published reconstructions have been underestimated.”61 As for statistical methodology, a report by a group of statisticians commissioned by two committees of Congress was more blunt, stating that the original Mann et al. work had “misused” the statistical trend detection technique (called “principal components analysis”) that it had relied upon.62

3. The Missing Signature: Ongoing Data Disputes and the Failure to Consistently Find Differential Tropospheric Warming

The lapse rate is the rate at which a packet of air cools as it rises in the atmosphere. On a global scale, this cooling rate is determined by radiative processes (short wave radiation downward from the sun, longwave radiation upward from the earth’s surface) and large scale dynamical processes and convection in the atmosphere.63 In the tropics, the lapse rate closely follows the moist adiabatic lapse rate, which is the rate that a water-saturated air packet cools (due to reduced pressure) as it rises. The moist adiabatic lapse rate decreases with increasing surface temperature.64 Hence, climate models that predict an increase in surface temperature also predict that the tropical lapse rate will fall relative to the rate that prevails before a ghg-induced surface

59 National Research Council, Surface Temperature Reconstructions for the Last 2,000 years 3 (2006). 60 Richard A. Kerr, Yes, It’s been Getting Warmer in here since CO2 started to rise, 312 Science 1854 (2006). 61 See NRC, Surface Temperature Reconstructions, supra note __ at 107.

62 Ad Hoc Committee Report on the “Hockey Stick” Global Climate Reconstruction 49 (available at ...). 63 See National Academy of Sciences, Understanding Climate Change Feedbacks 24 (2003). 64 National Academy of Sciences, Understanding Climate Change Feedbacks 24 (2003). This is because unlike the dry adiabatic lapse rate, the moist adiabatic lapse rate is affected by the moisture content of the air, with the release of latent heat decreasing the lapse rate. When tropical surface temperature increases, as predicted by climate models, surface air packets hold more moisture and so the moist adiabatic lapse rate near the surface falls, whereas the higher one goes in the troposphere, the colder and drier the air, and so the less the lapse rate is affected by increasing surface temperatures. For a clear online explanation, see Jeff Haby, Why the MALR is not a constant, available at http://www.theweatherprediction.com/habyhints/161. Note that as a consequence of these basic facts, the change in the tropical moist adiabatic lapse rate predicted by climate models is a direct consequence of what they predict about changes in water vapor at low levels of the troposphere.


warming.65 This in turn implies that in the tropics, any given surface warming will be amplified in the troposphere. For example, using arbitrary numbers purely for demonstration, suppose the lapse rate between the surface and a given level of the troposphere initially meant that half of any surface warming was lost by that level, so that a 1 degree temperature increase at the surface translated into a 1/2 degree warming at that level of the troposphere; if the lapse rate then fell so that only a third of the surface warming was lost, a 1 degree increase in temperature at the surface would translate into a 2/3 degree increase in temperature at that level of the tropical troposphere. The prediction of amplified warming in the tropical troposphere relative to the tropical surface is one of the central empirically testable propositions generated by climate models.

In the chapter of its 2007 AR4 entitled “Understanding and Attributing Climate Change,” the IPCC notes that in the tropics, “where most models have more warming aloft than at the surface...most observational estimates show more warming at the surface than in the troposphere.”66 In other words, what the IPCC is saying rather obliquely here is that a crucial empirically testable proposition generated by climate models – that there should be more warming in the tropical troposphere than at the surface -- has not been confirmed by the existing data. Now there are two possibilities: either the data are bad, or something is wrong with the models. In its 2007 report, the IPCC is quite clear that the data, not the models, must be the problem. The IPCC explains that since on short term time scales (monthly and annual), variations in tropical surface temperatures are indeed amplified in tropospheric observed temperature as the models predict, the fact that on longer time scales only one data set is consistent with the models’ predictions means that the observational record must be afflicted by “inhomogeneities,” that is, that there are errors in tropical tropospheric temperature observations.67

In the Summary for Policymakers accompanying the 2007 AR4, no mention is made of the potentially quite troubling discrepancy between model predictions and observations in the tropical troposphere relative to the surface. In the “Technical Summary” accompanying the full report, the IPCC says that there are likely errors in all of the existing measurements of tropospheric temperature trends, but stresses that many errors have been eliminated since the previous 2001 AR, leading to improved tropospheric temperature estimates and a “tropospheric temperature record...broadly consistent with surface temperature trends...”68 Hence according to the IPCC’s 2007 report, progress was continuing to be made in getting more accurate measurements of tropospheric temperature trends, and the more accurate measurements confirmed model predictions. It was bad measurement, not bad models, that had created earlier discrepancies.

65 See Lennart Bengtsson and Kevin I. Hodges, On the evaluation of temperature trends in the tropical troposphere, Clim. Dyn., DOI 10.1007/s00382-009-0680-y (2009). 66 IPCC, Climate Change 2007: The Physical Science Basis 701. 67 IPCC, Climate Change 2007: The Physical Science Basis 701. More detailed discussion and analysis of problems with atmospheric temperature data, and the conclusion that “it is uncertain whether tropospheric warming has exceeded that at the surface because the spread of trends among tropospheric data sets encompasses the surface warming trend” appears in the “Observations: Surface and Atmospheric Climate Change” chapter in Climate Change 2007, supra note __ at 271.

68 IPCC, Climate Change 2007, supra note __ at 36.


It seems clear, however, that even before the IPCC’s 2007 AR appeared, a number of articles were published in the top peer-edited geophysical journals in which climate scientists presented what they believed to be reliable tropospheric temperature data disconfirming the climate model prediction that tropical tropospheric temperatures will increase by more than tropical surface temperatures.69 Perhaps even more significantly, the publication of the IPCC’s AR4 in 2007 has not ended the controversy, but seems to have merely heightened it. In 2008, Douglass et al.70 showed that certain satellite and balloon-based measurements of tropical surface and tropospheric temperatures yielded tropospheric temperature trends that were more than two standard deviations away from the mean linear trend estimate generated by climate models that predicted surface temperature trends well; in the very same issue of the very same journal, a group of authors led by Benjamin Santer and constituting a virtual “who’s who” of leading IPCC climate modelers published an article71 in which they looked at completely different satellite and balloon-based temperature data and found that the linear trend in tropical tropospheric temperatures was well within two standard deviations of the mean72 trend estimates from a suite of no fewer than 49 individual climate models, as well as within two standard deviations of the mean estimated linear trend when standard deviations were properly inflated.73 Finally, an even more recent paper by a third group of researchers, unrelated to the Douglass and Santer groups, examined yet a third, newer dataset on tropospheric temperature, and (employing a climate model that is calibrated on a 500 year dataset and which captures the periodic ENSO-induced cycles in tropical temperatures) found that the observations did not confirm the model’s prediction of differential warming in the tropical troposphere (versus tropical sea surface temperatures).74

The authors of these various articles differ in the measurements of tropical tropospheric temperature that they deem to be reliable, in the climate models they test against observations, and in the statistical methods employed to detect differences between the trends predicted by models and the trends actually measured. Especially on the first two questions – which tropospheric temperature observations are to be given credence and which climate models are tested – it is very difficult for a layperson to

69 David H. Douglass et al., Altitude dependence of atmospheric temperature trends: climate models versus observation, 31 Geo. Res. Lett. L13208, DOI:10.1029/2004020103 (2004); David H. Douglass et al., Disparity of tropospheric and surface temperatures: new evidence, 31 Geo. Res. Lett. L13207, DOI:10.1029/2004GL020212 (2004). 70 David H. Douglass et al., A comparison of tropical temperature trends with model predictions, 28 Int. J. Climatology 1693 (2008). 71 Benjamin D. Santer et al., Consistency of modeled and observed temperature trends in the tropical troposphere, 28 Int. J. Climatology 1703 (2008). 72 The mean for each model is used because the models are subject to a large number of simulations, or runs, each generating a different temperature trend because each model includes a noise term plus nonlinearities. 73 Inflated by decreasing the number of sample periods used to calculate the sample variance, a method employed by the authors to account for the autocorrelation in the error terms of the estimates due to regular – but assumed random – shifts in climate induced by ENSO cycles. 74 See Bengtsson and Hodges, On the evaluation of temperature trends in the tropical troposphere, supra note __.


make any sort of judgment whatsoever.75 What one can see quite clearly, however, is an enormous gap between ongoing controversies in the peer-edited climate literature and the IPCC’s confident assertions in the 2007 AR4 that errors in satellite and balloon-based temperature observations were quickly being corrected, leading to increasing confidence that observations confirmed climate models’ prediction of more warming in the tropical troposphere than at the tropical surface. It seems instead that there are continuing disagreements over which temperature observations are reliable and how to test them against model predictions.
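
For readers who want a concrete sense of what such a comparison involves, the following is a minimal sketch, with entirely hypothetical numbers, of the basic exercise that the Douglass and Santer groups perform in much more sophisticated form: estimate the linear trend in an observed tropospheric temperature series and ask whether it falls within two standard deviations of the trends produced by an ensemble of model runs.

```python
import numpy as np

# A minimal sketch (hypothetical numbers, not the actual data or test used by
# either the Douglass or the Santer groups) of the basic exercise: estimate the
# linear trend in an "observed" tropospheric series and ask whether it lies
# within two standard deviations of the trends from an ensemble of model runs.
rng = np.random.default_rng(1)

def linear_trend(series):
    """Least-squares slope, in degrees per time step."""
    t = np.arange(len(series))
    return np.polyfit(t, series, 1)[0]

observed = 0.005 * np.arange(30) + rng.normal(0.0, 0.1, 30)   # 30 years of hypothetical observations
model_trends = rng.normal(0.02, 0.005, 49)                    # hypothetical trends from 49 model runs

obs_trend = linear_trend(observed)
mean_trend, spread = model_trends.mean(), model_trends.std(ddof=1)
print(f"observed trend {obs_trend:.4f} vs model mean {mean_trend:.4f} +/- {2 * spread:.4f}")
print("consistent at two standard deviations:", abs(obs_trend - mean_trend) <= 2 * spread)
# As the text explains, the competing papers disagree over which observations to
# use and over how the model spread should be measured, so the same style of
# test can return opposite answers.
```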

B. Crucial Shrouded Assumptions and Limitations of Climate Model Projections

Perhaps the most important and yet generally downplayed fact about climate models is that although the last thirty years have seen a huge increase in computing power, in climate observations, and in the number of climate modelers, climate model predictions have changed hardly at all. In the 1970’s, the predicted equilibrium global mean temperature increase resulting from a doubling of atmospheric CO2 relative to preindustrial levels was between 1.6 and 4.5 degrees centigrade; in the 2007 IPCC Assessment Report, the predicted range of likely temperature increase is now between 2 and 4.5 degrees centigrade.76 As climate scientist Stephen Schwartz summarizes the relative lack of scientific progress on this issue, “...despite extensive research neither the best estimate nor the estimated range for Earth’s climate sensitivity has changed markedly in the last 39 years.”77

In terms of their policy significance, there are three even more important features of climate change models that are not commonly known and which climate scientists virtually never even mention in presentations or work intended for the more general public: 1) that although various positive feedbacks account for the high temperature increases that would generate large amounts of harm, so little is known about many of these feedback mechanisms that the most important positive feedbacks could actually be negative – cooling the planet – rather than enhancing warming; 2) climate models do not agree on how sensitive the climate is to increases in CO2 and manage to replicate twentieth century temperature trends only by inferring whatever aerosol cooling effect is necessary to “explain” observed temperatures; and 3) the most concrete and therefore policy-relevant projections of climate change models – about what global warming will mean in particular regions of the world – hinge entirely upon predictions about how a warming climate will cause changes in global circulation patterns, but the models do not agree at all on how such circulation patterns will change.

1. Concealed Complexity: The Positive Feedbacks Presumed by Climate Models Account Entirely for very High Projected Future Temperature Increases

75 This is not true of the statistical methods used to search for a statistically significant difference between observed linear trend coefficients and modeled linear trend coefficients. See ...[add citation] 76 See both Myles R. Allen and David J. Frame, Call off the Quest, 318 Science 582 (2007) and Gerard H. Roe and Marcia B. Baker, Why is Climate Sensitivity So Unpredictable?, 318 Science 629 (2007).

77 Stephen E. Schwartz, Uncertainty in climate sensitivity: causes, consequences, challenges, 1 Energy Env. Sci. 430, 432 (2008).


A crucial but apparently little known feature of climate models is that the higher end temperature change predictions are not caused by CO2 emissions but by positive climate feedbacks built into the models – without feedbacks, a doubling of CO2 leads to a predicted increase of only about 1.2 degrees centigrade.78 Perhaps even more importantly, it has recently been shown that models will always attach some positive probability to very high possible temperature increases because “the sum of the underlying climate feedbacks is substantially positive.”79 That is, climate model predictions will always be very uncertain, and skewed toward high temperature increases, and “foreseeable improvements in the understanding of physical processes, and in the estimation of their effects from observations, will not yield large reductions in the envelope of climate sensitivity.”80 Even if we significantly improve our understanding of the various climate feedback processes – narrowing our uncertainty regarding their individual impacts – this will have “little effect” in making more certain the predicted sensitivity of climate to CO2 doubling: the models will still say that potentially very high temperature increases are possible (in the sense of occurring with positive probability).81

These results are elegantly derived in a recent paper by Roe and Baker, who show how the predominance of positive feedbacks in climate models logically and necessarily means that those models will always attach some positive probability to very high temperature increases due to CO2 forcing. To put this in a more policy-relevant way, consider the general belief that there is a great deal of uncertainty regarding the temperature increase likely to result from a doubling of CO2, and that there is some chance – however small and difficult to estimate – of catastrophically large surface temperature increases in the 5 – 10 degree Celsius range. What Roe and Baker show is that the possibility of catastrophic temperature increases is not the output of climate analysis but is an input, in that it is a direct logical consequence of the climate models’ assumption of a net positive feedback from CO2-induced global warming. The prediction that catastrophically large temperature increases may result follows from assuming a big positive feedback effect, not from some known climatic mechanism.
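
The mechanism can be illustrated numerically. The sketch below is not Roe and Baker’s own calculation, and the values chosen for the feedback factor are purely illustrative, but it shows how a modest, symmetric uncertainty in an assumed net-positive feedback, combined with the roughly 1.2 degree C no-feedback response noted above, produces a skewed sensitivity distribution with a long tail of very large temperature increases.

```python
import numpy as np

# A minimal numerical sketch (illustrative values, not Roe and Baker's own
# calculation) of why net-positive feedbacks skew the sensitivity distribution.
# With a no-feedback response of about 1.2 C per CO2 doubling (as noted in the
# text), sensitivity is S = 1.2 / (1 - f), where f is the total feedback factor.
rng = np.random.default_rng(0)
f = rng.normal(loc=0.65, scale=0.13, size=100_000)   # symmetric uncertainty in an assumed net-positive feedback
f = f[f < 1.0]                                        # discard the physically unstable cases (f >= 1)
S = 1.2 / (1.0 - f)

print(f"median sensitivity         : {np.median(S):.1f} C")
print(f"5th to 95th percentile     : {np.percentile(S, 5):.1f} to {np.percentile(S, 95):.1f} C")
print(f"fraction of cases above 6 C: {np.mean(S > 6):.2f}")
# Even though the uncertainty in f is symmetric, the distribution of S has a
# long tail of very large warming, because 1/(1 - f) blows up as f approaches 1.
```

Because the tail is produced by the transformation itself, narrowing the uncertainty in the feedback factor does relatively little to shorten it, which is the point made above about the limits of foreseeable improvements.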

The very same issue of Science magazine in which Roe and Baker’s article appeared included a response by leading climate modelers Myles Allen and David Frame, the former of whom served as a contributing author and as Review Editor, respectively, on the two key chapters of the IPCC’s 2007 AR dealing with the modeled attribution of ongoing climate change and with future projections.82 Allen and Frame’s response did not fundamentally challenge the mathematical points that Roe and Baker make: those points follow quite directly from the basic mathematical structure of the climate prediction problem, and are not controversial.83 On the statistical structure of climate models, Allen and Frame indeed pointed out that variations in certain statistical

78 Roe and Baker, Why is Climate Sensitivity so Unpredictable?, supra note __ at 630. IPCC, Climate Change 2007: The Physical Science Basis, supra note __ at 631. 79 For a very clear derivation of this result, see Gerard Roe, Feedbacks, Timescales, and Seeing Red 15-18 (draft of September, 2007). 80 Roe and Baker, Why is Climate Sensitivity so Unpredictable?, supra note __ at 631. 81 Roe and Baker, Why is Climate Sensitivity so Unpredictable?, supra note __ at 632. 82 See Climate Change 2007: The Physical Science Basis, chapters 9 and 10. 83 Myles R. Allen and David J. Frame, Call off the Quest, 318 Science 582 (2007).


assumptions could lead to even higher upper bounds on predicted temperature increases.84 Rather than challenging Roe and Baker’s scientific point – a point about the basic statistical structure of climate models – Allen and Frame instead argue that Roe and Baker’s point is really not very important, because the goal of “avoiding dangerous anthropogenic interference in the climate system” does not mean that we have to be able to “specify today a stabilization concentration of carbon dioxide...for which the risk of dangerous warming is acceptably low.”85 What Allen and Frame argue is that if future policymakers adjust target atmospheric CO2 concentration levels by taking the then-current (say, 2050) concentration level (say, 450 ppm) and multiplying it by the inverse of the ratio of observed to predicted warming at that concentration level – adjusting target concentration levels down whenever warming is greater than predicted – then the very large temperature increases will never actually be observed. As they put it: “[i]f [climate sensitivity] S turns out to be toward the upper end of the current uncertainty range, we may never find out what it is...[b]ut provided our descendants have the sense to adapt their policies to the emerging climate change signal, they probably won’t care.” Hence Allen and Frame say that it is time to “call off the quest” for what has previously been considered the “holy grail” of climate research, “[a]n upper bound on climate sensitivity.”86 As for policy, what is most important in their argument is that future target concentration levels adapt and that “we resist the temptation to fix a concentration target early on. Once fixed, it may be politically impossible to reduce it.”87
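
As a concrete illustration of the adaptive rule as the text describes it (the function and the numbers below are hypothetical, not drawn from Allen and Frame’s paper), the target simply scales with the inverse of the ratio of observed to predicted warming:

```python
# A hypothetical illustration (names and numbers are mine, not Allen and Frame's)
# of the adaptive rule as described in the text: scale the concentration target
# by the inverse of the ratio of observed to predicted warming, so the target
# falls whenever the climate warms faster than the models predicted.
def adjusted_target(current_target_ppm, observed_warming, predicted_warming):
    return current_target_ppm * (predicted_warming / observed_warming)

# Example: a 450 ppm target, 0.8 C of predicted warming, 1.0 C actually observed.
print(adjusted_target(450.0, observed_warming=1.0, predicted_warming=0.8))   # 360.0 ppm
```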

There are several rather odd things about Allen and Frame’s response. First, their response quite thoroughly mixes science and policy, basically arguing that scientific uncertainty won’t matter if future policy ignores the uncertainty and uses the models’ predictions to adjust down CO2 emissions (and concentration levels) if observed temperatures exceed predicted temperatures. But of course the problem is that this requires policymakers to adjust policy using a model that observations have shown to generate incorrect predictions. In particular, if a model under-predicts temperature change over a certain period given stabilization of CO2 at a particular level, does it follow that the model also likely under-predicted the longer term response to that concentration? If so, policymakers might use the current error as the best guess as to future error, and adjust targets accordingly. But this implicitly assumes that it is very unlikely that a model would under-predict the short term temperature increase while accurately or over-predicting the long term temperature increase. The question is how much information short term temperature changes convey about the likely long-term equilibrium temperature response. If they convey a lot, then Allen and Frame are correct; but if the short term response really reveals little about the likely accuracy of long term predictions, then the adaptive strategy they imagine would essentially be to react to short

84 In particular, Allen and Frame, Call off the Quest, supra note __ at 583, point out that Roe and Baker implicitly assume a uniform prior distribution over f, the total positive feedback, and then derive the distribution of temperature increase (climate sensitivity, S), but an even higher upper bound on S would result if Roe and Baker assumed a uniform prior over S. The sense of this remark is a bit unclear, however, as the point of Roe and Baker’s analysis is to derive the distribution of S from the distribution of f.

85 Allen and Frame, Call off the Quest, supra note __ at 583. 86 Allen and Frame, Call off the Quest, supra note __ at 583. 87 Allen and Frame, Call off the Quest, supra note __ at 583.


term temperature changes for their own sake – to go “off model,” as it were – even though those short term changes reveal little about the long term response.

Perhaps even more problematic is Allen and Frame’s really quite perplexing comment that “[o]paque decisions about statistical methods, which no data can ever resolve, have a substantial impact on headline results.”88 Of course, all kinds of “opaque” methodological choices can substantially impact “headline results,” but this does not mean that methodology is irrelevant from a scientific point of view. What traditional philosophy of science teaches is that decisions about the appropriate statistical method should be based on the data; however, on this, Allen and Frame easily concede that “no data can ever resolve” the statistical choices at the heart of climate modeling. But this is true only of the genre of climate models considered by Roe and Baker: that is, models in which positive feedbacks predominate.

To be sure, the Science article by Roe and Baker that I have been discussing appeared a few months after the release of the IPCC’s 2007 AR, and so it is perhaps not surprising that the 2007 AR failed to cite the Roe and Baker paper. Interestingly, however, the IPCC’s 2007 AR does eventually note that differences in climate model sensitivity are due to differences in what models assume about feedback effects – primarily the cloud feedback effect discussed in more detail below.89 But that observation is made in the context of a discussion concerned with explaining why GCM’s come up with quite widely varying sensitivities, and no mention is made in the 2007 AR about how the assumed positive feedbacks virtually create the possibility of very high temperature increases (very high model sensitivity). Indeed, in the IPCC AR4 “Climate Science” documents intended to influence the public and the media – the Policymaker Summary and Technical Summary – no mention whatsoever is made of the positive feedback effects that account for projected temperature increases above 1.2 degrees C.

The obvious question raised by Roe and Baker’s 2007 paper is whether the positive likelihood of very high temperature increases from CO2 doubling – which they show to be a necessary consequence of the models’ assumption of strong positive feedbacks – would still be present even if there were important negative feedbacks. A more recent paper by Baker and Roe90 answers this question, and shows also that Allen and Frame were perhaps too gloomy about the possibility of getting some more precise mathematical insight into how learning about climate change feedbacks might progress over time. The first thing shown by Baker and Roe is that – as one would have expected intuitively – the addition of an important negative feedback causes the probability of large temperature increases due to CO2 forcing to fall, with probability concentrating instead around more moderate temperature increases. As a corollary, when there is a negative feedback, reducing uncertainty in positive atmospheric feedbacks does have a “significant” impact in

88 Allen and Frame, Call off the Quest, supra note __ at 583. 89 IPCC, Climate Change 2007: The Physical Science Basis 633-638, 806-807.

90 Marcia B. Baker and Gerard H. Roe, The Shape of Things to Come: Why is Climate Change so Predictable?, 22 J. Clim. 4574 (2009). The particular negative feedback that they consider – uptake of heat by the deep ocean – is often modeled not as a feedback per se but as a linear damping effect.


reducing uncertainty about the ultimate (or equilibrium) temperature increase.91 As for the timing of learning about climate sensitivity, Baker and Roe show that when there is a negative feedback that dissipates only very slowly, then even for a known forcing (e.g. a CO2 increase) there is enormous uncertainty over how long it will take for large temperature increases to be realized (while smaller temperature increases should, with very high probability, occur quite quickly, e.g. around 100 years for a 1 degree C increase).

Taken together, the papers by Baker and Roe would seem to indicate that getting better empirical evidence on the direction and magnitude of crucial climate feedbacks is absolutely essential for projections of the sensitivity of global climate to increasing ghg concentrations. As for the positive feedbacks that predominate in the current set of climate models, the most straightforward and biggest positive feedback in climate models is due to increased water vapor.92 Due to various assumptions about convection, turbulent transfer and the deposition of latent heat within various levels of the atmosphere, the climate models relied upon by the IPCC all generate relatively constant relative humidity (the amount of water vapor in the air divided by the saturation amount at the prevailing temperature) at different levels of the troposphere even as the troposphere warms due to increasing CO2.93 Since air holds more water (the saturation vapor pressure of air increases) as temperature goes up, the climate models imply that water vapor increases with warming temperatures. There are large differences in the size of the water vapor response across different climate models,94 but the predicted increase in water vapor is crucial to the size of predicted global warming, because it is estimated that water vapor is 14 times more powerful a greenhouse gas than CO2.95 With constant relative humidity, and a large water vapor feedback, as generated by climate models, predicted surface temperature increases always exceed 1 degree Celsius; if there were instead substantial decreases in tropospheric relative humidity (evidence for which is discussed below), then the temperature increase from CO2 doubling would be more in the range of .5 degree centigrade, or less than 1 degree Fahrenheit.
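
The arithmetic behind the water vapor feedback can be made concrete with the standard Magnus approximation to the saturation vapor pressure (a textbook formula; the temperatures below are illustrative and are not drawn from any of the models discussed here). At constant relative humidity, each degree of warming raises the amount of water vapor the air holds by roughly six to seven percent.

```python
import numpy as np

# The standard Magnus approximation to the saturation vapor pressure (a textbook
# formula; the temperatures below are illustrative, not taken from any model).
def saturation_vapor_pressure_hpa(t_celsius):
    return 6.112 * np.exp(17.67 * t_celsius / (t_celsius + 243.5))

for t in (20.0, 25.0, 30.0):
    e0 = saturation_vapor_pressure_hpa(t)
    e1 = saturation_vapor_pressure_hpa(t + 1.0)
    print(f"{t:.0f} C: +1 C raises saturation vapor pressure by {100 * (e1 / e0 - 1):.1f}%")
# At constant relative humidity, the actual water vapor content scales with the
# saturation value, so each degree of warming implies roughly 6-7% more vapor.
```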

Water vapor is closely related to another crucial feedback in climate models, clouds. Cloud formation – and the impact of a warmer, wetter atmosphere on cloud formation – is simply not yet understood. While I discuss the significance of this fundamental uncertainty about cloud feedbacks in much more detail below, suffice it for present purposes to note that according to one recent and widely

91 Baker and Roe, The Shape of things to come, supra note __ at 4583. 92 Indeed, “...a strong positive water vapour feedback is a robust feature of GCM’s.” IPCC, Climate Change 2007: The Physical Science Basis 633 (Susan Solomon, et. al., eds. 2007). See also S. Bony et al., How well do we understand and evaluate climate change feedback processes?, 19 J. Climate 3445 (2006). 93 See Brian J. Soden and Isaac M. Held, An Assessment of Climate Feedbacks in Coupled-Ocean Atmosphere Models, 19 J. Climate 3354, 3357 (2006). 94 Soden and Held, An Assessment of Climate Feedbacks, supra note __ at 3357 (paraphrased in IPCC, Climate Change 2007, Physical Science, 633). 95 Jih-Wang Wang et al., Towards a Robust Test on North America’s Warming Trend and Precipitable Water Increase, 35 Geo. Res. Lett. L18804 (2008). 96 Garth Paltridge et al., Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data, 98 Theo. Appl. Climatol. 351, 355-356 (2009).


influential review essay, the assumption in climate models that global mean temperature determines cloud feedback effects is essentially without foundation:

“we have no clear theory that suggests the accumulated effects of cloud feedbacks are in any way a function of global-mean temperature or, as posed, T...the (usually unstated) assumptions about the nature of the system and its feedbacks, and how feedback processes relate specifically to surface temperature, dictate almost entirely the quantitative results from climate feedback analysis. This is alarming as...different assumptions about the system, applied to the same model output, produce feedback measures that not only differ in magnitude but also in sign.”97

In the next section, I discuss recent empirical findings that seem to cast substantial doubt on whether climate change models are accurately predicting both the positive water vapor feedback effect and the positive cloud feedback effect.

2. Obscuring Fundamental Disagreement Across Climate Models in both Explanations of Past Climate and Predictions of Future Climate

Confidence in future climate projections generated by computer-based Coupled Ocean Atmosphere General Circulation Models (or GCM’s for short) is based largely on the purported ability of such models to accurately match temperatures actually observed, especially those observed since the 1970’s. However, the IPCC Assessment Report fails to openly reveal several crucial features about the purported agreement between modeled and observed past temperatures. Prominent among these is the fact that the models disagree in very major and fundamental ways in how they manage to account for past observed temperatures and climate patterns. Climate modelers are well aware of these problems with the models, and themselves say that the disagreements are so fundamental that they preclude the kind of certainty in future projections expressed by the IPCC.98

The first and perhaps most central way in which climate models disagree is in how they account for the two primary factors that determine global warming: the strength of climate forcing, and the sensitivity of global temperature to such forcing. Now by a “forcing,” climatologists mean pretty much any external influence – external, that is, to the intrinsic variability of the non-linear climate system itself – that can impact the amount of solar radiation that reaches the surface and/or the amount of longwave (or infrared) radiation that is emitted from the top of the troposphere. Climate forcings include

97 Stephens, Cloud Feedbacks in the Climate System, 18 J. Clim. 237, 240-241. 98 In AR4, the IPCC consistently expresses much stronger certainty about projections than in previous

AR’s. The Summary for Policymakers states, for example, that “[a]nalysis of climate models together with constraints from observations enables an assessed likely range to be given for climate sensitivity for the first time...,” IPCC, Climate Change 2007, supra note __ at 12, and “[c]ontinued greenhouse gas emissions at or above current rates would cause further warming and induce many changes in the global climate system during the 21st century that would very likely be larger than those observed during the 20th century,” Id. at 13.


both anthropogenic and natural influences.99 The primary natural forcings are volcanic eruptions and solar variation. Anthropogenic forcings include the emissions of the various greenhouse gases – not only CO2 but also methane (CH4), nitrous oxide (N2O), and chlorofluorocarbons (CFC’s) – as well as changes in land use that alter the earth’s surface albedo (or reflectivity). In addition to the warming greenhouse gases there is another kind of anthropogenic forcing that is crucial to climate models: the emission of aerosols, such as the fine particulates contained in sulfur dioxide emissions from coal-burning industrial facilities. Unlike greenhouse gases, which warm the earth by reducing the emission of longwave radiation back into space, aerosols are presumed by climate models to cool the earth, both by directly reflecting solar radiation back to space before it has a chance to reach the earth’s surface and by increasing the reflectivity of low level clouds.100

As discussed above, climate sensitivity means the sensitivity of global mean temperature to various natural and anthropogenic forcings. Also as previously discussed, in any given climate model, sensitivity depends upon what the builders of the model have assumed about various feedback effects -- most prominently, how clouds and water vapor respond to an increase in surface temperature. The basic relationship between various climate forcings and a predicted temperature change can be expressed quite simply, as:

∆T = S∆Q – H, (1)

where ∆T is the predicted change in temperature, ∆Q is the total forcing, S is the sensitivity of temperature with respect to such forcing, and H is the amount of heat that is stored in the oceans.101

Now it is a demonstrated mathematical fact that climate models differ tremendously in what they assume about various climate feedbacks and so also in their value for climate sensitivity, S; indeed, models differ by a factor of 2 to 3, or 200 to 300%, in their climate sensitivity.102 As for the climate forcings, the main problem is that while “there are established data for the time evolution of the well-mixed greenhouse gases, there are no established standard datasets for ozone, aerosols, or natural forcing factors.”103 Uncertainty regarding the possible level of past forcings is indeed so great that the IPCC’s Fourth Assessment Report gives a range of between .6 and 2.4 watts/meter squared – or a factor of 4 or 400% – for anthropogenic forcings.104 Now given that the

99 For a summary of the various forcings that depicts also the degree of uncertainty regarding their magnitude, see IPCC, Climate Change 2007: The Physical Science Basis, supra note __ at 4. 100 See Stephen E. Schwartz, Robert J. Charlson and Henning Rodhe, Quantifying Climate Change – Too Rosy a Picture?, 2 Nature Reports: Climate Change 23 (2007). Note that the indirect effect of aerosols on cloud reflectivity is more properly considered a feedback, and therefore captured in the climate sensitivity parameter, than an external forcing. See the discussion in Reto Knutti, Why are climate models reproducing the observed global surface warming so well?, 35 Geo. Res. Lett. L18704 (2008). 101 This formulation, with some slight notational changes to preserve consistency with my earlier discussion, is taken from Jeffrey T. Kiehl, Twentieth Century Climate Model Response and Climate Sensitivity, 34 Geo. Res. Lett. L22710 (2007). 102 Kiehl, Twentieth Century Climate Model Response, supra note __. 103 Kiehl, Twentieth Century Climate Model Response, supra note __. 104 Schwartz et al., Quantifying Climate Change, supra note __ at 24.


models differ hugely in their climate sensitivity, the combined uncertainty, due to the product of both forcing uncertainty and sensitivity uncertainty, ought to be larger than either type of uncertainty alone. In other words, the variation in the past temperature record predicted by climate models “would be expected to be larger than that arising from the uncertainty in the forcing as it would also reflect uncertainties arising from the differences in the multiple climate models used.”105 However, “[c]ontrary to such an expectation, the range in modeled global mean temperature change...is much smaller than that associated with the forcings, which is a factor of four.”106 In other words, and in terms of equation (1), even though the models have widely varying climate sensitivities, S, and little is known about the forcings, ∆Q, the models are all pretty close in simulating past temperature changes, ∆T.

The answer to the obvious question – “How can this be?”107 – has been supplied by recent research clearly showing that climate modelers pair high sensitivity with low forcing coefficients, and vice versa. There is a “strong inverse correlation between total anthropogenic forcing used for the 20th century and the model’s climate sensitivity. Indicating that models with low climate sensitivity require a relatively higher total anthropogenic forcing than models with higher climate sensitivity (sic).”108 In other words, the reason why “models with such diverse climate sensitivity can all simulate the [late twentieth century] anomaly in surface temperature” is because the model builders choose the “magnitude of applied anthropogenic total forcing” so as to “compensate” for the model’s sensitivity parameter S.109 More precisely, the nearly threefold range in the magnitude of (climate cooling) aerosol forcing assumed by major climate models causes a similarly wide range in total anthropogenic forcing assumed by such models.110 Further, and perhaps most strikingly,

“[i]n many models aerosol forcing is not applied as an external forcing, but is calculated as an integral component of the system. Many current models predict aerosol concentrations interactively within the climate model and this concentration is then used to predict the direct and indirect forcing effects on the climate system.”111

But remember that aerosols are emissions from human activities, not something generated by the climate system itself. Hence models that are deriving aerosol concentrations “interactively...within the climate model” are essentially calculating a value for the forcing, ∆Q, that allows the model to predict late 20th century warming, given the model’s presumed climate sensitivity S.
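
To see how this compensation works numerically, consider a minimal sketch in terms of equation (1); the sensitivity, forcing, and ocean heat values below are purely illustrative and are not taken from any particular model. A model with twice the sensitivity, paired with half the net forcing, reproduces exactly the same simulated warming, which is why agreement with the observed twentieth-century record cannot discriminate between them.

```python
# A minimal numerical sketch, in terms of equation (1), of the compensation the
# text describes. The sensitivity, forcing, and ocean heat values are purely
# illustrative and are not taken from any particular model.
def simulated_warming(sensitivity, net_forcing, ocean_heat_term=0.2):
    # equation (1): delta_T = S * delta_Q - H
    return sensitivity * net_forcing - ocean_heat_term

low_sensitivity_model  = simulated_warming(sensitivity=0.5, net_forcing=2.0)   # weak feedbacks, weak aerosol cooling
high_sensitivity_model = simulated_warming(sensitivity=1.0, net_forcing=1.0)   # strong feedbacks, strong aerosol cooling
print(low_sensitivity_model, high_sensitivity_model)   # both 0.8: identical simulated 20th-century warming
```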

The issue at hand is whether the observational record could, in principle, be used to test a climate model’s assumption that the climate has high sensitivity to forcings such as

105 Schwartz et al., Quantifying Climate Change, supra note __ at 24. 106 Schwartz et al., Quantifying Climate Change, supra note __ at 24. 107 Posed by Schwartz et al., Quantifying Climate Change, supra note __ at 24. 108 Kiehl, Twentieth Century Climate Model Response, supra note __. 109 Kiehl, Twentieth Century Climate Model Response, supra note __. 110 Kiehl, Climate Model Response and Sensitivity, supra note __. 111 Kiehl, Climate Model Response and Sensitivity, supra note __ (emphasis supplied).


CO2 increases. By using compensating parameters for forcing and sensitivity, climate modelers guarantee that such testing against the observational record cannot happen: models are effectively immunized from empirical challenge. More precisely, as Reto Knutti, a contributing author to the IPCC’s 2007 AR,112 explained in a paper published after that report’s publication, “models with high sensitivity (strong feedbacks) avoid simulating too much warming by using a small net forcing (large negative aerosol forcing), and models with weak feedbacks can still simulate the observed warming with a larger forcing (weak aerosol forcing)...”113 Put slightly differently, the reason why the major climate models can all reproduce the late 20th century warming pretty well even though they don’t agree at all on the fundamental question of how climate responds to various forcings (the parameter S in equation (1)) is because they make whatever assumption about aerosols is necessary to adjust the radiative forcing ∆Q so as to be able to reproduce temperature changes ∆T observed during the late 20th century.114 But these assumptions are far from innocuous. As recent work has shown, if the (negative) aerosol forcing turns out to be much smaller than assumed, then the ensemble of GCM’s used by the IPCC would have to have a much larger climate sensitivity (with the mean moved up a full 2 degrees centigrade) in order to remain consistent with observations. On the other hand, if the negative aerosol forcing is even larger (more negative), then the ensemble GCM’s would fail on the other side, simulating too little warming. This “mismatch” between observed and simulated 20th century warming would mean that “current agreement between simulated and observed warming trends would be partly spurious, and indicate that we are missing something in the picture of causes and effects of large scale 20th century surface warming.”115

Presumably there is a true or correct climate sensitivity parameter S. If so, then some of the models are closer to the true climate sensitivity parameter S while others get it badly wrong. In determining present day policy, we should rely upon the future climate projections from models with the most accurate number for climate sensitivity S. But as things stand now with the climate modeling relied upon by the IPCC – where the modelers simply get to choose whatever value for forcings, ∆Q, lets their model reproduce past temperatures – there is no way to tell which model is closest to the true S. Most importantly, not only does the “narrow range of modeled temperatures give a false sense of the certainty that has been achieved” in climate models;116 there is also no reason to assume that the model that most closely reproduces past temperatures is by any means the most accurate representation of the reality (the key parameter S) of the climate system.

Climate modelers have some ideas about how we should respond to this state of affairs, and how it came about. In terms of response, one idea proposed by several

112 To be more precise, to chapter 9, on the attribution of climate change. 113 Reto Knutti, Why are Climate Models Reproducing the Observed global surface warming so well?, 35 Geo. Res. Lett. L18704 (2008). 114 Reto Knutti, Why are Climate Models Reproducing the Observed global surface warming so well?, 35 Geo. Res. Lett. L18704 (2008). 115 Reto Knutti, Why are Climate Models Reproducing the Observed global surface warming so well?, 35 Geo. Res. Lett. L18704 (2008). 116 Schwartz et al., supra note __ at 24.


modelers is that climate models would be tested by having all the models make the same assumption about twentieth century forcings and then comparing how they do at reproducing twentieth century temperatures.117 Other ideas – much more difficult computationally – are to test the models over the “full range” of possible forcing values or to narrow the range of possible forcings (in other words, to “constrain the forcings”).118

The state of the art in climate modeling, to be clear, is one in which the ability of a model to reproduce the late 20th century warming is not informative as to whether or not that model has accurately modeled the feedbacks that will primarily determine the sensitivity of climate to increases in CO2 and other greenhouse gases. As for how such a state of the art developed, climate modelers have had some rather interesting and revealing things to say. Intuitively enough, one climate scientist has recently observed that:

“While it is impossible to know what decisions are made in the development process of each model, it seems plausible that choices are made based on agreement with observations as to what parameterizations are used, what forcing datasets are selected, or whether an uncertain forcing (e.g. mineral dust, land use change) or feedback (indirect aerosol effect) is incorporated or not. ...[m]odels differ because of their underlying assumptions and parameterizations, and it is plausible that choices are made based on the model’s ability to simulate observed trends.” 119

That the models are essentially using aerosol parameterizations to offset variations in presumed climate sensitivity is far from an innocuous technical detail. As Richard Lindzen has explained, because a high climate sensitivity implies (other things equal) a big CO2-induced warming, in order to have significant policy relevance, climate models “cling” to high climate sensitivities.120 And yet as just discussed here, the sensitivities are so high that the models simulate too much 20th century warming. To get a better reproduction of past temperatures, the models cancel out about half of simulated warming by imposing a compensating assumption about the cooling effect of aerosols. But then apparently to preserve “alarm” about the future, climate models assume that the aerosols will soon disappear.121 Even if the models are correct that aerosols have had a net cooling effect in the twentieth century,122 this series of parameter adjustments and assumptions about future changes in aerosols can hardly inspire confidence in climate models.

117 Kiehl, Climate Model Response and Sensitivity, supra note __ calls this requiring models to “employ standard emissions for aerosol gas precursors and particulate emissions.” Schwartz et al., supra note __, call this “subjecting [the models] to the same forcing profile over the twentieth century.” 118 Schwartz et al., at 24.

119 Reto Knutti, Why are Climate Models Reproducing the Observed global surface warming so well?, 35 Geo. Res. Lett. L18704 (2008). 120 Richard S. Lindzen, Taking Greenhouse Warming Seriously, 18 Energy & Env. 937, 946 (2007). 121 Lindzen, Taking Greenhouse Warming Seriously, 18 Energy & Env. 937, 946, 948 (2007).

122 Recent work suggests that aerosol cooling is so significant that reduction in aerosols due to pollution control and attendant solar brightening was responsible for two thirds of the warming that occurred since the mid-1980’s over Switzerland and Germany. Rolf Philipona, How Declining Aerosols and Rising Greenhouse Gases Forced Rapid Warming in Europe Since the 1980’s, 36 Geo. Res. Lett. L02806 (2009).


C. Distracting Attention from Empirical Studies Tending to Disconfirm Key Predictions of Climate Models and their Preferred Interpretation of Paleoclimatic Evidence

One of the most striking features of the established climate change story is the seemingly increasing tendency to simply ignore even the most rigorous, peer-reviewed scientific evidence when it tends to disconfirm either customary interpretations of paleoclimatic data or predictions of the coupled ocean-atmosphere general circulation models (GCM’s) that are used to generate virtually all of the IPCC’s quantitative predictions.

1. The Ambiguous Paleoclimatic evidence on the Direction of Causality between CO2 and Temperature

Consider first the long-term, or paleoclimatic, relationship between CO2 and global mean temperature. What seems clearly to be an undisputed finding from the proxy data is that over the broad 500 million year plus time frame of the Phanerozoic era, there has been a positive correlation between atmospheric CO2 levels and global temperature.123 Even a casual review of the climate literature, however, reveals that there is far less agreement on the direction of causality. That is, apparently cool and confident statements such as this – “the climatic impacts of CO2 variations are large enough that they appear to be a primary driver over the Phanerozoic, rather than simply a passive response to changing climate”124 – in fact conceal considerable uncertainty over the long-term (that is, Phanerozoic era) causal contribution of changes in CO2 to changes in global temperature. On the climate side, there is an apparently very methodologically robust finding that over the last 600 million years, earth’s climate has cycled over periods of about 135 million years between warm and cool modes.125 There is much less agreement in attempts to reconstruct atmospheric CO2 levels over the entire Phanerozoic period,126 but the various reconstructions do agree that atmospheric CO2 levels have been low and decreasing over the last 175 million years.127 As this period of time has also been a period of cool global climate, for this most recent period, cool global climate and relatively low levels of CO2 coincide.128 But over the Phanerozoic period as a whole, at least one long-term CO2 reconstruction finds “no correspondence” between atmospheric CO2 levels and global climate, while other studies find periods of up to 100 million years when high levels of CO2 were accompanied by cold temperatures in at least some regions of the world129 (indeed so many such periods that one review has characterized this finding as one of “persistent Phanerozoic decorrelation”130 between tropical (low-latitude) temperature and modeled CO2-induced radiative forcing).

123 Thomas J. Crowley and Robert A. Berner, CO2 and climate change, 292 Science 870 (2001). 124 Scott D. Doney and David S. Schimel, Carbon and Climate System Coupling on Timescales from the Precambrian to the Anthropocene, 32 Ann. Rev. Env. & Res. 31, 39 (2007). 125 L.A. Frakes et al., Climate Modes of the Phanerozoic (1992); J. Veizer et al., 408 Nature 698 (2000). 126 Compare Daniel H. Rothman, Atmospheric Carbon Dioxide Levels for the Last 500 million years, 99 Proc. Nat’l Acad. Sci. 4167, 4170 (2002) with Robert A. Berner, 294 Am. J. Sci. 56 (1994). 127 See Rothman, Atmospheric Carbon Dioxide Levels for the Last 500 million years, supra note __ at 4170. 128 Rothman, Atmospheric Carbon Dioxide Levels for the Last 500 million years, supra note __ at 4170.

129 J. Veizer et al., 408 Nature 698 (2000). 130


There are of course inherent problems in trying to explain the relationship between CO2 and climate over the hundreds of millions of years of the Phanerozoic – among others, the fact that tectonic forces have shifted not only continents but also global ocean circulation patterns over this period.131 For some climatologists, given the “considerable uncertainty” both in the model of how CO2 should be expected to influence climate over such widely varying configurations and states of the planet and in proxy measures of CO2 itself,132 “the first-order agreement between the CO2 record and continental glaciation continues to support the conclusion that CO2 has played an important role in long-term climate change.”133 This conclusory statement comes in a review article by two climatologists whose work has consistently supported a climate-CO2 link for almost two decades (and whose lead author in particular is very heavily cited by the IPCC in its chapter on paleoclimate).134 And although they concede that “there are substantial gaps in our understanding of how climate models distribute heat on the planet in response to CO2 changes on tectonic time scales,” the fact that we need “better confidence” in the paleoclimate data and that tectonic shifts seem to cause “unanticipated complications” makes it “hazardous to infer that existing discrepancies between models and data cloud interpretations of future anthropogenic gas projections.”135

One should note how remarkable this series of statements is. After stating that scientists really have no idea how CO2 might have affected climate in the “widely varying configurations and states of the planet” that have prevailed over the past several hundred million years, Crowley and Berner then say that because extremely crude and imprecise proxy measures suggest a correlation between climate and CO2, we should continue to presume that CO2 played a role in causing climate change. Absent any other showing, this seems to be faith, not logic.

Crowley and Berner’s review article is very widely cited, yet the more typical and (to my mind, at least) more reasonable response of scientists arguing for the primacy of CO2 in climate change to the complex Phanerozoic climate-CO2 relationship is simply to say that, precisely because of the massive changes in the earth and its system over such a long time period, the Phanerozoic record is not very relevant to predicting the future impacts of present-day CO2 forcings. On the CO2 primacy view, sedimentary data that show a close relationship between warming sea surface temperatures and increasing CO2 around the time of the end of the last ice age136 support the theory that it was the increase in CO2 itself that caused the warming and deglaciation.137 However, because of

131 See Crowley and Berner, CO2 and Climate Change, supra note __ at 871.
132 Crowley and Berner, CO2 and Climate Change, supra note __ at 871.
133 Crowley and Berner, CO2 and Climate Change, supra note __ at 872.
134 See IPCC, Climate Change 2007: The Physical Science Basis 486.

135 Crowley and Berner, CO2 and Climate Change, supra note __ at 872.
136 Prominent sediment core studies have found that between 17,000 and 14,500 years ago, atmospheric CO2 levels increased in “close association” with an increase of about 2 degrees centigrade in Western Pacific Warm Pool sea surface temperatures. Lowell Stott, et al., Southern Hemisphere and Deep-Sea Warming Led Deglacial Atmospheric CO2 Rise and Tropical Warming, 318 Science 435 (2007) (citing D.W. Lea, D.K. Pak, H.J. Spero, 289 Science 1719 (2000)).
137 See, for example, Nicholas J. Shackleton, 289 Science 1897 (2000).


uncertainty in dating the CO2 records (both from sedimentary and ice core records), these data did not permit scientists to isolate the exact timing of CO2 changes versus temperature changes.138 A recent study addresses part of this problem by sampling, at a very fine, centimeter-level scale, sediment cores that accumulated at a location where the sediments contain evidence on the temperature of both western tropical Pacific surface water and deep Pacific water at the time of the last glacial transition (the end of the last ice age). This method allowed the researchers to overcome the shortcoming of previous research, namely that temperature and CO2 changes could not be precisely dated relative to one another.139

By independently measuring data on southern ocean (that is, Antarctic) temperature and tropical sea surface temperature, Stott et al. found that:

“nearly all of the warming in glacial/interglacial deep-water warming occurred before 17,500 years ago, and therefore before both the onset of deglacial warming in tropical Pacific surface waters and the increase in CO2 concentrations...together that the onset of deglacial warming throughout the Southern Hemisphere occurred long before deglacial warming began in the tropical surface ocean...[and this means] that the mechanism responsible for initiating the deglacial events does not lie directly within the tropics itself, nor can these events be explained by CO2 forcing alone. Both CO2 and the tropical SST’s did not begin to change until well after 18 kyB.P., approximately 1000 years after the benthic 18O record indicates that the Southern Ocean was warming.”140

Stott et al. “suggest that the trigger for the initial deglacial warming around Antarctica was the change in solar insolation over the Southern Ocean during the austral spring that influenced the retreat of the sea ice,” which in turn led to decreased stratification of the Southern Ocean, promoting “enhanced ventilation of the deep sea and the subsequent rise in atmospheric CO2.”141 In other words, their best guess is that an increase in CO2 did not cause warming in the period of deglaciation that they studied, but rather that an increase in the energy from the sun caused the warming that eventually led to an increase in CO2.
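The timing question at issue here – whether CO2 changes lead or lag temperature changes – is at bottom a lead-lag problem. The short Python sketch below, which uses entirely hypothetical series and is not the method of Stott et al., simply illustrates how a lagged correlation can indicate which of two records leads the other once both are placed on a common timescale.

# Illustrative lead-lag sketch with made-up data (not the Stott et al. method).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(400)                                   # hypothetical time steps
southern_temp = np.sin(t / 40.0) + 0.1 * rng.standard_normal(t.size)
co2 = np.roll(southern_temp, 10)                     # construct CO2 to lag temperature by 10 steps
co2[:10] = co2[10]                                   # pad the wrapped-around values

def lagged_corr(x, y, lag):
    """Correlation of x(t) with y(t + lag); a positive best lag means y lags x."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

best_lag = max(range(-30, 31), key=lambda k: lagged_corr(southern_temp, co2, k))
print("best lag:", best_lag, "time steps (positive means CO2 lags temperature)")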

The 2007 paper by Stott et al. does not appear to be an outlier. Roughly contemporaneous work by Ahn and Brook142 constructed temperature and CO2 records covering both episodes of abrupt warming followed by cooling (Dansgaard-Oeschger events) and long separating cold periods (Heinrich events) that occurred during the last glacial. Ahn and Brook found a correlation between increases in CO2 and warming periods, but also found that unlike the large increases in methane that are known to have

138 Lowell Stott, et al., Southern Hemisphere and Deep-Sea Warming Led Deglacial Atmospheric CO2 Rise and Tropical Warming, 318 Science 435, 437 (2007).
139 Paraphrasing, roughly and summarily, the argument at Stott, et al., Southern Hemisphere and Deep-Sea Warming, supra note __ at 438.

140 Stott, et al., Southern Hemisphere and Deep-Sea Warming Led Deglacial Atmospheric CO2 Rise, supra note __ at 438.
141 Stott, et al., at 438.
142 Jinho Ahn and Edward J. Brook, Atmospheric CO2 and Climate from 65 to 30 ka B.P., 34 Geo. Res. Lett. L10703, doi:10.1029/2007GL029551 (2007).


immediately preceded temperature increases, “CO2 does not lead temperature, [and] CO2 variations were not a direct trigger for the climate changes that occurred during the last glacial period.” Oddly, although the IPCC’s 2007 AR cites many articles that were published as late as 2007, neither the paper by Stott et al. nor that by Ahn and Brook is mentioned in the chapter on paleoclimatology in the 2007 AR.

More recent work by Saikku et al.143 confirms a similar pattern for Antarctica, in which atmospheric and deep water temperatures “begin warming and reach peak values in advance of rising CO2.” Like Ahn and Brook, Saikku et al. hypothesize that changes in wind strength and sea ice extent in the southern ocean may have accounted for the increased release of CO2 from the oceans to the atmosphere. Other recent work covering the current interglacial period points to changes in the earth’s orbit, and hence in the strength of solar energy reaching Antarctica, as the original driving force behind changes in CO2 and temperature.144

2. What Happens to the Water? Recent Findings that Atmospheric Water Vapor and Precipitation are not Responding to a Warming Atmosphere in the Way that Climate Models Predict

A second example of the climate establishment’s rhetorical strategy of simply ignoring disconfirming evidence is provided by the treatment of studies tending to disconfirm conceptually and numerically central predictions of climate models: that atmospheric water vapor will increase with rising atmospheric temperatures, but global precipitation will increase only relatively modestly with warming.

To understand the importance of climate model projections about water vapor and precipitation requires reviewing a bit about water in the atmosphere. It is important first to clarify the different ways to measure the water content of a given parcel of air. The vapor pressure measures the water content using the partial pressure of the water vapor in the air (the pressure of the water vapor as opposed to that of the other molecular constituents of an air parcel). An alternative measure, specific humidity, measures the mass of water vapor for a given mass of air (typically expressed as grams of water vapor per kilogram of air). The maximum amount of water vapor that a given parcel of air can hold – its saturation water vapor pressure – increases with temperature. A way to measure the amount of water vapor in the air that normalizes for air temperature is relative humidity – defined as the ratio of the amount of water vapor in air of a given temperature to the saturation amount of water vapor in the air at that temperature.
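To make these three measures concrete, the following is a minimal Python sketch, not drawn from any of the cited sources; the Magnus approximation for saturation vapor pressure and the example values are assumptions used purely for illustration.

# Illustrative conversions among vapor pressure, specific humidity, and relative
# humidity; constants use the common Magnus approximation (an assumption here).
import math

def saturation_vapor_pressure_hpa(temp_c):
    """Approximate saturation vapor pressure over water, in hPa (Magnus form)."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def specific_humidity_g_per_kg(vapor_pressure_hpa, pressure_hpa=1013.25):
    """Approximate specific humidity: grams of water vapor per kilogram of moist air."""
    return 1000.0 * 0.622 * vapor_pressure_hpa / (pressure_hpa - 0.378 * vapor_pressure_hpa)

temp_c = 25.0            # hypothetical air temperature
vapor_pressure = 20.0    # hypothetical observed vapor pressure, hPa
e_sat = saturation_vapor_pressure_hpa(temp_c)
print(f"saturation vapor pressure at {temp_c} C: {e_sat:.1f} hPa")
print(f"relative humidity: {100.0 * vapor_pressure / e_sat:.0f}%")
print(f"specific humidity: {specific_humidity_g_per_kg(vapor_pressure):.1f} g/kg")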

Water vapor in the atmosphere comes from the evaporation of water from earth’s surface, especially the oceans.145 When air is at its saturation point for a given

143 Reetta Saikku et al., A bi-polar signal recorded in the western tropical Pacific: Northern and Southern hemisphere climate records from the Pacific warm pool during the last ice age, 28 Quart. Sci. Rev. 2374 (2009).
144 Axel Timmermann et al., The roles of CO2 and orbital forcing in driving southern hemispheric temperature variations during the last 21,000 years, 22 J. Clim. 1626 (2009).

145 See F.W. Taylor, Elementary Climate Physics 56 (2005).

temperature, any cooling of the air will result in condensation of water (in other words, condensation occurs when the water vapor equilibrium is exceeded). When water condenses in the atmosphere, it produces clouds and precipitation. Dynamically, because the density (mass/volume) of water vapor is considerably less than that of dry air, moist air masses tend to rise and cool until they become over-saturated, at which point clouds and precipitation form; precipitation dries the air masses, which then descend. The net result of this process is that the latent heat of vaporization is released to the atmosphere.

In climate models, global warming means two things: an increase in surface temperature, and an increase in the temperature of the lower levels of the atmosphere. Because warmer air can hold more water (the saturation vapor pressure increases with temperature), an increase in the temperature of the atmosphere – global warming – is predicted to lead to an increase in the amount of water vapor in the atmosphere. The increase in water vapor pressure as a function of temperature is in fact precisely predicted by a fundamental relationship in thermodynamics known as the Clausius-Clapeyron equation.146 This equation predicts an approximately exponential increase in specific humidity (mass of water vapor per mass of air) as temperature increases, as well as a (smaller) increase in relative humidity (the ratio of the amount of water vapor in the air to the amount required for saturation at a particular temperature). Other things equal, the increase in humidity should lead to an increase in precipitation. At the same time, one might well expect an even bigger increase in precipitation, because warmer surface temperatures might well mean more evaporation, especially from the oceans, and hence even more water vapor being put into a warmer atmosphere.

Quite surprisingly, this is not what climate change models predict. The current set of coupled ocean-atmosphere GCM’s predict substantial increases in atmospheric water vapor (in the range of 7% per degree centigrade) as a consequence of CO2-forced temperature increases.147 Direct application of the Clausius-Clapeyron relation would imply that such an increase in atmospheric water vapor would lead to a predicted increase in precipitation of roughly the same magnitude, around 6-7% per degree centigrade.148 However, the climate models predict substantially smaller increases in precipitation, in the range of only 1-3.5% per degree centigrade.149 The discrepancy between the increases in precipitation predicted by climate models and what one would predict based on fundamental thermodynamic relations is in fact even greater than this because, as noted above, other things equal, evaporation should increase as the surface temperature warms. For

146 According to this equation, the change in saturation vapor pressure with a change in temperature, dp/dT, is to a good approximation equal to Lp/(RvT^2), where L is the latent heat of evaporation, p is the saturation vapor pressure, Rv is the gas constant for water vapor, and T is temperature. See Thayer Watkins, The Clausius-Clapeyron Equation: Its Derivation and Application, available at www.sjsu.edu/faculty/watkins.clausius.html.

147 Frank J. Wentz, et al., How Much More Rain Will Global Warming Bring?, 317 Science 233 (2007).
148 Citing, as an exemplar of the climate model results, Myles R. Allen and William J. Ingram, Constraints on Future Changes in Climate and the Hydrologic Cycle, 418 Nature 224 (2002).
149 See Wentz, et al., How Much More Rain Will Global Warming Bring?, 317 Science 233 (2007); Thomas G. Huntington, Evidence for Intensification of the Global Water Cycle: Review and Synthesis, 319 J. Hydrology 83 (2006), citing as an exemplar of the climate model results, Myles R. Allen and William J. Ingram, Constraints on Future Changes in Climate and the Hydrologic Cycle, 418 Nature 224 (2002).


example, a 1 degree centigrade increase in global surface air temperature should cause a 5.7% increase in evaporation (denoted by E).150
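The roughly 6-7% per degree scaling cited above follows directly from the Clausius-Clapeyron relation. The following is a rough numerical check, not from the cited papers, using standard approximate values for the latent heat of vaporization and the gas constant for water vapor.

# Rough check of the Clausius-Clapeyron scaling: d(ln p)/dT = L / (Rv * T^2).
L_VAPORIZATION = 2.5e6   # latent heat of vaporization, J/kg (approximate)
R_VAPOR = 461.5          # specific gas constant for water vapor, J/(kg K)

for temp_k in (273.0, 288.0, 300.0):
    fractional_change_per_k = L_VAPORIZATION / (R_VAPOR * temp_k ** 2)
    print(f"T = {temp_k:.0f} K: ~{100 * fractional_change_per_k:.1f}% increase in saturation vapor pressure per degree")
# Near a typical surface temperature of ~288 K this works out to about 6.5% per
# degree, in line with the ~6-7% per degree centigrade figure discussed above.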

Climate models do not in fact predict such large increases in evaporation, and it is apparently in large part because the models predict only modest increases in evaporation that they also generate relatively small predicted increases in precipitation. As the models predict that air-sea temperature differentials and relative humidity will remain relatively constant, the only variable that can change to lower the predicted increase in evaporation (and hence precipitation, under the long-term constraint that precipitation, P, equals evaporation, E, in the global, closed system) is the surface wind stress. That is, the “muted response of precipitation to global warming” predicted by GCM’s “requires a decrease in global winds”151 brought about by changing global atmospheric circulation patterns.

It is not possible to measure evaporation over large areas,152 and water vapor, wind, and precipitation are not measured over the entire closed global system. However, in recent years, climate scientists have begun to collect data on atmospheric water vapor, wind and precipitation over large regions. And what they are finding is that at least on regional scales, water vapor, wind and precipitation are not moving in the direction predicted by GCM climate models.

Using satellite observations of precipitation, total water vapor and surface wind stress over the oceans, supplemented by a blend of satellite and rain gauge measurements over land areas, over the period 1987 to 2006 (during which time the Earth’s surface temperature warmed by about .19 degrees C per decade), Wentz et al. evaluated these various GCM predictions. Using the satellite dataset, they found that over their study period, winds over the 30 degrees north to 30 degrees south tropics increased by .04 meters per second per decade, and over all oceans at a rate of .08 meters per second per decade. These observations were “opposite to the GCM results, which predict that the 1987 to 2006 warming should have been accompanied by a decrease in winds on the order of 0.8% per decade.”153 When Wentz et al. looked at the variability of precipitation and evaporation over their study period, they found a “pronounced difference between the precipitation time series from the climate models and that from the satellite observations.” Climate models under-predicted the amplitude of interannual variability, decadal trends, and the response to El Niños by a factor of 2 to 3.
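The decadal trends at issue are simple least-squares trends fit to monthly satellite series and expressed per decade. A hypothetical sketch of that calculation follows; the data are invented, and only the method, not any result of Wentz et al., is illustrated.

# Hypothetical decadal-trend calculation from a monthly time series.
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(240)                                                  # 20 years of monthly data
wind = 7.0 + 0.0004 * months + 0.2 * rng.standard_normal(months.size)   # made-up wind series, m/s

slope_per_month, intercept = np.polyfit(months, wind, 1)                 # least-squares linear fit
trend_per_decade = slope_per_month * 120.0                               # 120 months per decade
print(f"fitted wind trend: {trend_per_decade:+.3f} m/s per decade")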

150 Wentz, et. al., How Much More Rain will Global Warming Bring?, 317 Science 233 (2007).

151 Frank J. Wentz, et. al., How Much More Rain will Global Warming Bring?, 317 Science 233, 234 (2007). 152 As Thomas G. Huntington, Evidence for Intensification of the Global Water Cycle: Review and Synthesis, 319 J. Hydrology 83, 88 (2006) explains, there are only a few dozen evapotranspiration monitoring sites in the world, and an indirect measurement, pan evaporation over the period 1950 to 1990 in the U.S. and the former USSR, has actually been decreasing over the period. Of course an ad hoc explanation for this observed decrease consistent with global warming is that atmospheric water vapor has increased, inhibiting evaporation, but this is generally inconsistent with surface warming unless particular values of surface, versus atmospheric, warming are assumed.

153 Wentz, et al., How Much More Rain Will Global Warming Bring?, 317 Science 233, 234 (2007).

Even more important, the observed values of evaporation, E, precipitation, P, and wind, W, all exhibited similar responses to the two El Niños, similar magnitudes of interannual variability, and similar decadal trends, suggesting an “acceleration in the hydrologic cycle of about 6% [per degree centigrade], close to the value” derived from the basic Clausius-Clapeyron relationship. However, there was “no evidence in the observations that radiative forcing in the troposphere is inhibiting the variations in E, P, and W. Rather, E and P seem to simply vary in unison with the total atmospheric water content.”154

Wentz et al.’s conclusion is striking, for it certainly does not increase one’s confidence in the current generation of GCM’s. As they present their summary interpretation, the most likely explanation for the discrepancy between observations and GCM predictions is that:

“the climate models have in common a compensating error in characterizing the radiative balance for the troposphere and the Earth’s surface. For example, variations in modeling cloud radiative forcing at the surface can have a relatively large effect on the precipitation response, whereas the temperature response is more driven by how clouds affect the radiation at the top of the atmosphere. ...The difference between a subdued increase in rainfall and a C-C [Clausius-Clapeyron] increase has enormous impact, with respect to the consequences of global warming. Can the total water in the atmosphere increase by 15% with CO2 doubling but precipitation increase by only 4%? Will warming really bring a decrease in global winds? The observations here suggest otherwise...”155

The study by Wentz et al. is by no means the only recent empirical work to cast doubt on climate model predictions of water vapor and precipitation. Utilizing monthly data on water vapor and lower tropospheric temperature from the North American Regional Reanalysis over the period 1979 to 2006, Wang et al. found that while atmospheric temperature significantly increased at a rate of about .08 degrees centigrade per decade, neither of the measures of water vapor content that they used showed any significant increase.156 The authors are careful to note that their study was mostly over land and covered an open (regional) system – versus the closed global land/ocean system – but, as they clearly state, the North American water vapor trends that they found are “inconsistent with the trends projected by a rising temperature if a constant relative humidity is assumed,” as climate models do.157

More recently still, Paltridge et al. have looked at absolute and relative humidity trends over the period 1973-2007 at different altitudes and in both tropical and mid-latitude zones and found little support for the constant relative humidity feature that is

154 Wentz, et al., How Much More Rain Will Global Warming Bring?, supra note __ at 235.
155 Wentz, et al., How Much More Rain Will Global Warming Bring?, supra note __ at 235 (emphasis supplied).
156 Jih-Wang Wang, et al., Towards a Robust Test on North America Warming Trend and Precipitable Water Content Increase, 35 Geo. Res. Lett. L18804, doi:10.1029/2008GL034564 (2008).
157 Wang, et al., Towards a Robust Test on North America Warming Trend and Precipitable Water Content Increase, 35 Geo. Res. Lett. L18804, doi:10.1029/2008GL034564 (2008).


crucial to the large water vapor feedback effect in climate models.158 More precisely, climate models predict that even with surface and tropospheric warming, relative humidity at any given height in the troposphere remains roughly constant. Paltridge et al. find by contrast that for all the latitude zones that they studied, for all altitudes above the convective boundary layer, relative humidity “decreased over the past three or four decades as the surface and atmospheric temperatures have increased.”159

To be sure, the Paltridge study was published after the IPCC’s 2007 Assessment Report. However, the general strategy of the IPCC’s most recent report was to stress conflicting satellite data tending to show roughly constant relative humidity, in line with climate model implications, and to emphasize the problems with the atmospheric humidity measurements relied upon by Paltridge et al.160 Since the publication of the 2007 IPCC AR, a similar strategy has continued to be taken by the scientists who were influential in shaping the discussion of water vapor feedback in that Report. In a 2009 review essay, Dessler and Sherwood161 argue that recent observations of a strong positive water vapor feedback from short term climate perturbations162 show that “the water vapor feedback is virtually certain to be strongly positive, with most evidence supporting a magnitude sufficient to roughly double the warming that would otherwise occur.” Dessler dismisses work such as that by Wang et al.163 as focusing only on regional, as opposed to global, measurements of humidity, and dismisses the study by Paltridge et al. because its central findings do not hold up when one uses newer, “more modern and sophisticated reanalysis data sets.”164 On the view maintained by Dessler and Sherwood and the IPCC, to generate “virtually certain” predictions of the large positive water vapor feedback that will result from an increase in CO2, one doesn’t need to have an accurate model of how clouds and rainfall (involving “detailed microphysics and other small-scale processes”) will change with such an increase in water vapor. Instead, a “surprisingly” simple model accurately predicts the water vapor feedback: an increase in atmospheric CO2 increases tropical surface temperatures, leading to increased water vapor in the tropical region, where most of the water vapor is transported by convection far above the cloud layer into the upper troposphere (because the difference in temperature between the surface and

158 Garth Paltridge et al., Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data, 98 Theo. Appl. Climatol. 351 (2009).
159 Paltridge et al., Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data, 98 Theo. Appl. Climatol. 351, 355 (2009).

160 IPCC, Climate Change 2007: The Physical Science Basis __ (arguing that the satellite measurements of humidity showing constant relative humidity have become steadily more reliable, whereas the radiosonde data that make up most of the measurements in the NCEP reanalysis dataset used in the Paltridge study are not reliable).

161 Andrew E. Dessler and Steven C. Sherwood, A Matter of Humidity, 323 Science 1020, 1021 (2009). 162 Such as Dessler and Wong, Estimates of the Water Vapor Climate Feedback during the El Nino Southern Oscillation, 22 J. Climate 6404 (2009). 163 As well as Chunquiang Wu, Tianjun Zhou and De-Zheng Sun, Atmospheric Feedbacks over the tropic Pacific in Observations and atmospheric general circulation models: an extended assessment, __ J. Clim. __ (2010).

164 See Guest Post by Andrew Dessler on the Water Vapor Feedback, available at http://pielkeclimatesci.wordpress.com/2010/01/06/guest-post-by-andrew-dressler-on-the-water-vapor-feedback.


troposphere is so great in the tropics), where the large scale global circulation transports the (increased) water vapor across the global troposphere.165

3. Climate Feedbacks: Are Clouds and Rain Really Irrelevant?

Even if Dessler and Sherwood are correct that the crucial water vapor feedback effect can be estimated without an accurate model of how a warmer, and wetter, atmosphere will affect cloud formation and other climatic “microprocesses,” cloud and rainfall changes are still among the primary feedback effects that determine the sensitivity of climate models. On clouds, as with water vapor, there is evidence in the peer-reviewed literature that cloud changes observed during the late twentieth century are not consistent with what climate models are generally presuming. In this section, I briefly compare what the IPCC Report has to say about what is currently known regarding the crucial cloud feedback effect with what one can find in the peer-edited journal literature.166 This comparison shows that on cloud feedbacks, the IPCC is relatively candid regarding the existence of scientific uncertainty. But while the literature strongly suggests that uncertainty regarding cloud feedbacks is so fundamental that it virtually eliminates any ability to generate reliable quantitative predictions about the impact of elevated CO2 on climate, the IPCC’s most recent Fourth Assessment Report takes great pains to do exactly the opposite, adopting rhetoric that minimizes and downplays the significance of uncertainty over cloud feedbacks.

a) The IPCC on Cloud Feedback

The IPCC Report explains in a succinct (if rather vague) way the two counteracting effects of clouds on surface temperature:

“By reflecting solar radiation back to space (the albedo effect of clouds) and by trapping infrared radiation emitted by the surface and the lower troposphere (the greenhouse effect of clouds), clouds exert two competing effects on the Earth’s radiation budget. These two effects are usually referred to as the SW and LW components of the cloud radiative forcing (CRF)....In the current climate, clouds exert a cooling effect on climate (the global mean CRF is negative).”167

The IPCC Report then admits quite quickly that clouds are quantitatively very significant in determining the earth’s radiative fluxes (or flows) and that there is great uncertainty over how the balance between cloud cooling and cloud warming might be affected by CO2-induced global warming:

“At the time of the TAR [the Third Assessment Report, issued in 2001], clouds remained a major source of uncertainty in the simulation of climate changes (as they still are at present: e.g. [various sections cited])...

165 Dessler and Sherwood, A Matter of Humidity, supra note __ at 1020.
166 Climate Change 2007: The Physical Science Basis (Susan Solomon, et al. eds, 2007) (hereafter cited as “IPCC 2007”).
167 IPCC 2007 p. 635.


“...the amplitude and even the sign of cloud feedbacks as noted in the TAR as highly uncertain, and this uncertainty was cited as one of the key factors explaining the spread in model simulations of future climate for a given emission scenario. This cannot be regarded as a surprise...Clouds, which cover about 60% of the earth’s surface, are responsible for up to two-thirds of the planetary albedo, which is about 30%. An albedo decrease of only 1%, bringing the Earth’s albedo from 30% to 29%, would cause an increase in the black-body radiative equilibrium temperature of about 1 degree C, a highly significant value, roughly equivalent to the direct radiative effect of a doubling of the atmospheric CO2 concentration...The strong effect of cloud processes on climate model sensitivities to greenhouse gases was emphasized through a now classic set of General Circulation Model (GCM) experiments, carried out by Senior and Mitchell (1993). They produced global average surface temperature changes (due to doubled atmospheric CO2 concentration) ranging from 1.9 degrees C to 5.4 degrees C, simply by altering the way that cloud radiative properties were treated in the model.”168
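The albedo arithmetic quoted above can be checked in a few lines. The following is a back-of-the-envelope sketch, not taken from the IPCC report; the solar constant and Stefan-Boltzmann values are standard approximations.

# Black-body radiative equilibrium temperature: T_e = [S(1 - albedo) / (4 * sigma)]^(1/4).
SOLAR_CONSTANT = 1366.0   # W/m^2 (approximate)
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/(m^2 K^4)

def equilibrium_temp_k(albedo):
    return (SOLAR_CONSTANT * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

t30 = equilibrium_temp_k(0.30)
t29 = equilibrium_temp_k(0.29)
print(f"T_e at albedo 0.30: {t30:.1f} K")
print(f"T_e at albedo 0.29: {t29:.1f} K")
print(f"difference: {t29 - t30:.2f} K (roughly the 1 degree C cited in the quoted passage)")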

To its credit, the IPCC Report admits that different assumptions regarding cloud feedbacks largely account for the differences across models in predicted climate sensitivities: without cloud feedbacks, current GCM’s would predict a climate sensitivity (+/- 1 standard deviation) of roughly 1.9 degrees C +/- .15 degree C, whereas with cloud feedbacks, the mean climate sensitivity estimate from current GCM’s is 3.2 degrees C, with a standard deviation, at +/- .7 degree C, that is “several times” larger.169 The IPCC Report admirably both discloses how uncertainty over cloud feedback effects is largely responsible for the spread in predictions across different climate models, and explains where that uncertainty is coming from:

“...the spread of climate sensitivity estimates among current models arises primarily from inter-model differences in cloud feedbacks...cloud feedbacks remain the largest source of uncertainty in climate sensitivity estimates...170 “inter-model differences in cloud feedbacks are mostly attributable to the SW [short wave] cloud feedback component, and that the responses to global warming of both deep convective clouds and low-level clouds differ among GCM’s. Recent analyses suggest that the response of boundary-layer clouds constitutes the largest contributor to the range of climate change feedbacks among current GCM’s. It is due both to large discrepancies in the radiative response simulated by models in regions dominated by low level cloud cover...and to the large areas of the globe covered by these regions....However, ...the spread of model cloud feedbacks is substantial at all latitudes, and tends to be larger in the tropics.” 171

While the IPCC Report is admirably candid in acknowledging uncertainty about cloud feedbacks, it fails to acknowledge, and indeed even conceals, the enormous gap

168 IPCC 2007 pp. 1113-1116.
169 IPCC 2007 p. 633.
170 IPCC 2007 pp. 636-637.
171 IPCC 2007 p. 637.


between the scientific literature on cloud feedbacks and the assumptions about cloud feedbacks in the GCM models it relies upon for climate change predictions. The Report notes that “the GCM’s all predict a positive cloud feedback but strongly disagree on its magnitude.”172 A little later, the Report discusses, as if it were an unrelated topic, recent debate over 1) (climate scientist Richard Lindzen’s) conjecture that the tropical area covered by high-topped anvil clouds might decrease with rising temperature, leading to a negative climate feedback; 2) conjectures that low-level, cooling boundary layer clouds over the ocean might increase, leading to cooling; and 3) the hypothesis that an increase in extratropical storm strength would dominate decreases in storm frequency, producing increased reflection of SW radiation and decreased emission of LW radiation, where this final debate has been prompted by studies (discussed below) reporting observed decreases in cloud thickness.173 The IPCC Report does declare that new observational data have “revealed systematic biases in the current version of GCM’s, such as the tendency to over-predict optically thick clouds,” and that these “errors... cast doubts on the reliability of the model cloud feedbacks. ...under-prediction of low and mid-level clouds presumably effects the magnitude of the radiative response to climate warming in the widespread regions of subsidence.”174

Were one approaching the climate change prediction problem for the first time, one might well conclude from the IPCC AR4’s own discussion that the GCM’s are probably wrong in assuming a positive cloud feedback, and that their projected temperature increases are consequently biased upward. Yet the Report’s discussion of clouds concludes by saying only that “...it is not yet possible to assess which of the model estimates of cloud feedback is the most reliable. However, progress has been made in the identification of the cloud types, the dynamical regimes and the regions of the globe responsible for the large spread of cloud feedback estimates among current models.”175

b) The Peer-Reviewed Scientific Literature on Cloud Feedback

For quite some time, climate scientists have understood the basic mechanisms by which clouds affect climate. On the vertical dimension, high, cold cirrus clouds trap and re-radiate outgoing longwave radiation, thus heating the earth’s surface and the atmosphere, an effect especially prominent at low latitudes, while low, warmer clouds reflect solar shortwave radiation, thus cooling the earth’s surface and atmosphere, an effect that is especially likely at higher latitudes.176 Thus by tending to warm the tropical atmosphere and cool the polar atmosphere, “...clouds enhance the latitudinal gradient of column cooling and reinforce the meridional heating gradients responsible for forcing the mean meridional circulation of the atmosphere.”177

172 IPCC 2007 p. 632.
173 IPCC 2007 pp. 636-637.
174 IPCC 2007 p. 638.
175 IPCC 2007 p. 638.
176 Graeme L. Stephens, Cloud Feedbacks in the Climate System: A Critical Review, 18 J. Clim. 237, 246 (2005).
177 Stephens, Cloud Feedbacks in the Climate System, at 243.


The literature explains why the cloud feedback effect is so hard to fully understand and, thus far, impossible to predict: the relationship between clouds and the earth’s climate system runs in both directions; clouds affect the earth’s radiation budget, but are themselves dependent upon climate. Almost two decades ago, climate scientist Albert Arking noted that while the first relationship, the “radiative impact of clouds on climate is at least understood in principle...the dependence of cloud cover on the variables of the climate system is only understood in isolated areas under rather limited conditions.”178 Crucially, and as the IPCC rightly points out in its 2007 AR, the variation among climate models’ sensitivity – the temperature increase predicted to result from a CO2 doubling – is “widely believed to be due to uncertainties in cloud feedbacks.”179 Relatively small temperature increases are predicted by models that predict increased low-level cloudiness, while big temperature increases are predicted by models that predict decreased low-level cloudiness.180

According to a very widely cited and seemingly authoritative assessment of the state of our knowledge about cloud feedbacks by climate scientist Graeme Stephens, advancing our understanding of cloud feedback effects depends upon recognizing that “[i]t is the atmospheric circulation that broadly determines where and when clouds form and how they evolve. Cloud influences, in turn, feed back on the atmospheric circulation through their effects on surface and atmospheric heating, ...Therefore the basis for understanding this important feedback, in part, lies in developing a clearer understanding of the association between atmospheric circulation regimes and the cloudiness that characterizes these ‘weather’ regimes.”181 In his view, climate models must be evaluated by their ability to “reproduce the observed present-day distribution of clouds, their effects on the earth’s energy budgets, and their relation to other processes, as well as to be able to reproduce observed climate variability.”182 Importantly, as Stephens explains, existing tests that compare modeled versus observed cloud cover and/or top of the atmosphere (TOA) cloud radiative forcing are inadequate, because:

“merely reproducing distributions of observed parameters independent of one another is not an adequate test of models since it is possible to tune to the observations using any one of many combinations of cloud parameters that, individually, might be unrealistic...Simple comparisons of model and observed cloud parameters does not provide any insight into the realism of those processes essential to feedback.”183

As an example of this problem, Stephens discusses how one widely used model (that of the ECMWF – the European Centre for Medium-Range Weather Forecasts) predicts lower amounts of high clouds but thicker boundary layer clouds than observed, and yet has a

178 Albert Arking, The Radiative Effects of Clouds and their Impact on Climate, 71 Bull. Am. Meteoro. Soc. 795 (1991).
179 Stephens, Cloud Feedbacks, supra note __ at 239.
180 Stephens, Cloud Feedbacks in the Climate System, supra note __ at 240.

181 Stephens, Cloud Feedbacks in the Climate System, supra note __ at 241.
182 Stephens, Cloud Feedbacks in the Climate System, supra note __ at 260.
183 Stephens, Cloud Feedbacks in the Climate System, supra note __ at 261.


total cloud shortwave flux that is close to the observed value (although its longwave cloud flux is not close to the observed value).184

In summarizing our present understanding of cloud feedbacks and, in particular, cloud feedbacks in the GCM climate models, Stephens offers the following remarks:

“GCM climate and NWP models represent the most complete description of all the interactions between the processes that establish the main cloud feedbacks, [but] the weak link in the use of these models lies in the cloud parameterization embedded in them. Aspects of these parameterizations remain worrisome containing levels of empiricism and assumptions that are hard to evaluate with current global observations.

“...[o]ut of necessity, most studies make ad hoc assumptions about the overriding importance of one process over all others generally ignoring other key processes, and notably the influence of atmospheric dynamics on cloudiness. Generally little discussion is offered as to what the system is let alone justification for the assumptions given...Most analysis of feedback concentrates on the global-mean climate system and global-mean surface temperature defining cloud feedbacks as those processes that connect changes in cloud properties to changes in global-mean temperature. There is, however, no theoretical basis to define feedbacks this way nor any compelling empirical evidence to do so....Comparisons of feedback diagnostics applied to the same GCM but derived using different analysis methods with different assumptions about the nature of the system...produce estimates of feedbacks that not only vary in strength but also in sign. Thus we are led to conclude that the diagnostic tools currently in use by the climate community to study feedback, at least as implemented, are problematic and immature and generally cannot be verified using observations.”185

Notably, Stephens’s article explaining the seemingly important limitations on scientific knowledge about cloud feedback effects is never even cited in the IPCC’s 2007 AR4. This failure to mention and discuss such a widely cited article, and one that appeared some years before the 2007 AR, seems to indicate that the 2007 AR was not a full and complete assessment.

184 Stephens, Cloud Feedbacks in the Climate System, supra note __ at 261. 185 Stephens, supra at 268 (emphasis supplied).


c) Cloud Feedback: The Evidence for Natural Cooling that Offsets the CO2 Greenhouse Effect

Recently, climate scientists have taken Stephens’s advice – that the “blueprint for progress must follow a more arduous path that requires carefully orchestrated and systematic combination of model and observations”186 – and have attempted to combine theoretical predictions with observations not at the large scale of months, years or decades, but at the level of daily observations of clouds, radiation, temperature and other variables that can then be compared with GCM predictions.

Studying six years of daily data on tropical (20 degrees north to 20 degrees south) oceanic averages during fifteen tropical intraseasonal oscillations, Spencer et al.187 found a negative feedback, with “enhanced radiative cooling of the ocean-atmosphere system during the tropospheric warm phase of the oscillation.” Spencer et al. traced this “unexpected” transition from net cloud warming to net cloud cooling during the oscillation’s rainy, tropospheric warming phase to a decrease in ice cloud coverage.188 Spencer et al. repeatedly caution that the time scales they studied were much shorter than climate change time scales. Yet they also observed that as “all moist convective adjustment occurs on short time scales,” and “intraseasonal oscillations represent a dominant mode of convective variability in the tropical troposphere, their behavior should be considered when testing the convective and cloud parameterizations in climate models that are used to predict global warming.”

Another study in a similar spirit begins by observing that:

“Low-level clouds combine a small greenhouse effect with a generally high albedo and thus contribute significantly to the overall net cooling role of clouds in earth’s climate. Currently, lack of both resolution and appropriate physical parameterizations prohibit reliable large-scale prediction of cloud-climate feedbacks...A good strategy for improving our understanding of climate mechanisms and their numerical simulation is to carry out focused studies that elucidate specific ocean-atmosphere-cloud relationships and that can inform and constrain model results. By examining the interannual variability of low-level clouds in the eastern equatorial Pacific, an area of high atmospheric and oceanic variability located on the edge of a persistent stratiform cloud deck, we aim to uncover sometimes-subtle details of marine low-level cloud processes...”189

This study, by Mansbach and Norris, found that, especially in the region extending 1500 km west of the Galapagos Islands, interannual low-level cloud variability was explained neither by variation in sea surface temperatures nor by lower-troposphere static stability,

186 Stephens, supra note __ at 269.
187 Roy W. Spencer, et al., Cloud and Radiation Budget Changes Associated with Tropical Intraseasonal Oscillations, 34 Geo. Res. Lett. L15707 (2007).
188 Spencer et al., Cloud and Radiation Budget Changes, supra note __.
189 David K. Mansbach and Joel R. Norris, Low-Level Cloud Variability over the Equatorial Cold Tongue in Observations and Models, 20 J. Climate 1555 (2007).


but rather primarily by SST advection and atmospheric surface stability (with warm SST [sea surface temperature] advection stabilizing the atmospheric surface layer, inhibiting upward mixing of moisture from the sea surface, and resulting in a decrease in cloud amount and more frequent absence of low-level clouds).190 Mansbach and Norris explain that “although beyond the scope of the present study to quantify, we note that the observed inverse relationships between SST and SST advection and between cloud and SST advection imply the existence of a negative cloud feedback on and about the near-equatorial SST front.”191 Notably, climate models do not accurately simulate the SST advection – cloud relationships found by Mansbach and Norris: for varying reasons, both the Geophysical Fluid Dynamics Laboratory GCM and the National Center for Atmospheric Research coupled OAGCM’s are inaccurate.

d) Clouds and The Relationship between Weather and Climate

A look at the literature thus reveals that the IPCC has only hinted at the scientific controversy over cloud feedback effects, and at how much the climate predictions it advances rely upon what are essentially guesses about whether cloud feedback is likely to be positive or negative, and large or small. But clouds are actually even more central to the climate change debate than this. Just how central can be grasped by looking at the views of climate scientist Roy Spencer.192

Spencer takes issue with a basic assumption maintained by most climate modelers and researchers, which is that “an increase in the greenhouse effect from manmade greenhouse gases causes a warming effect that is similar to that from an increase in sunlight.” The difference, in Spencer’s view, is that the natural greenhouse effect (which comes mostly from water vapor and clouds) “is under the control of weather systems – especially precipitation systems – which are generated in response to solar heating. Either directly or indirectly, those precipitation systems determine the moisture (water vapor and cloud) characteristics for most of the rest of the atmosphere.” Basically (and colloquially), the more efficiently precipitation systems respond to rising atmospheric temperature (and hence rising water vapor), the more water vapor is recycled back to the surface as rainfall and the less water vapor remains in the atmosphere. Since water vapor is by far the most important greenhouse gas, the more efficient precipitation systems are at removing water vapor, the lower the equilibrium temperature increase from any radiative forcing (such as an increase in a different greenhouse gas, such as CO2). In the words of climate scientists, “the climate equilibrium depends crucially on cloud microphysical processes. Clouds with high precipitation efficiency produce cold and dry climates. This happens because most of the cloud condensed water falls out as rain, leaving little available to moisten the atmosphere.”193 The great defect in GCM climate studies can now be understood:

190 Mansbach and Norris, Low-Level Cloud Variability, supra note __ at 1567.
191 Mansbach and Norris, Low-Level Cloud Variability, supra note __ at 1567.
192 Roy W. Spencer, Global Warming and Nature’s Thermostat: Precipitation Systems (updated October, 2007), available at www.weatherquestions.com/Roy-Spencer-on-global-warming.html.
193 Nilton O. Rennó, et al., Radiative-Convective Model with an Explicit Hydrologic Cycle 1. Formulation and Sensitivity to Model Parameters, 99 J. Geo. Res. 14,429, 14,440 (1994).


“Since climate equilibrium can be very sensitive to the cloud microphysical processes, any cumulus convection scheme adequate for use in GCM’s should be strongly based on them. Considering that the cumulus convection schemes currently in use in GCM’s are based on somewhat arbitrary moistening assumptions, they are probably inadequate for climate change studies.”194

In slightly less technical language, the huge problem with GCM climate models is that these models do not actually model the physical processes that produce precipitation systems, but instead set various parameters at values such that the models accurately reproduce spatial and temporal patterns in average precipitation. These models cannot shed any light at all on how the efficiency of precipitation systems might change in response to a forced atmospheric warming.195 While it is still a minority view, there is now evidence published in peer-edited scientific journals – the studies discussed above – that supports the view of Spencer and other climate scientists that precipitation systems act in effect as a kind of atmospheric thermostat, cooling the atmosphere in response to a temperature increase caused by rising CO2.

4. Direct Evidence on Feedback Effects

The difficulty of measuring changes in atmospheric water in response to warming surface and tropospheric temperatures should not be taken to indicate that more direct measurements of feedback effects are not possible. Since the mid-1980’s, radiative flux measurements have been taken by radiometers placed on satellites as part of the Earth Radiation Budget Experiment (ERBE). Although there are limitations in the data, Lindzen and Choi of MIT recently analyzed sea surface temperature (SST) together with ERBE radiometer measurements of outgoing longwave radiation and reflected shortwave (solar) radiation in the tropical region (20 degrees south to 20 degrees north) for the period 1985-1999.196 Even taking account of the known uncertainty in the ERBE data, Lindzen and Choi find that the ERBE data show a net negative feedback as SST rose during their study period, primarily due to increased reflection of solar radiation (as discussed in more detail below, this would occur if there were an increase in high-level clouds due to CO2-forced warming). Given this negative feedback, the implied temperature increase from a doubling of CO2 – which as discussed earlier is referred to as climate sensitivity – is, according to Lindzen and Choi, about .5 degrees Celsius. Since climate models generate a sensitivity of between 1.5 and 5 degrees Celsius, the ERBE data would seem to indicate that the climate models used by the IPCC vastly overstate climate sensitivity. The reason they do so, as explained earlier, is that they assume large positive feedbacks. The results reported by Lindzen and Choi would seem to suggest that the

194 Nilton O. Rennó, et al., Radiative-Convective Model with an Explicit Hydrologic Cycle 1. Formulation and Sensitivity to Model Parameters, 99 J. Geo. Res. 14,429, 14,440 (1994).
195 See Roy W. Spencer, Global Warming and Nature’s Thermostat: Precipitation Systems (updated October, 2007), available at www.weatherquestions.com/Roy-Spencer-on-global-warming.html.

196 Richard S. Lindzen and Yong-Sang Choi, On the Determination of Climate Feedbacks from ERBE Data, 36 Geo. Res. Lett. L16705, doi:10.1029/2009GL039628 (2009).


fundamental assumption in the climate models used by the IPCC – the assumption of large positive feedbacks, which by itself is responsible for the potentially catastrophic projected temperature increases – is strongly disconfirmed by the existing evidence.
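The link between assumed feedbacks and climate sensitivity can be made concrete with the standard zero-dimensional feedback arithmetic. The sketch below assumes a no-feedback response of about 1.2 degrees C for a CO2 doubling; the feedback factors are illustrative only and are not taken from Lindzen and Choi or from any particular climate model.

# Minimal sketch: equilibrium warming dT = dT0 / (1 - f), where dT0 is the
# no-feedback response to a CO2 doubling and f is the net feedback factor.
NO_FEEDBACK_RESPONSE_C = 1.2    # assumed no-feedback warming for 2xCO2, degrees C

for label, f in [("strong positive feedback", 0.6),
                 ("modest positive feedback", 0.25),
                 ("net negative feedback", -1.4)]:
    warming = NO_FEEDBACK_RESPONSE_C / (1.0 - f)
    print(f"{label} (f = {f:+.2f}): ~{warming:.1f} degrees C for a CO2 doubling")
# Positive feedback factors amplify the no-feedback response toward the model
# range discussed in the text; a net negative factor damps it to well below 1 degree C.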

D. Compared to What? The Failure to Rigorously Test the CO2 Primacy Hypothesis Against Alternative Explanations for Late Twentieth Century Warming

What is called “detection and attribution” by the IPCC is actually the crucial scientific question of whether increases in anthropogenic greenhouse gas emissions, as opposed to something else, can be rigorously and confidently identified as responsible for the warming trend that began in the early twentieth century and accelerated in its latter decades. Notably, in the “Summary for Policymakers” to the 2007 physical science Assessment Report, the IPCC does not even mention the topic of detection and attribution. To repeat, in the summary released to the media well in advance of the full Report, the IPCC did not find it worthwhile to state its conclusion that increases in human greenhouse gases (ghg’s), versus other potential causes, were responsible for recent temperature increases.

The summary explanation of how the IPCC identified increased human ghg emissions as the culprit appeared first in the “Technical Summary” that accompanied the full Report. Here, the IPCC said that it had concluded that it is “extremely unlikely” that warming over the past 50 years “can be explained without external forcing,” because:

“these changes took place over a time period when non-anthropogenic forcing factors (i.e. the sun and volcanic forcing) would be likely to have produced cooling, not warming...it is very likely that these natural forcings alone cannot account for the observed warming. There is also increased confidence that natural internal variability cannot account for the observed changes, due in part to improved studies that warming occurred in both oceans and atmosphere, together with observed ice mass losses.”197

Remarkably, on the very same page, the IPCC wrote that:

“...uncertainties remain in estimates of natural internal variability...internal variability is difficult to estimate from available observational records since these are influenced by external forcing, and because records are not long enough in the case of instrumental data, or precise enough in the case of proxy reconstructions, to provide complete descriptions of variability on decadal and longer time scales.”198

By “natural variability” the IPCC was referring to the fact that because the climate system is chaotic (that is, highly non-linear), it will exhibit cycles and swings that are entirely internal to the climate system, caused quite literally by nothing that is new or outside the system itself.

197 IPCC, Climate Change 2007: The Physical Science Basis 60. 198 IPCC, Climate Change 2007: The Physical Science Basis 60.


What is remarkable about the two passages just quoted is that the IPCC is nakedly applying dramatically different standards of proof to the two competing hypotheses – that recent warming is due to elevated atmospheric levels of CO2, versus natural or internal variability. Even the partial survey of the literature that follows immediately below seems to suggest that the observed changes in global average surface temperature are quite possibly due to internal variability. It would seem that the short observational record and inaccuracy in proxy reconstructions affect the ability to test the CO2 primacy hypothesis no more and no less than they affect the ability to test the internal variability hypothesis. If this is not the case, then one would certainly like to see an explanation of why, for the ultimate question is one of identifying CO2 as the culprit.

That forces other than increases in atmospheric CO2 may have contributed to the observed late 20th century warming of global surface temperatures is strongly suggested by recent improved data on temperature trends in the troposphere. This is clearly explained in a recent non-technical article by climate scientist Richard Lindzen.199 Lindzen begins200 by noting that precisely because water vapor is such a strong greenhouse gas, the idea that the surface of the earth cools primarily by thermal radiation is highly misleading. Surface heat escapes through the action of the fluid motions of convection and planetary scale circulation patterns. These motions move heat upward and poleward to a level of the atmosphere – called the characteristic emission level – where it can escape to space. Increasing concentrations of greenhouse gases raise the characteristic emission level. At this colder level of the atmosphere, the outgoing longwave radiation no longer balances the net incoming solar radiation. For equilibrium to be restored, the temperature at the characteristic emission level must increase. This, precisely speaking, is what is called radiative forcing in the climate science literature. As Lindzen explains, how warming at the characteristic emission level relates to surface warming is “not altogether clear,” but regardless of the particular climate model, the signature or “fingerprint” of greenhouse warming is that the “greenhouse contribution to warming at the surface should be between less than half and one third of the warming seen in the upper troposphere,” with an upper bound of about 1/2.5.201 Recent observations depict a warming trend in the troposphere of about .1 degree centigrade per decade, which if due to greenhouse warming should have been associated with a surface trend of .04 degrees centigrade per decade, or about .4 degrees centigrade over the 20th century. Surface data, however, show a warming of about .13 degrees centigrade per decade over the latter part of the 20th century. This implies that only about 1/3 of the observed surface warming is due to greenhouse gases.
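The arithmetic in the preceding paragraph can be restated in a few lines. The sketch below simply uses the figures given in the text; the 0.4 ratio of surface to upper-tropospheric greenhouse warming is the text’s approximate 1/2.5 upper bound, and the attribution fraction is only a rough estimate.

# Restating the text's fingerprint arithmetic with the numbers given above.
tropospheric_trend = 0.10        # observed tropospheric trend, degrees C per decade
greenhouse_ratio = 0.4           # assumed surface warming per unit of upper-troposphere warming (~1/2.5)
observed_surface_trend = 0.13    # observed surface trend, degrees C per decade

expected_surface_trend = tropospheric_trend * greenhouse_ratio      # ~0.04 C per decade
greenhouse_share = expected_surface_trend / observed_surface_trend  # ~1/3
print(f"expected greenhouse-driven surface trend: {expected_surface_trend:.2f} C/decade")
print(f"share of observed surface warming attributable on this reasoning: ~{greenhouse_share:.0%}")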

Given this basic analytical starting point, the question, as Lindzen succinctly puts it, is “How then did the IPCC Summary for Policymakers reach its conclusion that most of the surface warming over the past 30 years is due to anthropogenic forcing?”202 The

199 Richard S. Lindzen, Taking Greenhouse Warming Seriously, 18 Energy & Env. 937 (2007). 200 The following discussion is taken from Lindzen, Taking Greenhouse Warming Seriously, 18 Energy & Env. 937, 940-944. 201 Lindzen, Taking Greenhouse Warming Seriously, 18 Energy & Env. 937, 942. 202 Lindzen, Taking Greenhouse Warming Seriously, 18 Energy & Env. 937, 944.


literature reviewed below seems to raise more questions than answers to Lindzen’s question, and thus tends to weaken rather than strengthen confidence in the IPCC’s conclusion.

1. Atmospheric Circulation and Climate Change Detection, Attribution and Regional Climate Change Predictions

The detection and attribution of anthropogenic forcing (greenhouse gas emissions) as the global warming culprit is based on comparing spatio-temporal temperature observations – rather than just a temperature time series – with modeled patterns.203 Yet recent work suggests that, just as the models used by the IPCC manage to reproduce past temperature time series despite failing to agree at all on climate sensitivity, so too do they manage to reproduce the spatio-temporal pattern of global temperature despite fundamental disagreement over how global warming will alter global atmospheric circulation patterns.

The general circulation of the earth’s atmosphere is driven by two primary forces: the heating of the low latitudes relative to higher latitudes, and the rotation of the earth on its axis.204 The relative heating of the tropics accounts for the Hadley circulation, a simple equator-to-pole circulation in which warm air rises in the tropics, flows toward the poles at relatively high altitudes, cools and descends at higher latitudes, and is replaced (as the law of the conservation of mass requires) by cooler air flowing back toward the equator at lower altitudes. The Hadley circulation cannot itself explain global atmospheric circulation, because it ignores the earth’s rotation, and is insufficient (standing alone) to generate an equilibrium wind speed and explain west-to-east (or east-to-west) winds. To get a basic approximation of atmospheric movement, one needs to take account of the acceleration due to the earth’s spinning about its axis. This account is provided by the (somewhat misnamed) Coriolis force, the name given to the acceleration of air parcels due to the earth’s rotation. Given the direction of the earth’s rotation, and the Hadley cell movement of warm air out from the equator toward the poles, the Coriolis force bends air parcels to the east (rightward relative to the direction of parcel motion in the northern hemisphere, leftward in the southern) in both hemispheres. Because of the Coriolis force (and the basic pressure gradient), major global winds move from the west to the east along lines of constant pressure.
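For concreteness, the Coriolis effect described above is commonly quantified by the Coriolis parameter, f = 2 Ω sin(latitude). The sketch below is not from the cited texts; the rotation rate is a standard value and the wind speed is hypothetical.

# Coriolis parameter f = 2 * Omega * sin(latitude), and the resulting sideways
# acceleration on a horizontally moving air parcel.
import math

OMEGA = 7.292e-5   # Earth's rotation rate, radians per second (approximate)

def coriolis_parameter(latitude_deg):
    return 2.0 * OMEGA * math.sin(math.radians(latitude_deg))

wind_speed = 10.0  # hypothetical wind speed, m/s
for lat in (0.0, 30.0, 60.0):
    f = coriolis_parameter(lat)
    print(f"latitude {lat:>4.0f}: f = {f:.2e} 1/s, "
          f"Coriolis acceleration on a {wind_speed:.0f} m/s wind = {f * wind_speed:.2e} m/s^2")
# The parameter vanishes at the equator and grows toward the poles, which is why
# the deflection of poleward-moving air strengthens away from the tropics.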

The earth's rotation, and hence the Coriolis force, is in fact strong enough that a single equator to pole circulation cell – a single Hadley cell – is unstable. The direct meridional circulation named after Hadley extends only to about 30 degrees latitude in each hemisphere. Instead, in each hemisphere, the Hadley cell breaks apart, as it were, into three cells: moving poleward, a tropical Hadley cell, a Ferrel cell, and, finally, a polar cell, each of which replicates the basic circular poleward flow of warm air toward cold.205 The latter two are caused by large scale eddy fluxes – cyclones and anti-

203 Reto Knutti, Why Are Climate Models Reproducing the Observed Global Surface Warming So Well?, 35 Geo. Res. Lett. L18704 (2008). 204 This sentence, and much of what is contained in this and the next paragraph, is a summary and paraphrasing of John E. Frederick, Principles of Atmospheric Science 116-147 (2008).

205 F.W. Taylor, Elementary Climate Physics 50 (2005).

cyclones.206 At least half of the total poleward heat transport in the atmosphere is accomplished by such eddies -- mid-latitude storm systems and waves and other kinds of turbulence.207 The processes are complex and chaotic, and it seems that the particular equator to pole temperature gradient that results from their interaction is that which maximizes the entropy (or disorder) of the climate system.208 In any event, in the extratropics, the transport of mass, energy and momentum in the atmosphere is driven by these fundamentally turbulent eddies, rather than by the relatively simple Hadley circulation that is the dominant poleward transport mechanism in the tropics.209

In explaining regional climate, the large scale circulation patterns are crucial. As a consequence of the Hadley circulation, the surface pressure at 30 degrees latitude (where air is subsiding) is generally greater than at the equator (where air is rising), and the tropical surface tradewinds blow generally toward the equator from both hemispheres, meeting in the Intertropical Convergence Zone (ITCZ), where there is low surface pressure and large scale upward motion with latent heat release.210 The ITCZ moves with the seasons: the large area of convection centered over Indonesia moves southward, extending as far south as 30 degrees south, during the southern hemisphere summer; in the Amazon, the heaviest rains fall during the early months of the year and the lightest in August, when the ITCZ moves northward.211 Moving longitudinally, regions within the moist ITCZ are not, of course, identical. In particular, the equatorial region surrounding Malaysia, Indonesia and New Guinea – where there are few large land areas and shallow seas – is one of especially intense convection and precipitation, and the rising motion driven by latent heat release in this region generates a powerful circulation system of the tropical atmosphere characterized by east-west circulation cells along the equator, with large regions of rising warm and moist air in the Indonesian, South American and African regions, and subsiding, dry air in between. The largest of these east-west equatorial circulation cells, known as the Walker Circulation, extends across the Pacific Ocean.212

Where air is subsiding along the belt between 10 and 40 degrees latitude, rainfall is suppressed, and it is in this region that many of the world's great deserts are found. Moving further poleward, much seasonal climate is driven by the differential response of oceans and continents to seasonal variations in solar insolation: relative to the oceans, land surfaces warm up more rapidly in the summer and cool more rapidly in the winter,

206 See Tapio Schneider, The General Circulation of the Atmosphere, 34 Ann. Rev. Earth Planet. Sci. 655, 658 (2006). 207 Taylor, Elementary Climate Physics 50. 208 Taylor, Elementary Climate Physics 50-51.

209 Rodrigo Caballero, Hadley Cell bias in the climate models linked to tropical eddy stress, 35 Geo. Res. Lett. L18709 (2008). As explained by Dennis L. Hartman, Global Physical Climatology 145 (1994), there are both transitory and stationary eddies, where the latter result from variations in surface elevation and temperature associated with various continental features, such as mountain ranges, and oceans. The significance of eddies in moving heat poleward peaks, according to Hartman, at around 50 degrees of latitude in the winter hemisphere, with transient eddy fluxes dominating except in the Northern Hemisphere in winter, when stationary eddy fluxes contribute up to half of the total flux.

210 Dennis L. Hartman, Global Physical Climatology 155. 211 Dennis L. Hartman, Global Physical Climatology 164. 212 Dennis L. Hartman, Global Physical Climatology 163.


giving rise to a long term predictable pattern where high pressure centers form over the oceans in summer and over the continents in winter (and vice versa for low pressure centers). The seasonal movement of maximum insolation across hemispheres likewise accounts for the monsoon, a seasonal change in wind direction that generates dramatic shifts in precipitation across many parts of Africa, Asia and Australia. In the Asian monsoon, heating of the Tibetan plateau in the summer generates persistent low pressure that sucks in warm, moist air from the ocean, generating large amounts of rainfall; in the winter, the pattern is reversed.

However brief and incomplete, this summary of how the general circulation patterns of the earth's atmosphere account for regional climate should suffice to show the tremendous complexity that climate models need to incorporate in order to predict how surface and tropospheric warming will alter regional climate. An accumulating body of work seems to indicate that climate models have no consistent ability to reproduce past general circulation patterns, and correspondingly exhibit huge variation in their future projections. For example, Tanaka et al. show that while the multi-model ensemble mean reproduction of the 20th century Hadley circulation intensity is only slightly weaker than the best currently available observed value for that variable, the ensemble means of both the Walker circulation and the Asian monsoon circulation are "considerably weaker" than those which were observed during the 20th century.213 GCM models predict potentially very large weakening in the Hadley, Walker and monsoon circulations during the 21st century (e.g., an ensemble mean predicted weakening of 9% for the Hadley circulation, with one model predicting a 54% weakening). Tanaka et al. conclude that both past reproductions and future projections of these key tropical circulation patterns have a "high degree of model-dependent sensitivity," and GCM models have a "poor capability of reproducing and predicting the tropical circulation."214

As explained above, to account for global circulation patterns, GCM models would need to accurately account for both tropical Hadley cell circulation and higher latitude stationary and transient eddy fluxes. Recent work by Caballero,215 however, demonstrates enormous intermodel variation – with ranges exceeding 50% -- in simulated 20th century Hadley cell and tropical eddy flux. There are, consequently, enormous variations across models – of up to 8 degrees centigrade and 40%, respectively – in simulated tropical temperatures and humidity.216 Caballero shows that there is a strong correlation between a model's simulated Hadley cell strength and its simulated stationary eddy stress, so that bias in one implies bias in the other and hence a "significant bias in

213 H.L. Tanaka, et al., Intercomparison of the Intensities and Trends of Hadley, Walker and Monsoon Circulations in Global Warming Projections, 1 SOLA 77 (2005). 214 H.L. Tanaka, et al., Intercomparison of the Intensities and Trends of Hadley, Walker and Monsoon Circulations in Global Warming Projections, 1 SOLA 77, 80 (2005). See also, reaching the same conclusion, C.M. Mitas and A. Clement, Has the Hadley Cell been strengthening in recent decades?, 32 Geo. Res. Lett. L03809, doi:10.1029/2004GL021765.

215 Rodrigo Caballero, Hadley Cell Bias in climate models linked to tropical eddy stress, 35 Geo. Res. Lett. L18709 (2008). 216 Rodrigo Caballero, Hadley Cell Bias in climate models linked to tropical eddy stress, 35 Geo. Res. Lett. L18709 (2008).


tropical temperature and humidity."217 Although the multi-model mean simulation of subtropical eddy stress does agree "very well" with observations, there "currently appears to be no useful observational constraint of the Hadley cell." Caballero's study concludes with a series of puzzles and currently unanswerable questions: why do transient eddy stresses, of comparable magnitude in the southern and northern hemispheres, seem to explain a lot of the variation in southern hemisphere Hadley cell strength across models but explain very little of the variation in simulated northern hemisphere Hadley cell strength? Are biases in simulated tropical eddy stress due to errors in simulating extratropical eddy strength, or do biases in tropical wave simulations indirectly generate biases in subtropical eddy flux simulations? These are vitally important questions, according to Caballero, for as he explains, "[i]f it turns out that tropical biases are in fact mostly forced from the extratropics, then 'tuning' of model parameterisations (sic) locally in the tropics will, at best, give the right climate for the wrong reasons."

Regional climate projections – a hallmark of the most recent IPCC Fourth Assessment and perhaps the most widely publicized and policy relevant of all IPCC projections – hinge crucially upon the changes in global circulation patterns predicted by GCM models. While it is true that the Caballero study just discussed was published after the publication of the 2007 AR, that report emphasized (in the Summary for Policymakers) that "there is now higher confidence in projected patterns of warming and other regional-scale features, including changes in wind patterns, precipitation and some aspects of extremes."218 Yet it seems that well before the 2007 AR was released, climate scientists well understood the lack of agreement among climate models regarding the circulation pattern predictions underlying regional climate projections.

Consider, for example, the widely publicized projections that global warming will cause the western and southwestern United States to become much more drought prone. In the most recent generation of GCM climate models, southwestern and western drought would not be a direct consequence of global warming, but rather of changed global atmospheric general circulation patterns induced by global surface and sea surface temperature increases.

As explained in more detail by climate scientist Richard Seager:

“Global average precipitation increases with global warming induced by rising greenhouse gases. This occurs because increased infrared radiation from the atmosphere to the surface has to be balanced by increased surface heat loss, which occurs primarily by increased evaporation. For the global average increased evaporation must be balanced by increased precipitation. Regionally, precipitation can be reduced as a consequence of greenhouse climate change due to changes in atmosphere circulation that suppress precipitation by inducing subsidence. For the American West the important question is whether rising greenhouse gases will induce an El Nino-like or La Nina-like response in the tropical Pacific. The

217 Rodrigo Caballero, Hadley Cell Bias in climate models linked to tropical eddy stress, 35 Geo. Res. Lett. L18709 (2008). 218 IPCC, Climate Change 2007: The Physical Science Basis at 15.


former will mean increased precipitation and the latter decreased precipitation in addition to whatever other changes are induced by warming and other circulation responses...Currently climate models are all over the map in how the tropical Pacific Ocean responds to rising greenhouse gases.”219

Now climate models cannot accurately forecast El Nino events. As explained by Richard Lindzen, climate scientists are "pretty sure" that this predictive inability "involves the fact that the oceans are never in equilibrium with the surface. Irregular exchanges of heat between the deep abyssal waters and the near surface thermocline regions imply that the oceans serve as large sources and sinks of heat for the atmosphere, and these exchanges take place over time scales from months to centuries or longer..."220 Whatever the reason for the models' failure, because the El Nino-La Nina phenomenon is such a major determinant of interannual rainfall patterns, especially in the southwestern U.S., it might seem that Seager would have to conclude that climate models simply cannot say anything credible about whether or not global warming will lead to drought in the southwestern U.S.

This is not so. First, although clearly in the minority, there are GCM's that predict that global warming will lead to more frequent La Nina conditions that themselves generate drought in the southwestern U.S.221 Most recently, Seager and his colleagues have found support for the hypothesis of a more drought prone southwestern U.S. in the projection of some GCM's that global warming will move the Hadley cell circulation and mid-latitude westerlies poleward, thus robbing the southwestern U.S. of ocean moisture and subjecting it to very stable, dry, descending air.222 Seager et. al. conclude that "while the most severe future droughts will still occur during persistent La Nina events, ...they will be worse than any since the Medieval period, because the La Nina conditions will be perturbing a base state that is drier than any experienced recently."223

219 Richard Seager, Persistent Drought in North America: A Climate Modeling and Paleoclimate Perspective, Lamont-Doherty Earth Observatory, Drought Research, available at http://www.ldeo.columbia.edu/res/div/ocp/drought/. 220 Richard Lindzen, Taking Greenhouse Warming Seriously, 18 Energy & Env. 937, 947 (2007). 221 As Seager explains, supra note __, "the climate modeling group at Lamont has argued that rising greenhouse gases will warm the western tropical Pacific Ocean by more than the eastern ocean because, in the west, the increased downward infrared radiation has to be balanced by increased evaporative heat loss but in the east, where there is active upwelling of cold ocean waters from below, it is partially balanced by an increase in the divergence of heat by ocean currents. As such, the east to west temperature gradient increases and a La Nina-like response is induced. This is the same argument for why, during Medieval times, increased solar irradiance and reduced volcanism could have caused a La Nina-like SST response, as seen in coral based SST reconstructions...If the Medieval period is any guide to how the tropical Pacific Ocean and the global atmosphere circulation responds to positive radiative forcing then the American West could be in for a future in which the climate is more arid than at any time since the advent of European settlement."

222 Richard Seager, et. al., Model Projections of an Imminent Transition to a More Arid Climate in Southwestern North America, 316 Science 1181, 1183 (2007). Ironically, such drying is caused by the fact that a warmer atmosphere will also be a more humid one. Basically, the warmer the global mean temperature, the higher the latitude necessary to get cool enough temperatures for water to precipitate out as rain. 223 Seager et. al., Model Projections, supra note __ at 1184.


Hence when one takes the time to really look at the climate science literature, one finds that the highly publicized projection that global warming will make the southwestern U.S. much more drought-prone depends upon projected changes in global circulation patterns. But that literature also reveals that climate models fail to reproduce most of the important observed global circulation patterns – especially in the tropics – and that there is enormous disagreement across models. One wonders how the public and policymakers would react to projections of increased drought if they were simultaneously told by media messengers that those projections rest upon modeled changes in global circulation patterns that are surrounded by such fundamental uncertainty.

2. Internal Variability, or Chaotic Climate.

By the 1990's, climatologists had recognized distinct cycles in the Pacific ocean-atmosphere system, one occurring every fifty years or so (multi-decadal) and another occurring at a frequency of about every one or two decades (decadal).224 There are a variety of competing explanations for the decadal cycle,225 with perhaps the best-supported being that warm sea surface temperature anomalies begin in the eastern (northern) tropical Pacific and then propagate eastward (through anomalous Northern Pacific atmospheric cyclonic activity).226 Climate scientists do not yet know what causes the subsurface sea temperature anomalies.227 The literature suggests, however, that both the decadal and multi-decadal cycles are not caused by anything external to the global climate system, but rather are simply a manifestation of a natural oscillation in the ocean-atmosphere system.228

What climate scientists seem to be very certain about is that changes in tropical Pacific sea surface temperatures have a major impact on global climate,229 and that major

224 Shoshiro Minobe, Resonance in Bidecadal and Pentadecadal Climate Oscillations Over the North Pacific: Role in Regime Shifts, 26 Geo. Res. Letters 855 (1999). 225 See the summary of competing theories outlined by Jing-Jia Luo and Toshio Yamagata, Long-term El Nino-Southern Oscillation (ENSO)-like Variation with Special Emphasis on the South Pacific, 106 J. Geo. Res. 22,211, 22,212 (2001). 226 See Luo and Yamagata, Long-Term ENSO-like Variation, supra note ___ and Christina L. Holland et. Al., Propagating Decadal Sea Surface Temperature Signal Identified in modern Proxy Records of the Tropical Pacific, 28 Clim. Dyn. 163, 177 (2007). Recent work supports the hypothesis that the changes in northern tropical sea surface temperatures originate in the tropical South Pacific. See Amy J. Bratcher and Benjamin S. Giese, Tropical Pacific Decadal Variability and Global Warming, 29 Geo. Res. Lett. 24 (2002); Benjamin S. Giese, et al., Southern Hemisphere Origins of the 1976 Climate Shift, 29 Geo. Res. Lett. 1 (2002). 227 Bratcher and Giese, Decadal Variability and Global Warming, supra note __ at 24-3. 228 On the intrinsic nature of the decadal Pacific oscillation, see Giese et al., Southern Hemisphere Origin of the 1976 Climate Shift, supra note __ at 1-3; on the intrinsic nature of the 50-70 year oscillation, see Minobe, 50-70 Year Oscillation over North Pacific and North America, supra note __ at 686. 229 See, e.g., Bratcher and Giese, Decadal Variability, supra note __ at 24-3, and Holland et al., Propagating Decadal Sea Surface Temperature Signal, supra note __ at 177 (noting that ice core records show “atmospheric teleconnections, on these [decadal] time scales, between the eastern tropical Pacific and Greenland.”)


climatic regime shifts230 have occurred in the 1920's, the 1940's and the late 1970's.231 It has been shown that these periods of very rapid and major climate regime shifts have been times when the decadal and multi-decadal Pacific cycles synchronized and interacted.232 Significant cooling was observed in regions of North America, Canada and Alaska in the 1940's, and significant warming was observed over those same regions after the 1970's regime shift.233

The most recent climate regime shift occurred in 1976/77. In 1976 – in an event termed the “Great Pacific Climate Shift”234 -- sea surface temperatures in the tropical Pacific abruptly increased by nearly 1 degree centigrade over the period of just one year.235 Within a few years, global mean surface air temperature increased by about .2 degrees centigrade, this after a period of almost 30 years when global mean surface air temperature had been stable or slightly falling.236 This abrupt .2 degree centigrade temperature increase accounts for 40 per cent of the roughly .5 degree centigrade increase in global mean surface air temperature over the last 50 years.237 The late 1970’s warming “brought sweeping long-range changes in the climate of [the] northern hemisphere. Incidentally, after ‘the dust settled,’ a new long era of frequent El Ninos superimposed on sharp global temperature increase begun (sic).”238

On the model of intrinsic climatic cycles,239 whether there is an abrupt climate regime shift in the offing depends upon whether or not the next bi-decadal Pacific phase shift occurs simultaneously with the next multi-decadal phase shift. If the two cycles are indeed linked, then according to some climate scientists, there is reason to

230 According to Minobe, A 50-70 Year Climatic Oscillation over the North Pacific and North America, supra note __ at 683, “a climatic regime shift is defined as a transition from one climatic state to another within a period substantially shorter than the lengths of the individual epochs of climate states.”) 231 See Minobe, A 50-70 Year Oscillation over the North Pacific and North America, supra note __ at 683. See also Y. Chao et al., Pacific Interdecadal Variability in this Century’s Sea Surface Temperatures, 27 Geo. Res. Lett. 2261 (2000); C. Deser et al., Pacific Interdecadal Climate Variability: Linkages between the Tropics and the North Pacific during Boreal Winter Since 1900, 17 J. Clim. 109 (2004).

232 Minobe, Resonance in Bidecadal and Pentadecadal Climate Oscillations over the North Pacific: Role in Climate Regime Shifts, 26 Geo. Res. Lett. 855, 857 (1999), Luo and Yamagata, Long-Term Enso-Like Variation, supra note __ 22,226 ("[t]he present analysis suggests that the 1976-77 climate regime shift is due to combined effects of the 1976-1977 ENSO event, positive phase of the ENSO-like decadal variability from 1976 to the early 1980's, and a positive interdecadal ENSO-like phase since the early 1980's.")

233 Minobe, A 50-70 Year Climatic Oscillation over the North Pacific and North America, 24 Geo. Res. Lett. at 683. 234 T.P. Guilderson and D.P. Schrag, Abrupt shift in subsurface temperatures in the tropical Pacific associated with changes in El Nino, 281 Science 240 (1998).

235 Bratcher and Giese, Tropical Pacific Decadal Variability, supra note __ at 24-2. 236 Bratcher and Giese, Tropical Pacific Decadal Variability, supra note __ at 24-2. 237 Bratcher and Giese, Tropical Pacific Decadal Variability, supra note __ at 24-2. 238 Anastasios A. Tsonis, et al., A New Dynamical Mechanism for Major Climate Shifts, 34 Geo. Res. Lett. L13705 (2007).

239 Tsonis, et al., A New Dynamical Mechanism for Major Climate Shifts, supra note __, present an alternative but related model of the climate system as a network with four coupled sub-systems – the Pacific Decadal Oscillation, the North Atlantic Oscillation, ENSO, and the North Pacific Oscillation. They estimate the coupling strength among them, and also predict periodic synchronization that leads to new climate states that themselves eventually destroy the synchronous state – another form of intrinsic climate regime shifts.


believe that they may have occurred synchronously within the past few years, and that we are already in the midst of a multi-decadal cooling period.240

Regardless of whether such a large scale synchronization of the earth’s various regional circulation systems has occurred, climate scientists now have very good evidence that the El Nino-Southern Oscillation (ENSO) cycle can itself account for a great deal of the variation in both global and regional surface temperatures that has occurred since 1960. McLean et al.241 show that regardless of which lower tropospheric temperature measure is used, there is a distinct delayed relationship between the state of the ENSO cycle and tropospheric temperatures across the globe. McLean et al. find that the onset of an El Nino triggers an increase in global surface temperatures, while La Nina events are followed by falling average surface temperatures. Strikingly, they find that the global impact of El Nino events extends to the Arctic, correlating very strongly with periods of Arctic warming and decreases in sea ice extent. While the direction of causality is, according to McLean et al., “unclear,” what is clear is that since the Great Pacific Climate Shift of 1976, the ENSO cycle has exhibited a pronounced bias toward warming El Nino events.

McLean et al. conclude by distinguishing their findings from those of Lean and Rind,242 who use techniques of linear regression to argue that no natural processes – not even ENSO – could account for overall warming in late 20th century global temperatures. According to McLean et al., the temperature data set used by Lean and Rind – which differed from the one used by McLean et al. – caused Lean and Rind to underestimate temperatures immediately following a very important El Nino event and to overestimate temperatures following an important La Nina event, thus suppressing the apparent impact of ENSO cycles on global temperature.

Thus, as we have seen with other climate science controversies, resolution of the role of ENSO in explaining 20th century climate variation would seem to await agreement on a standardized temperature dataset used by all researchers. But it is crucial to understand that if further research reveals that ENSO or other cycles that are intrinsic to the global climate system are indeed a primary driver of global climate variation, then the utility of climate models may be very limited. This is succinctly but comprehensively explained by climate scientist Richard Lindzen:

“There are, in fact, numerous phenomena that current models fail to replicate at anywhere near the magnitudes observed. These range from the Intraseasonal Oscillations in the tropics (sometimes referred to as Madden-Julian Oscillation, and

240 Minobe, Resonance About 20 and 50 Year Oscillations, supra note __ at 858. See also Bratcher and Giese, Decadal Variability and Global Warming, supra note __ at 24-3 (“Several possibilities exist as to why the 1976 climate regime shift occurred and what could trigger a shift back to pre-1976 conditions. These include, but are not limited to, a multi-equilibria system, and the synchronous phase of ENSO, decadal variability, and interdecadal variability.”)

241 J.D. McLean et al., Influence of the Southern Oscillation on tropospheric temperature, 114 J. Geo. Res. D14104, doi:10.1029/2008JD011637 (2009). 242 J.L. Lean and D.H. Rind, How natural and anthropogenic influences alter global and regional surface temperature: 1889 to 2006, 35 Geo. Res. Lett., L18701, doi:10.1029/2008GL034864.


having time scales on the order of 40-60 days) to El Nino (involving time scales of several years) to the Quasi-biennial Oscillation of the tropic stratosphere to longer time scale phenomena of the Little Ice Age and the Medieval Warm Period (involving centuries). Under the circumstances, it seems reasonable to suppose that some things must exist that account for these model failures ...There is, in fact, no reason to suppose current models are treating such matters adequately...the current models fail to describe many known climate changes, and...therefore, the models’ failure to account for the recent warming (largely confined to the period 1976- 1995) hardly requires the invocation of anthropogenic forcing. It is nonetheless commonly argued by modelers that coupled models (even with passive mixed layer oceans) do adequately portray the natural unforced variability [citation omitted] despite acknowledging the cited shortcomings, and it may reasonably be claimed that this contention is the fundamental assumption behind the iconic claim of the last IPCC [that most of the observed warming is due to man].”243

The question is not whether any of the most recent work on intrinsic climate variability that I have discussed, or Lindzen's summary of that literature, will ultimately be found to have accurately captured the most important mechanisms of intrinsic climate variability. The question instead is whether any person who is even somewhat informed about this literature and the shortcomings of climate models in capturing intrinsic variability could possibly accept the IPCC's recent summary statement that "[t]here is also increased confidence that natural internal variability cannot account for the observed changes, due in part to improved studies that warming occurred in both oceans and atmosphere, together with observed ice mass losses."244 I believe that the answer to this question is clearly "no": to anyone with a passing knowledge of the literature, the IPCC's statement seems to exaggerate or "oversell" current understanding of mechanisms of intrinsic climate variability, and, even more importantly, of the ability of climate models to capture those mechanisms.

3. Solar Variation.

The literature reveals three ways in which variations in solar activity might influence the earth's climate.245 The first and most direct is through fluctuations in the output of solar heat and light (total solar irradiance, or TSI). Until quite recently it was thought that the solar cycle correlates with climate cycles.246 It is now believed that over the short, 11-year term of the sunspot cycle, variation in TSI is too small—currently believed to be only around .05% or even less—to affect climate.247 Moreover, at such

243 Lindzen, Taking Greenhouse Warming Seriously, supra note __ at 947. 244 IPCC, Climate Change 2007, The Scientific Basis, supra note __ at 60. 245 E. Egorova, et al., Chemical and Dynamical Response to the 11-year variability of the solar irradiance simulated with a chemistry-climate model, 31 Geo. Res. Lett. L06119 (2004); Mike Lockwood and Claus Fröhlich, Recent Oppositely Directed Trends in Solar climate forcings and the global mean surface temperature, Proc. Royal Soc. A, doi:10.1098/rspa.2007.1880 (2007). 246 See E. Friis-Christensen and K. Lassen, Length of the Solar Cycle: An Indicator of Solar Activity Closely Associated with Climate, 254 Science 698 (1991). 247 Peter Foukal et al., A Stellar view on Solar Variations and Climate, 306 Science 68 (2004); Peter Foukal, et al., Variations in Solar Luminosity and their Effect on Earth's Climate, 443 Nature 161 (2006).


short time scales, the dampening effect of the oceans is so great that even a larger solar variation could not be expected to impact global surface temperature.248
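A rough calculation illustrates why a variation on the order of .05% in TSI is regarded as small. Using, purely for illustration, round values of about 1361 W/m2 for TSI and 0.3 for planetary albedo (standard textbook numbers, not figures taken from the sources cited here), the corresponding change in globally averaged radiative forcing is

\[
\Delta F \approx \frac{\Delta S}{4}\,(1-\alpha) \approx \frac{0.0005 \times 1361}{4} \times 0.7 \approx 0.12\ \mathrm{W/m^2},
\]

a value small relative to, for example, the 1.66 W/m2 forcing that the IPCC attributes to CO2 (a figure discussed below in connection with black carbon).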

Work during the 1990's suggested that over longer multi-decadal, centennial or millennial periods, there might well be larger variations in TSI that could affect earth's climate.249 This work drew inferences from findings regarding other stars, and therefore presumed that the Sun was similar to other stars that exhibit a long term, low frequency variation in luminosity. Recent observations have called that presumption into question.250 It may be that even over these longer multi-decadal or centennial time scales, variations in TSI are too small to affect climate.251

But variation in TSI is not the only mechanism by which variation in solar activity might affect the earth's climate. Two other mechanisms have been posited: first, that much larger variations in solar ultraviolet irradiance indirectly influence the troposphere (and climate) via their influence on the stratosphere; and, second, that air ions generated by (fluctuating) cosmic rays alter cloud production.252 While the latter effect is apparently controversial and not well understood, there is both empirical evidence for and model simulations predicting a relatively strong influence of variations in solar UV radiation on global atmospheric circulation patterns.253 On centennial time scales, moreover, there is abundant evidence of a strong relationship between solar activity and global surface temperature. From 1890 until about 1970, the number of sunspots and total solar flux steadily increased along with mean global surface temperature; during a slightly longer period (up until 1985), there was a very close correlation between cosmic ray production and global mean surface temperature. (It has been conjectured that decreased levels of cosmic rays cause a decrease in primarily reflective, low-level clouds.)254 Over even longer, millennial time scales, there is so far unassailable evidence of a very strong positive relationship between levels of solar irradiance and fundamental mechanisms of global climate such as the location of the jet stream and the strength of the northern Hadley circulation cell.255 This work supports earlier millennial scale work that has recounted evidence that periods of reduced solar irradiance corresponded to Holocene-era glacial advances, to expansions in polar circulation above Greenland, and to an abrupt cooling that occurred in the Netherlands about 2700 years

248 Lockwood and Fröhlich, supra note __ at 4. 249 See E. Friis-Christensen and K. Lassen, Length of the Solar Cycle: An Indicator of Solar Activity Closely Associated with Climate, 254 Science 698 (1991). 250 See Foukal et al., Variations in Solar Luminosity, 443 Nature at 164. 251 Mike Lockwood, What do Cosmogenic Isotopes Tell Us About Past Solar Forcing of Climate?, 125 Space Sci. Rev. 95 (2006). 252 Lockwood and Fröhlich, Trends in Solar Climate Forcings, supra note __ at 3; Egorova, et al., Chemical and Dynamical Response to the 11-year Solar Variability, supra note __ at L06119. 253 Egorova et al., Chemical and Dynamical Response, supra note __ at L06119. 254 Lockwood and Fröhlich, Trends in Solar Climate Forcings, supra note __ at 8-10; N.A. Krivova and S.K. Solanki, Solar Variability and Global Warming: A Statistical Comparison Since 1850, 34 Adv. Space Res. 361, 363 (2004). 255 Gerard Bond et al., Persistent Solar Influence on North Atlantic Climate During the Holocene, 294 Science 2130 (2001).


ago.256

Despite all of the evidence that solar variation does indeed influence earth's climate, there are scientists who have long been and still remain skeptical that solar variation can really be so important to the sun-dependent earth's climate. Among the most prominent of such scientific skeptics appear to be Peter Foukal and Tom Wigley. In a recent review article, they downplay the possibility that variation in solar UV rays or cosmic rays – rather than in TSI per se – accounts for the influence of solar variation, saying that the "proposed indirect mechanisms" are "complex, and involved subtle interactions between the troposphere, stratosphere and even high layers of the Earth's atmosphere that are much less well understood than the direct radiative forcing effect. Modeling of such interactions is proceeding rapidly, but incisive tests of the models will be required to achieve certainty."257 And as for the so-far unassailable evidence of the millennial scale influence of solar variability on global climate, they say that as "no specific mechanism has been identified so far to generate millennial-scale solar irradiance variations,...[b]etter reconstructions of global temperature and solar activity will be required to investigate further the apparent relationships between climate and solar activity as seen over the past millennium and through the Holocene, particularly if the signature of any solar influence is spatially restricted."258 (Here, they allude to the fact that Bond et al. reported a close correspondence between solar variation and particular North Atlantic climate mechanisms.)

It might seem that, whatever the explanation for the apparent long-term impact of solar variation on climate, the IPCC stood on unassailable ground in concluding in its 2007 AR that solar variation had the effect of cooling the planet since 1980, so that solar variation could not possibly account for warming since then. There is certainly evidence for the IPCC's conclusion.259 But, as we have seen with several other crucial issues in climate science, the debate in fact has continued because a number of prominent climate scientists question whether the IPCC was looking at the right data. Scafetta and Willson260 have recently argued for the superior accuracy of one of the two primary measurements of total solar irradiance (TSI), and their preferred dataset shows that rather than falling or remaining constant, TSI increased significantly over the period 1986-1996. Scafetta has even more recently argued that when one allows for an appropriately long lag, variations in TSI can in fact explain "most" of the decadal and

256 Bond et al., Persistent Solar Influence on North Atlantic Climate, supra note __ at 2133 (citations omitted here). For further discussion of the evidence on the relationship between climate and solar variation during the Holocene, see Charles F. Keller, 1000 years of Climate Change, 34 Adv. Space Res. 315 (2004).

257 Peter Foukal et al., Variations in Solar Luminosity and Their Effect on Earth’s Climate, supra note __ , 443 Nature at 165. 258 Peter Foukal et al., Variations in Solar Luminosity and Their Effect on Earth’s Climate, supra note __ , 443 Nature at 165.

259 S.K. Solanki and N.A. Krivova, Can Solar Variability Explain Global Warming Since 1970?, 108 J. Geo. Res. 1200 (2003), Lean and Rind, How natural and anthropogenic influences alter global and regional surface temperature, 35 Geo. Res. Lett., supra note __. 260 Nicola Scafetta and Richard C. Willson, ACRIM-gap and TSI trend issue resolved using a surface magnetic flux TSI proxy model, 36 Geo. Res. Lett. L05701, doi:10.1029/2008GL036307 (2009).


secular variability in global mean surface temperatures since 1600, including the period since 1980.261

As with other contested issues in climate science, Scafetta and Willson's view that the variability in TSI has a significant impact on climate, accounting for a good bit of the warming since 1980 in particular, has been met with vigorous and virtually instantaneous rebuttal by climate scientists who have been leaders in establishing the activist CO2 primacy view (in this case, Duffy, Santer and Wigley).262 Part of the argument against solar variability is that variations in TSI cannot account for observed changes, such as the relative cooling of the stratosphere since about 1950; but the primary argument made by Duffy et al. against solar variation as causing late 20th century warming is that a different and allegedly more accurate dataset measuring TSI shows that there was no increase in TSI since 1980.263 Leaving aside the ultimate question of which dataset is indeed more accurate, it is still interesting to note that Duffy et al. did not attempt to defend their choice of a dataset by questioning the arguments made in Scafetta and Willson's technical, peer-reviewed paper on TSI measurements.

4. Black Carbon (Soot)

In its Summary for Policymakers and Technical Summary accompanying its 2007 AR, the IPCC had very little to say about soot, or black carbon, but what it did say was that although the deposition of black carbon (soot) on snow reduced surface albedo (reflectivity), such deposition was estimated to generate a warming (positive radiative forcing) of only .1 W/m2, a forcing about which scientists had only a "low" level of understanding.264 As the actual 2007 AR explained, in the atmosphere, "black carbon strongly absorbs solar radiation," meaning that black carbon in the atmosphere also causes warming, which the 2007 AR estimated at about .2 W/m2.265 Thus, according to the IPCC's 2007 AR, the two effects of black carbon, or soot, added up to a net warming of .3 W/m2, a relatively low value when compared with the 1.66 W/m2 forcing value given to CO2 by the IPCC.266

However, in research that was published around the same time as the release of the 2007 AR (and which therefore was likely available in unpublished form well before then), Flanner et al. generated revised estimates of the warming impact of black carbon deposition on snow, which showed that such deposition was, per unit of forcing, a significantly more powerful global warming agent than CO2 – indeed roughly three times more powerful – and had resulted in an Arctic surface warming of between .5 and 1 degree centigrade over the

261 Nicola Scafetta, Empirical signature of the solar contribution to global mean air surface temperature change, J. Atmos. Terres. Phys., doi:10.1016/j.jastp.2009.07.007 (2009). 262 Philip B. Duffy et al., Solar Variability does not explain late 20th-century warming, Physics Today, Jan. 2009, pp. 48-49, responding to Nicola Scafetta and Bruce J. West, Is Climate Sensitive to Solar Variability?, Physics Today, March, 2008, pp. 50-51.

263 See Philip B. Duffy et al., Solar Variability does not explain late 20th-century warming, Physics Today, Jan. 2009, pp. 48-49. 264 IPCC, Climate Change 2007, p. 30. 265 IPCC, Climate Change 2007, p. 165.

266 Robert F. Service, Study Fingers Soot as a Major Player in Global Warming, 319 Science 1745 (2008).

previous century.267 Moreover, in research published in 2001, six years before the 2007 AR, Jacobson explored the evolution of the chemical composition of aerosols in the atmosphere and found that black carbon was likely to be incorporated within other aerosols in the atmosphere (due to the way aerosol particles coagulate and grow), implying a much higher positive, warming impact from black carbon in the atmosphere than previously thought.268 The very next year, 2002, Jacobson reported results showing that through a variety of mechanisms, black carbon "warmed the air 360,000-840,000 times more effectively per unit of mass than did CO2."269 That same year, in a Science magazine "Perspectives" article, Chameides and Bergin summarized the ongoing research into the warming impact of black carbon as suggesting that if black carbon was indeed an important contributor to atmospheric warming, then climate model simulations that somehow managed to reproduce 20th century temperature trends without even including black carbon might not be "meaningful."270

By the summer of 2007, climate scientist Charles Zender was suggesting to Scientific American magazine that as much as 94 percent of warming in the Arctic over the last 100 years was due to the deposition of black carbon on snow and ice in that region.271 Finally, about a year after the appearance of the IPCC's 2007 Assessment Report, a review article by Ramanathan and Carmichael was published in which the authors estimated that the warming from black carbon was .9 W/m2 and noted that "similar conclusions regarding the large magnitude of [black carbon] forcing" had been "inferred" by four papers published over the period 1998 to 2003, papers whose own estimates ranged from .4 W/m2 to 1.2 W/m2.272

Recalling that the IPCC had given an estimate for black carbon forcing of only .3-.4 W/m2 in its 2007 AR, it seems that in that 2007 AR, the IPCC had chosen what was in fact the lowest value in work published over the preceding decade for the estimated contribution of black carbon to global warming. Moreover, unmentioned by the IPCC's 2007 AR was the enormous fraction of highly publicized arctic warming that was by then estimated to have been caused not by CO2, but by garden-variety industrial soot, or black carbon. Since the IPCC's own 2007 estimate for CO2-induced warming was only 1.66 W/m2, if the 2008 estimate for warming due to black carbon of .9 W/m2 is correct, then the IPCC's 2007 report may well have downplayed a factor – black carbon – that was in fact a very substantial contributor to 20th century warming, and an even more important – indeed the most important – factor in 20th century arctic warming. Finally, since climate models omit black carbon, recent work showing black carbon to be a very

267 Mark G. Flanner et al., Present-day climate forcing and response from black carbon in snow, 112 J. Geo. Res. D11202, doi:10.1029/2006JD008003 (2007). 268 Mark Z. Jacobson, Strong Radiative heating due to the mixing state of black carbon in atmospheric aerosols, 409 Nature 695 (2001).

269 Mark Z. Jacobson, Control of fossil-fuel particulate black carbon and organic matter, possibly the most effective method of slowing global warming, 107 J. Geo. Res., no. D19, 4410, doi:10.1029/2001JD001376 (2002). 270 William L. Chameides and Michael Bergin, Soot Takes Center Stage, 297 Science 2214 (2002).

271 See David Biello, Impure as the Driven Snow: Smut is a bigger problem than greenhouse gases in polar meltdown, Scientific American, June 8, 2007. 272 V. Ramanathan and G. Carmichael, Global and regional climate changes due to black carbon, 1 Nature Geo. 221, 222 (2008).


strong agent of global warming adds weight to the belief that if climate models are explaining 20th century warming trends as due solely to increases in CO2, then they must be vastly overestimating the sensitivity of climate to increases in CO2.
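A simple ratio comparison, using only the numbers quoted above, puts the competing black carbon estimates in perspective:

\[
\frac{0.3\ \mathrm{W/m^2}}{1.66\ \mathrm{W/m^2}} \approx 0.18, \qquad \frac{0.9\ \mathrm{W/m^2}}{1.66\ \mathrm{W/m^2}} \approx 0.54.
\]

On the IPCC's 2007 figures, black carbon amounts to less than one fifth of the CO2 forcing; on the Ramanathan and Carmichael estimate, it amounts to more than half of it.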

E. Glossing over Serious and Deepening Controversies Over Methodological Validity: The Example of Projected Species Loss

As an example of this particular rhetorical technique, consider the quantitative prediction, set forth in the IPCC Working Group II's Summary for Policymakers, that 20-30% of species worldwide will be at risk of extinction due to global warming. When one looks at the literature, one discovers that at least some biologists believe that this prediction relies upon an entirely novel and extremely controversial methodology.

In its April, 2007 "Summary for Policymakers,"273 the IPCC Working Group on Impacts, Adaptation and Vulnerability proclaimed that "[a]pproximately 20-30% of plant and animal species assessed thus far are likely to be at increased risk of extinction if increases in global temperature exceed 1.5 – 2.5°C." The IPCC failed to release the actual report ostensibly being summarized in the "Summary for Policymakers," and so at the time it was published, and for months afterward, it was not possible for anyone not on the IPCC Working Group II to even determine which studies were being cited in support of this conclusion. But finally, in July of 2007, the IPCC Working Group II did release its full report. That report explains that the IPCC prediction that 20-30% of species will be at increased risk of extinction is based on "correlative models" that "use knowledge of the spatial distribution of species to derive functions...or algorithms...that relate the probability of their occurrence to climatic and other factors."274 The report observes that these methods have been criticized for: 1) assuming equilibrium between species and current climate; 2) being unable to account for species interactions; 3) failing to specify a physiological mechanism explaining the dependence of species on climate; and, 4) failing to take account of population dynamics and species migration.275 To a layperson, these sound like pretty serious problems, but the IPCC Report assures the reader that these "correlative methods" nonetheless "provide a pragmatic first-cut assessment of risk to species decline and extinction."276

In support of its assertion that the statistical correlation between species and climate provides a “pragmatic first-cut assessment of risk to species decline and extinction,” the IPCC Report cites a single study. It is true that this study, by Thomas et.

273 Working Group II Contribution to the Intergovernmental Panel on Climate Change, Fourth Assessment Report, Climate Change 2007: Climate Change Impacts, Adaptation and Vulnerability 8 (April 6, 2007). 274 F. Fischlin, et. al., Ecosystems, their Properties, Goods and Services, in Climate Change 2007: Impacts, Adaptation and Vulnerability 211, 218 (M.L. Parry, et. al., 2007).

275 Fischlin, et. al, Ecosystems, Their Properties, Goods, and Services, supra note __ at 218. 276 Fischlin, et. al., Ecosystems, their Properties, Goods and Services, in Climate Change 2007: Impacts, Adaptation and Vulnerability 211, 218 (M.L. Parry, et. al., 2007).


al.,277 employs "correlative methods" to predict how climate change will impact species extinction risk. But it doesn't simply calculate the statistical correlation between some set of climate variables and species distributions and then use predicted changes in the climate variables to predict changes in species distribution. Rather, as summarized by its authors, the study by Thomas et. al. is based on the species-area relationship, a model in which the probability of extinction has a power law relationship to geographical range size.278 For the regions studied by Thomas et. al. (which make up about 20 per cent of the earth's surface), even mid-range climate warming will reduce range sizes by 2050: quite straightforwardly, if average temperature increases everywhere on the earth, then a greater land area will be relatively warm, compared to today, while less land area will be relatively cool, compared to today. If, as the species-area relationship maintains, the number of species is (log-linearly) related to range size, then a reduction in the land area of relatively cool ranges must reduce the number of species that inhabit such ranges. Plugging the reduction in area due to climate change into the species-area relationship, Thomas et. al. find that between 15 and 37% of earth's species will be "committed to extinction" by even mid-range global warming.279
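A minimal numerical sketch may help make the species-area logic concrete. The power-law form S = cA^z is the relationship described in the footnote below; the particular exponent (z = 0.25) and area-loss figure used here are illustrative assumptions only, not values taken from Thomas et. al., whose actual analysis is considerably more elaborate:

def committed_to_extinction(area_ratio, z=0.25):
    """Fraction of species 'committed to extinction' implied by the
    species-area relationship S = c * A**z when the climatically
    suitable area shrinks to area_ratio (= new area / old area).
    The constant c cancels, so the fraction lost is 1 - area_ratio**z."""
    return 1.0 - area_ratio ** z

# Illustrative only: a 50% loss of suitable range with z = 0.25
# implies that roughly 16% of species are committed to extinction.
print(f"{committed_to_extinction(0.5):.0%}")

Nothing in this sketch turns on the particular numbers; it simply shows how, once a range loss is plugged into the power law, an extinction fraction follows mechanically.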

Because these models "offer the advantage of assessing climate change impacts on biodiversity quantitatively," the IPCC adopts their quantitative predictions.280 But the IPCC's passing acknowledgement that the climate envelope (species-area based) approach to modeling the impact of climate change on species extinction risk has "limitations" seemingly ignores much stronger criticism of this methodology among biologists. Oxford zoologist Owen Lewis, for example, has noted that the species-area relationship method used by Thomas et. al. "conceals a number of assumptions and complications"281 and "problematic...uncertainties concerning the usefulness of climate envelope models for predicting ranges under different climate change scenarios."282 According to Lewis, these "problematic" uncertainties and complications include:

i) Uncertainty about "how many distributions are truly governed by climate." As Lewis explains, the "widespread ability of species to persist if transplanted or introduced outside their current range and simulated climatic envelope suggests that this is often not the case," with the ecological niches currently occupied by species "considerably narrower than the fundamental niches that a species can occupy in isolation." As he says, while we do not have the evidence to predict precisely how these changes will occur, "the

277 C.D. Thomas, et. al., Extinction Risk from Climate Change, 427 Nature 145 (2004). 278 Called ecology's "oldest law," the species-area relationship relates the number of species S found in a sampled patch of area A to that area through the power law formula S = cA^z, where c is a constant and z a parameter that has been observed to depend upon habitat, scale and taxa. See Michael L. Rosenzweig, Heeding the Warning in Biodiversity's Basic Law, 284 Science 276 (1999) and Héctor García Martín and Nigel Goldenfeld, On the Origin and Robustness of Power-Law Species-Area Relationships in Ecology, 103 Proc. Nat'l. Acad. Sci. 10310 (2006). 279 Thomas, et. al., 427 Nature at 145. 280 See Fischlin et. al., Ecosystems, Their Properties, Goods and Services, supra note __ at 240. For further discussion of the use of the "climate envelope" approach, see J. Alan Pounds and Robert Puschendorf, Clouded Futures, 427 Nature 107 (2004). 281 Owen T. Lewis, Climate Change, Species-Area Curves and the Extinction Crisis, 361 Phil. Trans. R. Soc. B 163, 165 (2006). 282 Lewis, Climate Change, Species-Area Curves, supra note __ at 167.


set of interacting species at any particular locality will not be a simple reconstruction of the community composition observed at other localities before climate change.”283

ii) Although “pessimists” believe that it is unlikely that organisms can “evolve rapidly enough to adapt to changing environmental conditions,” there is evidence that precisely this has occurred in “some” insects and “at least” one species of plant.284

iii) While stating that their calculated extinction probabilities "were specific to the regions and species included in the study," Thomas et. al. "interpret their results as though they are global estimates....Even if predictions for the specific taxa and regions included in the study are accurate, the extrapolation to a global scale may be misleading." Most seriously, perhaps, the method adopted by Thomas et. al. necessarily limited their analysis to endemics, "species whose ranges fall entirely within the particular study areas," but such endemic species are likely to have particularly small ranges, and "it is well known that species with small ranges are particularly prone to extinctions."285

iv) Although only a “small fraction” of species studied by Thomas et. al., are from tropical forests, “these forests account for over 50% of terrestrial biodiversity (perhaps considerably more) and may be less affected by climate change than habitats at higher latitudes.”286

v) Finally, and perhaps most strikingly to my layperson’s sensibilities, the methodology employed by Thomas et. al. will “inevitably detect extinctions. Negative changes in the size of a species’ range contribute to an increased extinction risk overall, while positive changes have no net effect on extinctions,” this despite the fact that locally, “the net effect on diversity at any one locality might well be positive, as species spread towards the poles from the most species-rich habitats near the equator.”287

Lewis concludes his discussion of the Thomas et. al. study by noting that although Thomas et. al. "tend to emphasize the factors that may make their predictions too low," the species-area method they utilize in fact "conceals a hotchpotch of assumptions, extrapolations, approximations and estimates that combine to generate considerable uncertainty...about the likely magnitude of extinctions caused by climate change."288

The article by Lewis cannot be dismissed as the views of a single and potentially outlying critic. In a summary article entitled "Forecasting the Effects of Global Warming on Biodiversity", Botkin et al.289 – a team of no fewer than 18 biologists from around the world – noted a number of serious limitations of the kind of model used to forecast

283 Lewis, Climate Change, Species-Area Curves, supra note __ at 167. 284 Lewis, Climate Change and Global Extinctions, supra note __ at 167. 285 Lewis, Climate Change and Global Extinctions, supra note __ at 168. 286 Lewis, Climate Change and Global Extinctions, supra note __ at 168. 287 Lewis, Climate Change and Global Extinctions, supra note __ at 168. 288 Lewis, Climate Change and Global Extinctions, supra note __ at 169. 289 Daniel B. Botkin et al., Forecasting the Effects of Global Warming on Biodiversity, 56 Bioscience 227, 231 (2007).


species loss by Thomas et al. (generally called bioclimatic envelope models). The 18 biologist co-authors explained how the limitations of such models include the assumption that observed species distributions are “in equilibrium with their current environment, and that therefore species become extinct outside the region where the environment, including the climate, meets their present or assumed requirements,” but this assumption contradicts the existing data and observations that “show species have survived in small areas of unusual habitat, or in habitats that are outside their well-established geographic range but actually meet their requirements.” Botkin et al. conclude that such models in general are “likely to overestimate extinctions, even when they realistically suggest changes in the range of many species,” while the Thomas et al. study in particular “may have greatly overestimated the probability of extinction...”290

According to two other peer-reviewed assessments of the "bioclimatic" models underlying the Thomas et al. species loss probability number – both of which appeared around or before the time of the IPCC's 2007 AR4 – "the problems associated with the present distribution of species are so numerous and fundamental that common ecological sense should caution us against putting much faith in relying on their findings for further extrapolations,"291 and the bioclimatic models used for future predictions are "based on some problematic ecological assumptions."292 Given the extensive and foundational criticism by biologists of the methodology underlying the species loss probability prediction generated by Thomas et al., the IPCC's publication of that probability without qualification seems dangerously misleading, and in any event clearly exemplifies the rhetoric of adversarial persuasion, rather than "unbiased" assessment.

F. Exaggerate in the Name of Caution: Sea Level Scare Stories versus the Accumulating Evidence

Of all the potential negative consequences from global warming, sea level rise has been perhaps the most dramatically advertised. In a review essay entitled “The Threat to the Planet,”293 climate scientist and NASA Goddard Institute Director Jim Hansen – perhaps the most publicly visible climate scientist and certainly the one most often quoted by the popular press – opined that of all the threats from climate change, the “greatest” is the potential melting of the ice sheets of Greenland and Antarctica and the consequent increase in global sea level.294 Hansen takes as his rhetorical reference point what the IPCC and others have called the business-as-usual scenario, under which annual emissions of CO2 and other ghg’s continue to increase at their current rate for at least fifty years. Given such an increase, Hansen says that both climate models and paleoclimatic data from ice cores predict that within fifty years this buildup of ghg’s will raise the earth’s average temperature by about 5 degrees Fahrenheit relative to today. According to Hansen, the ice core data also show that the last time that the Earth was five degrees

290 Botkin et al., supra note __ at 231.
291 Carsten F. Dormann, Promising the future? Global change projections of species distributions, 8 Basic and Appl. Ecol. 387, 388 (2007).
292 Miguel B. Araujo and Carsten Rahbek, How does climate change affect biodiversity?, 313 Science 1396 (2006).
293 Jim Hansen, The Threat to the Planet, New York Review of Books, July 13, 2006, p. 12.
294 Hansen, at 13, as is the remainder of this paragraph.


warmer than now (three million years ago), sea level was about eighty feet higher. Hansen describes the consequences of an eighty foot increase in sea level in catastrophic terms:

“Eighty feet! In that case, the United States would lose most East Coast cities: Boston, New York, Philadelphia, Washington and Miami; indeed practically the entire state of Florida would be under water. Fifty million people in the U.S. live below that sea level. Other places would fare far worse. China would have 250 million displaced persons. Bangladesh would produce 120 million refugees, practically the entire nation. India would lose the land of 150 million people.”295

Much could be said about the rather complex relationship between the eighty foot increase that Hansen predicts in this article for the popular press and what the evidence on sea level change actually shows.296 Hansen’s eighty foot increase in sea level is not without technical support, but that support comes from data on the impact on sea level of the last great deglaciation.297 Those data show that the melting of the vast ice-age glaciers generated a rise in sea level of eighty feet in 500 years, with annual rates of increase sometimes exceeding 40 mm/yr. The continental ice sheets are of course much smaller today than they were during the last ice age. Moreover, the most recent paleoclimatic evidence shows that between 129,000 and 118,000 years ago, when summertime temperatures in Greenland were between 3.5 and 5 degrees centigrade (between 6 and 9 degrees Fahrenheit) warmer than today, sea level was 4 to 6 meters

295 Hansen, The Threat to the Planet, supra note __ at 13.
296 Since the glaciers reached their maximum extent around 20,000 years ago, sea level rose 350 feet in mid-latitudes as the ice melted; the rate slowed to 3 feet per century between 15,000 and 6,000 years ago (although rates in some centuries may have been as high as 10 feet per century); between 6,000 and 3,000 years ago, the rate slowed further to 1.5 feet per century, and for the last 4,000 years, the rate has been less than 4 inches per century. Orrin H. Pilkey and Linda Pilkey-Jarvis, Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future 70-71 (2007). Over the past 50 years, global sea level has been rising at a much higher rate of about 1.8 mm/year; since 1993, sea level has been increasing at a rate of 3 mm/year (about a foot per century). Anny Cazenave, How Fast are the Ice Sheets Melting?, Science Express, October 19, 2006, 10.1126 Science.1133325. For the IPCC’s endorsement of these numbers, see N.L. Bindoff et. al., Observations: Oceanic Climate Change and Sea Level, in Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the IPCC 385, 409 (S. Solomon, et. al., eds. 2008).

Under very pessimistic assumptions about continuing increases in ghg’s, the IPCC projects an increase in global sea level of at most .58 meters, or nearly two feet, by 2100. (This is the projection under IPCC Emission Scenario A1FI). See N.L. Bindoff, et. al., Observations: Oceanic Climate Change, supra note __ at 409; R.J. Nicholls, et. al., Coastal Systems and Low-Lying Areas, in Climate Change 2007: Impacts, Adaptation and Vulnerability 323 (M.L. Parry et. al., eds. 2007). Finally, remote sensing data from satellite and laser altimetry depict a rather complex picture in both Greenland and Antarctica: in Greenland, there is accelerated ice loss in coastal areas but a slight mass gain in inland high elevation areas; in Antarctica, accelerated ice mass loss in the western part of the continent, but a slight ice mass gain in the eastern part as a result of increased snowfall. Cazenave, How Fast are the Ice Sheets Melting?, at p. 1.

297 It has been estimated that during the last deglaciation (that began between 16,000 and 14,500 years ago, depending upon location), a significant meltwater pulse that occurred around 14,200 years ago raised sea levels an average of 20 meters (in some places, such as Barbados and Indonesia, 25 meters) in less than 500 years. Peter Clark et al., Sea level fingerprinting as a direct test for the source of global melt water pulse 1A, 295 Science, 2438 (2002), and Andrew J. Weaver, et. al., Meltwater Pulse 1A from Antarctica as a Trigger of the Bolling-Allerod Warm Interval, 299 Science 1709 (2003).


higher than today.298 It has been conjectured that rates of sea level rise of up to 1 meter per century (10 mm/yr) occurred during that period.299
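Because the relevant figures are quoted in a mix of units (millimeters per year, feet, and meters per century), a simple conversion helps in comparing them. The sketch below is nothing more than unit arithmetic applied to the rates quoted in the text and footnotes; the comparison with an eighty-foot rise in a single century anticipates the point made in the next paragraph.

```python
# Rough unit conversions for the sea level rates quoted above (mm/yr to
# metres and feet per century). The figures are those cited in the text
# and footnotes; the arithmetic is just unit conversion.

MM_PER_FOOT = 304.8

rates_mm_per_yr = {
    "20th-century tide gauges (~1.8 mm/yr)": 1.8,
    "satellite era since 1993 (~3 mm/yr)": 3.0,
    "last interglacial conjecture (~10 mm/yr)": 10.0,
    "deglacial meltwater pulse (>40 mm/yr)": 40.0,
}

for label, rate in rates_mm_per_yr.items():
    per_century_m = rate * 100 / 1000.0
    per_century_ft = rate * 100 / MM_PER_FOOT
    print(f"{label}: {per_century_m:.2f} m/century ({per_century_ft:.1f} ft/century)")

# For comparison, an 80-foot rise in a single century would require an
# average rate of roughly 80 * 304.8 / 100, i.e. about 244 mm/yr.
print(80 * MM_PER_FOOT / 100)
```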

Thus even under the upper end of the IPCC business-as-usual forecast for temperature increase in Greenland of 10 degrees centigrade, it is difficult to see how Hansen’s 80 foot sea level increase in a single century is possibly consistent with the existing evidence. In fact, while the underlying processes remain poorly understood given the existing state of the modeling, the most recent evidence on rates of deglaciation and sea level rise tends to show much more modest sea level increases and a more complex picture of ice melt than Hansen portrays. As for sea level rise, in its 2007 Assessment Report, the IPCC endorsed studies estimating that over the past 50 years, global sea level has been rising at a rate of about 1.8 mm/year, and at a rate of 3 mm/year since 1993.300 Even under relatively pessimistic assumptions about continuing increases in ghg’s, assumptions that eventually generate an annual sea level increase of 4 mm/year, the IPCC’s 2007 AR projected an increase in global sea level of at most .44 meters, or about a foot and a half, by 2090.301 Since then, a number of studies have appeared that tend to show that the IPCC may not have been as conservative as it claimed: Holgate302 estimated that the sea level rise during the early twentieth century was 2.0 mm/year, much larger than the 1.45 mm/year estimate he found for the latter half of the twentieth century; Jevrejeva et al.303 find evidence that the sea level increase began over 200 years ago; Woppleman et al.304 find that sea level as measured at one stable tide gauge location has been increasing at a constant rate for the last 100 years; and, using a wide variety of different sea level measures, Wunsch et al.305 come up with an estimate of an increase of 1.6 mm/year over the period 1993-2004 (versus the 3 mm/year estimate used by the IPCC in 2007). Perhaps most important is Wunsch et al.’s conclusion:306

298 Jonathan T. Overpeck, Paleoclimatic Evidence for Future Ice-Sheet Instability and Rapid Sea-Level Rise, 311 Science 1747, 1748 (2006).
299 Overpeck et. al., Paleoclimatic Evidence, supra note __ at __.
300 Anny Cazenave, How Fast are the Ice Sheets Melting?, Science Express, October 19, 2006, 10.1126 Science.1133325. For the IPCC’s endorsement of these numbers, see N.L. Bindoff et. al., Observations: Oceanic Climate Change and Sea Level, in Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the IPCC 385, 409 (S. Solomon, et. al., eds. 2007 or 2008). A recent study that corrects biases in the measurement of ocean heat content and increases in sea level due to warmer ocean temperatures (thermosteric sea level change) has come up with somewhat lower estimates for globally averaged sea level rises of 1.6 mm per year over the period 1961-2003 and 2.4 mm per year over the period 1993-2003. Catia M. Domingues et al., Improved Estimates of Upper-Ocean Warming and Multi-decadal Sea Level Rise, 453 Nature 1090 (2008).

301 This is the projection under IPCC Emission Scenario A1B. See N.L. Bindoff, et. al., Observations: Oceanic Climate Change, supra note __ at 409.
302 S.J. Holgate, On the decadal rate of sea level changes during the twentieth century, 34 Geo. Res. Lett. L01602 doi:10.1029/2006GL028492. In the present version, I paraphrase the summary of these studies provided by Madhav Khandekar, Global Warming and Sea Level Rise, 20 Energy & Env. 1067 (2009).
303 S. Jevrejeva et al., Recent global sea level acceleration started over 200 years ago? 35 Geo. Res. Lett. L08715 doi:10.1029/2008GL033611.

304 G. Wopplemann et al., Tide gauge datum continuity at Brest since 1711: France’s longest sea-level record, 35 Geo. Res. Lett. doi:10.1029/2008GL035783.
305 Carl Wunsch et al., Decadal trends in sea level patterns: 1993-2004, 20 J. Clim. 5889 (2007).
306 Wunsch et al., supra note __ at 5905.


“At best, the determination and attribution of global-mean sea level change lies at the very edge of knowledge and technology. The most urgent job would appear to be the accurate determination of the smallest temperature and salinity changes that can be determined with statistical significance, given the realities of both the observation base and modeling approximations. ...It remains possible that the database is insufficient to compute mean sea level trends with the accuracy necessary to discuss the impact of global warming – as disappointing as this conclusion may be.”

As for ice loss, remote sensing data from satellite and laser altimetry depict a variegated picture in both Greenland and Antarctica: in Greenland, there is accelerated ice loss in coastal areas but a slight mass gain in inland high elevation areas; in Antarctica, accelerated ice mass loss in the western part of the continent, but a slight ice mass gain in the eastern part as a result of increased snowfall.307

G. A Theory That Cannot be Disconfirmed: Sea Level Scare Stories and The Continuing Off-Model Private Prognostications of Climate Change Scientist/Advocates

While the IPCC’s consensus projections generally correspond to what the mean or median GCM predicts (more on this below), the Reports are careful to mention more extreme and harmful future scenarios that one or more climate models suggest are at least possible. Unsurprisingly, some leading establishment climate scientists clearly believe that more attention should be paid to possible global warming worst-case scenarios, even if those scenarios are only weakly supported, if at all, by the existing peer-edited literature. Hence, in a rhetorical strategy obviously closely related to the strategy of exaggeration just discussed, many leading climate change scientist/advocates have waged a continuing campaign that involves publicizing their own personal opinions that even moderate global warming may have catastrophic consequences.

Sea level rise once again provides a dramatic illustration of this strategy. Recall that the IPCC’s 2007 Physical Science Assessment Report presents a “consensus” estimate of 2100 sea level rise as somewhere between .18 and .6 meters.308 As the IPCC Report explained, this consensus estimate would have been higher had it included estimated sea level rise due to various feedbacks and dynamic effects, such as accelerating flow and calving of glaciers that terminate in the sea. The IPCC excluded such feedbacks and dynamic effects because “present understanding of the relevant processes is too limited for reliable model estimates.”309

With advance notice of the IPCC’s relatively cautious prediction on sea level rise, climate change advocates began arguing – shortly before the Report appeared -- in favor of much larger and potentially more catastrophic increases in sea level. For example, in

307 Cazenave, How Fast are the Ice Sheets Melting?, supra note __ at p. 1.
308 IPCC 2007, ___.
309 W.T. Pfeffer et al., Kinematic Constraints on Glacier Contributions to 21st Century Sea-Level Rise, 321 Science 1340 (2008).


an “Editorial Essay” that appeared – somewhat incongruously – in the peer-edited journal Climatic Change, James Hansen began by posing the frightening questions:

“Are we on a slippery slope now? Can human-made global warming cause ice sheet melting measured in meters of sea level rise, not centimeters, and can this occur in centuries, not millennia? Can the very inertia of the ice sheets, which protects us from rapid sea level change now, become our bête noire as portions of the ice sheet begin to accelerate, making it practically impossible to avoid disaster for coastal regions?”310

Hansen notes that the existing climate models actually predict that with a doubling of CO2 both the Greenland and Antarctic ice sheets will be growing at a rate equivalent to a sea level fall of 12 cm per century, and that even studies that assume meltwater will greatly accelerate ice sheet flow in Greenland still predict very small contributions of such melting ice sheets to sea level rise.311 But he then goes on to state his opinion that “the calculations do not yet fully and realistically incorporate important processes that will accelerate ice sheet disintegration,” and supports his opinion by applying basic principles of climate physics to a selective parsing of paleoclimatic evidence.312 The bottom line Hansen is driving toward is that “global warming of more than 1 degree C above today’s global temperature would likely constitute ‘dangerous anthropogenic interference’ with climate.”313

While less sustained, other activist climate scientists also effectively undercut the 2007 IPCC Report’s caution on sea level rise by preempting the published Report and arguing for the serious possibility of abrupt and dramatic sea level rise. In a 2006 review essay entitled “Abrupt Change in Earth’s Climate System,” Jonathan Overpeck – a Coordinating Lead Author of the AR4 (on the paleo-climate chapter) -- and Julia Cole argued that despite the IPCC’s consensus, “new evidence has emerged that ice sheets, and thus global sea level, can respond more quickly to climate change, perhaps in an abrupt manner.”314 Overpeck and Cole discuss precisely the same evidence that I cite above – showing that during the last interglacial, sea levels were 4-6 meters higher than today – but they emphasize that if the melting of the West Antarctic ice sheet contributed so much (as much as 3 meters) then, when there was only minor high-latitude Southern Hemisphere warming, that ice sheet “could be quite susceptible to collapse in the future.”315

310 James E. Hansen, A Slippery Slope: How Much Global Warming Constitutes “Dangerous Anthropogenic Interference?,” 68 Clim. Change 269, 269-270 (2005).
311 Hansen, A Slippery Slope, supra note __ at 270.
312 Hansen, A Slippery Slope, supra note __ at 270-277.

313 Hansen, A Slippery Slope, supra note __ at 276.
314 Jonathan T. Overpeck and Julia E. Cole, Abrupt Change in Earth’s Climate System, 31 Ann. Rev. Environ. Resour. 1, 16 (2006).
315 Overpeck and Cole, supra note __ at 17.


Recent published work seems to cast increasing doubt on the prognostications of catastrophic ice sheet collapse made by activists such as Hansen and Overpeck.316 While some work has indeed identified possible physical mechanisms whereby warming accelerates the flow of Greenland outlet glaciers into the sea,317 other work has shown that this acceleration may be merely a short term phenomenon, with lower long term (equilibrium) rates of glacier mass loss.318 In terms of the expected sea level rise, Pfeffer et al.319 demonstrate that for a sea level increase of 2 meters to occur by 2100 solely through increasingly rapid and dynamically unstable calving of Greenland ice sheets, Greenland outlet glaciers would have to be outflowing at speeds between 22 and 40 times the fastest speed ever observed for any glacier. They conclude that under high but “reasonable” assumed rates of acceleration in ice sheet outflow in both Greenland and Antarctica (an order of magnitude higher than today), estimated sea level rise by 2100 is between .8 and 2 meters.320
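Pfeffer et al.’s kinematic argument turns on the detailed geometry and observed speeds of outlet glaciers, but a cruder mass-balance check conveys the same order-of-magnitude intuition. The sketch below is my own back-of-the-envelope calculation, not theirs; it relies only on the standard approximation that roughly 360 gigatonnes of added ocean mass corresponds to about 1 millimeter of global mean sea level.

```python
# Back-of-the-envelope mass-balance check (my own sketch, not Pfeffer et
# al.'s kinematic calculation). Standard approximation: ~360 Gt of added
# ocean mass corresponds to ~1 mm of global mean sea level rise.

GT_PER_MM = 360.0

def required_ice_loss_gt_per_year(total_rise_m: float, years: float) -> float:
    """Average annual ice mass loss (Gt/yr) needed for a given total rise."""
    rise_mm = total_rise_m * 1000.0
    return rise_mm * GT_PER_MM / years

# A 2 m rise between roughly 2010 and 2100:
print(required_ice_loss_gt_per_year(2.0, 90))   # ~8000 Gt/yr on average

# For scale, published estimates of recent Greenland mass loss in this
# period were on the order of a few hundred Gt/yr -- far below that figure.
```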

Now a 2 meter sea level rise by 2100 is hardly insignificant and indeed could be severely harmful for some developing and island nations in particular. But even 2 meters pales in comparison with the 20 foot increase proclaimed by Hansen and the 4-6 meter number discussed by Overpeck/Cole. With the benefit of the recent studies just discussed, one can put the criticism of the IPCC by Hansen in particular in perspective: the IPCC projection may have been a bit conservative, but the numbers suggested by Hansen seem increasingly fantastical.321 In this light, the kind of alarmist prognostications made prior to the publication of IPCC AR’s by Hansen smack much more of policy advocacy than of actual scientific results.

As advocacy, such alarmist prognostications have the very important and somewhat paradoxical consequence of buttressing the IPCC’s claim to objectivity. The fact that some scientists have publicly opined that global warming is much more dangerous than a forthcoming IPCC Report will conclude strengthens the case for the IPCC as a sort of impartial judge. But this impression is highly misleading, for the

316 I would be remiss, however, were I to fail to note that in a recently published paper, W.T. Pfeffer et al., Kinematic Constraints on Glacier Contributions to 21st Century Sea-Level Rise, 321 Science 1340, 1342 (2008), demonstrate that for sea level increases of 2 meters to be caused by 2100 solely by increasingly rapid and dynamically unstable calving of Greenland ice sheets, Greenland outlet glaciers would have to be outflowing at speeds between 22 and 40 times the fastest speed ever observed by any glacier. They conclude that under high but “reasonable” assumed rates of acceleration in ice sheet outflow in both Greenland and Antarctica (an order of magnitude higher than today), estimated sea level rise by 2100 is between .8 and 2 meters. Pfeffer et al., Kinematic Constraints, supra note __ at 1342.

317 For one such mechanism, surface melting, see I.M. Howat, Synchronous retreat and acceleration of southeast Greenland outlet glaciers 2000-2006: ice dynamics and coupling to climate, 54 J. Glaciology 646 (2008).
318 Faezeh Nick et al., Large-scale changes in Greenland outlet glacier dynamics triggered at the terminus, Nature Geoscience, DOI:10.1038, published online Jan. 2009.

319 W.T. Pfeffer et al., Kinematic Constraints on Glacier Contributions to 21st Century Sea-Level Rise, 321 Science 1340, 1342 (2008).
320 Pfeffer et al., Kinematic Constraints, supra note __ at 1342.
321 As discussed above, when read in full context, the Overpeck/Cole numbers are essentially factual.

When not carefully situated and explained, however, such numbers cause the IPCC projections to seem overly conservative.


Hansen and Overpeck/Cole essays are just that – essays expressing the opinions of experts – rather than original scientific contributions. Through such essays, climate change advocates essentially use their own expertise to set up an alternative, authoritative evaluation of the existing scientific evidence prior to the publication of the IPCC’s own evaluation. But this alternative evaluation does not challenge the objectivity of the IPCC. Instead, through such a strategy, climate change advocates benefit from the IPCC’s image – as an objective assessor – on those issues upon which they agree with the IPCC, while retaining the freedom to make their own, more dire prognostications on issues such as sea level rise. As Hansen explains:

“...I disagree with the implication of Allen et al.322 that conclusions about climate change should wait until the IPCC goes through a ponderous process, and that verdicts reached by the IPCC are near gospel. IPCC conclusions, even after their extensive review and publication, must be subjected to the same scientific process as all others.

In the case at hand, I realize that I am no glaciologist and could be wrong about the ice sheets. Perhaps, as [the IPCC’s 2001 Assessment Report] and more recent global models suggest, the ice sheets are quite stable and may even grow with doubling of CO2. I hope those authors are right. But I doubt it.”323

Upon further analysis, this excerpt becomes even more disturbing. Hansen is not merely arguing, quite correctly in my view, that scientists should retain the freedom to criticize IPCC conclusions. He is also delivering his personal opinion, as an expert, in a way that seems highly likely to cause readers to confuse that opinion with a scientific conclusion or result. That is, Hansen is not presenting any new data or analysis, but simply re-interpreting the models and evidence, without any particular explanation or justification for that interpretation.

Hansen is by no means alone in adopting this approach. Many climate scientists have responded to the unexpected loss of Arctic sea ice by quickly stating their own opinions that, in light of such unexpected changes, existing models are too conservative and ice loss will occur much more quickly than climate models, and the IPCC, have projected. For example, when satellite data revealed that the Arctic sea ice pack had reached an all-time low after the summer of 2007, a senior research scientist at the U.S. National Snow and Ice Data Center told the media that the loss was “astounding” and that although models had predicted the complete disappearance of summer Arctic ice by 2070, “losing summer ice cover by 2030 is not unreasonable.”324 Another researcher opined that “the strong reduction in just one year certainly flags that the ice (in summer) may disappear much sooner than expected.”325 Even the official website of NOAA seems to put a rhetorical slant on the information it conveys: in 2009,

322 Myles Allen et al., Uncertainty in the IPCC’s Third Assessment Report, 293 Science 430 (2001).
323 Hansen, A Slippery Slope, supra note __ at 278.
324 See “Ice Loss ‘Opens Northwest Passage’”, available at www.cnn.com/2007/TECH/science/09/15/arctic.nwestpssg/index.html (published online September 15, 2007).

325 See “Ice Loss ‘Opens Northwest Passage’”, supra note __.

summer Arctic sea ice continued to increase in extent relative to its 2007 all-time low, but this increase – which could well be described as a trend back to the longer term norm – is instead described as “the third lowest value of the satellite record.”326

II. Behind the Rhetoric: Apparent Uncertainties and Questions in Climate Science and their Policy Significance

The cross examination conducted in Part I has revealed a number of key questions and uncertainties in climate science that are neglected, obscured, or minimized by the establishment climate story. This section highlights some of the key questions and uncertainties and briefly explicates their policy significance.

A. Climate Model Projections: It’s all about the feedbacks

Perhaps the most fundamental and policy-relevant projection supplied by climate models is what is called the “sensitivity” -- the projected future increase in global mean surface temperature from a doubling of CO2. It is the possibility of high climate sensitivity that triggers the need for action: the higher the projected temperature increase, the more worrisome human-induced climate change becomes.

The Part I cross examination has revealed that it is the positive feedback effects presumed by climate models that account entirely for the possibility of climate sensitivity greater than 1.2 degrees centigrade, and, perhaps most importantly, for the possibility of very big, dangerous temperature increases exceeding even 5 degrees centigrade. Yet the cross examination has also uncovered recent work showing that if there is an important negative feedback that dissipates slowly over time, then the probability of very large temperature increases due to the presence of positive feedbacks is much smaller, and probabilities are much more concentrated around the more moderate, mean values. Moreover, it is a very long time – exceeding 150 years -- before there is a significant probability attached to a temperature increase even as large as 3 degrees centigrade.327
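The point can be stated compactly in the standard linear feedback form used throughout the climate literature. In the sketch below, the 1.2 degree figure is the no-feedback response referred to in the text, and f denotes the assumed net feedback factor:

```latex
% Sketch of the standard linear feedback relation; \Delta T_0 is the
% no-feedback response to doubled CO2 (about 1.2 C, as in the text) and
% f is the assumed net feedback factor.
\[
  \Delta T_{2\times \mathrm{CO}_2} \;=\; \frac{\Delta T_0}{1 - f},
  \qquad \Delta T_0 \approx 1.2\,^{\circ}\mathrm{C}.
\]
```

With f = 0 the sensitivity is simply 1.2 degrees; f = 0.5 doubles it to roughly 2.4 degrees; and as f approaches 1 the implied sensitivity grows without bound. On this formulation, everything above 1.2 degrees – including the dangerous high-end projections – is carried by the assumed size of the net positive feedback.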

As for the evidence on feedback effects, the IPCC cites evidence tending to confirm at least some of the important climate model feedbacks -- such as that regarding constant tropospheric relative humidity and a consequently strong positive water vapor feedback.328 A review of the literature, however, suggests that there is accumulating evidence – some of which was available before the publication of the IPCC’s AR4 – that atmospheric humidity and water vapor are not responding to CO2 increases as climate models predict that they will. The studies relied upon by the IPCC seem to look at different datasets than do the studies that fail to confirm climate model predictions. As for the cloud feedback, the IPCC acknowledges the large remaining uncertainty about

326 See NOAA, Arctic report card: update for 2009, available at http://www.arctic.noaa.gov/reportcard/seaice.html.

327 Baker and Roe, The Shape of Things to Come, supra note __ at 4583.
328 IPCC, Climate Change 2007: The Physical Science Basis, supra note __ at 632-635.


cloud feedbacks and the enormous spread – from strongly negative to strongly positive – in climate model cloud feedback effects. Yet a comparison with the literature reveals that the IPCC is almost surely much more optimistic about the improving accuracy of climate model cloud feedbacks than are many leading climate scientists who study clouds and climate change.

Rhetorically, the establishment climate story virtually ignores the systematic importance of feedback effects to climate model projections. None of the IPCC documents intended to influence the media, policymakers or even scientists generally even mentions feedback effects. Beyond this, in work intended to influence such wide and non-specialized audiences, activist climate scientists argue that all the evidence – such as melting ice sheets – indicates that big positive feedback effects will be even worse than climate models project. Such a presentation – no explanation of the general role of, and assumptions about, positive feedback effects in climate models and how those compare with actual theoretical results and observational evidence in the literature, coupled with dramatic proclamations that contemporaneous observations show that feedbacks are likely worse than thought – would seem highly likely to lead to widespread public misperception about the role of feedbacks in future climate projections.

Such a rhetorically-induced misperception about the role of feedback effects in climate projections can have a profound impact on climate policy analysis. This is clearly illustrated by two recent law review articles written by some of today’s most analytically rigorous environmental scholars. In one of these articles, Freeman and Guzman argue that climate change policy work has paid too little attention to the possibility of very large temperature increases and the potentially catastrophic events that will be caused by such temperature increases.329 As part of a more general analysis of how feedback effects in a variety of natural and socio-economic systems create a positive probability of catastrophic outcomes, Farber similarly argues that a positive probability of extremely large temperature increases and a “non-negligible probability of worldwide catastrophe” justify a “higher degree of precaution [as] ‘insurance’ against climate catastrophe.”330 Thus both articles argue that a positive probability of very big temperature increases – or, in climate science language, very high climate sensitivity – and corresponding catastrophic harm justify immediate and large expenditures to reduce ghg emissions.

Of these two, Farber’s article contains the more nuanced and detailed discussion of the feedbacks that account for a positive probability of high climate sensitivity. Farber relies heavily on a recent article by economist Martin Weitzman.331 Both Farber and Weitzman are concerned with the “fat tail” – a positive probability of extremely high climate sensitivity and very large, catastrophic warming. Weitzman is concerned with temperature increases even bigger than 4.5 degrees centigrade, and is especially concerned with temperature increases above 10 degrees centigrade, to which an ensemble

329 Freeman and Guzman, Climate Change and U.S. Interests, supra note __ at 1552-1554.
330 Daniel A. Farber, Uncertainty 36 (2010), available at http://ssrn.com/abstract=1555343.

331 Martin A. Weitzman, On Modeling and Interpreting the economics of catastrophic climate change, 91 Rev. Econ. & Stat. 1 (2009).


of climate models that he inspects attach an average probability of about 1 per cent.332 He says that as we have no experience with such large temperature increases, the inductive scientific method – by which he means learning from observations – cannot tell us anything about this probability, which instead comes from a “largely subjective and diffuse prior probability”333 and “significant uncertainties both in empirical measurements and in the not directly observable coefficients plugged into simulation models.”334 Like Weitzman, Farber relies on the 2007 Science article by Roe and Baker discussed earlier for the proposition that even a reduction in uncertainty about the positive feedback would not much reduce uncertainty about climate sensitivity, and so the “fat tail” problem would still be with us.335

While both Farber and Weitzman are to be praised for actually looking closely at climate science before discussing its policy implications, their discussion of the “fat tail” – or positive probability of very high climate sensitivity – problem suffers from several weaknesses. First, both Farber and Weitzman discuss the standard range of climate sensitivity – as between about 1.5 and 4.5 degrees centigrade – without once mentioning that the range itself is due entirely to presumed net positive feedbacks.336 Their discussions never explain why there is a range in the first place. Instead, they focus on the role of feedbacks in generating temperature increases (climate sensitivity) above 4.5 degrees centigrade, and cite to the 2007 Science article by Roe and Baker for the proposition that there is inevitable uncertainty about such feedbacks and that reducing it will not eliminate the “fat tail” probability of extreme climate sensitivity.337 They thus both completely overlook the primary policy implication of Roe and Baker:338 because climate models assume the predominance of positive feedbacks, they essentially assume the fat tail problem. Weitzman’s belief that the “fat tail” comes from irreducible uncertainty does not follow from anything in climate science: as discussed in Part I, there is a large and growing literature that attempts to empirically measure the most important feedbacks – water vapor and clouds in particular – that climate models presume to be positive. Moreover, the significance of Baker and Roe’s most recent work339 is that if the evidence shows that there are important negative feedbacks, then the fat tail of extreme climate sensitivity does not arise for centuries (in the case of a slowly dissipating negative feedback). In other words, if there are important negative feedbacks in the

332 Weitzman, supra note __ at 3.
333 Weitzman, supra note __ at 3.

334 Weitzman, supra note __ at 3 n. 4. Oddly, the main feedback that Weitzman discusses as one causing a “fat tail” of catastrophic climate change – methane release from melting permafrost -- is one that climate scientists believe to be a very remote possibility – that is, not supported as a catastrophic possibility by the existing paleoclimatic evidence. See the discussion in Indur M. Goklany, Trapped between the Falling Sky and the Rising Seas: The Imagined Terrors of the Impacts of Climate Change (draft of 13 December, 2009) (available at __).

335 See Farber, Uncertainty, at 19. It is worth noting that Farber, at 18, and Weitzman, at 3-5, are also heavily influenced by Margaret S. Torn and John Harte, Missing Feedbacks, Asymmetric Uncertainties, and the Underestimation of Future Warming, 33 Geo. Res. Lett. L10703 (2006).
336 See Farber, Uncertainty, at 32-40; Weitzman, supra note __ at 2-3.

337 See Farber, supra note __ at 19 and Weitzman, supra note __ at 3.
338 See supra note __ at __.
339 See Baker and Roe, supra note __.


climate system which the climate models simply assume away but which we can learn about from observations, then it may be that we will be able to predict that the climate system will not, at least for centuries, reach the high temperatures that really would put us into a state where the system might unpredictably and unknowably spiral out of control.
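The mechanics behind the Roe and Baker point – that a fat upper tail in climate sensitivity follows almost automatically once a net positive feedback is assumed and is uncertain – can be illustrated in a few lines of code. This is a schematic illustration rather than their model; the mean and spread assumed for the feedback factor below are illustrative choices made only to show the shape of the resulting distribution.

```python
# Schematic illustration (not Roe and Baker's model) of why symmetric
# uncertainty in an assumed net-positive feedback factor f produces a
# skewed, fat-tailed distribution of climate sensitivity dT = dT0 / (1 - f).
# The mean and spread of f below are illustrative assumptions only.

import random

DT0 = 1.2                    # no-feedback response to doubled CO2 (deg C)
F_MEAN, F_SD = 0.65, 0.13    # assumed mean and spread of the feedback factor

random.seed(0)
samples = []
while len(samples) < 100_000:
    f = random.gauss(F_MEAN, F_SD)
    if f < 1.0:              # discard unphysical runaway cases
        samples.append(DT0 / (1.0 - f))

samples.sort()
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]
tail = sum(s > 10.0 for s in samples) / len(samples)

print(f"median sensitivity ~{median:.1f} C, 95th percentile ~{p95:.1f} C, "
      f"P(dT > 10 C) ~{tail:.1%}")
# A symmetric bell curve in f maps into a long right tail in dT; a smaller
# (or negative) net feedback collapses the tail under the same arithmetic.
```

Whether the assumed distribution for the feedback factor is warranted is, of course, precisely what the observational literature discussed in Part I is about.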

Needless to say, these crucially important policy implications of climate system feedbacks cannot be traced unless one first has a very clear idea of how feedbacks drive climate model projections. Rather than presenting such a clear story, the IPCC, in its communications to policymakers, said nothing about feedbacks except those that might be dramatic and positive.

B. The Ability of Climate Models to Explain Past Climate

The IPCC and the climate establishment have vastly oversold climate models by declaring that such models are able to quite accurately reproduce past climates, including most importantly the warming climate of the late twentieth century. Mainstream climate modelers have themselves explained that climate models disagree tremendously in their predicted climate sensitivity – the response of temperature to a CO2 increase – and are able to reproduce twentieth century climate only by assuming whatever (negative) aerosol forcing effect is necessary to get agreement with observations. These kinds of explanations, by leading climate modelers, suggest that climate models do not in fact reflect understanding of the key physical climate processes well enough to generate projections of future climate that one could rely upon.

It seems unlikely that climate model projections would be accorded much policy significance if the way in which they were able to “reproduce” past climate was generally understood. It seems more than plausible that policymakers (let alone the general public) take a model’s purported ability to reproduce past temperatures as an indication that the model’s assumption about climate sensitivity is correct. If policymakers were told that this is not so – that the ability to reproduce past temperatures indicates only that a particular pairing of assumptions about climate sensitivity and aerosol forcing allowed the reproduction of past temperatures – then the logical question would be: which model gets the correct pairing of sensitivity and aerosol forcing? In answer to this, climate modelers would have to say that they do not know, and that the best that could be done would be to use all the models (this is called the ensemble approach). But of course it is possible that all the models are very badly wrong in what they assume about sensitivity. A policymaker aware of this would then have to ask whether it would be better to base policy on climate models or on a more naïve climate forecasting method, and whether further public funding of efforts to improve climate models was worthwhile.
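The pairing problem described above can be made concrete with a toy equilibrium energy-balance calculation. The numbers below are illustrative assumptions, not values drawn from any particular model, and the calculation ignores ocean heat uptake; it is meant only to show that very different sensitivities can “reproduce” the same observed twentieth-century warming once each is paired with a suitably chosen aerosol forcing.

```python
# Toy illustration (illustrative numbers only, not any particular model's)
# of how different climate sensitivities can "reproduce" the same observed
# twentieth-century warming when each is paired with a different assumed
# (negative) aerosol forcing.  Equilibrium approximation: dT = S * F_net / F_2X.

F_2X = 3.7          # forcing from doubled CO2, W/m^2 (standard figure)
F_GHG = 2.6         # assumed 20th-century greenhouse forcing, W/m^2 (illustrative)
OBSERVED_DT = 0.7   # approximate observed 20th-century warming, deg C

for sensitivity in (1.5, 3.0, 4.5):                 # deg C per CO2 doubling
    f_net = OBSERVED_DT * F_2X / sensitivity        # net forcing needed to match
    f_aerosol = f_net - F_GHG                       # aerosol forcing that must be assumed
    print(f"S = {sensitivity} C  ->  required net forcing {f_net:.2f} W/m^2, "
          f"implied aerosol forcing {f_aerosol:+.2f} W/m^2")
```

Each pairing matches the observed warming equally well, which is exactly why hindcast skill alone cannot tell a policymaker which sensitivity is the right one.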

C. The Existence of Significant Alternative Explanations for Twentieth Century Warming

The IPCC and the climate establishment story express great certainty in arguing that late twentieth century global warming was caused by the atmospheric buildup of human ghg emissions (this is the anthropogenic global warming or AGW story). The IPCC reports confidently assert that solar activity could not have accounted for warming during this period, because this was a period of weakening and not strengthening solar


irradiance, and that there was no natural forcing during this period that could have accounted for the warming. Yet a closer look at the literature shows that there is ongoing dispute about the possible role of the sun, with the debate coming down to conflicting views about the reliability of alternative datasets on solar activity. Perhaps even more importantly, a growing body of sophisticated theoretical work confirms that the non-linear global climate is subject to inherent warm and cool cycles of about 20 to 30 years in duration, with substantial evidence that a warm cycle was likely to have begun in 1976. As for the latitudinal pattern of twentieth century warming, with more pronounced warming in the Arctic in particular, there is now substantial evidence establishing that at least one half of such warming was due to the deposition of industrial era soot on the snows and ice of that region.

The existence of alternative explanations for twentieth century warming obviously has enormous implications for policy, for in order to determine how much to spend to reduce human ghg emissions, one must first have some idea of how harmful those emissions will be if they continue unabated. More precisely, what is ideally in hand for the design of climate policy is an empirically testable model that can separately identify the influence of the sun, natural climate variation, ghg emissions and other human forcings. Such a model could then be used to identify the harm caused by increases in human ghg emissions, holding constant the other factors that contribute to climate swings. Without such a model, there is a great risk that one variable – human ghg emissions – is being ascribed too much importance, leading to excessive expenditures to reduce such emissions.

D. Questionable methodology underlying highly publicized projected impacts of global warming

One of the most widely publicized numbers in the establishment climate story is the projection that 20-30 per cent of plant and animal species now existing may become extinct due to global warming. This number is also one of the most troubling, because it comes from a single study whose methodological validity has been severely questioned by a large number of biologists. These biologists agree that the methodology neglects many key processes that determine how species numbers will respond to changing climate, and that it will always lead to an overestimate of species loss due to climatic change.

In its 2007 Summary for Policymakers released before the full Climate Science AR, the IPCC used the highly dramatic 20-30 per cent species loss number without any qualifications. In this instance, the role of rhetorical technique seems inextricably linked to substantive content. For suppose that the IPCC had been required to accompany every publicized projection – regarding both climate change and climate change impacts – with even a brief statement summarizing, and providing citations to, work critical of the methodology underlying the projection. In the instance of the projected species loss probability, such a summary and disclosure of scientific critique would have revealed such widespread scientific doubt about the underlying method as to make it highly unlikely that the IPCC could actually have put the numerical projection generated by that method in a Summary for Policymakers.


By putting an unqualified species loss probability number in the Summary for Policymakers, the IPCC has validated that number and thus encouraged its adoption and use in legal policy analysis. Freeman and Guzman, for example, note340 that when the IPCC-endorsed species loss probability is multiplied by an estimate of the dollar cost of species loss generated by Cass Sunstein, the resultant estimate is that species loss caused by climate change will cost the U.S. between 1.4 and 3.5% of GDP per year.341 This number relies not only upon species loss estimates that, as explained above, are simply rejected as invalid by a large number of biologists, but also upon an ad hoc method of calculating the dollar cost of species loss that is without economic foundation.342 Yet according to Freeman and Guzman, this massive estimated GDP cost of climate change-induced species loss is “conservative,” because the methods used to estimate and value species loss “oversimplify the complex ecological interactions between species and ecosystems...[t]aking these interactions into account would probably make the numbers much larger.”343 Perhaps nothing that the IPCC said could have caused Freeman and Guzman to be less certain of the catastrophe that global warming will bring to non-human species. Still, had the IPCC put the species loss probability number in the context of widespread criticism by biologists of the methods used to generate it, then it seems hard to imagine that very many informed readers of IPCC Reports could have been persuaded to share Freeman and Guzman’s certainty of catastrophe.

III. Conclusion: Questioning the Established Science, and Developing a Suitably Skeptical Rather than Faith-based Climate Policy

Even if the reader is at this point persuaded to believe that there remain very important open questions about ghg emissions and global warming, and important areas of disagreement among climate scientists, she may well ask: So what? After all, such a reader might argue, CO2 is a ghg, and if we continue to increase CO2, then it seems clear that despite whatever uncertainty there may be about how much temperatures will increase as a consequence of increasing CO2 in the atmosphere, and about the impacts of such rising temperatures, there is no doubt that temperatures will increase with increasing CO2, and that at some point, such rising temperatures will cause harm, so that one way or another, at one time or another, we simply have to reduce our emissions of CO2.

However beguiling, such an argument not only oversimplifies the policy questions raised by human ghg emissions, it also misunderstands the significance of the scientific questions revealed by my cross examination for the predictability of anthropogenically-forced climate change. Consider first the scientific questions. If climate were a simple linear system – with increases in atmospheric CO2 directly and

340 Freeman and Guzman, supra note __ at 1558.
341 Wayne Hsiung and Cass R. Sunstein, Climate Change and Animals, 155 U.Pa.L.Rev. 1695, 1734 (2007).
342 For the critique of this methodology, see Jason Scott Johnston, Desperately Seeking Numbers: Global Warming, Species Loss, and the use and abuse of quantification in climate change policy analysis, 155 U.Pa.L.Rev. 1901 (2007).
343 Freeman and Guzman, Climate Change and U.S. Interests, supra note __ at 1558.


simply determining future warming – then while a detailed understanding of the earth’s climate system might still be of scientific interest, there would be little policy justification for expending large amounts of public money to gain such an understanding. But if one thing is clear in climate science it is that the earth’s climate system is not linear, but is instead a highly complex, non-linear system made up of sub-systems – such as the ENSO, the North Atlantic Oscillation, and the various circulating systems of the oceans – that are themselves highly non-linear. Among other things, such non-linearity means that it may be extremely difficult to separately identify the impact of an external shock to the system – such as what climate scientists call anthropogenic CO2 forcing – from changes that are simply due to natural cycles, or due to other external natural and anthropogenic forces, such as solar variation and human land use changes. Perhaps even more importantly, any given forcing may have impacts that are much larger – in the case of positive feedbacks – or much smaller – in the case of negative feedbacks – than a simple, linear vision of the climate system would suggest. Because of the system’s complexity and non-linearity, without a quite detailed understanding of the system, scientists cannot provide useful guidance regarding the impact on climate of increases in atmospheric ghg concentration.

As a large number of climate scientists have stressed, such an understanding will come about only if theoretical and model-driven predictions are tested against actual observational evidence. This is just to say that to really provide policymakers with the kind of information they need, climate scientists ought to follow the scientific method of developing theories and then testing those theories against the best available evidence. It is here that the cross examination conducted above yields its most valuable lesson, for it reveals what seem to be systematic patterns and practices that diverge from, and problems that impede, the application of basic scientific methods in establishment climate science. Among the most surprising and yet standard practices is a tendency in establishment climate science to simply ignore published studies that develop and/or present evidence tending to disconfirm various predictions or assumptions of the establishment view that increases in CO2 explain virtually all recent climate change. Perhaps even more troubling, when establishment climate scientists do respond to studies supporting alternative hypotheses to the CO2 primacy view, they more often than not rely upon completely different observational datasets which they say confirm (or at least don’t disconfirm) climate model predictions. The point is important and worth further elucidation: while there are quite a large number of published papers reporting evidence that seems to disconfirm one or another climate model prediction, there is virtually no instance in which establishment climate scientists have taken such disconfirming evidence as an indication that the climate models may simply be wrong. Rather, in every important case, the establishment response is to question the reliability of the disconfirming evidence and then to find other evidence that is consistent with model predictions. Of course, the same point may be made of climate scientists who present the disconfirming studies: they tend to rely upon different datasets than do establishment climate scientists. From either point of view, there seems to be a real problem for climate science: with many crucial, testable predictions – as for example the model prediction of differential tropical tropospheric versus surface warming – there is no indication that

78

climate scientists are converging toward the use of standard observational datasets that they agree to be valid and reliable.

Without such convergence, the predictions of climate models (and climate change theories more generally) cannot be subject to empirical testing, for it will always be possible for one side in any dispute to use one observational dataset and the other side to use some other observational dataset. Hence perhaps the central policy implication of the cross-examination conducted above is a very concrete and yet perhaps surprising one: public funding for climate science should be concentrated on the development of better, standardized observational datasets that achieve close to universal acceptance as valid and reliable. We should not be using public money to pay for faster and faster computers so that increasingly fine-grained climate models can be subjected to ever larger numbers of simulations, at least not until we have the data to test whether the predictions of existing models are confirmed (or not disconfirmed) by the evidence.

This might seem like a more or less obvious policy recommendation, but if it were taken, it would represent not only a change in climate science funding practices, but also a reaffirmation of the role of basic scientific methodology in guiding publicly funded climate science. As things now stand, the advocates representing the establishment climate science story broadcast (usually with color diagrams) the predictions of climate models as if they were the results of experiments – actual evidence. Alongside these multi-colored multi-century model-simulated time series come stories, anecdotes, and photos – such as the iconic stranded polar bear -- dramatically illustrating climate change today. On this rhetorical strategy, the models are to be taken on faith, and the stories and photos as evidence of the models’ truth. Policy carrying potential costs in the trillions of dollars ought not to be based on stories and photos confirming faith in models, but rather on precise and replicable testing of the models’ predictions against solid observational data.
