Interview with Nathan Urban on his new paper “Climate Sensitivity Estimated from Temperature Reconstructions of the Last Glacial Maximum”

Nathan Urban is a Postdoctoral Research Fellow at Princeton’s Woodrow Wilson School of Public and International Affairs. He recently spoke to Planet 3.0 about the topic of climate sensitivity here. Today, he’s graciously agreed to answer some questions about a paper he co-authored that was just published in the journal Science.

Q:  Thanks for talking to us about your paper. How would you summarize it, if you had to do it in one sentence?

It estimates the influence of carbon dioxide on the climate using temperature data from the Last Glacial Maximum, and finds a weaker influence (and with less uncertainty) than many previous studies.

Q: Can you put that into a little bit of a broader context for us? 

The paper uses temperature data from a glacial period about 20,000 years ago to estimate what is known as the equilibrium climate sensitivity to a doubling of atmospheric carbon dioxide (CO2).  This quantity is also referred to as “climate sensitivity” or “ECS” for short.  It is a standard measure of how much the climate changes in response to some change in the Earth’s energy balance, such as the greenhouse effect of CO2.  ECS is defined through a kind of hypothetical “thought experiment”.

If we had the ability to precisely double the amount of CO2 in the atmosphere, ECS is the average amount of global warming that would result, assuming we could wait a long time for the warming to fully take effect.   (We could equally well consider the warming that would result from a tripling, or some other change, of CO2, but doubling has been adopted as a standard reference measure.  From that we can estimate the warming from other amounts of CO2.)
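
As a rough, illustrative sketch (not taken from the paper itself): rescaling from a doubling to other CO2 changes relies on the approximately logarithmic dependence of greenhouse forcing on CO2 concentration. The numbers below are hypothetical.

```python
import math

def equilibrium_warming(ecs, co2_ratio):
    """Rough equilibrium warming for a given CO2 ratio, assuming the standard
    approximation that greenhouse forcing scales with the logarithm of CO2,
    so that warming per doubling (ECS) can be rescaled to other CO2 changes."""
    return ecs * math.log2(co2_ratio)

# Example: if ECS were 3 degrees C per doubling, a 50% increase in CO2 would
# imply roughly 3 * log2(1.5) ~= 1.75 degrees C of eventual warming.
print(equilibrium_warming(3.0, 1.5))
```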

I don’t want to give a tutorial on climate sensitivity here; for that, see my earlier interview on the Azimuth Project and Planet 3.0 blogs.  In particular, I recommend reading the discussion of “feedback effects”, which can combine with greenhouse or other warming to cause more (or less) warming than the greenhouse effect alone.

Q:  Why should we care about climate sensitivity?

Climate sensitivity tells us about how much global warming we will see from the greenhouse effect, in the long run.

To measure climate sensitivity, we’d have to experiment on the Earth by doubling CO2 in the atmosphere and waiting a long time (at least hundreds of years for the climate to approach a new equilibrium), holding everything else constant.  But this isn’t really feasible.  We can’t hold every other factor constant, and we would like to have some idea of what’s coming ahead of time, instead of just waiting to see what happens.

Instead of measuring climate sensitivity, which we can’t do, we want to estimate it, based on data that we can already measure today.  With that estimate in hand, we can in turn estimate how much global warming we’re likely to see from future fossil fuel emissions.

Q:  How might someone go about trying to estimate climate sensitivity?

We look for periods in the Earth’s history when CO2 levels were higher or lower than today, and see how much warmer or colder the climate was.  From this we can estimate ECS by applying a physical theory relating carbon dioxide to planetary temperatures.

The period of time we consider is the Last Glacial Maximum (LGM), about 19,000 to 23,000 years ago, which was the deepest and coldest phase of what is colloquially known as “the last ice age”.  We obviously weren’t around to measure the Earth’s temperature back then, but we can infer it based on geological evidence.  We call this a “reconstruction” of  temperatures, rather than a measurement, based on “proxy” evidence.  Proxies are measurements that tell us indirectly about temperatures back then, such as levels of chemical isotopes found in core samples.

Part of the reason glacial periods were cold is because CO2 levels were lower then, by about 100 parts per million.  This allows us to relate CO2 to climate change.  Higher climate sensitivities (i.e., stronger influence of carbon dioxide on climate) imply a colder LGM, because there is strong cooling from a weakened greenhouse effect.  We can therefore work backward from LGM temperatures to the implied climate sensitivity.

The physical theory we use in our paper is a climate model — a computer simulation of the climate system.  The University of Victoria “UVic” model has a three dimensional circulation model of the ocean, and a simplified “energy balance” model of the atmosphere.  (This means that it doesn’t simulate the atmospheric circulation directly, only the approximate transfer of energy.)  We simulate the LGM climate assuming different hypothetical climate sensitivities, and see which of the simulations agree with LGM temperatures, and which do not.  We assign probabilities to climate sensitivities in proportion to how closely they agree with the temperature data.
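
As a rough illustration of that last step (this is not the paper’s actual statistical machinery, which also handles structured data-model errors), the basic idea of weighting ensemble members by their fit to the data can be sketched as follows; every number below is invented.

```python
import numpy as np

ecs_grid = [1.0, 2.0, 3.0, 4.0, 5.0]              # hypothetical ensemble of sensitivities
proxy_temps = np.array([-2.1, -1.5, -3.0, -0.8])  # made-up proxy anomalies at four sites (deg C)
sim_temps = {                                     # made-up model output at the same sites
    1.0: np.array([-1.0, -0.7, -1.5, -0.4]),
    2.0: np.array([-1.9, -1.4, -2.8, -0.7]),
    3.0: np.array([-2.9, -2.1, -4.2, -1.1]),
    4.0: np.array([-3.8, -2.8, -5.6, -1.5]),
    5.0: np.array([-4.8, -3.5, -7.0, -1.9]),
}
sigma = 0.5                                       # assumed observational error (deg C)

# Gaussian likelihood of the proxy data under each simulation, normalized so the
# weights form a probability distribution over the ensemble of sensitivities.
log_like = np.array([-0.5 * np.sum(((proxy_temps - sim_temps[s]) / sigma) ** 2)
                     for s in ecs_grid])
weights = np.exp(log_like - log_like.max())
posterior = weights / weights.sum()
print(dict(zip(ecs_grid, posterior.round(3))))
```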

Below is a comparison of the proxy temperature data with the predictions of our best-fitting climate model simulation.

Some people object to using climate models in studies such as these, favoring what they call “empirical” or “data-driven” approaches.  In my opinion, there is no such thing as a “purely empirical” estimate of climate sensitivity.  Climate sensitivity is not an observable quantity.  It has to be inferred indirectly from quantities that can be observed.  The only way to do this is to use a physical model that links the observed quantities to the inferred quantity.

Analyses that are often passed off as empirical, such as direct comparisons of global average temperature and radiative forcing data, implicitly use climate models, even if the “model” is nothing more than a ratio or linear regression.   This usually amounts to using a zero-dimensional linear energy balance climate model in disguise, whose physical assumptions are much cruder than the model we use.
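
To make that concrete, here is a hedged sketch of what such an implicitly model-based “empirical” estimate amounts to. The observed warming and forcing numbers below are purely illustrative; the 3.7 W/m² figure is just the conventional reference value for the forcing from doubled CO2.

```python
# The implicit "model" behind a simple empirical estimate is the linear relation
# dT = (S / F_2x) * dF, i.e. sensitivity S = (dT / dF) * F_2x.
F_2x = 3.7    # W/m^2, conventional forcing from doubled CO2
dT = 0.8      # hypothetical observed warming, deg C
dF = 1.6      # hypothetical net forcing change, W/m^2

sensitivity = dT / dF * F_2x
print(round(sensitivity, 2))   # ~1.85 deg C per doubling under these toy inputs
```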

There is no escaping using a model; it’s just a matter of how realistic a model you want to use.  I do like simple climate models, and there can be advantages to using models that don’t make complex physical assumptions.  But we shouldn’t fool ourselves into thinking that simple models are not models at all, or that they necessarily produce superior estimates just because they make fewer assumptions.  I advocate working up and down the hierarchy of model complexity to improve understanding and test the robustness of conclusions.

Q:  You recommend working with a “hierarchy” of both simple and complex models.  Why did you choose to use the UVic model for this study?

The simple answer is that the lead author, Andreas Schmittner, chose UVic because he is one of the model developers.  Given a hammer, you then look for nails:  there are certain problems that particular models are suited to study.

We wanted to choose a somewhat complex model because simple models have already been studied, and because we wanted the ability to analyze spatial patterns in the data.  The simplest models don’t simulate regional climate changes very well.

On the other hand, UVic isn’t the most complex model either.  It has a simplified atmosphere, which is both an advantage and a disadvantage.  The disadvantage is that it has a very approximate representation of atmospheric processes.  The advantage is that this makes the simulations run faster.  It is less computationally expensive.  (There is a new version of the model, called OSUVic, that has a dynamic atmospheric circulation model, but it is 10 or 20 times slower.)

This speed advantage makes UVic a good choice for paleoclimate studies, when you have to run the model for long periods of simulated time.  It also makes UVic suitable for uncertainty analysis, because you can afford to run multiple simulations to study the model’s response to different assumptions (e.g., to different climate sensitivities). The more complex the model, the fewer simulations are affordable within a given computational budget, and the fewer uncertainties can be explored.

Within the class of simplified-atmosphere models, UVic has a better ocean model than most, so it’s a good choice when sea surface temperatures and changes in ocean circulation are important, as we believe they are during the LGM.  UVic also has terrestrial and ocean carbon cycle models, and so simulates the response of the carbon cycle to changes in climate and CO2 levels.  (For example, colder temperatures can cause vegetation patterns to change, which change the reflectivity of the planet’s surface and its temperature.)  We have not yet analyzed in detail what effect carbon cycle feedbacks have on our findings, but our model simulates them.

Q:  Why does the study consider climate data from so far in the past?

The main alternative is to consider historical temperature measurements from the last century or so.  Instrumental data is more accurate and numerous than indirect geological reconstructions of temperatures 20,000 years ago.  However, a limitation of instrumental data is that it’s only available for a relatively short period of time (about a century).  This makes it difficult to separate out the climate effect of CO2 from other causes of climate variability.  Also, the last century hasn’t seen as much climate change as was present during the Last Glacial Maximum; with the latter, there is a stronger climate signal to analyze.

Q:  How does this paper improve on existing research?

Other studies have estimated climate sensitivity from LGM temperatures before.  Our new study has several main features:

1. It uses a new reconstruction of LGM temperatures from geologic evidence.

2. It considers a number of climate model simulations (an “ensemble” of simulations), each with a different climate sensitivity, to evaluate the uncertainty in our ECS estimate.

3. It uses a more realistic climate model than has commonly been used in such perturbed physics ensemble studies.

4. It estimates ECS using the spatial pattern of LGM temperatures, as opposed to a single average temperature.

Earlier studies have done some of the above, but not the full combination.  We also have made some other methodological improvements on past work.

Q:  What did you find, and what is the significance of your findings?

The scientific community generally believes that the climate sensitivity is likely to lie between 2 and 4.5 degrees Celsius per doubling of atmospheric CO2, with a best estimate of 3 °C.  I will call this the “IPCC” or “consensus” estimate, since it is based on a review of the scientific literature found in the latest report of the Intergovernmental Panel on Climate Change (IPCC).

In our LGM study we find that ECS is “likely” (66% probability) to lie between 1.7 and 2.6 degrees, and “very likely” (90% probability) to lie between 1.4 and 2.8 degrees, with a best estimate of around 2.2 or 2.3 °C.  Our estimate of the warming effect of CO2 is therefore on the low end of, and less uncertain than, the currently accepted IPCC range.

Our low sensitivity is interesting, but within the range of previous studies.  What is probably more significant is the fact that our analysis seemingly rules out the higher sensitivities (above the IPCC “best” estimate of 3 °C) which other studies have been unable to exclude.  (Note the word “seemingly”:  more on that later.)

Q:  Looking at your Figure 2, it looks like your data implies the LGM was only about 2°C colder than the modern climate.  This seems like a small change.  How do you reconcile this with the large changes in ice sheets, sea level, and vegetation during the LGM?

First, Figure 2 is a little misleading in this context, because it shows the temperature averaged only over the locations where we have proxy data.  This doesn’t give the same number as the average global temperature. In our best-fitting climate model, if we average over the entire Earth’s surface, we get a global average surface air cooling of about 3.3°C. This is 33% less cooling than the ~5°C figure that people often cite.

Still, with only 3.3°C of global cooling, one might wonder whether it is possible to grow the large ice sheets that existed at that time, with the accompanying large fall in sea level. For that, global averages can be deceiving. You have to look at how cold the ice sheets are, not the planetary average. If we look specifically at land temperatures north of 40°N latitude, our model simulates a cooling of 7.7°C. We compared this to the scientific literature and found a study which reported that a cooling of 8.3 ± 1°C is sufficient to generate the LGM ice sheets. So our study appears consistent with glaciological constraints.

That being said, our LGM temperature reconstruction is quite different from what has been commonly assumed, and our study may prove inconsistent with other evidence that we have not yet considered. This is something that will have to be sorted out by further debate and research.

Q:  Why does this study find a lower estimate of climate sensitivity?

Probably the main reason is because our new temperature reconstruction suggests the Last Glacial Maximum was warmer than previously thought.  The warmer the glacial period, the weaker must be the cooling effect of diminished glacial CO2 levels.  We find that the cooling between today and the LGM is about 30-40% less than previously thought.  Roughly, this implies a 30-40% lower climate sensitivity.

If we take the IPCC 2–4.5 °C range and subtract a third from it to account for the weaker LGM cooling that we find, we get 1.3-3 °C.  This is similar to (but wider than) our 1.7–2.6 °C “likely” range.  This suggests that our new temperature reconstruction explains a lot of the difference between our climate sensitivity estimate and previous estimates.

(This isn’t an entirely fair comparison, though, since the IPCC range takes into account data other than the LGM temperatures we studied.  It also assumes that LGM cooling and climate sensitivity are strictly proportional.)
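
For readers who want to reproduce that back-of-envelope scaling, with all of the caveats just noted, the arithmetic is simply:

```python
# Scale the IPCC likely range by the reduced LGM cooling, assuming (crudely)
# that LGM cooling and climate sensitivity are strictly proportional.
ipcc_range = (2.0, 4.5)          # deg C per CO2 doubling
cooling_reduction = 1.0 / 3.0    # roughly a third less LGM cooling than previously thought

rescaled = tuple(round(x * (1 - cooling_reduction), 1) for x in ipcc_range)
print(rescaled)                  # -> (1.3, 3.0), comparable to the 1.7-2.6 "likely" range
```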

It is important to note that we are not the first to find climate sensitivities on the low end of the IPCC range. The paper that published most of the sea surface temperature data we use found a range of climate sensitivities between 1 and 3.6 °C, similar to but somewhat wider than our range, using simpler methods. Our contribution is to reanalyze this data with the addition of land data and some other sea surface data, studying the spatial pattern of temperature change in a formal model-based uncertainty analysis.

Q:  Does this study overturn the IPCC’s estimate of climate sensitivity?

No, we haven’t disproven the IPCC or high climate sensitivities.  At least, not yet.  This comes down to what generalizations can be made from a single, limited study.  This is why the IPCC bases its conclusions on a synthesis of many studies, not relying on any particular one.

While our statistical analysis calculates that high climate sensitivities have very low probabilities, you can see from the caveats in our paper (discussed further below), and my remarks in this interview, that we have not actually claimed to have disproven high climate sensitivities.  We do claim that our results imply “lower probability of imminent extreme climatic change than previously thought”, and that “climate sensitivities larger than 6 K are implausible”, which I stand by.  I do not claim we have demonstrated that climate sensitivities larger than 3 K are implausible, even though we calculate a low probability for them, because our study has important limitations.

It is rare that a single paper overturns decades of work, although this is a popular conception of how science works. Many controversial results end up being overturned, because controversial research, almost by definition, contradicts large existing bodies of research. Quite often, it turns out that it’s the controversial paper that is wrong, rather than the research it hopes to overturn. Science is an iterative process.  Others have to check our work.  We have to continue checking our work, too.  Our study comes with a number of important caveats, which highlight simplifying assumptions and possible inconsistencies.  These have to be tested further.

There is a great quote from an article in the Economist that sums up my feelings, as a scientist, about the provisional nature of science.

“In any complex scientific picture of the world there will be gaps, misperceptions and mistakes. Whether your impression is dominated by the whole or the holes will depend on your attitude to the project at hand. You might say that some see a jigsaw where others see a house of cards. Jigsaw types have in mind an overall picture and are open to bits being taken out, moved around or abandoned should they not fit. Those who see houses of cards think that if any piece is removed, the whole lot falls down.”

Most scientists I know, including myself, are “jigsaw” types.  We have to see how this result fits in with the rest of what we know, and continue testing assumptions, before we can come to a consensus about what’s really going on here.  The rest of the Economist article, by the way, is well worth reading.

Q:  Tell me about these caveats you keep mentioning.

Let’s get into details.  One major caveat has to do with unexplained differences in the climate sensitivities implied by land vs. ocean temperatures.  First, some of our results in graphical form:

This figure shows the uncertainty analysis for our climate sensitivity estimate.  The black curve is our main result, which is the result of our analysis of both land and sea surface temperature data.  The height of the curve is proportional to the probability that ECS has a given value.  You can see that most of the probability weight lies between 1 and 3 °C, concentrating around 2.2 or 2.3 °C.

There are, however, two other curves in this figure.  These show our estimates if we look at only the land data or only the ocean data.

Two things are immediately apparent from these curves.  First, the sea surface temperature data support lower climate sensitivities and the land surface temperature data support higher sensitivities.  There isn’t a great deal of overlap between these curves, so this suggests a possible inconsistency between the land and ocean analyses.  Second, when we combine the land and ocean data, the ocean data dominate the result (the black and blue curves are very similar), “overruling” what the land data have to say.  I think this is, at least in part, because there are more ocean data than land data.

This discrepancy between land and ocean is one of our biggest caveats.  (I originally mentioned this directly in the abstract of our paper, but it was cut in editing for space reasons, so you have to read the body of the paper to find this out.)  If the ocean and land analyses really are inconsistent with each other, which one should we trust?  Maybe neither one?  How can we reconcile these results?

Ultimately, the discrepancy occurs because at low sensitivities, the climate model predicts land temperatures that are warmer than the proxy data, but at high sensitivities, the model predicts ocean temperatures that are colder than the proxy data. Actually, at low sensitivities the model does generate cold land temperatures, but most of them don’t occur where the cold proxy temperatures are found.

There are many hypotheses for what’s going on here.  There could be something wrong with the land data, or the ocean data.  There could be something wrong with the climate model’s simulation of land temperatures, or ocean temperatures.  The magnitudes of the temperatures could be biased in some way.  Or, more subtly, they could be unbiased, on average, but the model and observations could disagree on where the cold and warm spots are, as I alluded to earlier.  Or something even more complicated could be going on.

Until the above questions are resolved, it’s premature to conclude that we have disproven high climate sensitivities, just because our statistical analysis assigns them low probabilities.  The uncertainty analysis is only as good as the data and models that go into it, and we need to continue studying the relationships between and quality of the data and model we used.

Q: Are there any other caveats to this study, other than the discrepancy between land and ocean-based estimates? 

Yes.  Obviously the quality of the reconstructed proxy temperature data is important, since a lot of our conclusions depend on our new reconstruction of past temperatures.  I defer to my coauthors about that, since it is outside my area of expertise.

Besides the data quality, the model quality is also important. One limitation of our study is that we assume that the physical feedbacks affecting climate sensitivity occur through changes in outgoing infrared radiation (e.g., through the greenhouse effect of water vapor). In reality, feedbacks affecting reflected visible light are also important (e.g., cloud behavior). Our study did not account for these feedbacks explicitly. Also, as I mentioned earlier, our simplified atmospheric model does not represent these feedbacks in a fully physical manner. Finally, there is the “state dependence” of climate sensitivity, which we have not fully addressed with our model. The extent to which these model limitations influence our findings depends on how much they affect the spatial pattern of temperature change considered in the data-model comparison. It is hard to know what the net effect would be without actually redoing the study with a more complex model. There is some evidence in the literature that LGM climate sensitivity estimates are model-dependent, and complex models may simulate the same amount of LGM cooling with different climate sensitivities.

There are also caveats related to the statistical data-model comparison, which could either bias our estimates or mischaracterize the level of certainty we find.

One way in which our climate sensitivity estimate could be biased is because of missing data.  We don’t have temperature proxies everywhere on the Earth’s surface, as you can see from the first figure in this interview.   Our study is based on only 435 temperature proxies, 322 ocean proxies and 113 land proxies.  The coverage is spotty.  The data we have may be present in colder- or warmer-than-average locations, compared to the locations where we don’t have data, due to choices scientists have made of where to sample, or due just to chance.  If so, our results could be biased toward too-low or too-high climate sensitivities.

Our calculated uncertainty range could also be either overconfident (too narrow) or underconfident (too wide).  In practice, studies are more often overconfident than underconfident.  The only physical uncertainty we formally considered is the uncertainty in climate sensitivity.  There can be other uncertainties, such as in the dust present during the LGM (which can have a non-greenhouse cooling effect), or in LGM wind stresses (which our model cannot calculate, due to its simplified atmosphere).  We did explore the robustness of our results to some of these assumptions, but it wasn’t a full-fledged uncertainty analysis, and it’s always possible we could have neglected some important source of uncertainty, which would tend to make our results overconfident.

There is a more subtle way in which our results could be overconfident.  Earlier I said that one of the advantages of this study is that it analyzes the spatial pattern of LGM temperatures, instead of just working with global averages or other highly aggregated data representations.  This allows us to compare the data to the model in a more refined way that uses the information contained in regional patterns of climate.  This tends to reduce uncertainty, because it uses more data, or more structured data.

On the other hand, this could cause us to accidentally rule out some possibilities, reducing uncertainty more than is warranted (overconfidence), if we don’t properly account for regional biases in the data or model.  For example, suppose that a particular climate sensitivity causes the model to agree with the data extremely well except in one particular location, where the model fits terribly because either the data or the model is particularly bad.  In that case, we might not want to be too hasty in rejecting that climate sensitivity, even though it superficially causes poor data-model agreement, if we don’t completely trust the data or model in every region.

You can think of this as an “outlier” problem.  It is well known in statistics that outliers can have undue influence on conclusions if the analysis doesn’t somehow account for their presence.  This is a specific case of a more general phenomenon that statisticians call “discrepancy”, which is a catch-all term for structured sources of error in the data or model.
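
As a toy illustration of that outlier point (this is not the treatment actually used in the paper), compare how a plain Gaussian misfit and a heavier-tailed error model respond to a single badly-fit site; the residuals below are invented.

```python
import numpy as np

residuals = np.array([0.2, -0.3, 0.1, 0.4, -0.2, 5.0])  # last site is an "outlier" (deg C)
sigma = 0.5                                              # assumed per-site error (deg C)

# Under a Gaussian misfit, the squared term lets one bad site dominate the total.
gaussian_cost = 0.5 * np.sum((residuals / sigma) ** 2)

# Under a heavier-tailed (Student-t-like) misfit, each term grows only
# logarithmically, so the outlier no longer overwhelms the well-fit sites.
heavy_tail_cost = np.sum(np.log1p(0.5 * (residuals / sigma) ** 2))

print(round(gaussian_cost, 1), round(heavy_tail_cost, 1))  # ~50.7 vs ~4.5
```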

It is very difficult to account for discrepancy in a statistical analysis unless you know ahead of time how and where the data and model have errors.  We explored the possibility that the model or data have some global bias as a simple representation of discrepancy, but it is still possible that our analysis is ruling out possibilities that it shouldn’t. I think there is some evidence this is the case in some of our side analyses exploring alternate assumptions. Some of those analyses produce extremely narrow uncertainty distributions, more narrow than I actually believe. I suspect this is due to a too-simple treatment of discrepancy in these situations, although I don’t know if this carries over to our main analysis.

Q: Some earlier studies have found that the sensitivity of the climate during the Last Glacial Maximum was different than the present-day climate sensitivity. Does this affect your results?

There are obvious differences between the LGM and modern climates. The LGM had major continental ice sheets in the Northern Hemisphere, for one. Differences in the state of the climate, or “state dependence”, can affect how the climate responds to CO2, e.g., by having more ice around that can be melted.

Q: Is the UVic model you used capable of addressing this issue?

The way the UVic model parameterizes feedbacks is not complex enough to show such state- and forcing-dependent nonlinearities.

That said, we don’t assume that doubling CO2 during the LGM would have the same effect on the climate as doubling CO2 today. But we do have to assume some common physics between the two periods of time in order to compare them. What we assume is that both periods of time obey the same relationship between temperature and the amount of infrared radiation that escapes to space — the same “outgoing longwave feedback”. This assumption may not be entirely true either, but it is better than assuming that the LGM and modern climates have exactly the same temperature response to CO2. To do better than this we would need a better climate model that is capable of simulating the various climate feedback processes at a more physical level.
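
To illustrate what “the same outgoing longwave feedback” means at the crudest textbook energy-balance level (this is not UVic’s actual parameterization), one can tie both the sensitivity and the LGM cooling to a single feedback parameter; the numbers below are hypothetical.

```python
# If outgoing radiation responds linearly to temperature with feedback parameter
# lam (W/m^2 per deg C), equilibrium requires dF = lam * dT in both climates, so
# the same lam links the LGM cooling and the warming from doubled CO2.
F_2x = 3.7          # W/m^2, conventional forcing from doubled CO2
lam = 1.6           # hypothetical feedback parameter, W/m^2 per deg C

ecs = F_2x / lam                 # sensitivity implied by that feedback (~2.3 deg C)
lgm_forcing = -3.0               # hypothetical net LGM forcing change, W/m^2
lgm_cooling = lgm_forcing / lam  # LGM temperature change implied by the same feedback
print(round(ecs, 2), round(lgm_cooling, 2))
```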

Q:  Given all these caveats, how robust are the results of your study?

I think our lower climate sensitivity estimate will hold up, provided the reconstructed LGM temperature data on which it is based hold up.  Our finding of a warmer LGM will prove controversial among the scientific community and the data will be subject to much scrutiny.  It remains to be seen whether this temperature data is consistent with everything else we know about that period of time (its climate, its vegetation, the size of its ice sheets, etc.).

I am less confident that our narrow uncertainty range really does exclude climate sensitivities above 3 °C.  This is something that could be overturned by future work.  It certainly would stimulate a lot of rethinking among scientists if the result isn’t overturned.  I can’t say I’m rooting strongly for either outcome, though.  I’d be pleased to see our findings confirmed, but if they’re disproven, I’ll learn something from the way in which they are disproven, and this will improve my own research.  Who knows, maybe I will disprove them myself.

Q: Can you briefly summarize which aspects of the study you and your coauthors contributed to?

I developed and conducted the statistical data-model comparison, in collaboration with lead author Andreas Schmittner. This corresponds to Figure 3 of the paper and most of sections 5, 6, and 7 of the supporting online material. Andreas designed and carried out the model simulations. Other coauthors worked on the temperature reconstructions, the assumptions about dust forcings, etc. I can’t tell you the exact partitioning of responsibility because I entered this project relatively late, after all the proxy reconstructions and model simulations had been completed.

Q: Your paper got a lot of positive attention from climate skeptic blogs like “Watts Up With That?”. What’s your reaction to all that? 

I haven’t followed these blogs too closely, but I skimmed the comments on a few that were pointed out to me.  The responses I saw were fairly predictable, veering from uncritical acceptance of our findings, to uncritical dismissal of any study that involves computer models or proxy data. But some comments did seem to find an appropriate middle ground of, well, skepticism.

Q: It’s a little funny, to me, that your paper was receiving such positive comments from skeptics while many of those same skeptics also support claims by Richard Lindzen and Roy Spencer purporting to find an essentially insensitive (~1°C or less) or self-stabilizing climate. Does your paper support such incredibly low values for ECS?

Our analysis found a lower bound of 1.35 °C for climate sensitivity (less than 5% probability of being below this bound). We tried a range of statistical and physical assumptions, and found sensitivities as low as 1.15 °C, and as high as 4.65 °C (if we analyze the land data). I don’t think sensitivities lower than our bound are consistent with either our study or paleoclimatic evidence in general.

Q: Any other thoughts on the skeptics’ reception of your paper?

One blog did surprise me. World Climate Report doctored our paper’s main figure when reporting on our study.  This manipulated version of our figure was copied widely on other blogs.  They deleted the data and legends for the land and ocean estimates of climate sensitivity, and presented only our combined land+ocean curve:

 

Upper: World Climate Report’s manipulated image removing the Land and Ocean data.

 Lower: The actual figure as it appears in Science, with the Land and Ocean curves included.

They did note that their figure was “adapted from” ours, and linked to our paper containing the real figure.  On the other hand, Pat Michaels duplicated this doctored version of our figure again in an article at Forbes, and didn’t mention at all that it had been altered.    (A side note with respect to the Forbes article:  Science didn’t “throw a tantrum” about posting our manuscript on the web.  They never contacted us about that.  I took it down myself as a precaution, due to the journal’s embargo policy.)

I find this data manipulation problematic.  When I created the real version of that figure, it occurred to me that it would be reproduced in articles, presentations, or blog posts.  Because I find the difference between our land and ocean estimates to be such an important caveat to our work, I made sure to include all three curves in the figure, so that anyone reproducing it would have to acknowledge these caveats.  I didn’t anticipate that anyone would simply edit the figure to remove our caveats.  I can’t say why they deleted those curves.  If you were to ask them, I’d guess they’d say it was to “clarify” the figure by focusing attention on the main result we reported.

Regardless of their intent, I find the result of their figure manipulation to be very misleading, especially since their blog post strongly implies that our study eliminates the “fat right tail” of the climate sensitivity distribution, and has proven the IPCC’s climate sensitivity range to be incorrect.  Our land temperature curve, which they deleted, undermines their implication.  They intentionally took our figure out of the context in which it was originally presented, a form of “selective quotation” which hides data that does not support their interpretation.

In summary, I find World Climate Report’s behavior very disappointing and hardly compatible with true skeptical inquiry.  I can only imagine how they would respond if they found a climate scientist intentionally deleting data from a figure, especially if they deleted data that undermined the point of view they were presenting.

Q: What are some other topics in climate science that you’re interested in? What’s next for you?

I’d like to follow up on this LGM project, looking into the caveats in more detail. This includes expanding the uncertainties considered, upgrading the statistical treatment of data-model discrepancies, and finding a way to handle state-dependence better. I am also interested in estimating climate sensitivity using data from periods of time in the Earth’s history other than the LGM. Another question I’d like to study more is whether slow carbon-cycle feedbacks can amplify global warming beyond what the direct climate feedbacks would predict (e.g., if warming can cause the release of additional carbon from permafrost or ocean clathrates). Currently and in the near future, though, I am starting to focus more on ice sheet dynamics and sea level rise.

Beyond uncertainty quantification, I’m moving toward quantifying learning rates. How quickly will we be able to reduce our uncertainties in the future, by acquiring more data or better synthesizing existing data? What effect will this have on climate policy? Finally, I’d like to learn more about climate adaptation policy and how uncertainties affect adaptive decision making.

Thanks very much to Nathan, for talking to Planet 3.0 about his latest paper. We hope to check in again with him about his future research.

 

UPDATE: RC weighs in, and links to other valuable discussion including a couple by James.

Comments:

  1. Pingback: A new LGM reconstruction, with implications for climate sensitivity | The Way Things Break

  2. Pingback: Patrick Michaels: Serial Deleter of Inconvenient Data

  3. Pingback: Crimes Against Humanity: Pat Michaels – Serial Deleter of Inconvenient Data « The Climax

  4. Pingback: Crimes Against Humanity: Pat Michaels: Serial Deleter of Inconvenient Data « The Climax

  5. Pingback: Pat Michaels: Serial Deleter of Inconvenient Data « Climate Denial Crock of the Week

  6. Pingback: Patrick Michaels Loves to Delete Inconvenient Data | Planetsave

  7. Pingback: Media Misleads On Flawed Climate Sensitivity Study: Avoiding “Drastic Changes Over Land” Requires Emissions Cuts ASAP

  8. Pingback: Ice age constraints on climate sensitivity

  9. Pingback: Dot Earth Blog: More on the ‘Sensitive’ Climate Question - World Bad News : World Bad News

  10. Comment: I’m not sure all at Planet 3.0 noticed Nathan Urban’s # 60 here:

    I am a coauthor on a manuscript in revision, Olson et al., JGR-Atmospheres (2011), which has a climate sensitivity analysis from modern (historical instrumental) data, using a similar UVic perturbed-physics ensemble approach. It finds a little under 3 K for ECS (best estimate).

    This result should not be surprising given that the lower LGM sensitivity is tied to finding the LGM not so cold after all.

    A question (hoping Nathan Urban is still around):

    You are not in a physics department but instead at Princeton’s Program in Science, Technology, and Environmental Policy, Woodrow Wilson School of Public and International Affairs. From my viewpoint you are still doing physics – planetary physics, a fine and challenging field. Do you still see yourself as a physicist? And in view of your experience so far, would you encourage a young physicist to go into planetary physics?

    • Hi, sorry I haven’t responded, I’ve been busy this past week.

      I do consider myself a physicist, although I usually tell people I’m a climate scientist when they ask my profession. And yes, I think planetary physics is, as you say, a fine and challenging field. Like many scientific subfields, however, it can be hard to find jobs.

  11. Pingback: Daily Forecast: Today’s Online Buzz on Environmental Issues « Climate Task Force

  12. Pingback: More on the 'Sensitive' Climate Question - NYTimes.com

  13. Pingback: Climatemonitor

  14. Pingback: Nieuwe studie klimaatgevoeligheid op basis van laatste IJstijd | Klimaatverandering

  15. Pingback: Other Voices: Life on Planet 3.0 - NYTimes.com

  16. Pingback: Emission Trading - Page 149

  17. Pingback: Rassicurazioni | Svoogle News

  18. Pingback: Climate Sensitivity Still Huge Concern, Despite Shallow, Incorrect Media Reports | Planetsave

  19. Pingback: Rassicurazioni « Oggi Scienza

  20. Pingback: Miniciclot. 28.11.2011 » Ocasapiens - Blog - Repubblica.it

  21. Question — the peaks in Fig. 3 — if I understand this those double peaks are peculiarities of the fairly coarse model used?

    It’s the model coughing up peaks like big blocky pixels, not an attempt to represent some fine-grained process varying across that curve.

    If that even makes sense. Trying to find simple words.

    Also, I came across “Probabilistic hindcasts and projections of the coupled climate …”
    http://www.princeton.edu/~nurban/pubs/moc-projections.pdf

    and wonder if Dr. Urban could be persuaded to do a review of his own work and say what’s going forward from previous publications, and what he’s not pursuing?

    • I think some of the multiple peaks are interpolation artifacts. We weren’t able to run the model at every climate sensitivity, so we had to statistically interpolate the output between the runs we do have. The ECS peaks are near (but don’t exactly coincide with) the ECS values where we ran the model.

      I’ve already done a review of the MOC projections paper on the Azimuth blog. See part 3 and part 4. (Parts 1 and 2 have already been reposted here on Planet 3.0.) I say a little about my current research plans at the end of this interview above.

  22. Pingback: Nova in Moderation « itsnotnova

  23. Pingback: Schmittner et al. (2011) on Climate Sensitivity - the Good, the Bad, and the Ugly

  24. I think the formatting is finally all cleaned up. Thanks, Steve Bloom, for catching some of the problems. I don’t know why WordPress didn’t like the initial formatting, but it seems to look okay now.

    I think it might be useful to do a follow up on this touching on some of the potential problems with the study, and giving Nate and any of his coauthors a chance to respond. Gavin Schmidt mentioned that he might be doing a post on this paper, so I don’t want to have the authors spread too thinly in trying to respond.

  25. Pingback: Climate Sensitivity Paper « Azimuth

  26. In email, a reader asks:

    “There are obvious differences between the LGM and modern climates. The LGM had major continental ice sheets in the Northern Hemisphere, for one. Differences in the state of the climate, or “state dependence”, can affect how the climate responds to CO2, e.g., by having more ice around that can be melted.”

    If I understand that correctly, he is saying that the energy used to melt ice is energy that might otherwise raise temperatures.
    Is that right?

    Also, if he is saying the LGM was less cold than previously thought, doesn’t this imply that the ice sheets will melt with less warming than previously thought?

    • “If I understand that correctly, he is saying that the energy used to melt ice is energy that might otherwise raise temperatures.”

      Well, I was really thinking of ice albedo feedbacks. If you apply a given greenhouse effect to a big ice sheet, you’ll get some greenhouse warming plus some additional warming due to replacing reflective ice with radiatively absorptive soil or vegetation. Without the ice sheet, you just get the greenhouse warming.

      Also, if he is saying the LGM was less cold than previously thought, doesn’t this imply that the ice sheets will melt with less warming than previously thought?

      All else equal, yes. So even if we get less warming, we don’t necessarily get less sea level rise. On the other hand, there is a lot of state dependence of the sea level rise response for similar reasons as discussed above: you’ll get different responses if you apply the same warming to a big continental ice sheet in a glacial climate vs. a smaller ice sheet in an interglacial climate.

  27. Just to beat this horse completely to death, that last paragraph —

    ‘”Hence, drastic changes over land can be expected,” he said. “However, our study implies that we still have time to prevent that from happening, if we make a concerted effort to change course soon.”‘

    — is precisely wrong in its implication. If indeed the study is correct that less CO2 will get us the same big changes (temperature excepted), we are under more pressure.

    • I’m sorry, I was thinking about that wrong, although I think I’m half-right about the implication (meaning that the pressure is more or less as before). Taking it from scratch, the CO2 relationship to non-temp climate factors wouldn’t change directly. The direct temperature effects, OTOH, would have to be less, e.g. atmospheric water vapor content that scales directly to temperature. Probably other things as well, but my knowledge grows thin. OTOH it sounds as if the sensitivity of climate to non-CO2 forcings and feedbacks might be greater, but on that I’m really over my head.

  28. Significantly, following are the *last* paragraphs of a longish press release. This is what’s known in journalism as “burying the lede.”

    ‘”When we first looked at the paleoclimatic data, I was struck by the small cooling of the ocean,” Schmittner said. “On average, the ocean was only about two degrees (Celsius) cooler than it is today, yet the planet was completely different — huge ice sheets over North America and northern Europe, more sea ice and snow, different vegetation, lower sea levels and more dust in the air.

    ‘”It shows that even very small changes in the ocean’s surface temperature can have an enormous impact elsewhere, particularly over land areas at mid- to high-latitudes,” he added.

    ‘Schmittner said continued unabated fossil fuel use could lead to similar warming of the sea surface as reconstruction shows happened between the Last Glacial Maximum and today.

    ‘”Hence, drastic changes over land can be expected,” he said. “However, our study implies that we still have time to prevent that from happening, if we make a concerted effort to change course soon.”‘

  29. Hmm, now having looked around at some of the media coverage, this passage from the BBC article seems typical of the mass confusion engendered by the paper:

    ‘Lead author Andreas Schmittner from Oregon State University, US, explained that by looking at surface temperatures during the most recent ice age – 21,000 years ago – when humans were having no impact on global temperatures, he, and his colleagues show that this period was not as cold as previous estimates suggest.

    ‘”This implies that the effect of CO2 on climate is less than previously thought,” he explained.’

    Of course Andreas knows that global average temperature is only one of a number of effects of CO2 on climate, and that it’s long been clear that it’s other ones (mainly related to circulation and hydrology, to say nothing of ocean chemistry) not addressed by this study that we have to worry about (at least in the next century or two), but now attentive Beeb readers who aren’t familiar with this area of the science (i.e., nearly all of them) will have a different impression.

    For me, the important take-home point of the study is that it may be possible for even a relatively small change in CO2 to have such large effects, which in a sense is a finding of *higher* sensitivity of a different sort.

    But I expect that I have very little company. Meh.

    • I have been thinking along these lines myself, and I tend to agree with your conclusion.

      If the temperature sensitivity goes down vs the prior but the climate sensitivity to temperature goes up vs the prior, we are not any better off than we were.

  30. Really excellent interview, and much thanks to both Nathan and Things.

    Erratum: The first figure seems to be missing.

    Re WCR, perhaps Nathan isn’t familiar with their history. A few years back, I (a climate science amateur, albeit a fairly well-read one) did a little experiment to see if I could spot the central lie in WCR posts. After a dozen or so successes in a row, I felt that the point had been proven. IOW, it has always been thus with WCR, as one might surmise from their funding sources (the fossil fuel industry for those who don’t know). It’s a really sad commentary on our society that there’s an audience willing to both pay for and consume this sort of garbage, although to be fair it’s generally higher-quality garbage than the average WUWT post (although note that WCR material gets reposted there often enough). OTOH the WCR authors (Pat Michaels and Chip Knappenberger) clearly know exactly what they’re doing, which is more than one can say for many WUWT contributors.

