New tool puts a consistent value on experts’ uncertainty about climate change models

Science can flourish when experts disagree, but in the governmental realm uncertainty can lead to inadequate policy and preparedness. When it comes to climate change, it can be OK for computational models to differ on what future sea levels will be. The same flexibility does not exist for determining the height of a seawall needed to protect people from devastating floods.

For the first time in the climate field, a Princeton University researcher and collaborators have combined two techniques long used in fields where uncertainty is coupled with a crucial need for accurate risk assessment — such as nuclear energy — to bridge the gap between projections of Earth’s future climate and the need to prepare for it. Reported in the journal Nature Climate Change, the resulting method consolidates climate models and the range of opinions that leading scientists have about them into a single, consistent set of probabilities for future sea-level rise.

“Scientists working in climate change know that the models used throughout climate research have shortcomings. At the same time, policymakers need to know the future of sea-level rise, and they need as robust a prediction as we can give,” said Michael Oppenheimer, Princeton’s Albert G. Milbank Professor of Geosciences and International Affairs and the Princeton Environmental Institute and first author of the paper.

“For someone trying to prepare their city or coastline, how much the ocean will rise is not an abstract question,” Oppenheimer said. “They need a number that’s not too high or too low. Lives and dollars are at risk.”

Climate projections attempt to capture immense, complicated phenomena that are dependent on various shifting factors — natural and man-made — and complex interactions between oceans, ice and land. Ice in particular is “notoriously difficult” to model, Oppenheimer said. Giving statistically accurate and informative assessments of a model’s uncertainty is a daunting task, and an expert’s scientific training for such an estimation may not always be adequate.

Oppenheimer and his co-authors use a technique known as “structured expert judgment” to put an actual value on the uncertainty that scientists studying climate change have about a particular model’s prediction of future events such as sea-level rise. Experts are each “weighted” for their ability to quantify uncertainty about the situation at hand, as gauged by their knowledge of their respective fields. More consideration is given to experts with higher statistical accuracy and informativeness. A second technique, called probabilistic inversion, then adjusts a climate model’s projections to reflect the experts’ judgment of how probable those projections are.
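The best-known version of this weighting scheme is Roger Cooke’s “classical model,” in which experts first answer calibration questions whose true values are known; their 5th/50th/95th-percentile assessments then earn a calibration score (statistical accuracy) and an information score (narrowness relative to a background range), and the product of the two sets each expert’s weight. The Python sketch below illustrates that idea under simplifying assumptions: the toy panel, the uniform background measure, and the piecewise-linear pooling of the experts’ sea-level-rise distributions are all illustrative choices, not the authors’ implementation, and the probabilistic-inversion step is omitted.

```python
import numpy as np
from scipy.stats import chi2

# Probability a well-calibrated expert's 5/50/95 assessment puts on each
# of the four inter-quantile bins.
P_BINS = np.array([0.05, 0.45, 0.45, 0.05])

def calibration_score(quantiles, realizations):
    """Statistical accuracy: do the known answers to the calibration
    questions land in the expert's inter-quantile bins at 5/45/45/5 rates?
    quantiles: (n_questions, 3) array of 5th/50th/95th percentiles.
    realizations: (n_questions,) known true values."""
    n = len(realizations)
    bins = np.array([np.searchsorted(q, r) for q, r in zip(quantiles, realizations)])
    s = np.bincount(bins, minlength=4) / n            # empirical bin rates
    nz = s > 0
    # Likelihood-ratio statistic 2N*KL(s||p); asymptotically chi-square, 3 dof.
    stat = 2.0 * n * np.sum(s[nz] * np.log(s[nz] / P_BINS[nz]))
    return chi2.sf(stat, df=3)                        # high = well calibrated

def information_score(quantiles, lo, hi):
    """Informativeness: relative information of the expert's assessments
    against a uniform background on [lo, hi]; narrower bins score higher."""
    scores = []
    for q in quantiles:
        widths = np.diff(np.concatenate([[lo], q, [hi]])) / (hi - lo)
        scores.append(np.sum(P_BINS * np.log(P_BINS / widths)))
    return float(np.mean(scores))

def performance_weights(experts, realizations, lo, hi):
    """Weight = calibration x information, normalized over the panel."""
    w = np.array([calibration_score(q, realizations) * information_score(q, lo, hi)
                  for q in experts])
    return w / w.sum()

def mixture_cdf(x, assessments, weights, lo, hi):
    """Weighted mixture of piecewise-linear expert CDFs for the target
    variable -- the pooled distribution handed to decision-makers."""
    ps = np.array([0.0, 0.05, 0.5, 0.95, 1.0])
    return sum(w * np.interp(x, np.concatenate([[lo], q, [hi]]), ps)
               for q, w in zip(assessments, weights))

if __name__ == "__main__":
    # Three toy experts, three calibration questions with known answers.
    realizations = np.array([4.0, 7.0, 3.0])
    experts = [
        np.array([[2.0, 4.5, 6.0], [4.0, 6.0, 9.0], [1.0, 3.5, 5.0]]),    # sharp, accurate
        np.array([[0.0, 5.0, 10.0], [1.0, 6.0, 11.0], [0.0, 4.0, 9.0]]),  # accurate but vague
        np.array([[6.0, 8.0, 10.0], [9.0, 11.0, 13.0], [5.0, 7.0, 9.0]]), # biased high
    ]
    w = performance_weights(experts, realizations, lo=-5.0, hi=20.0)
    print("panel weights:", np.round(w, 3))

    # Hypothetical 5/50/95 assessments of sea-level rise by 2100 (metres).
    slr = [np.array([0.3, 0.6, 1.2]),
           np.array([0.2, 0.5, 1.5]),
           np.array([0.5, 1.0, 2.0])]
    print("pooled P(rise < 1 m):", round(mixture_cdf(1.0, slr, w, lo=0.0, hi=3.0), 3))
```

In the full method, a pooled distribution produced along these lines would serve as the target that probabilistic inversion tries to match by adjusting the climate model’s own projections.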

Structured expert judgment has been used for decades in fields where scenarios have high degrees of uncertainty, most notably nuclear-energy generation, Oppenheimer explained. Similar to climate change, nuclear energy presents serious risks, the likelihood and consequences of which — short of just waiting for them to occur — need to be accurately assessed.

When it comes to climate change, however, the procedure by which experts assess the accuracy of models projecting potentially ruinous outcomes for the planet and society is surprisingly informal, Oppenheimer said.

When the Intergovernmental Panel on Climate Change (IPCC) — an organization under the auspices of the United Nations that periodically evaluates the effects of climate change — tried to determine the ice loss from Antarctica for its Fourth Assessment Report, released in 2007, discussion among the authors largely occurred behind closed doors, said Oppenheimer, who has long been involved with the IPCC and served as an author of its Assessment Reports.

In the end, the panel decided there was too much uncertainty in the Antarctic models to say how much ice the continent would lose over this century. But no traceable, consistent procedure led to that conclusion, Oppenheimer said. As models improved, the Fifth Assessment Report, released in 2013, was able to provide numerical estimates of future ice loss, but these were still based on the informal judgment of a limited number of participants.

Claudia Tebaldi, a project scientist at the National Center for Atmospheric Research, said the researchers propose a much more robust method for evaluating the growing volume of climate-change data than having experts come up with “a ballpark estimate based on their own judgments.”

“Almost every problem out there would benefit from some approach like this, especially when you get to the point of producing something like the IPCC report where you’re looking at a number of studies and you have to reconcile them,” said Tebaldi, who is familiar with the research but had no role in it. “It would be more satisfying to do it in a more formal way like this article proposes.”

The implementation of the researchers’ technique, however, might be complicated, she said. Large bodies such as the IPCC and even individual groups authoring papers would need a collaborator with the skills to carry it out. But, she said, if individual research groups adopt the method and demonstrate its value, it could eventually rise up to the IPCC Assessment Reports.

For policymakers and the public, a more transparent and consistent measurement of how scientists perceive the accuracy of climate models could help instill more confidence in climate projections as a whole, said Sander van der Linden, a postdoctoral researcher and lecturer in public affairs and director of Princeton’s Social and Environmental Decision-Making (SED) Lab, who studies public policy from a behavioral-science perspective. With no insight into how climate projections are judged, the public could take away from situations such as the IPCC’s uncertain conclusion about Antarctica in 2007 that the problems of climate change are inconsequential, or that scientists do not know enough to justify the effort (and possible expense) of a public-policy response, he said.

“Systematic uncertainties are actually forms of knowledge in themselves, yet most people outside of science don’t think about uncertainty this way,” said van der Linden, who is familiar with the research but had no role in it. “We as scientists need to do a better job at promoting public understanding of uncertainty. Thus, in my opinion, greater transparency about uncertainty in climate models needs to be paired with a concerted effort to improve the way we communicate with the public about uncertainty and risk.”

Oppenheimer worked with co-authors Christopher Little, a climate scientist at Atmospheric and Environmental Research Inc. in Massachusetts and a former postdoctoral research associate in the Program in Science, Technology and Environmental Policy in Princeton’s Woodrow Wilson School of Public and International Affairs; and Roger Cooke, a professor at the University of Strathclyde in Scotland and Resources for the Future in Washington who is renowned for his research on structured expert judgment.

The latest paper stems from research Oppenheimer and Little published in 2013 in Nature Climate Change and the Proceedings of the National Academy of Sciences. These earlier papers proposed methods for more consistently integrating ice loss from Antarctica and Greenland into sea-level rise projections.

The current paper, “Expert judgement and uncertainty quantification for climate change,” was published online April 27 by Nature Climate Change.

