News + Opinions

Vulcan Climate Modeling Team Marks Exciting Milestone Using Machine Learning to Improve Accuracy

 
Jul 20 2021
Oli Watt-Meyer from the Machine Learning group on Vulcan's Climate Modeling team tells us about some exciting progress they've made recently

What is the climate modeling team working on, big picture?

Our team is working to make climate models faster and better. Climate models encode mathematical representations of the Earth’s systems to predict how the atmosphere and ocean will evolve from current conditions. In other words, they simulate weather patterns – both short-term and long-term. We can use climate modeling to understand what kinds of changes we should plan for as the climate changes.

Our team is focusing our efforts on improving the so-called “FV3GFS” model, which currently powers global weather forecasts in the United States. We are making it faster by redesigning the model to run on modern supercomputers. To do that, we are rewriting the existing Fortran source code in a newly developed Python-based programming language so that it can efficiently use the latest computing hardware.

We are making it more accurate by using machine learning to improve the representation of the atmosphere in the model. Right now climate models are good at projecting warming trends, but less good at projecting precipitation – how much rain or snow a region should expect – 10 or 50 years from now.
 

What part of that are you focusing on?

I work on the machine learning (“make it better”) side of the team. Climate models range from coarse to fine resolution; the finer the resolution, the more accurate the results tend to be. High-resolution models are expensive to run, so we’re looking for a way to leverage their accuracy for the more broadly used coarse models.

We use reference datasets from ultra-high-resolution simulations to train machine learning algorithms that work in concert with the conventional climate model to make it more accurate. Specifically, we’re trying to improve precipitation projections and reduce the uncertainty in how rainfall will change under global warming.
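One step implied here is coarse-graining: averaging the fine-grid reference output down to the coarse model’s grid so the two can be compared. A toy sketch of that idea (the real pipeline and the model’s cubed-sphere grid are far more involved; the function name is hypothetical):

```python
import numpy as np

def coarsen(field, factor):
    """Block-average a 2-D fine-grid field down by `factor` in each dimension."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    # Split each axis into (coarse cells, fine cells per coarse cell), then average
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

fine = np.arange(16.0).reshape(4, 4)   # stand-in for a high-resolution field
coarse = coarsen(fine, 2)              # 2x2 block means on the coarse grid
```

The block means on the coarse grid then serve as the “truth” that the machine learning targets.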
 

How’s it going? What progress have you made recently? 

It’s going well! We’ve just published a paper on an exciting new result. In this work, we run a coarse-resolution simulation for a historical period (in this case, the year 2015) and track how the simulation differs from what actually occurred in the atmosphere over that time. We then train a machine learning algorithm that can gently “nudge” the climate model to stay on a track that is closer to reality.

When we rerun the climate model with the machine learning adding its correction at each snapshot in time, it is able to make more accurate forecasts of large-scale weather patterns on short (3-7 day) time scales and also predicts a pattern of rainfall over the year-long run that is closer to what actually occurred in 2015.
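The workflow described above – record the corrections a nudged run needs, train on them, then apply the learned correction in a free-running simulation – can be sketched with a toy one-variable “model” (everything here, including the simple least-squares fit standing in for the actual machine learning algorithm, is a hypothetical illustration, not the real FV3GFS code):

```python
import numpy as np

rng = np.random.default_rng(0)

def model_step(state):
    # Toy coarse model with a systematic bias that the ML should learn to correct
    return 0.9 * state + 0.5

def true_step(state):
    # Toy "reality" that the nudged run is pulled toward
    return 0.9 * state + 0.1

# 1) Nudged run: at each step, record the state and the correction needed
states, corrections = [], []
s = 1.0
for _ in range(200):
    pred = model_step(s)
    truth = true_step(s)
    states.append(s)
    corrections.append(truth - pred)          # the nudging increment
    s = truth + 0.05 * rng.standard_normal()  # resume near the observed trajectory

# 2) "Train": least-squares fit of correction as a function of state
X = np.vstack([states, np.ones(len(states))]).T
coef, *_ = np.linalg.lstsq(X, np.array(corrections), rcond=None)

# 3) Free-running model, with and without the learned correction at each step
s_corr = s_raw = s_true = 1.0
for _ in range(100):
    s_corr = model_step(s_corr) + (coef[0] * s_corr + coef[1])
    s_raw = model_step(s_raw)
    s_true = true_step(s_true)
# The corrected run stays much closer to the "true" trajectory than the raw model
```

The real version corrects three-dimensional temperature and humidity fields with a random forest rather than a scalar with a linear fit, but the structure of the loop is the same.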
 

Why is that exciting?

This is the first time a nonlinear machine learning algorithm has been used to make this kind of correction in a realistic climate model. A lot of previous work has focused on idealized models, such as planets that are fully ocean-covered.

The fact that we can use a machine learning model within a realistic climate model and do stable year-long simulations is a key first step, and not an easy task! That we are also able to make notable improvements in the simulation accuracy of some variables is really exciting.
 

What difference would this make in terms of responding to climate change?

Long-term, the goal is that our machine learning methods get incorporated into the leading climate models that are used to make projections about climate change. If we are able to achieve our goal of reducing the uncertainty in precipitation forecasts on timescales from seasonal out to 20 to 50 years into the future, this would be tremendously useful for planners in all kinds of areas.

Planning for droughts, planning for floods, planning for heat waves – all of this important work would be more effective if decision-makers had better information on what to expect.
 

Where do you go from here?

We’ve got a lot of future directions that we’re excited about! One important path is applying our current method to a higher-resolution climate model – can the machine learning still provide improvements when the baseline model you’re starting out with is already pretty good? We are also training our machine learning models on output from an ultra-high-resolution simulation instead of on observational data.

The advantage of this strategy is that it lets us train a machine learning model that will perform better in a wide range of climates, not just in present-day conditions. Finally, we have been experimenting with different machine learning architectures, such as neural networks, which provide efficiency advantages and in some cases can be more accurate than the random forest algorithm we have primarily used to date.
 

Who are you working on this with?

We collaborate closely with scientists at NOAA’s Geophysical Fluid Dynamics Laboratory, and two members of our team work on-site there. This lab developed the engine of the model that we are using and trying to improve.

They also have access to a supercomputer and provide computational resources for us—including doing the very high-resolution reference simulations. The codebase for the climate model we are working with is provided by NOAA’s Environmental Modeling Center.
 

How can people learn more?

For some general discussion of our project, and recent news and announcements, I encourage folks to check out our website.

Within the scientific community, we regularly present our work at conferences such as the American Geophysical Union and American Meteorological Society annual meetings. The publication mentioned above is available online.

An exciting part of our work is that it is all open-source and open-development. You can find an index of our code repositories on GitHub.
 
Oli Watt-Meyer