Google’s new weather prediction system combines AI with traditional physics

While new machine-learning techniques that predict weather by learning from years of past data are extremely fast and efficient, they can struggle with long-term predictions. General circulation models, on the other hand, which have dominated weather prediction for the last 50 years, use complex equations to model changes in the atmosphere and give accurate projections, but they are exceedingly slow and expensive to run. Experts are divided on which tool will be most reliable going forward. Google's new model, however, attempts to combine the two.

“It’s not sort of physics versus AI. It’s really physics and AI together,” says Stephan Hoyer, an AI researcher at Google Research and a coauthor of the paper. 

The system still uses a conventional model to work out some of the large atmospheric changes required to make a prediction. It then incorporates AI, which tends to do well where those larger models fall flat—typically for predictions on scales smaller than about 25 kilometers, like those dealing with cloud formations or regional microclimates (San Francisco’s fog, for example). “That’s where we inject AI very selectively to correct the errors that accumulate on small scales,” Hoyer says.
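The division of labor Hoyer describes can be sketched in a few lines: a conventional solver advances the large-scale state, then a learned term corrects the small-scale residual. This is a hypothetical toy illustration, not NeuralGCM's actual code; the functions, grid, and linear "network" here are stand-ins invented for the example.

```python
def physics_step(state, dt=1.0):
    # Toy stand-in for a general circulation solver: rotate the grid
    # (advection-like transport) and apply slight damping.
    shifted = state[-1:] + state[:-1]
    return [x * (1.0 - 0.01 * dt) for x in shifted]

def learned_correction(state, weights):
    # Stand-in for the neural network: a learned linear map over the
    # grid, meant to absorb small-scale errors the solver can't resolve.
    return [sum(w * x for w, x in zip(row, state)) for row in weights]

def hybrid_step(state, weights, dt=1.0):
    # Physics handles the large scales; the ML term is added on top
    # to correct errors that would otherwise accumulate each step.
    phys = physics_step(state, dt)
    corr = learned_correction(state, weights)
    return [p + c for p, c in zip(phys, corr)]

state = [1.0, 2.0, 3.0, 4.0]
weights = [[0.0] * 4 for _ in range(4)]  # zero correction, i.e. untrained
out = hybrid_step(state, weights)
```

With zero weights the hybrid step reduces to the pure physics step; training would fit `weights` so the corrected forecast tracks observations better than the solver alone.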

The result, the researchers say, is a model that can produce quality predictions faster and with less computational power. They say NeuralGCM is as accurate as one- to 15-day forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF), a partner organization in the research. 

But the real promise of technology like this is not in better weather predictions for your local area, says Aaron Hill, an assistant professor at the School of Meteorology at the University of Oklahoma, who was not involved in this research. Instead, it’s in larger-scale climate events that are prohibitively expensive to model with conventional techniques. The possibilities could range from predicting tropical cyclones with more notice to modeling more complex climate changes that are years away. 

“It’s so computationally intensive to simulate the globe over and over again or for long periods of time,” Hill says. That means the best climate models are hamstrung by the high costs of computing power, which presents a real bottleneck to research. 

AI-based models are indeed more compact. Once trained, typically on 40 years of historical weather data from ECMWF, a machine-learning model like Google’s GraphCast comprises fewer than 5,500 lines of code, compared with the nearly 377,000 lines required for the model from the National Oceanic and Atmospheric Administration, according to the paper. 

NeuralGCM, according to Hill, seems to make a strong case that AI can be brought in for particular elements of weather modeling to make things faster, while still keeping the strengths of conventional systems.

“We don’t have to throw away all the knowledge that we’ve gained over the last 100 years about how the atmosphere works,” he says. “We can actually integrate that with the power of AI and machine learning as well.”

Hoyer says using the model to predict short-term weather has been useful for validating its predictions, but that the goal is indeed to be able to use it for longer-term modeling, particularly for extreme weather risk. 

NeuralGCM will be open source. While Hoyer says he looks forward to having climate scientists use it in their research, the model may also be of interest to more than just academics. Commodities traders and agricultural planners pay top dollar for high-resolution predictions, and the models used by insurance companies for products like flood or extreme weather insurance are struggling to account for the impact of climate change. 

While many of the AI skeptics in weather forecasting have been won over by recent developments, according to Hill, the fast pace is hard for the research community to keep up with. “It’s gangbusters,” he says—it seems as if a new model is released by Google, Nvidia, or Huawei every two months. That makes it difficult for researchers to actually sort out which of the new tools will be most useful and apply for research grants accordingly. 

“The appetite is there [for AI],” Hill says. “But I think a lot of us still are waiting to see what happens.”

Correction: This story was updated to clarify that Stephan Hoyer is a researcher at Google Research, not Google DeepMind.
