Friday, January 7, 2022

The danger of leaving weather prediction to AI

From Wired by Meghan Herbst

When it comes to forecasting the elements, many seem ready to welcome the machine.
But humans still outperform the algorithms—especially in bad conditions.

HUMANS HAVE TRIED to anticipate the climate’s turns for millennia, using early lore—“red skies at night” is an optimistic sign for weather-weary sailors that’s actually associated with dry air and high pressure over an area—as well as observations taken from roofs, hand-drawn maps, and local rules of thumb.
These guides to future weather were based on years of observation and experience.

Then, in the 1950s, a group of mathematicians, meteorologists, and computer scientists—led by John von Neumann, a renowned mathematician who had assisted the Manhattan Project years earlier, and Jule Charney, an atmospheric physicist often considered the father of dynamic meteorology—tested the first computerized automatic forecast.

Charney, with a team of five meteorologists, divided the United States into (by today’s standards) fairly large parcels, each more than 700 kilometers across.
By running a basic algorithm that took the real-time pressure field in each discrete unit and prognosticated it forward over the course of a day, the team created four 24-hour atmospheric forecasts covering the entire country.
It took 33 full days and nights to complete the forecasts.
Though far from perfect, the results were encouraging enough to set off a revolution in weather forecasting, moving the field toward computer-based modeling.
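The core idea of that first numerical forecast—advance a gridded pressure field forward in time with a simple finite-difference rule—can be sketched in a few lines. This is a toy illustration, not Charney’s actual barotropic model; the grid values, wind speed, and time step here are invented for demonstration:

```python
import numpy as np

# Toy grid: pressure anomalies (hPa) at evenly spaced points, ~700 km apart,
# with periodic boundaries standing in for a closed domain.
pressure = np.array([0.0, 2.0, 5.0, 3.0, 1.0, 0.0])
wind = 10.0      # m/s, uniform westerly steering wind (hypothetical)
dx = 700_000.0   # grid spacing in meters
dt = 3600.0      # one-hour time step in seconds

def step(p, u, dx, dt):
    """One upwind finite-difference step: carry the field downstream."""
    dpdx = (p - np.roll(p, 1)) / dx   # backward difference, valid for u > 0
    return p - u * dt * dpdx

# March the field forward 24 hours, one step per hour: a "24-hour forecast."
for _ in range(24):
    pressure = step(pressure, wind, dx, dt)
```

The real 1950 computation solved the barotropic vorticity equation rather than advecting pressure directly, but the workflow was the same: discretize the field, apply a local update rule, repeat until the forecast horizon is reached.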

Over the ensuing decades, billions of dollars in investments and the evolution of faster, smaller computers led to a surge in predictive capability.
Models are now capable of interpreting the dynamics of parcels of atmosphere as small as 3 kilometers across, and since 1960 these models have been able to include ever-more-accurate data sent from weather satellites.

In 2016 and 2018, the GOES-16 and -17 satellites launched into orbit, providing a host of improvements, including higher-resolution images and pinpoint lightning detection.
The most popular numerical models, the US-based Global Forecast System (GFS) and the model of the European Centre for Medium-Range Weather Forecasts (ECMWF), released significant upgrades this year, and new products and models are being developed at a faster clip than ever.
At a finger’s touch, we can access an astonishingly precise weather forecast for our exact location on the Earth’s surface.

Today’s lightning-speed predictions, the product of advanced algorithms and global data collection, appear one step away from complete automation.
But they’re not perfect yet.
Despite the expensive models, array of advanced satellites, and mega-computers, human forecasters have a unique set of tools all their own.
Experience—their ability to observe and draw connections where algorithms cannot—gives these forecasters an edge that lets them continue to outperform the glitzy weather machines in the highest-stakes situations.

On the left, the new paper’s “Deep Learning Weather Prediction” forecast; in the middle, the actual weather for the 2017–18 year; on the right, the average weather for that day. Image via: J. A. Weyn, D. R. Durran, and R. Caruana, “Improving data-driven global weather prediction using deep convolutional neural networks on a cubed sphere”

THOUGH TREMENDOUSLY USEFUL with big-picture forecasting, models aren’t sensitive to, say, the little updraft in one small land quadrant that suggests a waterspout is forming, according to Andrew Devanas, an operational forecaster at the National Weather Service office in Key West, Florida.
Devanas lives near one of the world’s most active regions for waterspouts, marine-based tornadoes that can damage ships that pass through the Florida Straits and even come onshore.

The same limitation impedes predictions of thunderstorms, extreme precipitation, and land-based tornadoes, like those that tore through the Midwest in early December, killing more than 60 people.
But when tornadoes occur on land, forecasters can often spot them by looking for their signature on radar; waterspouts are much smaller and often lack this signal.
In a tropical environment like the Florida Keys, the weather doesn’t change much from day to day, so Devanas and his colleagues had to manually look at variations in the atmosphere, like wind speed and available moisture—variations that the algorithms don’t always take into account—to see if there was any correlation between certain factors and a higher risk of waterspouts.
They compared these observations to a modeled probability index that indicates whether waterspouts are likely and found that, with the right combination of atmospheric measurements, the human forecast “outperformed” the model in every metric of predicting waterspouts.
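A comparison like this typically scores each forecast source against observed events using standard categorical verification metrics, such as probability of detection and false alarm ratio. The sketch below shows how such a scorecard works; the counts are invented for illustration and are not Devanas’ actual results:

```python
def verify(hits, misses, false_alarms):
    """Two standard categorical forecast scores.

    POD (probability of detection): fraction of observed events forecast.
    FAR (false alarm ratio): fraction of forecast events that never occurred.
    """
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

# Hypothetical season of waterspout forecasts (counts are made up):
human_pod, human_far = verify(hits=18, misses=4, false_alarms=6)
model_pod, model_far = verify(hits=12, misses=10, false_alarms=9)

print(f"human: POD={human_pod:.2f}, FAR={human_far:.2f}")
print(f"model: POD={model_pod:.2f}, FAR={model_far:.2f}")
```

A forecast source “outperforms” another on this scorecard when it detects more of the events that actually happen (higher POD) while issuing fewer forecasts that don’t verify (lower FAR).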

Similarly, research published by NOAA Weather Prediction Center director David Novak and his colleagues shows that while human forecasters may not be able to “beat” the models on your typical sunny, fair-weather day, they still produce more accurate predictions than the algorithm-crunchers in bad weather.
Over the two decades of information Novak’s team studied, humans were 20 to 40 percent more accurate at forecasting near-future precipitation than the Global Forecast System (GFS) and the North American Mesoscale Forecast System (NAM), the most commonly used national models.
Humans also made statistically significant improvements to temperature forecasts over both models’ guidance.
“Oftentimes, we find that in the bigger events is when the forecasters can make some value-added improvements to the automated guidance,” says Novak.

Particularly in adverse conditions, great improvements to the model’s forecast were usually due to human augmentation, he adds.
This is even more true for local, severe events like thunderstorms and tornadoes, which rely on split-second decision-making in order to save lives.
As forecasters become more familiar with a particular model, they begin to notice its biases and failings, Novak adds.
Just like the model learns from us, we learn from the model.

Image: ECMWF
AT EMBRY-RIDDLE AERONAUTICAL UNIVERSITY in Arizona, meteorologist Shawn Milrad prepares would-be forecasters to use the glut of tools now at their disposal.

Milrad entered the field in the early 2000s, an era when the dominant forecasting methods were shifting from older techniques to numerical weather models and automated observations.

These technologies were critical to recent advances in atmospheric science, but Milrad cautions his students against complacency and dependence on the automated data models.

“If they’re going to be forecasting precipitation, they should be able to defend it by analyzing the physical processes and mechanisms that they see on the maps,” says Milrad.
He sees utility in the continued use of rules of thumb and pattern recognition techniques, not only as teaching tools, but also to defend against losing the vital experience forecasters bring to bear in severe weather situations or when models are off-base.
“There’s an old adage that ‘all models are wrong, some are useful,’” says Milrad.
“Even if it’s a great forecast it’s going to be slightly wrong.
It's how you can add value to that model.”

Plus, even though computer-generated forecasts are likely to continue improving over time, a number of challenges stand in the way of anything resembling full automation, chief among them the need for a significant expansion of computing power, which carries a multibillion-dollar price tag.
The Department of Energy bankrolled the development of three exascale computers, capable of performing 10¹⁸ calculations per second, in 2018.
The first of these, the Aurora supercomputer under development at Argonne National Laboratory in Illinois, is slated to go online in 2022 and will be able to perform 1 quintillion calculations per second, but several different scientific fields are vying for access to its immense processing power.
And current infrastructure could also be at risk, since full rollout of 5G threatens to interfere with several key weather satellites.
Radio interference could degrade the quality of satellite observations of water vapor and potentially set forecasting capability back by decades.

In truth, the future of accurate weather forecasting may not necessarily rely on automation, but on a more mundane solution: financial support.
Thanks to these technological advances in weather forecasting and meteorology, human forecasters who once juggled the more tedious aspects of the job now have the bandwidth to focus on severe weather, research, and communicating important information about risks and preparation to agencies and people living in their area.
If such important work is to continue, the National Weather Service, on which so much of our weather infrastructure relies, must remain adequately funded.

Though it’s the private weather companies—like AccuWeather and Weather Underground—that can provide more frequent, pinpoint forecasts, their business models rely on advertisement, subscription revenue, and enhanced services offered at a premium, and most employ few meteorologists (AccuWeather employs around 100, while the NWS has more than 2,000).
Prior attempts by legislators—with financial backing from AccuWeather executives—to limit the NWS from sharing weather information with the public have been met with outrage by the meteorological community.
If we want to continue to receive in-depth weather forecasts and crucial warnings, touched by human hands, we need to preserve agencies and services that value human-augmented forecasts and the public’s right to know.
(The service’s budget fell considerably during the Trump administration but thankfully is now reaching new highs, with a $6.2 billion NOAA funding package proposed for 2022—the largest in the agency’s history.)

Devanas, the NWS forecaster in Key West, agrees that the private sector has a lot to contribute to forecasting but is wary of the amount of unreliable weather information that is circulated as a result.
Even as algorithms and models continue to improve, Devanas believes we can’t lose sight of the science behind everything.
“I'm not here to say ‘Today is going to be 92 degrees, and it’s going to be 80 degrees at night with a 20 percent chance of rain.’ I could in essence get a monkey to do that,” he says.
“Those are things where we need some local expertise.
Those are things where the rules of thumb come in, and that local knowledge becomes invaluable.”
