An In-Depth Look at the Finite Volume Cubed-Sphere Dynamical Core (FV3)
From ScienceMag by Paul Voosen
Take that, Europe.
Computer modeler aims to give U.S. lead in weather predictions
Shian-Jiann “S. J.” Lin’s program will power short-term weather forecasts and long-term climate simulations.
From below the conference table comes the thrum of incoming phone alerts.
The new weather forecast has rolled in, and the climate scientists, even though it’s not typically their business, dig out their phones to look: snow tomorrow—hardly unusual for early February in Princeton, New Jersey.
But the weather models show the storm turning severe, dumping a foot or more.
A snow day seems likely.
Across the table at the Geophysical Fluid Dynamics Laboratory (GFDL), Shian-Jiann “S. J.” Lin is not convinced.
He is the master of 20,000 lines of computer code that divide the atmosphere into boxes and, with canny accuracy, solve the equations that describe how air swirls around the globe.
For decades, Lin’s program has powered the long-term simulations of many climate models, including GFDL’s—one of the crown jewels of the U.S. National Oceanic and Atmospheric Administration (NOAA).
Now, Lin’s domain is expanding to a different side of NOAA: the short-term weather forecasts of the National Weather Service (NWS).
By 2018, Lin’s program will be powering a unified system for both climate and weather forecasting, one that could predict conditions tomorrow, or a century from now—and do it faster and better than current models.
His work will soon be guiding mayors planning not just for snow plows, but also for rising seas.
But Lin has started early.
His small team is already running a prototype forecast on their supercomputer.
And in his typically confident and brash style, he offers a minority report about the next day’s storm.
“If our forecast is correct, it’s only 3 to 6 inches,” Lin announces. His peers at the table seem skeptical.
“It’s going to be a mess,” one warns.
But Lin doesn’t budge.
He rarely needs to.
“We’ll see what we get tomorrow,” he says.
“You want to bet?”
Much is riding on Lin.
NOAA’s new weather satellite is expected to lead to more accurate forecasts.
The first set of images from the GOES-16 satellite has been released by the National Oceanic and Atmospheric Administration (NOAA).
The geostationary satellite will be used for weather forecasting, severe storm tracking, and more.
Recently, NWS has suffered some prominent embarrassments, such as in 2012, when it predicted Hurricane Sandy would sputter out over the ocean while a leading European center accurately forecast the direct hit on New York City.
Fed up with the country’s second-place status, Congress in 2013 poured $48 million into NWS weather modeling.
The message for NOAA was clear: Get America on top.
This drive has opened up an opportunity.
For a long time, meteorologists and climate scientists operated in separate domains.
Meteorologists focused on speed: ingesting as much data as possible from satellites, balloons, and buoys and quickly spinning it into a forecast.
Climate scientists focused on the fussy physics of their models to produce plausible simulations over decades.
But now, the two groups are discovering common ground, in “subseasonal to seasonal” predictions—from a month to 2 years out.
In order to push forecasts beyond 10 days or so, meteorologists need the superior physics of the climate models.
Meanwhile, climate scientists want to know how weather phenomena that happen on monthly or annual timescales, like El Niño, influence the global climate.
“The two cultures are speaking each other’s language, and realizing they’re going to live and die together,” says John Michalakes, a computer scientist who develops atmospheric models at the Naval Research Laboratory in Monterey, California.
There could be another benefit to blurring the lines between weather and climate, one that climate scientists are loath to talk about explicitly.
Although studies of human-driven climate change have faced scrutiny and scorn from conservative politicians in the United States, weather research remains solidly bipartisan, says David Titley, director of the Center for Solutions to Weather and Climate Risk at Pennsylvania State University in State College.
Just this month, for example, Congress passed a weather forecasting bill that dedicates $26.5 million of NOAA’s budget to improving its seasonal predictions, and climate change doubters were among the supporters.
“If I were running the world, I would keep that divide vague,” Titley says.
In his modeling, Lin never made the distinction.
“From the beginning we talked about how there is no difference between weather and climate,” says Ricky Rood, an atmospheric scientist at the University of Michigan in Ann Arbor and Lin’s longtime collaborator.
But others haven’t wanted to hear that message—and especially not from Lin, who is as feisty and fractious as a government employee can get.
“It’s amazing to me,” says Rood, “that S. J. could evolve to be a source of unification.”
Storms have roiled around Lin his whole life.
Typhoons are regular events in Taipei, where he grew up, and he was always fascinated by their power.
“I have hurricanes in my blood,” he says.
Born in 1958 to parents who ran a small construction company, he was the first in his family to go to college.
As a student at National Taiwan University, he studied microprocessor architectures, along with meteorology and fluid dynamics.
He became fascinated with the challenge of rendering the continuous currents of the atmosphere in the discontinuous, 0-or-1 world of computer code.
At the time, Taiwan was a dictatorship, and Lin joined student groups opposed to the regime.
After college, he faced several years of mandatory military service.
He aced his entry test and assumed he would land a cushy engineering job in Taipei.
Instead, he was shipped to the Matsu Islands, 16 kilometers from the Chinese mainland.
He was hardly a model soldier.
He hated having to recite party doctrine during assemblies.
“You had to pretend, and say something not in your heart,” he says.
Taiwan didn’t seem to have a place for him, so in 1983 he enrolled in the aerospace engineering department at the University of Oklahoma, one of the only schools he could afford.
He wanted to be a rocket scientist.
But it was a tough transition.
He cared more about learning computer languages than English, and felt isolated.
His accent is a barrier, but not the only one.
“Some folks tend to have a difficult time following S. J.,” says Bill Putman, a meteorologist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, and another longtime collaborator.
“But it’s not necessarily a language barrier.
It’s more a knowledge barrier.”
Seeing his talent for computational fluid dynamics, his adviser suggested Lin switch to Princeton University, which, through its partnership with GFDL, is a hotbed of atmospheric modeling.
He learned how GFDL scientists divided the air into a 3D grid that spanned the globe and stretched from the surface to the stratosphere, following lines of latitude and longitude.
Along points on the grid, they would set initial conditions—the weather or climate for a given moment in time.
Then, point by point, the computer would solve equations describing changes in wind, air pressure, temperature, and humidity for successive steps in time.
Computers were room-sized mainframes at the time, and the model grids were huge, with a mesh size of 500 kilometers.
The models could recreate only the largest atmospheric features, like jet streams and the Hadley cell, the belt that circulates warm air from the equator to the subtropics.
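To make the point-by-point idea concrete, here is a minimal sketch in Python (a toy with made-up numbers, not GFDL’s code): a single tracer carried by a constant wind along one ring of grid points, updated with a simple upwind finite-difference step at each point for successive time steps.

```python
import numpy as np

# Toy version of grid-point time stepping: one tracer carried by a constant
# wind along a single ring of grid points, updated with a first-order upwind
# finite-difference step. The numbers are made up for illustration; this is
# not code from any GFDL model.

n = 360                         # grid points around a latitude circle
dx = 100e3                      # grid spacing in meters (~100 km boxes)
wind = 20.0                     # constant wind speed, m/s
dt = 0.5 * dx / wind            # time step chosen to satisfy the CFL limit

x = np.arange(n) * dx
q = np.exp(-((x - x.mean()) / (10 * dx)) ** 2)   # initial condition: a smooth blob

for step in range(500):
    # Point by point: each value is updated from its upwind neighbor,
    # one time step after another.
    q = q - wind * dt / dx * (q - np.roll(q, 1))

print("peak value after 500 steps:", round(float(q.max()), 3))
```

The blob’s peak drops as it travels, a reminder that simple point-wise schemes smear sharp features.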
After graduate school, Lin decided to stay in the United States.
“I’m now more American than I am Taiwanese,” he says.
He drinks whisky, but infuses it with ginseng.
He returned to the University of Oklahoma as a postdoc to work on modeling tornadoes.
But computers couldn’t yet model events that unfold at such small scales.
The failure was humbling, and Lin says it provided a mantra: “Choose the right level of complexity for the particular problem, at the time that you have the resources to do it.”
Lin soon found the right problem at NASA.
In the late 1980s, Rood was working on the problem of the Antarctic ozone hole at Goddard.
NASA was flying research planes into the hole to measure the chemicals that might be destroying the ozone.
These flights revealed a drop in several short-lived reactive nitrogen oxides, which allowed chlorine from human-made chemicals to linger, priming further reactions that broke down the ozone.
But Rood’s atmospheric models couldn’t simulate the flows and reactions.
No matter what he did, the nitrogen reactants remained steady.
How could that happen?
At the time, an elegant mathematical technique called the spectral method had overtaken global modeling.
Rather than solving the equations at points on a latitude-longitude grid, scientists realized that fluid flow in the atmosphere could be represented as the sum of hundreds of sinusoidal, crisscrossing waves.
The code ran faster, and the results could be transformed back onto a regular grid.
The spectral method still powers most global weather forecasts today, including at NWS.
But the speed comes with a cost: when the waves are projected back into physical space, mass is not exactly conserved and can slowly drift.
For weather models, which only run for days into the future, this is not a big deal.
But for models of atmospheric chemistry and climate, which run for much longer periods, these distortions were a critical flaw.
Fortunately for Rood, a young Taiwanese scientist had written to him, lured by his publications.
When Lin joined NASA in 1992 as a contractor, the two set out to build a model that, above all else, preserved mass.
This first meant jettisoning the spectral method.
It also meant upgrading from finite-difference modeling, which solves for points on a grid, to a finite-volume model, which solves for conditions averaged across each cell, or box.
The finite-volume approach is ideally suited to conserving mass because the calculations pass fluxes, or volumes, of material from one box to the next.
Others had considered such a solution, but thought it too complex or computationally expensive.
But Lin was a master of computational efficiency.
Over a furious few years in the mid-1990s, he and Rood expanded their model beyond chemical transport—for which it remains the standard—to a full-fledged dynamical core fast enough to be used for climate models.
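The contrast with the grid-point sketch above can be seen in a few lines: in a flux-form finite-volume scheme, each box holds an average amount of material, and it changes only by what flows across its edges. This is a deliberately crude 1D sketch under simple assumptions (constant wind, first-order fluxes), not the higher-order scheme Lin and Rood actually built.

```python
import numpy as np

# Toy flux-form finite-volume advection: the state is the average amount of
# tracer in each box, and boxes change only through fluxes across their edges.
# Because every flux that leaves one box enters its neighbor, the total is
# conserved to round-off. Illustrative only; the real FV scheme uses
# higher-order reconstructions rather than these first-order fluxes.

n, dx, wind = 200, 100e3, 20.0
dt = 0.5 * dx / wind
q = np.zeros(n)
q[80:120] = 1.0                          # a slab of tracer, stored as box averages
initial_total = q.sum()

for step in range(1000):
    # Amount entering each box through its left edge, carried from the
    # upwind neighbor during one time step.
    flux_in = wind * dt / dx * np.roll(q, 1)
    # Each box gains its left-edge flux and loses its right-edge flux
    # (which is simply the left-edge flux of the next box).
    q = q + flux_in - np.roll(flux_in, -1)

print("total mass drift:", q.sum() - initial_total)   # ~0.0, exact to round-off
```

Because every flux that leaves one box enters its neighbor, the box totals telescope and the domain total cannot drift, which is exactly the property the chemistry and climate runs needed.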
Put a mote of dust in the air, says Paul Ginoux, an aerosol modeler at GFDL, who also worked with Lin at Goddard, “and this code will transport it at the right place, at the right moment. And that’s beautiful.”
The name of the code was far more mundane.
They called it “FV,” for finite-volume, and later FV3.
Their work soon drew the attention of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, one of the country’s leading institutes for weather and climate science, which incorporated FV into its influential climate model.
NASA’s climate laboratory in New York City adopted it as well.
And in 2003, GFDL lured Lin away to upgrade FV and fold it into its global simulation.
The results of these models, some of the top U.S. contributions to the United Nations panel on climate change, have informed much of what the public hears about global warming.
And they’ve all had Lin’s innovations at their heart.
There's a term of art at NOAA for the reactive way Congress finances weather research: “budgeting by disaster.” It’s rarely pretty, and it’s why the coming merger in atmospheric modeling will, at its root, be thanks to the calamities of Hurricane Katrina and Hurricane Sandy.
In 2005, after NWS failed to forecast Katrina’s direct hit on New Orleans, Louisiana, until 2 days out, Congress set aside money to improve predictions of Atlantic hurricanes.
As it happened, it was around this time that Lin walked into the office of his boss at GFDL, Isaac Held, and declared: “I’m going to revolutionize weather prediction.” Computers were now capable of processing boxes small enough to render hurricanes.
More important, Lin had developed a key bit of physics needed for FV3 to forecast realistic hurricanes.
Many global forecasting models operate using an assumption called the hydrostatic principle, in which the weight of the air in any box is exactly balanced by the upward push of air pressure from the box below it.
This works for coarse models, which cannot directly simulate the fine upward and downward flows in the real atmosphere.
But recreating weather events like hurricanes and thunderstorms, where updrafts are important, requires breaking this hydrostatic principle.
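For reference, the hydrostatic assumption replaces the vertical momentum equation with a pure balance between the weight of the air and the vertical pressure-gradient force, while a nonhydrostatic model keeps the vertical acceleration so that strong updrafts and downdrafts can be computed explicitly. These are the standard textbook forms, not equations taken from FV3’s documentation:

```latex
% Hydrostatic balance: the weight of the air is balanced by the vertical
% pressure-gradient force, so vertical acceleration is assumed to vanish.
\frac{\partial p}{\partial z} = -\rho g

% Nonhydrostatic vertical momentum equation: the vertical acceleration Dw/Dt
% is retained, allowing explicit updrafts and downdrafts.
\frac{Dw}{Dt} = -\frac{1}{\rho}\frac{\partial p}{\partial z} - g
```

Dropping the assumption is what lets a model resolve the strong vertical motions inside thunderstorms and hurricane eyewalls.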
After a decade of mulling, Lin finally had an efficient way of incorporating nonhydrostatic flows into his code.
He needed to test it.
[Figure: Zooming in on storms. The FV3 model divides the atmosphere into boxes and simulates conditions in each one. To avoid problems at the poles, its coordinates are based on a cubed sphere. The program can also nest grids to simulate weather at different scales.]
Frank Marks, who leads hurricane research at NOAA’s Atlantic Oceanographic and Meteorological Laboratory in Miami, Florida, was overseeing improvements to the regional hurricane model for the Atlantic basin.
With a smaller area to model, Marks could afford fine-scale boxes.
Lin convinced him to use Katrina dollars to buy extra supercomputer time.
Run FV3 at 1-kilometer resolution, Lin promised, and the finest details of cyclones would arise.
Sure enough, the violent walls of a hurricane’s eye opened in his code.
In 2014, when NOAA announced a competition to choose the “core” of the agency’s next-generation weather forecast system, Lin was ready.
Five models were entered, including FV3.
And by the summer of 2015, FV3 was one of two frontrunners, along with the Model for Prediction Across Scales (MPAS), the globalized version of a long-standing system produced by NCAR and used by many researchers.
They would be judged on their speed and accuracy in mimicking the atmosphere’s flows.
For 6 months, Lin’s placid office turned frenetic, as his team worked nights and weekends to embed FV3 within the weather service’s system.
“There was never a time where I thought we were losing the battle on scientific ground,” Lin says.
One advantage of his model was efficiency.
It is Lin’s obsession—and not just at work: When Hurricane Sandy knocked out power at Lin’s modest home, he refused to use a normal generator, and instead rigged his Prius up to his home wiring.
Its battery, he explained, would make certain any extra electricity the car’s generator churned out wouldn’t go to waste.
So that FV3 could make efficient use of limited computing power, Lin and his team had written the code to work in parallel.
This is hard for global models, where the weather in one box can influence another box a hemisphere away.
But this interconnectedness isn’t as big a problem in the vertical dimension, so Lin enabled FV3’s layers to be detached from each other and be processed in parallel.
He won additional efficiencies by changing the shape of the grid.
Climate models are plagued by the so-called pole problem, the result of the strangely squished and stretched boxes near the poles.
So Lin and Putman, his former NASA colleague, abandoned the latitude/longitude system in favor of a cubed sphere.
Picture a six-sided die inflated like a balloon.
There were no more poles to handle, just six square panels, with tricky interactions at the seams.
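One way to picture the cubed-sphere grid in code is the gnomonic construction: lay evenly spaced points on each face of a cube and project them radially onto the sphere. The sketch below (Python, illustrative only, and not FV3’s actual grid generator) does this for a single face.

```python
import numpy as np

# Gnomonic cubed-sphere idea in miniature: points on one face of a cube,
# projected radially onto the unit sphere. Repeating this for all six faces
# tiles the globe with six quasi-uniform panels and no polar singularity.
# Illustrative only; FV3's actual grid generation is more involved.

def face_to_latlon(a, b):
    """Map face coordinates a, b in [-1, 1] on the z = 1 cube face to lat/lon (degrees)."""
    x, y, z = a, b, np.ones_like(a)
    r = np.sqrt(x**2 + y**2 + z**2)          # distance from cube-face point to the origin
    x, y, z = x / r, y / r, z / r            # radial (gnomonic) projection onto the sphere
    lat = np.degrees(np.arcsin(z))
    lon = np.degrees(np.arctan2(y, x))
    return lat, lon

# A small 5 x 5 grid on one face: the cells stay comparably sized,
# unlike latitude-longitude boxes that squeeze together near the poles.
a, b = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
lat, lon = face_to_latlon(a, b)
print(np.round(lat, 1))
```

Projecting all six faces this way tiles the globe with boxes of comparable size, removing the pole problem at the cost of the seam interactions mentioned above.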
The net result: compared with MPAS, FV3 took a third as many computer processors to run at operational standards.
It also outperformed MPAS when run on a vast number of processors, and it could zoom in to model one part of the globe at high resolution without skewing its performance in coarser regions.
It was a slaughter.
NCAR withdrew its model before NOAA anointed FV3 as the winner, in July 2016.
“There was just never any conclusive evidence that MPAS had an advantage that was worth the cost,” says Michalakes, who led the computing comparisons.
During the competition, Lin had complained that NOAA was biased in favor of MPAS; now, he crows about his victory.
“Most people in that discipline paid no respect to what we had been doing,” he says. “They found out the hard way.”
With NCAR toppled, Lin now faces far bigger rivals: the United Kingdom’s Met Office, which since the early 1990s has been the only center to have merged its weather and climate forecasts, and the European Centre for Medium-Range Weather Forecasts, which has long run the top-rated weather model.
This time around, he’ll need help.
European modelers start with the same set of balloon, satellite, and ground measurements as everyone else.
But they cleverly inject randomness into these initial conditions, then do multiple runs to come up with a “consensus” forecast.
Getting the United States up to those standards will require winning over U.S. researchers to provide innovative techniques that Lin and his colleagues can adapt for their model.
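The ensemble idea behind that “consensus” forecast can be sketched with a toy chaotic system: nudge the initial conditions slightly, run the model many times, and look at the mean and spread of the runs. The sketch below uses the classic Lorenz-63 equations as a stand-in for a weather model; it is only a cartoon of the technique, not how any operational center builds its ensemble.

```python
import numpy as np

# Toy ensemble forecast with the Lorenz-63 system standing in for the atmosphere:
# many runs from slightly perturbed initial conditions, combined into a mean
# forecast with a spread. Illustrative only.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])   # simple forward-Euler step

rng = np.random.default_rng(0)
truth_ic = np.array([1.0, 1.0, 20.0])
members = [truth_ic + 0.01 * rng.standard_normal(3) for _ in range(20)]  # perturbed starts

for _ in range(1500):                             # integrate every member forward in time
    members = [lorenz_step(m) for m in members]

members = np.array(members)
print("ensemble mean:  ", np.round(members.mean(axis=0), 2))
print("ensemble spread:", np.round(members.std(axis=0), 2))
```

The mean gives the headline forecast, while the spread among members indicates how much confidence to place in it.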
Yet there’s a risk that academic weather scientists will avoid using FV3 and instead stick with MPAS, more comfortable with its origins and documentation, says Cliff Mass, an atmospheric scientist at the University of Washington in Seattle.
Lin’s reluctance to break down his code in the past has heightened concerns.
“Lin is a brilliant modeler,” Mass says. “He’s not big on community support.”
But Putman believes Lin will embrace true improvements.
“If he sees something that will push this code beyond where it is now, I’m sure he’s willing to adapt.”
At a workshop next week, NWS will lay out its aggressive timetable for turning on FV3.
By this May, FV3 ought to be fully wired into the service’s data assimilation.
And by the first half of 2018, if all goes well, NOAA will flip the switch, making it the standard forecast that feeds into all of our phones.
Meanwhile, Lin’s team continues to tinker with FV3.
They’re honing a more powerful zooming technique: allowing the grid to create nests of high-resolution boxes, 2 to 3 kilometers a side, over regions of interest.
This could allow high-resolution hurricane forecasts to be run at the same time as global predictions, with no need to wait for the global run to finish.
And it could capture tornado outbreaks and severe storms, weather that has been too fine-grained for existing global models.
“We’re kind of ambitious,” Lin says. “We’re trying to cover everything.”
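The nesting idea can be cartooned in one dimension: keep a coarse grid everywhere, add a finer grid over the region of interest, and feed the fine grid its boundary values from the coarse one while it takes proportionally smaller time steps. The sketch below is a bare-bones one-way nest with assumed toy numbers, far simpler than the nesting actually used in FV3.

```python
import numpy as np

# Cartoon of one-way grid nesting in 1D: a coarse "global" grid everywhere,
# plus a 3x finer nest over a region of interest. The nest's left boundary is
# fed from the coarse grid, and the nest takes three small steps for every
# coarse step so both grids run at the same Courant number.
# Illustrative only; not the nesting scheme used in FV3.

nx, refine = 120, 3
i0, i1 = 40, 80                              # coarse cells covered by the nest
coarse = np.zeros(nx)
coarse[30:50] = 1.0                          # a feature about to cross into the nest
nest = np.repeat(coarse[i0:i1], refine)      # initialize nest from the coarse cells

c = 0.4                                      # Courant number (same on both grids)
for step in range(50):
    coarse = coarse - c * (coarse - np.roll(coarse, 1))   # coarse upwind step
    for _ in range(refine):                               # nest sub-steps
        left_bc = coarse[i0 - 1]                          # boundary value from the coarse grid
        new_interior = nest[1:] - c * (nest[1:] - nest[:-1])
        nest[0] = nest[0] - c * (nest[0] - left_bc)
        nest[1:] = new_interior

# Average the fine cells back to coarse size to compare the two solutions.
print("coarse cells over nest:", np.round(coarse[i0:i0 + 5], 2))
print("nest, coarse-averaged: ", np.round(nest.reshape(-1, refine).mean(axis=1)[:5], 2))
```

Because the nest advances on its own smaller steps, the high-resolution region does not have to wait for a separate global run, which is the scheduling advantage described above.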
On a screen at GFDL, Lucas Harris, Lin’s deputy, zooms in on Oklahoma, where a nested FV3 grid is recreating the events of May 2013.
It was that month that a severe twister plowed through Moore, Oklahoma, killing 24.
As the model runs, scattered storms organize into a line of squalls.
Then anvil clouds form—the thunderstorm cells from which tornadoes would touch down on Moore.
Next, Harris changes the place and time, to the eastern United States in June 2012, when a bow of thunderstorms—a so-called derecho—caught forecasters off guard and in some areas knocked out power for a week.
The model sees traces of the storm nearly 3 days in advance.
“Previously,” Harris says, “it was believed there was only 12 hours of predictability to this event.”
So far these results have stayed in the lab, but Lin is doing his best to spread the gospel.
For the 2017 hurricane season, his prototype will run alongside existing regional hurricane models.
And next month, Lin will return to Oklahoma for the “Spring Experiment,” a research jamboree of severe storm scientists, to test how the zooming technique could help local forecasters.
All this collaboration, this dependence on outside contributions, makes Lin nervous.
His model is moving out of the lab into the messy real world.
Will it become the bedrock of all weather and climate prediction, from tornadoes next week to temperature rises next decade?
“I’m cautiously optimistic, but not overly optimistic,” he says.
A good omen comes the next morning.
Snow blankets Princeton—beautiful, but also manageable.
Nearly 6 inches fell, not a foot or more.
GFDL could have stayed open.
Over the ether, Lin can’t resist a final comment.
“The snow,” he writes, “is not as bad as forecasted.”
Links:
The Washington Post: IBM just threw its hat into the weather modeling ring