Thursday, August 30, 2018

Why the weather forecast will always be a bit wrong

The Great Storm of October 1987: when forecasters got it wrong.

From Phys.org / The Conversation by Jon Shonk

The science of weather forecasting comes under public scrutiny every single day.
When the forecast is correct, we rarely comment, but we are often quick to complain when the forecast is wrong.
Are we ever likely to achieve a perfect forecast that is accurate to the hour?

There are many steps involved in preparing a weather forecast.
It begins its life as a global "snapshot" of the atmosphere at a given time, mapped onto a three-dimensional grid of points that span the entire globe and stretch from the surface to the stratosphere (and sometimes higher).

Using a supercomputer and a sophisticated model that describes the behaviour of the atmosphere with physics equations, this snapshot is then stepped forward in time, producing many terabytes of raw forecast data.
It then falls to human forecasters to interpret the data and turn it into a meaningful forecast that is broadcast to the public.
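As a purely illustrative sketch of that "step the snapshot forward" idea – nothing like an operational model – here is a toy one-dimensional version in Python, in which a single variable on a grid is carried along by a constant wind. All of the numbers (grid spacing, wind speed, time step) are arbitrary choices for the example.

```python
import numpy as np

# Toy "snapshot": one variable (temperature) on a one-dimensional periodic
# grid, standing in for the 3-D global grid used by real models.
n_points = 100
dx = 100_000.0                      # grid spacing in metres (100 km, arbitrary)
x = np.arange(n_points) * dx
temperature = 15.0 + 5.0 * np.sin(2 * np.pi * x / (n_points * dx))

wind = 10.0                         # constant wind in m/s (arbitrary)
dt = 600.0                          # time step in seconds (10 minutes)

def step(field, wind, dx, dt):
    """Advance the field by one time step using simple upwind advection,
    i.e. the equation dT/dt = -u * dT/dx on the grid."""
    gradient = (field - np.roll(field, 1)) / dx   # one-sided (upwind) difference
    return field - wind * dt * gradient

# "Run the model": step the snapshot forward for 24 simulated hours.
state = temperature.copy()
for _ in range(int(24 * 3600 / dt)):
    state = step(state, wind, dx, dt)

print("warmest point moved from grid cell", temperature.argmax(),
      "to grid cell", state.argmax())
```

Real models do the same kind of thing, but in three dimensions, for dozens of variables, with far more complete physics.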

Find out what the lines, arrows and letters mean on synoptic weather charts.
The word 'synoptic' simply means a summary of the current situation.
In weather terms this means the pressure pattern, fronts, and wind direction and speed, and how they will change and evolve over the coming few days.
Temperature, pressure and winds are all in balance and the atmosphere is constantly changing to preserve this balance.

The whether in the weather

Forecasting the weather is a huge challenge.
For a start, we are attempting to predict something that is inherently unpredictable.
The atmosphere is a chaotic system – a small change in its state in one location can have remarkable consequences over time elsewhere, an effect the meteorologist Edward Lorenz famously described as the butterfly effect.

Any error that develops in a forecast will rapidly grow and cause further errors on a larger scale.
And since we have to make many assumptions when modelling the atmosphere, it becomes clear how easily forecast errors can develop.
For a perfect forecast, we would need to remove every single error.
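To make the "small errors grow" point concrete, here is a short sketch using the Lorenz (1963) equations, the drastically simplified convection model that gave rise to the butterfly-effect idea. The parameter values and the one-part-in-a-hundred-million perturbation are standard textbook choices, not taken from this article.

```python
import numpy as np

# Lorenz (1963) system: a three-variable caricature of convection that is
# famously sensitive to its initial conditions.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz_step(state, dt=0.01):
    """Advance the Lorenz system by one small step (simple Euler integration)."""
    x, y, z = state
    return state + dt * np.array([
        SIGMA * (y - x),
        x * (RHO - z) - y,
        x * y - BETA * z,
    ])

# Two "forecasts" whose starting points differ by one part in a hundred million.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])

for n in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if n % 1000 == 0:
        print(f"after {n} steps the two forecasts differ by {np.linalg.norm(a - b):.2e}")
```

The two runs begin essentially identical and end up in completely different states – exactly the behaviour that limits how far ahead a useful weather forecast can reach.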

Forecast skill has been improving.
Modern forecasts are certainly much more reliable than they were before the supercomputer era.
The UK's earliest published forecasts date back to 1861, when Royal Navy officer and keen meteorologist Robert FitzRoy began publishing forecasts in The Times.

His methods involved drawing weather charts using observations from a small number of locations and making predictions based on how the weather evolved in the past when the charts were similar.
But his forecasts were often wrong, and the press were usually quick to criticise.

A great leap forward was made when supercomputers were introduced to the forecasting community in the 1950s.
The first computer model was much simpler than those of today, predicting just one variable on a grid with a spacing of over 750km.

This work paved the way for modern forecasting, the principles of which are still based on the same approach and the same mathematics, although models today are much more complex and predict many more variables.

Nowadays, a weather forecast typically consists of multiple runs of a weather model.
Operational weather centres usually run a global model with a grid spacing of around 10km, the output of which is passed to a higher-resolution model running over a local area.
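As a hypothetical sketch of that hand-over (not any centre's actual system), the coarse global output can be interpolated onto a much finer grid over a limited area, which then supplies the starting state and boundary values for the high-resolution run. The grid spacings below are illustrative only.

```python
import numpy as np

# Coarse "global" model output: one variable along a single line, at roughly
# 10 km spacing (all numbers here are illustrative, not a real configuration).
global_dx = 10.0                                   # km
global_x = np.arange(0.0, 1000.0 + global_dx, global_dx)
global_pressure = 1000.0 + 10.0 * np.sin(global_x / 150.0)   # hPa, made up

# Finer local-area grid covering a 200 km window at 1.5 km spacing.
local_dx = 1.5                                     # km
local_x = np.arange(400.0, 600.0, local_dx)

# The hand-over: interpolate the coarse field onto the fine grid, giving the
# local model its starting state (and, in practice, its boundary conditions).
local_pressure = np.interp(local_x, global_x, global_pressure)

print(f"global grid: {global_x.size} points at {global_dx:.0f} km spacing")
print(f"local grid:  {local_x.size} points at {local_dx:.1f} km spacing")
print(f"local pressure range: {local_pressure.min():.1f} to {local_pressure.max():.1f} hPa")
```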

To get an idea of the uncertainty in the forecast, many weather centres also run a number of parallel forecasts, each with slight changes made to the initial snapshot.
These small changes grow during the forecast and give forecasters an estimate of the probability of something happening – for example, the percentage chance of it raining.
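Here is a minimal sketch of that ensemble idea, built around a deliberately made-up toy "model" (none of the numbers or thresholds come from a real system): perturb the starting humidity slightly for each member, run every member forward, and report the fraction of members that produce rain as a probability.

```python
import numpy as np

rng = np.random.default_rng(2018)

def toy_forecast(initial_humidity, hours=24):
    """A made-up, deliberately nonlinear toy 'model': humidity above 0.6
    feeds back on itself and eventually produces rain. Not real physics."""
    h = initial_humidity
    for _ in range(hours):
        h = np.clip(h + 0.05 * (h - 0.6), 0.0, 1.0)   # arbitrary feedback
    return max(0.0, 40.0 * (h - 0.6))                 # rainfall total in mm

# Best-estimate starting snapshot plus one small perturbation per member.
best_guess = 0.58
n_members = 50
members = best_guess + rng.normal(0.0, 0.02, size=n_members)

rain_totals = np.array([toy_forecast(h) for h in members])

chance_of_rain = np.mean(rain_totals > 1.0)           # fraction of wet members
print(f"ensemble mean rainfall: {rain_totals.mean():.1f} mm")
print(f"chance of rain (>1 mm): {100 * chance_of_rain:.0f}%")
```

This is the machinery that turns a single deterministic run into statements like "a 30% chance of rain".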

History of Weather Forecasting

The future of forecasting

The supercomputer age has been crucial in allowing the science of weather forecasting (and indeed climate prediction) to develop.
Modern supercomputers are capable of performing thousands of trillions of calculations per second, and can store and process petabytes of data.
The Cray supercomputer at the UK's Met Office has the processing power and data storage of about a million Samsung Galaxy S9 smartphones.

A weather chart predicting atmospheric pressure over Europe, December 1887.

This means we have the processing power to run our models at high resolutions and include multiple variables in our forecasts.
It also means that we can process more input data when generating our initial "snapshot", creating a more accurate picture of the atmosphere to start the forecast with.
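That blending of observations into the starting snapshot is known as data assimilation. The sketch below is the simplest possible scalar version, a variance-weighted average of a model first guess and one observation; the numbers are invented, and real systems solve this for millions of values at once.

```python
# Minimal scalar data-assimilation sketch: blend a model "background" value
# with an observation, weighting each by how much we trust it.
# (All numbers invented; real systems do this for millions of values at once.)

background = 14.0         # model first guess of temperature (deg C)
background_var = 1.0      # assumed error variance of that first guess

observation = 15.2        # a thermometer reading at the same place and time
observation_var = 0.25    # assumed error variance of the observation

# The optimal blend for this one-number case (a scalar "Kalman gain").
gain = background_var / (background_var + observation_var)
analysis = background + gain * (observation - background)
analysis_var = (1.0 - gain) * background_var

print(f"analysis temperature: {analysis:.2f} C  (gain = {gain:.2f})")
print(f"analysis error variance: {analysis_var:.2f}")
```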

This progress has led to an increase in forecast skill.
A neat quantification of this was presented in a Nature study from 2015 by Peter Bauer, Alan Thorpe and Gilbert Brunet, describing the advances in weather prediction as a "quiet revolution".

They show that the accuracy of a five-day forecast nowadays is comparable to that of a three-day forecast about 20 years ago, and that each decade, we gain about a day's worth of skill.
Essentially, today's three-day forecasts are as accurate as the two-day forecasts of ten years ago.
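Taking that "roughly a day of skill per decade" rule of thumb at face value gives a quick back-of-the-envelope sketch; the base year and lead times below are purely illustrative, chosen to match the five-day-versus-three-day comparison above.

```python
# Back-of-the-envelope reading of "about a day of forecast skill per decade".
# Base year and lead time are illustrative, matching the comparison in the text
# (a five-day forecast today versus a three-day forecast about 20 years ago).

def comparable_lead_time(year, base_year=1998, base_lead_days=3.0,
                         gain_per_decade=1.0):
    """Lead time (days) with accuracy comparable to a 3-day forecast in the
    base year, under a simple linear one-day-per-decade assumption."""
    return base_lead_days + gain_per_decade * (year - base_year) / 10.0

for year in (1998, 2008, 2018, 2028):
    print(f"{year}: ~{comparable_lead_time(year):.0f}-day forecasts at comparable accuracy")
```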

But is this skill increase likely to continue into the future?
This partly depends on what progress we can make with supercomputer technology.
Faster supercomputers mean that we can run our models at higher resolution and represent even more atmospheric processes, in theory leading to further improvement of forecast skill.

According to Moore's Law, our computing power has been doubling every two years since the 1970s.
However, this has been slowing down recently, so other approaches may be needed to make future progress, such as increasing the computational efficiency of our models.

So will we ever be able to predict the weather with 100% accuracy? In short, no.
There are 2×10⁴⁴ (200,000,000,000,000,000,000,000,000,000,000,000,000,000,000) molecules in the atmosphere in random motion – trying to represent every one of them in a model is simply unfathomable.
The chaotic nature of weather means that as long as we have to make assumptions about processes in the atmosphere, there is always the potential for a model to develop errors.
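A rough sense of the scale, using invented assumptions (one byte per molecule and a few hundred petabytes of storage, where 1 PB = 10¹⁵ bytes), shows just how far out of reach a molecule-by-molecule description is.

```python
# Rough sense of scale for the "we cannot track every molecule" point.
# Assumptions invented for illustration: one byte per molecule, and a storage
# system of a few hundred petabytes (1 PB = 1e15 bytes).

molecules = 2e44
bytes_per_molecule = 1.0
storage_bytes = 500e15          # ~500 petabytes, a generously sized archive

required = molecules * bytes_per_molecule
print(f"bytes needed : {required:.1e}")
print(f"bytes on hand: {storage_bytes:.1e}")
print(f"shortfall factor: {required / storage_bytes:.1e}")
```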

Progress in weather modelling may improve these statistical representations of small-scale processes and allow us to make more realistic assumptions, and faster supercomputers may allow us to add more detail, or resolution, to our weather models. But at the heart of the forecast is a model that will always require some assumptions.

Still, as long as there is research into improving these assumptions, the future of weather forecasting looks bright.
How close we can get to the perfect forecast, however, remains to be seen.

