Weather forecasting is an imperfect science. Through decades of research, meteorologists have come up with tools that allow us to fairly accurately predict weather in the short term using numerical modeling and forecast analysis. Beyond the scope of about a week however, the certainty in these forecasts drops off significantly. This is because weather is based on an infinitely complex and constantly changing system, with a little bit of chaos thrown in for fun.
There are many steps involved in weather forecasting. First, a global snapshot of the atmosphere is captured at a given time and mapped onto a 3-D grid of points that extends over the entire globe, from the surface to the stratosphere. Using a powerful computer and a numerical model that describes the behaviour of the atmosphere with the equations of physics, this snapshot is then pushed forward in time to produce terabytes of raw forecast data. These data are then interpreted by human forecasters, who turn them into a meaningful forecast that is broadcast to the public.
Forecasting the weather is challenging because we are attempting to predict something that is inherently unpredictable. The atmosphere is a chaotic system, so a small change at one location can have significant consequences elsewhere over time, an idea popularly known as the butterfly effect. Any error that develops in a forecast will therefore grow rapidly, causing further errors on a larger scale; to obtain a perfect forecast, every single error would need to be removed. Forecasting skill has improved over time, and modern forecasts are certainly more reliable than those of the pre-supercomputer era. The UK's earliest published forecasts date back to 1861, when Royal Navy officer and meteorologist Robert FitzRoy began publishing forecasts in the Times newspaper. His method involved drawing weather charts using observations from a small number of locations and making predictions based on how the weather had evolved in the past when the charts were similar. However, his forecasts were often wrong, and the press were usually quick to criticise.
The advent of supercomputers in the 1950s brought a wealth of insight to the forecasting community. This work paved the way for modern forecasting, the principles of which are still based on the same approach and the same mathematics, although today's models are far more complex and predict many more variables. Nowadays, a weather forecast typically consists of multiple runs of a weather model. Operational weather centres usually run a global model with a grid spacing of around 10 km, the output of which is passed to a higher-resolution model running over a local area. To estimate the uncertainty in a forecast, many weather centres also run a number of parallel forecasts, each with slight changes made to the initial snapshot. These small changes grow during the forecast and give forecasters an estimate of the probability of something happening – for example, the percentage chance of rain.
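The ensemble idea can be sketched in a few lines. In this hypothetical toy setup, the chaotic logistic map stands in for a weather model, "rain" is arbitrarily defined as the final state exceeding 0.8, and the ensemble members start from tiny random perturbations of a single best-guess initial state (none of these choices come from any real forecasting system):

```python
import random

def toy_model(x, steps=50, r=3.9):
    """Stand-in 'weather model': iterate the chaotic logistic map."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

random.seed(0)
best_guess = 0.3  # our single "snapshot" of the atmosphere
# Run 100 parallel forecasts, each from a slightly perturbed snapshot
members = [toy_model(best_guess + random.uniform(-1e-4, 1e-4))
           for _ in range(100)]
# Fraction of members ending in the "rain" state gives the probability
chance_of_rain = sum(m > 0.8 for m in members) / len(members)
print(f"Chance of rain: {chance_of_rain:.0%}")
```

Even though every member starts almost identically, chaos spreads them across very different outcomes, and the fraction reaching a given state is reported as a percentage chance, just as in operational ensemble forecasts.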
The supercomputer age has been crucial in allowing the science of weather forecasting (and indeed climate prediction) to develop. Modern supercomputers are capable of performing thousands of trillions of calculations per second and can store and process petabytes of data. This means we have the processing power to run our models at high resolutions and include multiple variables in our forecasts. It also means that we can process more input data when generating our initial snapshot, creating a more accurate picture of the atmosphere to start the forecast with. This progress has led to an increase in forecast skill. A neat quantification of this was presented in a Nature study from 2015 by Peter Bauer, Alan Thorpe and Gilbert Brunet, describing the advances in weather prediction as a “quiet revolution”. They show that the accuracy of a five-day forecast nowadays is comparable to that of a three-day forecast about 20 years ago, and that each decade, we gain about a day’s worth of skill. Essentially, today’s three-day forecasts are as precise as the two-day forecast of ten years ago.
But is this skill increase likely to continue into the future? This partly depends on what progress we can make with supercomputer technology. Faster supercomputers mean that we can run our models at higher resolution and represent even more atmospheric processes, which should, in theory, lead to further improvements in forecast skill. According to Moore's Law, computing power has roughly doubled every two years since the 1970s. However, this trend has been slowing recently, so other approaches may be needed to make future progress, such as increasing the computational efficiency of our models.
So, will we ever be able to predict the weather with 100% accuracy? In short, no. There are around 2×10⁴⁴ molecules in the atmosphere, all in random motion – trying to represent them all would be unfathomable. The chaotic nature of weather means that as long as we have to make assumptions about processes in the atmosphere, there is always the potential for a model to develop errors. Progress in weather modelling may improve these statistical representations and allow us to make more realistic assumptions, and faster supercomputers may allow us to add more detail or resolution to our weather models, but at the heart of the forecast is a model that will always require some assumptions.
Still, as long as there is research into improving these assumptions, the future of weather forecasting looks bright. How close we can get to the perfect forecast, however, remains to be seen.
This article was originally published on The Conversation by Jon Shonk, a research scientist at the University of Reading.
© 2018 Oceanographer Daneeja Mawren