A History of Surf Forecasting: Part I

13:26 20th June 2007

Old fishermen used to say it had to do with the moon, the wind, the lay of the seaweed on a low-tide rock, but their surf forecasts weren’t always that accurate. These days we’re way ahead of ourselves, and this is – scientifically speaking – how our powers to see the future emerged.

More than ever before, a tremendous amount of scientific effort is being put into predicting the future. Climate modellers, tidal modellers, economics modellers, coastal modellers – all are concerned with describing some phenomenon, natural or otherwise, in enough detail to simulate its behaviour and then predict what it might do in the future. Most of these people know well that their task will never be fulfilled completely – the laws of chaos prevent this – but they nevertheless keep on trying. But still, imperfect predictions are useful to us, and the less imperfect they are, the more useful they are.

And now, more than ever before, we surfers want to know the future. Surf forecasting has become a major part of our lives, especially since the internet has made available to us an entire universe of information. Nowadays, any surfer can access a swell-forecasting website that will tell them where to go, what board to take, which wetsuit and what stickers to wear. No brain required.

However, aside from the increase in available forecast information for the ‘masses’ (which has obviously improved things tremendously), can we really make more accurate forecasts than we could before? Can we be any more precise about swell arrival time, swell quality, changes in local
winds or how long the swell will last? Or are we still missing something? How far has wave forecasting come, how much more useful is it than it was before, and how much more useful will it be in the future?

To begin to answer those questions, I decided to have a look at the history of wave forecasting right from the very beginning, and then assess where it is at right now, and where we’re going with it in the future.

In this, the first part, we start by looking at our own particular history from a surfer’s point of view, together with the almost-parallel scientific development of wave-forecasting models. Then we look at how the two finally merged together with the advent of the internet.

EARLY DAYS OF SURF FORECASTING

Until a very short time ago, surf forecasting was shrouded in mystique – almost a ‘dark art’, certainly a murky science. It was best left to a handful of gurus possessed of a special power: the power to interpret the isobaric chart. Before the internet came along and we were suddenly, almost overnight, given access to unimaginable amounts of information, wave forecasting was little more than blind guesswork. This was true even for those who could boast some knowledge of physics or oceanography.

For much of the time the only resource available was the isobaric analysis chart. This is a daily analysis of the atmospheric pressure at sea level. In those days, we normally had to rely on a chart that was published in the newspaper or shown on the television. The chart is compiled from a series of atmospheric pressure values measured by a network of buoys and ships.

These measurements are then interpolated to provide an estimate of the pressure ‘field’ over the expanse of the entire ocean, in the form of contour lines of equal pressure called isobars (iso = equal, bar = pressure). The result is a ‘snapshot’ of the situation at some time in the recent past. It is not a forecast.

A few simple rules enabled us to use the isobaric chart to make a reasonable estimate of the surf conditions in the near future. From the orientation
and number of isobars we could easily identify the current position of a storm over the ocean, together with its all-important fetch (the area containing winds that generate the swell). With some experience of the likely size of swell produced by a particular fetch, how fast that swell travels and how it might be affected by the continental shelf, we would then make a guess as to the height and arrival time of the swell. Don’t forget, this might be at some beach thousands of miles away from the storm itself. Then, with some previous knowledge as to the most likely trajectory of the storms
in that particular ocean at that time of year, we would make a guess at the local wind conditions expected on that beach. As you can see, that’s a lot of guesswork and extrapolation based on very little initial information. Even for the most experienced and talented wave-forecasting shaman, it was still very difficult to predict the state of affairs at the end of a long and uncertain string of physical processes connected to just one snapshot of a single parameter. There were just too many variables.

Instead of relying on pure instinct and experience, what we really needed was access to the ‘automatic’ forecasting being done by the real experts: the meteorological research centres such as those under the National Oceanic and Atmospheric Administration (NOAA), the European Centre for Medium-Range Weather Forecasts (ECMWF) or the UK Meteorological Office. These centres were already making predictions using the most powerful supercomputers in the world, together with a great deal of clever mathematics.

At first, this information was scarcely available to the general public. To obtain predictions of local winds around the coast, or the movement of storms over the ocean, we relied on radio forecasts giving a ‘general synopsis’ and ‘area forecasts’, or perhaps television forecasts which, if we were lucky, gave isobaric charts for a few days ahead. One could also obtain isobaric forecast charts in the form of a fax, usually by subscribing to a special (paid) marine service. And later, a simplified ‘significant wave height’ forecast chart also started to become available by marine fax. These were generated by one of the many predecessors to the computer programmes that produce the wave forecasts we all use today.

As marine faxes and other similar facilities gradually improved and became more widely available, along with the (initially incipient) growth of the internet, a few people began to realise that surfers would benefit greatly from this ‘expert’ information. For a number of years, paid subscription telephone and fax services were available whereby the caller could receive ready-interpreted swell-forecast information, often together with
real-time reports from a wide range of beaches. The ‘forecaster’ would typically spend time every morning assessing the charts and making a best guess at the coming surf conditions, which would then be recorded for the clients to listen to, or sent to the clients via fax.

MEANWHILE, BACK AT THE LAB …

The idea of using mathematical equations to generate predictions for waves started way back in the 1940s. This was when swell-prediction techniques were first developed to help military landing operations in the Second World War. (Sadly, most of the motivation and funding for wave-model research is still military-based.) The first methods for automatic wave prediction were not fully based on the real physics behind wave generation.

Instead, they were what could be termed semi-theoretical and semi-empirical. This means that, although the method was based on mathematical relationships between the wind and the waves, the relationships contained ‘constants’ – numbers whose values could only be ascertained by feeding actual measured data into the equations.

By the early 1950s, a set of equations had been developed by Harald Sverdrup, Walter Munk and Charles Bretschneider, relating wave height and period to the three factors of windspeed, fetch and duration. This was subsequently referred to as the SMB method, after its inventors. The height and
period predicted by the SMB method were just single values, called the significant wave height and the significant wave period. These were designed to correspond with those values estimated by an ‘experienced’ observer by eye, from aboard a ship. As I have mentioned many times before, the famous significant wave height, also called ‘H1/3’, is calculated by averaging the highest one-third of all the waves observed at a point over
some time period.
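To make that definition concrete, here’s a minimal sketch in Python, using a handful of made-up wave heights:

# Significant wave height (H1/3): the average of the highest one-third of waves.
# The wave heights below are invented purely for illustration.
wave_heights = [0.8, 1.2, 0.9, 2.1, 1.5, 1.1, 2.4, 1.0, 1.8, 1.3, 0.7, 1.6]  # metres

ranked = sorted(wave_heights, reverse=True)      # biggest waves first
top_third = ranked[:len(ranked) // 3]            # the highest one-third of the record
h_one_third = sum(top_third) / len(top_third)    # average them

print(f"H1/3 = {h_one_third:.2f} m from a record of {len(wave_heights)} waves")

An ‘experienced observer’ looking at that same stretch of sea would, in theory, report roughly the same number by eye.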

The method assumed, right from the beginning, that the entire sea state could be approximated as a single ‘significant wave’ which grew, decayed and propagated away from the storm centre as a single entity. This was, of course, highly unrealistic. However, it was the best they could do at the time. The height and period of the ‘significant wave’ were dependent upon the fetch size (bigger fetches produce bigger waves), the duration (the longer the wind blows over the same stretch of sea, the bigger the wave) and the windspeed itself (the stronger the wind, the bigger the wave).

This last factor, the windspeed, had a quadratic relationship to the wave height rather than a linear one. This meant that doubling the windspeed would make the wave four times as big. Of course, waves cannot keep growing forever. So, automatically built into Sverdrup and colleagues’ formulae were
limiting states whereby the wave growth reached a saturation point. These were termed fetch-limited, duration-limited, and fully-arisen sea, the wave growth being limited in each case by the fetch, the duration or the windspeed.

Note that the SMB method is still being used nowadays, particularly by coastal engineers who want a quick idea of wave heights without running a complicated computer model. For example, the US Army Corps of Engineers publish a graph called a nomogram, from which one can quickly look up the height and period of the waves given the windspeed, fetch and duration.
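As a rough idea of the kind of calculation such a nomogram encapsulates, here’s a small sketch in Python. The tanh-style formulae follow one commonly reproduced form of the deep-water SMB relations; treat the coefficients as illustrative rather than definitive, and note that duration-limiting is ignored for simplicity.

import math

G = 9.81  # gravitational acceleration (m/s^2)

def smb_deepwater_estimate(wind_speed, fetch):
    """Rough deep-water wave estimate in the spirit of the SMB curves.

    wind_speed in m/s, fetch in metres; returns (Hs in metres, Ts in seconds).
    Coefficients are one commonly quoted version and are illustrative only.
    """
    f_hat = G * fetch / wind_speed ** 2                                   # dimensionless fetch
    hs = 0.283 * wind_speed ** 2 / G * math.tanh(0.0125 * f_hat ** 0.42)  # significant height
    ts = 7.54 * wind_speed / G * math.tanh(0.077 * f_hat ** 0.25)         # significant period
    return hs, ts

# Example: a gale of about 20 m/s blowing over a 500 km fetch.
hs, ts = smb_deepwater_estimate(20.0, 500_000.0)
print(f"Significant wave height ~ {hs:.1f} m, significant period ~ {ts:.1f} s")

For those numbers the sketch returns a significant height of roughly six to seven metres and a period of around ten seconds – the same general answer you would read off a nomogram. It also shows the saturation built into the formulae: make the fetch enormous and the tanh terms level off at a ‘fully-arisen’ limit set by the windspeed alone.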

The SMB method suffered from two major restrictions, both of which would be addressed in the decades following its invention. The first was that it was not based on a proper physical understanding of how waves are generated; and the second was that it used a single entity called the ‘significant wave’ to represent what is, in reality, an entire spectrum of wave heights, periods and directions.

By around the mid-1950s, work was already underway by J. Miles and O.M. Phillips to investigate how waves were really generated on the surface of the ocean, in an exact way that was not empirical or semi-empirical. They came up with the Miles-Phillips theory, which is still used nowadays. The theory describes how very small waves, called capillary waves, first begin to grow from an entirely flat sea. Then it describes how larger waves (called gravity waves) are subsequently formed from a sea already containing capillary waves. The capillary waves are generated by vertical perturbations in the surface wind, causing irregularities in the water surface. This then increases the surface roughness which, in turn, enables any further action of the wind to ‘grip’ the water surface, lifting it up even more.

The second mechanism is self-perpetuating – the rougher the surface, the more ‘grip’ and, therefore, the easier it is for a given wind to increase the height of the waves. The first mechanism causes the waves to grow linearly with time, but the second mechanism causes them to grow exponentially. Of course, eventually a point will be reached where a particular windspeed can’t lift up the surface of the sea any more – the force of gravity pulls the water back down again at the same rate as the wind lifts it up.
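In its simplest textbook form, the combined effect of the two mechanisms is often summarised by a growth equation for the wave energy E at a given frequency – a schematic shorthand rather than the full theory:

dE/dt = α + βE

Here α stands for the Phillips mechanism (a steady trickle of energy from pressure fluctuations in the wind) and βE for the Miles mechanism (a feedback that strengthens as the waves grow). Starting from a flat sea, the solution is E(t) = (α/β)(e^(βt) − 1), which grows linearly at first (roughly αt) and exponentially later, exactly the behaviour described above, until the dissipation processes discussed next bring the growth to a halt.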

Apart from gravity itself, there are other physical mechanisms that work to reduce the height of the waves. One of these is the friction generated by the molecules under each wave interacting with other water molecules and with those on the seabed. Another mechanism is whitecapping, where the waves momentarily break in deep water, giving up a lot of energy to turbulence and sound.

Yet further mechanisms exist relating to the transfer of energy between the waves themselves. This is a curious concept, which involves a continual flux of energy from the shorter-period waves towards the longer ones. As the sea state grows, energy is diverted more and more away from the shorter-period waves and towards the longer ones, the short ones effectively being ‘gobbled up’ by the long ones, and the sea state becoming increasingly more dominated by long-period waves. This explains why, in a growing sea, the significant wave height not only gets bigger, but the significant period also gets longer.

(For those with a knowledge of physics, this ‘non-linear transfer’ can be explained by an input of energy at the high-frequency end of the spectrum, which gradually shifts through to the low-frequency end where it piles up, progressively skewing or ‘red-shifting’ the spectrum towards the low-frequency end).
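(To put a very rough number on the effect: in fetch-limited measurements the peak frequency of the spectrum is commonly found to fall off roughly as the inverse cube root of the fetch, so a sea that has grown over four times the fetch ends up with a peak period around 60 per cent longer. The figures are only illustrative, but they give a feel for how strongly the spectrum shifts.)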

So, after getting into the nitty-gritty of how waves are actually generated, scientists were now ready to write down, in mathematical terms, how wave energy on the sea surface grew and changed as the wind blew across that surface. Around the early 1960s, an equation was born which was to become the cornerstone behind all wave models developed thereafter. This was the Action Balance Equation, sometimes called the Radiative Transfer
Equation. It represented a major advance on earlier methods for three important reasons:

• It was a ‘dynamical’ equation, which meant it described the evolution of the sea state rather than predicting the sea in some ‘final’ state after being worked on by the wind.

• It was more closely related to the real underlying physics of wave growth and decay.

• It was a two-dimensional spectral equation, describing the wave energy evolution not just at a single frequency and direction, but over a whole range of both.

Here is a highly simplified schematic representation of the Action Balance Equation:
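In words, it boils down to a balance along these lines:

[rate of change of wave energy] = [wind input] + [non-linear transfer] - [friction]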

Each term in the equation has quite a complicated story behind it. I’ll briefly describe what each one means in simple terms. The box on the far left is the rate of change of wave energy with time, at a point on the ocean, over a number of different directions and frequencies. The boxes on the right-hand side of the equals sign are the ingredients that go to make up the one on the left. These are known as source terms, some of which put energy into the waves, and some of which take it out (there’s a simple numerical sketch of how they fit together just after the list below).

• ‘Wind input’ is the energy transfer between the air and the water due to the wind blowing over the surface of the water (this is where all the formulae from the Miles-Phillips mechanism go).

• ‘Friction’ is the energy taken out of the water due to processes like molecular friction, whitecapping and wind blowing in the opposite direction to the waves.

• ‘Non-linear transfer’ is that transfer of energy between waves of different frequencies (the ‘gobbling-up’ of short waves by long waves).
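Putting those three source terms together, here’s a deliberately crude, single-point sketch in Python of the bookkeeping a wave model performs. Every number in it is invented purely for illustration – it is not the WAM or any other real model, which computes each term from the proper physics at every grid point over the whole ocean, for many frequencies and directions.

import numpy as np

# A toy, single-point "energy balance" time-stepper. All constants are invented
# for illustration; this is not a real wave model.
freqs = np.linspace(0.05, 0.5, 20)      # wave frequencies (Hz)
E = np.full_like(freqs, 1e-3)           # near-flat initial sea (arbitrary energy units)
U = 15.0                                # assumed constant wind speed (m/s)
fp = 0.13 * 9.81 / U                    # rough peak frequency of the wind sea
dt, n_steps = 0.1, 500                  # dimensionless time step and number of steps

for _ in range(n_steps):
    # Wind input: strongest near the wind-sea peak, with a small constant part
    # (Phillips-like) plus a part proportional to E (Miles-like).
    S_wind = 0.02 * np.exp(-((freqs - fp) / 0.1) ** 2) * (0.01 + E)
    # Friction/whitecapping: a sink that grows steeply as the sea gets bigger.
    S_fric = 0.05 * E ** 2
    # Non-linear transfer: crudely shift a little energy to the next lower
    # frequency bin each step (long waves "gobbling up" the short ones).
    flux = 0.02 * E
    S_nl = np.zeros_like(E)
    S_nl[:-1] += flux[1:]
    S_nl -= flux
    E = np.clip(E + dt * (S_wind + S_nl - S_fric), 0.0, None)

m0 = np.trapz(E, freqs)                 # total energy (zeroth spectral moment)
print(f"Toy significant wave height, 4*sqrt(m0): {4 * np.sqrt(m0):.2f} (arbitrary units)")

Crude as it is, it shows the basic loop: at each time step, add what the wind puts in, shuffle some energy towards the longer periods, and subtract what friction and whitecapping take out.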

By about 1971, a few models had been developed using this equation. These were termed ‘first-generation’ wave-forecasting models. Although they worked, it was obvious that more research was needed into the mathematics behind the source terms and their relative importance. There was also the fact that computing power was a big restricting factor in those days. For example, on the biggest supercomputers of the day, the models
could not be run with the ‘non-linear-transfer’ term included, otherwise the time taken to generate the simulation would far exceed real time. (Wouldn’t it be absurd if tomorrow’s forecast were not available until next week?)

Ironically, it was soon to be found that the non-linear transfer was much more important than people first realised, and could not be ignored if the models were to work properly.

This was confirmed in 1973, with the results of a landmark study called the Joint North Sea Wave Experiment (JONSWAP), undertaken by a large group of scientists headed by Klaus Hasselmann, who went on to become the founding director of the Max Planck Institute for Meteorology in Hamburg. Hasselmann had already published some fundamental work a decade earlier on the mathematical theory of non-linear transfer. The results showed that the non-linear transfer of energy between waves of shorter and longer periods was much more important than previously thought. In fact, the ‘non-linear’ term in the Action Balance Equation was now acknowledged to be the most important term.

The resounding success of JONSWAP led to the development of ‘second-generation’ wave models. These employed progressively more accurate ways of representing the ‘non-linear’ term without computing it exactly (computers still weren’t powerful enough to do this yet). With the help of the
results from JONSWAP, scientists were able to estimate how the wave spectrum changed as the sea state grew, with the non-linear effects automatically being taken into account. These models, although clearly much better than the first-generation ones, were still not very accurate when it came to predicting swell propagation away from the storm centre, nor were they very good with rapidly changing wind regimes such as those found in hurricanes.

Therefore, in 1985, the Wave Model Development and Implementation (WAMDI) group was set up, headed by Professor Hasselmann and containing over 70 of the world’s top oceanographers, mathematicians and programmers. The group’s task was to develop a ‘third-generation’ wave model, which could accurately compute all the terms in the Action Balance Equation, including the all-important non-linear term. The project culminated in a famous paper published in 1988 in the Journal of Physical Oceanography: ‘The WAM model – a third generation ocean wave prediction model’. The paper, with its 13 authors, described the definitive wave model – the WAM – which was subsequently to be used by millions of people all over the world.

The development of wave models has suffered greatly from the same syndrome that must have frustrated Leonardo da Vinci and other great minds stifled by their times. All the way through, as our understanding of wave generation and propagation gradually improved and more accurate representations of the processes could be devised, those representations couldn’t be put into practice until the appropriate technology became available.

BOTH HISTORIES COME TOGETHER

It was around the early to mid-1990s when our own story of swell forecasting rendezvoused with the scientific development of wave modelling. Simply – the internet started to become widely accessible to the general public. We were suddenly handed on a plate an immense array of new resources. For example, the US Navy’s Fleet Numerical Meteorology and Oceanography Center (FNMOC), which implemented a global version of the WAM in 1994, started making wave-height contour charts available on the internet; and several meteorological institutes started publishing forecast isobaric charts, usually up to a maximum of about four days ahead.

Instead of trying to work out whether a rumoured low pressure would deepen and send swell, we could now not only see exactly how much swell it was generating and in what direction it was travelling, but we could also have some idea about the local winds once that swell arrived. It seemed as if we’d been fumbling around in a dark room all that time, and someone suddenly turned on the light.

And that’s where we leave it for now. In Part II we’ll be looking at wave forecasting up to the present day, and what the future has in store for us. We’ll be trying to figure out whether we really are better at forecasting surf now than we were back in the isobaric-chart days, and what the implications might be as surf forecasting ‘for the masses’ becomes ever more widely available. I predict you’ll get some good surf before then, but
don’t blame me if not!
