Two states that were imperceptibly different could evolve to two considerably different states. Any error in the observation of the present state – and in a real system, this appears to be inevitable – may render an acceptable prediction of the state in the distant future impossible.
THE REVENGE OF THE GRASSHOPPER
When Lorenz explained his findings to a colleague, he received the reply: ‘Edward, if your theory is correct, one flap of a seagull’s wings could alter the course of history forever.’
The seagull would eventually be replaced by the now famous butterfly when Lorenz presented his findings in 1972 at a meeting of the American Association for the Advancement of Science in a paper entitled: ‘Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?’
Curiously, both the seagull and the butterfly might have been pre-empted by the grasshopper. It seems that already in 1898 Professor W. S. Franklin had realized the devastating effect that the insect community could have on the weather. Writing in a book review, he wrote:
An infinitesimal cause may produce a finite effect. Long-range detailed weather prediction is therefore impossible, and the only detailed prediction which is possible is the inference of the ultimate trend and character of a storm from observations of its early stages; and the accuracy of this prediction is subject to the condition that the flight of a grasshopper in Montana may turn a storm aside from Philadelphia to New York!
This is an extraordinary position to be in. The equations that science has discovered give me a completely deterministic description of the evolution of many dynamical systems like the weather. And yet in many cases I am denied access to the predictions that they might make because any measurement of the location or wind speed of a particle is inevitably going to be an approximation to the true conditions.
This is why the Met Office, when it is making weather predictions, takes the data recorded by the weather stations dotted across the country and then, instead of running the equations just once on this data, does several thousand runs, varying the data over a range of values. The predictions stay close for a while, but by about five days into the future the results have often diverged so wildly that one set of data predicts a heatwave hitting the UK while a few changes in the decimal places of the data result in rain drenching the country.
Starting from nearly the same conditions, forecast A predicts strong wind and rain over the British Isles in 4 days’ time, while forecast B predicts incoming high pressure from the Atlantic.
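This ensemble approach is easy to play with on a computer. Here is a minimal sketch of the idea in Python: take a single deterministic update rule, nudge the measured starting value by tiny random amounts, and watch how far apart the runs drift. The update rule (the simple feedback equation that appears later in this chapter, run in its chaotic regime), the size of the perturbations and the number of runs are all illustrative assumptions; this is a toy, not the Met Office's actual model.

    import random

    def step(y, r=4.0):
        # One time-step of the toy 'weather model': the feedback rule
        # r * y * (1 - y), with r chosen in its chaotic regime.
        return r * y * (1 - y)

    def run(y0, steps=30):
        # Iterate the model from the starting value y0.
        history = [y0]
        for _ in range(steps):
            history.append(step(history[-1]))
        return history

    measured = 0.512                                    # the single 'observed' value
    ensemble = [measured + random.uniform(-1e-4, 1e-4)  # tiny measurement errors
                for _ in range(5)]
    runs = [run(y0) for y0 in ensemble]

    for t in (5, 10, 20, 30):
        values = [history[t] for history in runs]
        print(f"step {t:2d}: spread across ensemble = {max(values) - min(values):.4f}")

Run it a few times and the pattern is always the same: for the first few steps the five runs stay close together, but by step 20 or 30 they disagree about almost everything.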
The great Scottish scientist James Clerk Maxwell articulated the important difference between a system being deterministic yet unknowable in his book Matter and Motion, published in 1877: ‘There is a maxim which is often quoted, that “The same causes will always produce the same effects.”’ This is certainly true of a mathematical equation describing a dynamical system. Feed the same numbers into the equation and you won’t get any surprises. But Maxwell continues: ‘There is another maxim which must not be confounded with this, which asserts that “Like causes produce like effects.” This is only true when small variations in the initial circumstances produce only small variations in the final state of the system.’ It is this maxim that the discovery of chaos theory in the twentieth century revealed as false.
This sensitivity to small changes in initial conditions has the potential to sabotage my attempts to use the equations I’ve written down to predict the outcome of my dice. I’ve got the equations, but can I really be sure that I’ve accurately recorded the angle at which the cube leaves my hand, the speed at which it is spinning, the distance to the table?
Of course, everything isn't completely hopeless. There are times when small changes don't alter the course of the equations dramatically, like the paths on the classical billiard table. What is important is to know when you cannot know. A beautiful example of knowing the point beyond which you can't know what is going to happen next was discovered by the mathematician Robert May when he analysed the equations for population growth.
KNOWING WHEN YOU CAN’T KNOW
Born in Australia in 1936, May had originally trained as a physicist working on superconductivity. But his academic work took a dramatic turn when he was exposed in the late 1960s to the newly formed movement for social responsibility in science. His attention shifted from the behaviour of collections of electrons to the more pressing questions of the behaviour of population dynamics in animals. Biology at the time was not a natural environment for the mathematically minded, but following May's work that would all change. It was the fusion of the hardcore mathematical training he'd received as a physicist with a new sensibility to biological issues that led to his great breakthrough.
In a paper in Nature called ‘Simple Mathematical Models with Very Complicated Dynamics’, published in 1976, May explored the dynamics of a mathematical equation describing population growth from one season to the next. He revealed how even a quite innocent equation can produce extraordinarily complex behaviour in the numbers. His equation for population dynamics wasn’t some complicated differential equation but a simple discrete feedback equation that anyone with a calculator can explore.
Feedback equation for population dynamics: the fraction of the maximum population in the next season is r × Y × (1 – Y), where Y is the fraction in the current season and r the reproduction rate.
Suppose I consider an animal population whose numbers can vary between 0 and some hypothetical maximum value that I will call N. Given some fraction Y (lying between 0 and 1) of that maximum, the equation determines the fraction of the maximum population that will be left in the next season, once reproduction and competition for food have taken their toll. Let's suppose that each season the reproduction rate is given by a number r. So if the fraction of the maximum population that survived to the end of the season was Y, the next generation swells to r × Y × N.
But not all of these new animals will survive. The equation determines that the fraction that will not survive is also given by Y. So out of the r × Y × N animals that start the season, Y × (r × Y × N) die. So the total left at the end of the season is (r × Y × N) – (r × Y² × N) = [r × Y × (1 – Y)] × N, which means that the fraction of the maximum population that survives to the end of this new season is r × Y × (1 – Y).
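To make the algebra concrete, here is the same bookkeeping with illustrative numbers (the particular values of N, r and Y are assumptions chosen purely for the example), written as a few lines of Python:

    N, r, Y = 1000, 2.0, 0.3
    starting  = r * Y * N          # 600 animals start the season
    deaths    = Y * starting       # 180 of them do not survive
    survivors = starting - deaths  # 420 animals left at the end
    new_Y     = r * Y * (1 - Y)    # 0.42, and 0.42 * N is indeed 420

So a population that began the cycle at 30% of its maximum ends it at 42%.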
Essentially the model assumes that at the end of each season the surviving population gets multiplied by a constant factor, called r, the reproduction rate, to produce the number of animals at the beginning of the next season. But there aren't enough resources for them all to survive. The equation then calculates how many of these animals will make it to the end of the season. The resulting number of animals that survive then gets multiplied by the factor r again for the next generation. The fascinating property of this equation is that its behaviour really depends only on the choice of r, the reproduction rate. Some choices of r lead to extremely predictable behaviours. I can know exactly how the numbers will evolve. But there is a threshold beyond which I lose control. Knowledge is no longer within reach because the addition of one extra animal into the mix can result in dramatically different population dynamics.
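Anyone with a computer can watch this threshold appear. The following minimal sketch in Python simply replaces Y by r × Y × (1 – Y) again and again for a few values of r; the starting fraction 0.3, the number of seasons and the particular choices of r are illustrative assumptions, not values taken from May's paper.

    def next_fraction(y, r):
        # May's feedback equation: fraction of the maximum population next season.
        return r * y * (1 - y)

    def fraction_after(seasons, r, y0=0.3):
        # Iterate the equation for the given number of seasons.
        y = y0
        for _ in range(seasons):
            y = next_fraction(y, r)
        return y

    for r in (2.0, 3.2, 3.9):
        late_seasons = [fraction_after(n, r) for n in (50, 51, 52, 53)]
        print(f"r = {r}:", ", ".join(f"{y:.3f}" for y in late_seasons))

With r = 2.0 the printed fractions settle on a single value; with r = 3.2 they flip back and forth between two values; with r = 3.9 they never settle into any pattern at all, and nudging the starting fraction by a tiny amount changes them completely.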
For example, May discovered that if r lies between 1 and 3 then the population eventually stabilizes. In this case it doesn’t