Bird Senses. Graham R. Martin

      What eyes do

      The crucial property of the first camera eyes, and indeed of all eyes since, is that they were able to determine the position of a light source relative to the animal. They were more than simple light detectors; they also had the capacity for spatial vision. Spatial vision provides information on where objects are relative to the observer. What’s more, it can do this more or less instantaneously and continuously.

      This might seem an obvious attribute of vision, but it is not true of any other sense, nor was it an attribute of the very first eyes. Not until camera eyes evolved was it possible to obtain accurate information about the positions of objects within a large part of the environment in which an animal sits. Furthermore, most camera eyes can do this over a range of light levels, although as light levels fall the accuracy of spatial vision usually decreases. Being able to function over a range of light levels is an important attribute of vision because in natural environments light levels change both constantly and dramatically. In open habitats at all latitudes except those close to the poles, ambient light levels change many million-fold over the daily cycle, from noontide sunlight to starlight. Therefore, a key aspect of an animal’s eye is not only how much spatial detail it can detect but also over how much of the daily light cycle it can provide useful spatial information.
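      To put rough numbers on that range (the illuminance values below are typical figures from the vision literature, not taken from this book, so treat them only as an illustrative assumption): full sunlight delivers on the order of 100,000 lux, while a clear, moonless, starlit night delivers roughly 0.001 lux, giving

\[
\frac{E_{\text{sunlight}}}{E_{\text{starlight}}} \approx \frac{10^{5}\ \text{lux}}{10^{-3}\ \text{lux}} = 10^{8},
\]

that is, a change of around a hundred million-fold between midday and starlight, comfortably ‘many million-fold’.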

      Functioning over the full range of naturally occurring light levels is difficult. Some eyes have evolved to provide spatial information over a wide range of light levels, but many have evolved to function primarily within a relatively narrow range, typically the levels experienced during daytime (dawn to dusk) or night-time (dusk to dawn). Even within these periods light levels are not static and can change many thousand-fold.

      Colour vision is primarily an elaboration of spatial vision. Colour vision is often thought of as something rather different or special, something that is additional to ‘simple’ spatial vision – perhaps regarded as simple because it can be achieved in what appears to be a less sophisticated world of black and white. However, colour vision has value because it enhances the extraction of spatial detail by using differences in how light of different wavelengths is reflected from different surfaces.

      Lit by sunlight, a ‘blue’ surface reflects light only within a relatively narrow range of the wavelengths that fall upon it; the surface absorbs light from the rest of the spectrum. A ‘red’ surface reflects light from a different, longer-wavelength part of the spectrum and absorbs the rest. However, the colour vision mechanism that determines which part of the spectrum light comes from is rather wasteful of photons, because the visual system must make elaborate comparisons between the light reflected from different surfaces. The consequence is that the ability to detect contrast between patterns is always lower for coloured than for black and white patterns. Faced with the task of detecting contrast in a grating (of the kind discussed in the ‘Measuring senses’ section of Chapter 2), or of resolving the finest stripe width that can be reliably detected, performance with stripes of different colours is always inferior to performance with black and white patterns.
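      As a supplement for readers who want the conventional quantitative definition of ‘contrast’ in such a grating (the definition is not given in this passage, so it is offered here only as standard background): the contrast of a black and white grating is usually expressed as the Michelson contrast, computed from the luminances of its brightest and darkest stripes,

\[
C = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}}, \qquad 0 \le C \le 1.
\]

A black and white grating can approach C = 1, whereas stripes that differ chiefly in colour rather than in luminance must be distinguished through the comparisons between spectral channels described above, and contrast detection and resolution measured that way are always poorer.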

      A lack of colour vision at low, night-time light levels occurs in most vertebrates. It is not because there is no colour information potentially available in the environment. The lack of colour is a property of the visual system, not of the environment. At night, photons are relatively scarce, and seeing anything at all necessitates making maximum use of whatever photons are available. Having a mechanism that detects the part of the spectrum that photons come from is too wasteful of light to have general utility compared with the advantage of simply being able to detect that something is actually present. The stimuli for colour vision are present in the environment at night as much as they are during the day, but vision does not make use of them. Colour vision is a bonus of high light levels.

      This simple observation tells us that colour is not a property of the world but a property of the visual systems that extract information from the world. Light itself is not coloured. Colour is an attribute added by visual systems. This observation was first made by Isaac Newton in his Opticks (published in 1704) and captured in the famous phrase ‘The Rays, to speak properly, are not coloured’. It was based initially upon his observations of how white light can be broken up into prismatic colours. Newton elaborated this key idea by further experiments on many aspects of human vision. The implications of this observation have been investigated and discussed from both scientific and philosophical viewpoints ever since. Humans have projected many aesthetic properties onto ‘colour’, and this has given philosophers a rich theme for speculation and theorising. It is essential, however, to be aware that colour vision is an elaboration of the mechanisms that extract spatial detail from the environment.

      Sources of variation in vision

      We are familiar with the idea that there are many different designs of bird wings and bills. Ornithology textbooks show arrays of different wing and bill types along with discussion of their different attributes and functions. Such discussions make it clear that there is not a single optimal wing or bill type. A single wing type cannot fly at all velocities, or support birds of different weights, or carry out all kinds of manoeuvres. A single bill cannot be an all-purpose tool for extracting and handling many types of food. What is optimal depends upon the task. This also applies to the senses of birds, especially vision.

      It is relatively easy to understand the structural bases for the different types of performance of birds’ wings or bills. For example, there may be obvious differences in the relative lengths and flexibility of bones, or in the number, hardness, and relative lengths of feathers. Bills also differ in their length, shape, and flexibility. It is not so immediately obvious when looking at an eye how variations in vision can arise. How is it possible for one bird to have quite different visual capacities from another?

       Camera eyes

      The camera type of eye is found in all vertebrates and in some invertebrates (octopuses, squids). A camera eye is also referred to as a ‘simple’ eye, and this label is not without good reason. Compared with the complexity of the multiple repeated structures which are found in the compound eyes of most invertebrates, camera eyes are structurally and conceptually simple. The important point, however, is that within this simplicity of basic design there is great potential for variation in each of the key components. Both gross and subtle variations in these components can profoundly alter the vision of an animal, and hence change the information that different eyes can extract from the same scene.

      The basic structure of a camera eye has just two key functional components, an image-producing system and an image-analysing system (Figure 3.2). Not only can these components show much variation, they can also vary in their characteristics independently of each other. The image in the eye of one species will be different to that of another, as will the ways in which these images are analysed. Furthermore, an animal’s two eyes can be placed in different positions in the skull with respect to each other. This alters the region about the head from which visual information can be retrieved at any one instant and can profoundly influence what an animal can detect in the world that surrounds it.

      These two main functional components of camera eyes are conceptually simple. The optical system produces an image of the world outside the eye, and the analysis system extracts information from that image. These two functional components can be matched in a straightforward manner to the main anatomical parts of an eye (Figure 3.2). Indeed, they can be matched to the key components of any camera, from the camera in your phone to a sophisticated video, single-lens reflex, or plate camera.

      The optical system of a camera eye consists of the lens and the cornea. The initial extraction of visual information is carried out by the retina onto which the optical system projects an image of the world. The retina is a very thin structure of immense complexity, made up of layers of specialised
