Bird Senses. Graham R. Martin

neural cells. These include the layer containing the photoreceptor cells, and it is these which detect the pattern of light within the image. The neural cells of the retina are anatomically part of the brain, to which each eye is connected via the optic nerve.

      Although the retina shows immense complexity – indeed, it is composed of many millions of cells – there is only a small number of different cell types. This applies especially to the photoreceptors, whose types are discussed in more detail later in this chapter. The important point to note is that significant differences in vision arise from the ways in which the different photoreceptor types are packed together and arranged across the retinas of different species.

      This variation in packing and arranging high numbers of receptor cells of just a few types is not unique to vision. It is what underpins variation in other senses too. This will be discussed in later chapters showing, for example, how variation in touch sensitivity, taste, and smell arises. Each of these senses is based on a relatively small number of receptor types, but marked differences in sensory capacity occur because of the relative numbers of those receptors, and how they are arranged, in different species. Ultimately, these variations are what underpin the sensory ecology of different species.

       Sources of variation in camera eyes

      These functional components of an eye can be matched to the two main functional parts found in all human-made imaging systems. From large astronomical telescopes to the small cameras built into mobile phones, these systems all have one part that produces the image and another that analyses it, and it is clear that the properties of these two components differ greatly.

      Even within the cameras of mobile phones, properties can be varied to give images that differ markedly in the information they provide. These differences in information capacity result primarily from three fundamental attributes: the degree of detail that can be detected, the extent of the world that is available for analysis, and the range of light levels over which the camera will operate. These will be familiar and important to keen photographers, but even manufacturers of mobile phones draw attention to these features in their marketing materials.
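
      As a rough illustration drawn from general optics rather than from the text, the first of these attributes, the finest detail that can be detected, is ultimately limited by the diffraction of light at the aperture that admits it. The smallest angle that can be resolved is approximately

      θmin ≈ 1.22 λ / D,

      where λ is the wavelength of the light and D is the diameter of the aperture. Taking λ ≈ 550 nm (green light) and an aperture of about 2 mm, of the order of a phone camera lens, gives a limit of roughly 0.0003 radians, a little over one minute of arc; an astronomical telescope, with an aperture thousands of times wider, can in principle resolve correspondingly finer detail. These figures are offered only as an indication of scale, not as values taken from this book.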

      Comparing a mobile phone camera with an astronomical telescope is straightforward. Both are doing essentially the same thing in the same way, but the levels of information they provide are phenomenally different. However, neither one can do the other’s job. The essential point is that the same consideration applies to eyes. They have evolved in different species to provide information for the conduct of different tasks and in different environments. Differences between their eyes are the result of relatively fine tuning of both image production and image analysis, similar to the fine tuning of components that underlies differences between phone cameras.

      That people are willing to invest time and money in choosing between phone cameras indicates that differences in the information extracted by cameras are functionally significant. That eyes can differ markedly in all of these attributes suggests that, if we were able to choose between different types of eyes, rather than having those we are born with, we might spend a lot of time coming to a decision.

      The optical systems of camera eyes

      The optical system is composed of two main elements (Figure 3.2). The cornea is the relatively simple curved surface at the front of the eye. In eyes that operate primarily in air, the cornea is essentially a boundary between air and the fluid-filled chamber of the eye. The radius of curvature of the cornea is the key to its image-forming properties. A more highly curved surface produces a smaller image than a shallowly curved surface. The lens is suspended in the fluids that fill the chambers of the eye. It is also relatively simple. Like a magnifying glass, it has two convex surfaces, but these can vary in how curved they are. Unlike a magnifying glass, the interior of the lens is not uniform but is made up of a complex structure of transparent layers of different densities. The optical function of the lens is primarily concerned with making relatively fine adjustments that focus the image, already formed by the cornea, onto the retina.
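
      To make the point about corneal curvature concrete, here is a rough sketch from standard optics rather than from the original text. The refractive power of a single curved surface separating air from the fluids of the eye is approximately

      P = (n′ − n) / r,

      where n ≈ 1.00 is the refractive index of air, n′ ≈ 1.33 that of the eye’s fluids, and r is the radius of curvature of the cornea. The focal length scales as f ≈ n′ / P, and the size of the image of a distant object scales with f. Halving the radius of curvature therefore doubles the power, halves the focal length, and yields an image roughly half the size, which is why a more highly curved cornea gives a smaller image than a shallowly curved one. The index values used here are typical textbook figures, not measurements quoted in this book.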

      FIGURE 3.2 Camera eyes can be divided into two main functional parts: the optical system and the image analysis system. The optical system in bird eyes has two components, the cornea and the lens. These produce a focused image that is projected onto the retina, which is where the first stage of image analysis begins, and from where information is sent via the optic nerve to the brain. The key point is that although these two functional components are joined together in a single eye, they can to some degree evolve independently of each other. The lens and cornea can have many different optical properties depending on their shapes and sizes, and can produce images with different properties (for example, size, brightness, and contrast). The retina can exhibit huge variation in the way receptors are arrayed across its surface. This means that eyes with different image-making properties can evolve, and eyes with different image analysis systems can also evolve. Even eyes which are of the same size and overall shape can have very different properties. Analysis of different eyes has revealed a plethora of subtle differences in both image production and image analysis. This results in the eyes of different species gathering different information about the world in which they sit. This diagram is based upon the tubular-shaped eyes found in owls. (Diagram by Nigel Hawtin, nigelhawtin.com.)

      It can be seen immediately that there is much scope for changing the overall image-forming properties of an eye by virtue of small changes in the absolute size, curvatures, and relative positions of these two optical components.

      Although we cannot know the optical properties of the very first, relatively crude, camera eyes, it is easy to understand that they must have varied in their properties with respect to a number of parameters. Two parameters were key: the brightness of the image (how much light is captured to make the image) and the quality of the image. The precision with which light from a point in the world is brought to a focus in the image determines how faithfully the image reproduces the world that it represents. Surprisingly small variations in optical structure result in marked differences in the way optics represent the world, and in eyes these small optical variations have been rich sources for the operation of natural selection. Selection for subtle differences in optics was the beginning of the process by which eyes have evolved to match the demands of different tasks and different light environments. Today we can identify eyes with marked differences in the brightness and quality of their optical images. Some of these will be discussed in later chapters that look at specific examples of the sensory ecology of birds facing different perceptual challenges.
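
      These two parameters can be given a rough quantitative form, again borrowed from general optics rather than from the text. The brightness of the image of an extended scene scales approximately as

      image brightness ∝ (D / f)²,

      where D is the diameter of the aperture admitting the light and f is the focal length; an eye with a wide aperture relative to its focal length therefore produces a brighter image. Image quality, by contrast, depends on how tightly light from a single point in the world is concentrated into a single point in the image: the wider the blur spot, the less faithfully fine detail is reproduced. In general optics these two properties tend to trade against each other, which is consistent with the idea that eyes suited to dim light and eyes suited to fine detail may differ in their construction.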

       Variation in image properties

      By definition an image is never perfect. It is a simulacrum that always lacks some information about the world. The quality of the image, and hence the information that it contains, usually varies across the image surface. Image quality is usually closer to perfection along, or close to, the optic axis of the lens system. This is the line about which the optical elements of the system are arranged; in camera eyes it is the direction about which the cornea and lens are symmetrically aligned (Figure 3.2).

      Moving away from the optic axis results in an image of progressively poorer quality. It is here, in the more peripheral parts of the image, where obvious distortions and aberrations occur. This is something that is readily apparent in simple hand lenses or in camera lenses at the cheaper end of the market. To correct for these peripheral distortions requires elaborations and refinements of the optical system, hence the high prices asked for camera lenses which maintain high quality across a broad section of the image.

      The image produced by peripheral optics is often masked out in human-made cameras, and not presented for analysis by the film or photodiode array. However, peripheral optics cannot be ignored when trying to understand the visually guided behaviour of many vertebrate
