Imagery and GIS. Kass Green


binary machine code; therefore, each bit location has only two possible values (one or zero, on or off), and radiometric resolution is measured as a power of 2. One-bit data would result in image pixels being either black or white, with no shades of gray possible. The first digital sensors were 6 bit, allowing 64 levels of intensity. More recent sensors such as Landsat 8, Sentinel-2, and WorldView-3 have 11- to 14-bit radiometric resolutions, for a range of 2,048 to 16,384 levels of intensity.
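      Because each additional bit doubles the number of recordable intensity levels, the figures quoted above follow directly from the power-of-2 relationship. Here is a minimal Python sketch that reproduces them (the bit depths are those mentioned in this paragraph):

def intensity_levels(bits: int) -> int:
    """Number of gray levels a sensor with the given bit depth can record."""
    return 2 ** bits

# Bit depths discussed in the text: 1-bit, early 6-bit sensors, and the
# 11- to 14-bit range of Landsat 8, Sentinel-2, and WorldView-3.
for bits in (1, 6, 11, 14):
    print(f"{bits:2d}-bit: {intensity_levels(bits):,} levels")
# Output: 2; 64; 2,048; 16,384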

      The range of electromagnetic energy intensities that a sensor actually detects is termed its dynamic range. Specifically, dynamic range is defined as the ratio of the maximum intensity a device can measure to the lowest intensity level it can discern. It is important to note the difference between radiometric resolution and dynamic range. Radiometric resolution defines the potential range of values a digital remote sensing device can record, whereas dynamic range is calculated from the actual values of a particular image: the span between the lowest detectable level and the brightest capturable level within one image. It is governed by the noise floor (the minimum usable signal) and the saturation, or overflow, level of the sensor cell.
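      As a rough illustration of the ratio definition above, the sketch below estimates the dynamic range of a single image held in a NumPy array. Treating the smallest nonzero pixel value as the noise floor is a simplifying assumption made here for illustration, not the book's procedure:

import numpy as np

def dynamic_range(image: np.ndarray) -> float:
    """Ratio of the brightest recorded value to the dimmest nonzero value
    (the latter standing in for the noise floor)."""
    nonzero = image[image > 0]
    return float(nonzero.max()) / float(nonzero.min())

# Example: a simulated 11-bit image (values 0 to 2,047).
rng = np.random.default_rng(0)
img = rng.integers(0, 2**11, size=(512, 512))
print(dynamic_range(img))  # bounded above by 2,047 for 11-bit data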

      The sensor originally used to capture an image determines the image's radiometric resolution. Thus, scanning a film image to create a digital version results in a digital image with the radiometric resolution of the film, not of the digital scanner, even though the scanner's radiometric resolution may be better than that of the film.

       Spatial Resolution

      An image’s spatial resolution is determined by the altitude of the platform and by the viewing angle, lens focal length, and resolving power of the sensor. Spatial resolution has two different definitions:

       The smallest spatial element on the ground that is discernible on the image captured by the remote sensing system. The definition of “discernible” can refer to the ability to detect an element as separate from another, or to both detect and label the different elements. This definition was commonly used when remotely sensed images were collected primarily on film.

       The smallest spatial unit on the ground that the sensor is able to image. This is the more common meaning and is the one relied upon by makers and users of digital remote sensing systems. Usually, it is expressed as the ground sample distance (GSD), which is the length on the ground of one side of a pixel.

      GSD is a function of sensor pixel size, height above terrain, and focal length, as expressed in the following equation:

$$\text{GSD} = \frac{\text{sensor pixel size} \times \text{distance to ground}}{\text{focal length}}$$

      The distance to ground is a function of platform altitude and sensor viewing angle. If focal length and sensor resolving power are held constant (as they are in most airborne systems), then the lower the altitude of the system, the smaller the GSD and the higher the spatial resolution of the resulting imagery. If focal length and distance to ground are held constant (as they are in satellite systems), then the higher the sensor resolving power, the higher the spatial resolution. If sensor resolving power and distance to ground are held constant, then the longer the focal length, the higher the spatial resolution of the sensor. Because the sensor and the altitude of satellite remote sensing systems are constant over the usable life of the system, their spatial resolutions are also fairly constant for each satellite system and change only when the viewing angle is changed.
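      Putting the equation and these relationships together, the following sketch computes GSD for a nadir view and shows how altitude and pointing affect it. The numbers are illustrative only, and the flat-terrain approximation distance = altitude / cos(off-nadir angle) is an assumption introduced here, not a formula from the book:

import math

def gsd(pixel_size_m: float, distance_to_ground_m: float, focal_length_m: float) -> float:
    """Ground sample distance: sensor pixel size times distance to ground,
    divided by focal length (all in meters)."""
    return pixel_size_m * distance_to_ground_m / focal_length_m

# Illustrative airborne case: 5-micron pixels, 100 mm lens, 3,000 m above terrain.
print(gsd(5e-6, 3000.0, 0.1))   # 0.15 m (15 cm) at nadir

# Halving the altitude halves the GSD (higher spatial resolution).
print(gsd(5e-6, 1500.0, 0.1))   # 0.075 m

# Pointing 30 degrees off nadir lengthens the distance to ground, coarsening the GSD.
distance_off_nadir = 3000.0 / math.cos(math.radians(30))
print(gsd(5e-6, distance_off_nadir, 0.1))  # about 0.17 m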

      Airborne systems have varying spatial resolutions depending on the sensor flown and the altitude of the aircraft platform. Spatial resolution is also affected by whether the sensor has a stabilized mount, a forward motion compensation unit, or both. These devices minimize the blur caused by the motion of the platform relative to the ground by moving the sensor in the direction opposite to that of the platform (and at the ground speed of the platform) during sensor exposure. Figure 3.17 compares the spatial resolution of 15-meter pan-sharpened Landsat imagery to that of airborne 1-meter National Agriculture Imagery Program (NAIP) imagery over a portion of Sonoma County, California. Figure 3.18 compares the NAIP imagery to 6-inch multispectral imagery over a subset of the same area.


      Figure 3.17. Comparison of Landsat 15-meter pan-sharpened satellite imagery to 1-meter National Agriculture Imagery Program (NAIP) airborne imagery over a portion of Sonoma County, California. Color differences are due to sensor differences and the imagery being collected in different seasons. (esriurl.com/IG317)


      Figure 3.18. Comparison of 1-meter National Agriculture Imagery Program (NAIP) imagery to 6-inch airborne imagery over a subset of the area of figure 3.17. Color and shadow differences are due to sensor differences and the imagery being collected in different seasons. (esriurl.com/IG318)

      The highest spatial resolution obtainable from a civilian satellite is WorldView-4’s 30 centimeters (11.8 inches). High-resolution airborne multispectral sensors (e.g., the UltraCam Eagle) achieve spatial resolutions of 2 to 3 centimeters at an altitude of 500 feet. Because they can fly lower than piloted aircraft, UASs can obtain even higher spatial resolutions.

       Viewing Angle

      Viewing angle is often used to refer to one or both of the following angles:

       The maximum angle of the IFOV of the sensor, from one edge of the sensor view to the other, as shown in figure 3.19. Traditional film-based aerial survey cameras were often wide-angle, with a 90-degree IFOV. When they took photographs vertically, the features at the edges of the frames were captured at an angle of about 45 degrees from vertical. With the advent of digital photography, many digital aerial survey cameras have a narrower IFOV, and coverage is achieved by taking more images. Most satellite imagery is collected with an even narrower IFOV. For example, a vertical WorldView-3 scene captures a strip about 13.1 km wide from an altitude of 617 km, an IFOV of about 1 degree (see the sketch after this list).

       The pointing angle of the sensor as measured from directly beneath the sensor (0°, or nadir) to the center of the area on the ground being imaged. This angle is also referred to as the elevation angle. Sensor viewing angles are categorized as vertical or oblique, with oblique being further divided into high oblique (images that include the horizon) and low oblique (images that do not include the horizon), as shown in figure 3.20.
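      The IFOV implied by a swath width and an altitude follows from simple trigonometry. The sketch below, a back-of-the-envelope check assuming a flat earth and a nadir view, recovers the roughly 1-degree figure quoted above for WorldView-3:

import math

def ifov_degrees(swath_km: float, altitude_km: float) -> float:
    """Full angle subtended at the sensor by a ground swath at a given altitude."""
    return math.degrees(2 * math.atan((swath_km / 2) / altitude_km))

# WorldView-3 figures from the text: a 13.1 km swath from 617 km altitude.
print(round(ifov_degrees(13.1, 617.0), 1))  # about 1.2 degrees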

      Traditionally, with aircraft imagery, images captured with the sensor pointed within ±3 degrees of nadir are considered vertical, and images collected at greater than ±3 degrees are considered oblique (Paine and Kiser, 2012). However, with the plethora of pointable high-resolution satellites, satellite companies tend to define images captured with a sensor viewing angle within ±20 degrees of nadir as vertical, and images collected at sensor angles greater than ±20 degrees as oblique.
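      These conventions are simple to encode. The following sketch (the function name and platform labels are hypothetical) classifies an image as vertical or oblique from its off-nadir angle under the two thresholds just described:

def classify_viewing_angle(off_nadir_deg: float, platform: str = "aircraft") -> str:
    """Label an image vertical or oblique: within +/- 3 degrees of nadir for
    aircraft imagery, within +/- 20 degrees for satellite imagery."""
    threshold = 3.0 if platform == "aircraft" else 20.0
    return "vertical" if abs(off_nadir_deg) <= threshold else "oblique"

print(classify_viewing_angle(2.5))                         # vertical (aircraft)
print(classify_viewing_angle(15.0))                        # oblique (aircraft)
print(classify_viewing_angle(15.0, platform="satellite"))  # vertical (satellite)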

      Viewing angle is important because it affects the amount of area captured in an image, whether only the top of an object is visible or its sides as well, and the spatial resolution of the imagery. The larger the viewing angle from the sensor to the object, the longer the distance to the ground and the lower the spatial resolution of the pixels. For example,
