Imagery and GIS. Kass Green

Electromagnetic energy occurs in many forms, including gamma rays, x-rays, ultraviolet radiation, visible light, infrared radiation, microwaves, and radio waves. It is characterized by three important variables: 1) speed, 2) wavelength, and 3) frequency. The speed of electromagnetic energy is a constant of 186,000 miles/second, or 3 × 10⁸ meters/second, which is the speed of light. Wavelength is the distance between the same two points on consecutive waves and is commonly depicted as the distance from the peak of one wave to the peak of the next, as shown in figure 3.3. Frequency is the number of wavelengths per unit time.


      Figure 3.3. Diagram demonstrating the concepts of electromagnetic wavelength and frequency

      The relationship between wavelength, wave speed, and frequency is expressed as

      c = λ × ν

      where c is the speed of light (3 × 10⁸ meters/second), λ is the wavelength, and ν is the frequency.

      Because electromagnetic energy travels at the constant speed of light, when wavelengths increase, frequencies decrease, and vice versa (i.e., they are inversely proportional to each other). Photons with shorter wavelengths carry more energy than those with longer wavelengths. Remote sensing systems capture electromagnetic energy emitted or reflected from objects above absolute zero (0 kelvin).
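      To make the inverse relationship concrete, the short Python sketch below (illustrative only, not part of the book) converts a few example wavelengths into frequencies with c = λ × ν and into photon energies with E = h × c / λ; the wavelengths chosen are arbitrary examples.

```python
# Illustrative sketch (not from the book): relating wavelength, frequency,
# and photon energy for a few example wavelengths.

C = 3.0e8        # speed of light, meters/second
H = 6.626e-34    # Planck's constant, joule-seconds

def frequency_hz(wavelength_m: float) -> float:
    """Frequency from wavelength using c = wavelength * frequency."""
    return C / wavelength_m

def photon_energy_j(wavelength_m: float) -> float:
    """Photon energy E = h * c / wavelength; shorter wavelengths carry more energy."""
    return H * C / wavelength_m

# Example wavelengths: blue light (0.45 um), near infrared (0.85 um), microwave (3 cm)
for name, wl in [("blue", 0.45e-6), ("near infrared", 0.85e-6), ("microwave", 0.03)]:
    print(f"{name}: frequency = {frequency_hz(wl):.3e} Hz, "
          f"photon energy = {photon_energy_j(wl):.3e} J")
```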

      Electromagnetic energy is typically expressed as either wavelengths or frequencies. For most remote sensing applications, it is expressed in wavelengths. Some electrical engineering applications such as robotics and artificial intelligence express it in frequencies. The entire range of electromagnetic wavelengths or frequencies is called the electromagnetic spectrum and is shown in figure 3.4.


      Figure 3.4. The electromagnetic spectrum

      The most significant difference between our eyeballs and digital cameras is how the imaging surfaces react to the energy of photons. As shown in figure 3.4, the retinas in human eyes sense only the limited visible light portion of the electromagnetic spectrum. While able to capture more of the spectrum than human eyes, film is limited to wavelengths from 0.3 to 0.9 micrometers (i.e., the ultraviolet, visible, and near infrared). CCD or CMOS arrays in digital sensors are sensitive to electromagnetic wavelengths from 0.2 to 1400 micrometers. Because remote sensors measure portions of the electromagnetic spectrum that our eyes cannot sense, they extend our ability to “see.”
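      As a simple illustration of these sensitivity ranges, the sketch below (not from the book) checks whether a given wavelength falls within each imaging surface’s range; the 0.4 to 0.7 micrometer range assumed for the human eye is a commonly cited approximation rather than a figure quoted from this chapter.

```python
# Illustrative sketch: compare the wavelength sensitivity ranges described in the
# text. The 0.4-0.7 micrometer range for the human eye is an assumed approximation;
# the film and digital-array ranges come from the chapter.

SENSITIVITY_UM = {
    "human eye":     (0.4, 0.7),     # visible light only (approximate)
    "film":          (0.3, 0.9),     # ultraviolet, visible, and near infrared
    "digital array": (0.2, 1400.0),  # CCD/CMOS sensors
}

def can_sense(surface: str, wavelength_um: float) -> bool:
    """True if the imaging surface is sensitive to the given wavelength."""
    low, high = SENSITIVITY_UM[surface]
    return low <= wavelength_um <= high

# A thermal infrared wavelength (10 um) is invisible to eyes and film,
# but within reach of digital sensors.
for surface in SENSITIVITY_UM:
    print(surface, can_sense(surface, 10.0))
```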

       Film versus Digital Array Imaging Surfaces

      The imaging surfaces of our eyes are our retinas. Cameras once used only film, but now primarily use digital (CCD or CMOS) arrays. From the beginnings of remote sensing in the late 1800s to the 1990s, most sensors relied on film to sense the electromagnetic energy being reflected or emitted from an object, and turning the resulting photographs into information required manual interpretation. In the 1960s, digital sensors were developed to record electromagnetic energy as a database of numbers rather than a film image. This enabled the development of sensors that can sense electromagnetic energy across the range from ultraviolet to radio wavelengths. Now, most remote sensing systems use digital arrays instead of film. Because the values of the reflected and emitted energy are stored as an array of numbers, computers can be trained to turn the imagery data into map information by discovering correlations between variations in the landscape and variations in electromagnetic energy. While manual interpretation is still very important, objects that are spectrally distinct from one another can be readily mapped using computer algorithms.

      The imaging surface of a digital camera is an array of photosensitive cells that capture energy from incoming photons. Each of these cells corresponds to a pixel in the resulting image. The pixels are arranged in rectangular columns and rows. Each pixel contains one to three photovoltaic cells or photosites, which use the ability of silicon semiconductors to translate electromagnetic photons into electrons. The higher the intensity of the energy reaching the cells during exposure, the higher the number of electrons accumulated. The number of electrons accumulated in the cell is recorded and then converted into a digital signal.
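      The sketch below is a deliberately simplified, hypothetical model of that photon-to-digital-number conversion, not the book’s description of any particular sensor; the quantum efficiency, full-well capacity, and bit depth are assumed example values.

```python
# Illustrative sketch (assumed values, not the book's model): how a single
# photosite's accumulated electrons might be quantized into a digital number (DN).

def photosite_dn(photons: float, quantum_efficiency: float = 0.5,
                 full_well_electrons: float = 30000.0, bit_depth: int = 12) -> int:
    """Convert incoming photons to an integer DN.

    quantum_efficiency: fraction of photons converted to electrons (assumed value).
    full_well_electrons: electron capacity of the cell before it saturates (assumed).
    bit_depth: number of bits used by the analog-to-digital converter.
    """
    electrons = min(photons * quantum_efficiency, full_well_electrons)  # saturate at full well
    max_dn = 2 ** bit_depth - 1
    return round(electrons / full_well_electrons * max_dn)

# Brighter targets deliver more photons during exposure and yield higher DNs.
print(photosite_dn(photons=5000))    # dim target
print(photosite_dn(photons=50000))   # bright target
print(photosite_dn(photons=200000))  # saturated: clipped at the maximum DN
```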

      The size of the array and the size of each cell in the array affect the resolving power of the sensor. The larger the array, the more pixels captured in each image. Larger cells accumulate more electrons than smaller cells, allowing them to capture imagery in low-energy situations. However, the larger cells also result in a corresponding loss of spatial resolution across the image surface because fewer cells can occupy the surface.
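      The following sketch illustrates this trade-off for a hypothetical imaging surface; the sensor dimensions and cell sizes are assumed example values, not the specifications of a real camera.

```python
# Illustrative sketch: on a fixed imaging surface, larger cells gather more light
# per cell but leave room for fewer pixels. All dimensions are assumed examples.

def array_pixels(sensor_width_mm: float, sensor_height_mm: float,
                 cell_pitch_um: float) -> tuple:
    """Columns and rows that fit on the imaging surface for a given cell size."""
    cols = int(sensor_width_mm * 1000 / cell_pitch_um)
    rows = int(sensor_height_mm * 1000 / cell_pitch_um)
    return cols, rows

# Same hypothetical 36 x 24 mm imaging surface with small and large cells:
for pitch_um in (3.0, 6.0):
    cols, rows = array_pixels(36.0, 24.0, pitch_um)
    relative_cell_area = pitch_um ** 2  # proxy for light-gathering ability per cell
    print(f"{pitch_um} um cells: {cols} x {rows} pixels, "
          f"relative cell area {relative_cell_area:.0f} um^2")
```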

       Source of Energy: Active versus Passive Sensors

      Passive sensors collect electromagnetic energy generated by a source other than the sensor. Active sensors generate their own energy, and then measure the amount reflected back as well as the time lapse between energy generation and reception. Figure 3.5 illustrates the difference in how active and passive sensors operate.


      Figure 3.5. Comparison of how passive and active sensors operate
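      Active sensors such as lidar and radar turn that measured time lapse directly into range. The following minimal Python sketch (illustrative only, not from the book) shows the time-of-flight calculation, assuming the pulse travels at the speed of light to the target and back.

```python
# Illustrative sketch: computing range from the time lapse an active sensor measures.
# The pulse travels to the target and back, so the one-way distance is c * t / 2.

C = 3.0e8  # speed of light, meters/second

def range_m(round_trip_seconds: float) -> float:
    """One-way distance to the target from the measured round-trip travel time."""
    return C * round_trip_seconds / 2.0

# A return received about 6.67 microseconds after the pulse left
# corresponds to roughly 1,000 meters.
print(range_m(6.67e-6))
```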

      Most remote sensors are passive sensors, and the most pervasive source of passive electromagnetic energy is the sun, which radiates electromagnetic energy upon objects on the earth that either absorb/emit, transmit, or reflect the energy. Passive energy can also be directly emitted from the earth, as from the eruption of a volcano or a forest fire. Examples of passive remote sensors include film aerial cameras, multispectral digital cameras, and multispectral/hyperspectral scanners. Passive sensors are able to sense electromagnetic energy in wavelengths from ultraviolet through radio waves.

      Passive sensors fall into three types: framing cameras, across-track scanners, and along-track scanners. Framing cameras use either film or matrixes of digital arrays (e.g., UltraCam airborne sensors, PlanetLabs satellite sensors). Each frame captures the portion of the earth visible in the sensor’s field of view (FOV) during exposure. Often, the frames are captured with greater than 50 percent overlap, which enables stereo viewing. Each image of a stereo pair is taken from a slightly different perspective as the platform moves. When two overlapped images are viewed side by side, each eye automatically takes the perspective of one image, enabling us to “see” the overlapped areas in three dimensions. With stereo frame imaging, not only can distances be measured from the aerial images, but so can elevations and the heights of vegetation and structures, as discussed in detail in chapter 9.
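      As an illustration of frame coverage and overlap, the sketch below estimates a frame’s ground footprint and the overlap between consecutive exposures; the flying height, field of view, and exposure spacing are assumed example values and do not come from the book.

```python
# Illustrative sketch (assumed values): the ground footprint of a single frame
# grows with flying height and field of view, and exposures can be spaced so
# that consecutive frames overlap by more than 50 percent for stereo viewing.

import math

def footprint_m(flying_height_m: float, fov_degrees: float) -> float:
    """Ground distance covered along one side of the frame for a given FOV."""
    return 2.0 * flying_height_m * math.tan(math.radians(fov_degrees) / 2.0)

def overlap_fraction(footprint_along_track_m: float, exposure_spacing_m: float) -> float:
    """Fraction of each frame shared with the next frame along the flight line."""
    return max(0.0, 1.0 - exposure_spacing_m / footprint_along_track_m)

# Hypothetical flight: 3000 m above ground, 60-degree FOV, exposures every 1500 m.
fp = footprint_m(3000.0, 60.0)
print(f"footprint: {fp:.0f} m, overlap: {overlap_fraction(fp, 1500.0):.0%}")
```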

      Most across-track scanners (also called whisk broom scanners) sweep an oscillating mirror with a very small instantaneous field of view (IFOV) from side to side as the platform moves. Each line of the image is built, pixel by pixel, as the mirror scans the landscape. Developed decades before the digital frame camera, across-track scanners were the first multispectral digital sensors and were used in multiple systems, including the Landsat 1-7, GOES, AVHRR, and MODIS satellite sensors and NASA’s AVIRIS hyperspectral airborne system.
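      The sketch below illustrates how an across-track scanner’s IFOV and scan geometry translate into a ground sample size and swath width; the altitude, IFOV, and scan angle are hypothetical values and do not describe any of the sensors named above.

```python
# Illustrative sketch: the ground area covered by one detector sample of an
# across-track scanner, from its instantaneous field of view (IFOV) and altitude.
# Values are hypothetical examples, not the specifications of any real sensor.

import math

def ground_ifov_m(altitude_m: float, ifov_milliradians: float) -> float:
    """Approximate side length of the ground patch seen at nadir by one IFOV sample."""
    return altitude_m * ifov_milliradians / 1000.0

def swath_width_m(altitude_m: float, scan_angle_degrees: float) -> float:
    """Approximate swath width for a mirror scanning +/- scan_angle to either side of nadir."""
    return 2.0 * altitude_m * math.tan(math.radians(scan_angle_degrees))

# Hypothetical scanner at 705 km altitude, 0.1 mrad IFOV, +/- 7.5 degree scan:
print(ground_ifov_m(705000.0, 0.1))   # ~70 m ground sample at nadir
print(swath_width_m(705000.0, 7.5))   # ~186 km swath
```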

      Along-track scanners (also called push broom scanners) rely on a linear array to sense entire lines of data simultaneously. Rather than mechanically building an image pixel by pixel or by groups of pixels, the along-track scanner builds an image line by line. Along-track scanners have higher spectral and radiometric resolution than across-track scanners because the sensor can spend more time (termed dwell time) over each area of ground being
