Change Detection and Image Time Series Analysis 2. Group of authors
In this chapter, this joint fusion problem is addressed. First, an overview of the major concepts and of the recent literature in the area of remote sensing data fusion is presented (see section 1.1.3). Then, two advanced methods for the joint supervised classification of multimission image time series, including multisensor optical and Synthetic Aperture Radar (SAR) components acquired at multiple spatial resolutions, are described (see section 1.2). The two techniques address different problems of supervised classification of satellite image time series and share a common methodological formulation based on hierarchical Markov random field (MRF) models. Examples of the experimental results obtained by the proposed approaches when applied to very-high-resolution time series are also presented and discussed (see section 1.3).
On the one hand, the use of multiresolution and multiband imagery has previously been shown to improve classification results in terms of accuracy and computation time. On the other hand, integrating the temporal dimension into a classification scheme can both enhance the reliability of the results and capture the evolution in time of the monitored area. However, the joint fusion of several distinct data modalities (e.g. multitemporal, multiresolution and multisensor) has received far less attention in the remote sensing literature so far.
1.1.2. Multisensor and multiresolution classification
The availability of different kinds of sensors is very advantageous for land cover mapping applications. It allows us to capture a wide variety of properties of the objects in a scene, as measured by each sensor at each acquisition time, and these properties can be exploited to extract richer information about the imaged area. In particular, the joint availability of SAR and optical images within a time series can offer high-resolution, all-weather, day/night, short-revisit-time data with polarimetric, multifrequency and multispectral acquisition capabilities. This potential is especially emphasized by current satellite missions for Earth Observation (EO), for example, Sentinel-1 and -2, Pléiades, TerraSAR-X, COSMO-SkyMed and COSMO-SkyMed Second Generation, RADARSAT-2 and the RADARSAT Constellation, GeoEye-1, WorldView-1, -2, -3 and WorldView Legion, or PRISMA, which together convey a huge potential for multisensor optical and SAR observations and allow a spatially distributed, temporally repetitive view of the monitored area at multiple spatial scales. However, multisource image analysis for land cover classification has so far mostly focused on single-resolution multisensor optical–SAR imagery, whereas the joint use of multisensor and multiresolution capabilities within a time series of images of the same scene has been investigated far less. The single-resolution approach has the obvious advantage of simplicity but is, in general, suboptimal: when multisensor (optical and SAR) or multiresolution images of a given scene are available, using them separately discards part of the correlations among these multiple data sources and, most importantly, their complementarity.
Figure 1.1. Sensitivity to cloud cover and object size using different wavelength ranges. For a color version of this figure, see www.iste.co.uk/atto/change2.zip
As illustrated in Figure 1.1, SAR and multispectral images exhibit complementary properties in terms of wavelength range (active microwave vs. passive visible and infrared), noisy behavior (often strong in SAR due to speckle, usually less critical in optical imagery), feasibility of photo-interpretation (usually easier with optical than with SAR data), impact of atmospheric conditions and cloud cover (strong for optical acquisitions and almost negligible for SAR) and sensitivity to sun-illumination (strong for optical imagery and negligible for SAR) (Landgrebe 2003; Ulaby and Long 2015). This makes the joint use of high-resolution optical and SAR imagery particularly interesting for many applications related to environmental monitoring and risk management (Serpico et al. 2012).
Within this framework, there is a definite need for classification methods that automatically correlate different sets of images taken at different times, over the same area, from different sensors and at different resolutions. One way to address this problem is to resort to explicit statistical modeling, i.e. finding a joint probability distribution given the class-conditional marginal probability density function (PDF) of the data collected by each sensor (see Figure 1.2). The joint statistics can be designed by resorting to meta-Gaussian distributions (Storvik et al. 2009), multivariate copulas (Voisin et al. 2014) or non-parametric density estimators (Fukunaga 2013). However, with heterogeneous data (SAR–optical in our case), finding an appropriate multivariate statistical model is complex, time demanding and possibly prone to overfitting.
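As a minimal illustration of such joint modeling, the sketch below couples a Gaussian class-conditional marginal for an optical feature with a Gamma marginal for a SAR feature (a common speckle-motivated choice) through a bivariate Gaussian copula. The function name `gaussian_copula_joint_pdf`, the marginal families and the correlation value are illustrative assumptions, not the models used in the methods described in this chapter.

```python
import numpy as np
from scipy import stats


def gaussian_copula_joint_pdf(x_opt, x_sar, marg_opt, marg_sar, rho):
    """Joint class-conditional PDF of one optical and one SAR feature,
    built from the two marginals via a bivariate Gaussian copula.
    (Hypothetical helper for illustration; marginals are frozen
    scipy.stats distributions, rho is the copula correlation.)"""
    # Probability-integral transform: map each feature to a uniform score
    u = marg_opt.cdf(x_opt)
    v = marg_sar.cdf(x_sar)
    # Map the uniform scores to standard normal quantiles
    z1 = stats.norm.ppf(u)
    z2 = stats.norm.ppf(v)
    # Bivariate Gaussian copula density c(u, v; rho)
    det = 1.0 - rho ** 2
    copula_density = (1.0 / np.sqrt(det)) * np.exp(
        -(rho ** 2 * (z1 ** 2 + z2 ** 2) - 2.0 * rho * z1 * z2) / (2.0 * det)
    )
    # Sklar's theorem: joint PDF = copula density x product of marginals
    return copula_density * marg_opt.pdf(x_opt) * marg_sar.pdf(x_sar)


# Illustrative marginals: Gaussian optical feature, Gamma SAR amplitude
marg_opt = stats.norm(loc=0.4, scale=0.1)
marg_sar = stats.gamma(a=3.0, scale=0.05)
p = gaussian_copula_joint_pdf(0.42, 0.14, marg_opt, marg_sar, rho=0.3)
```

Note that with `rho = 0` the copula density reduces to 1 and the joint PDF factorizes into the product of the marginals, recovering the independence assumption that the copula is meant to relax.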
Figure 1.2. Multivariate statistical modeling for optical–SAR data fusion. For a color version of this figure, see www.iste.co.uk/atto/change2.zip
In this context, the rationale of both approaches described in section 1.2 is to benefit from the data fusion capabilities of hierarchical MRFs while avoiding the computation of joint statistics. Both rely on multiple quad-trees in cascade, applied to multisensor and multiresolution fusion. In the first proposed method, for each sensor, the input images of the series are associated with separate quad-tree structures according to their resolutions. The goal is to generate a classification map from a series of SAR and optical images acquired over the same area. Within this multiple quad-tree topology, the approach formalizes a supervised Bayesian classifier that combines a class-conditional statistical model for pixelwise information with a hierarchical MRF for multisensor and multiresolution contextual information. The second proposed method addresses the multimission fusion of multifrequency SAR data collected by the COSMO-SkyMed and RADARSAT-2 sensors, together with optical Pléiades data. A multiple quad-tree structure is used again, but optical and SAR images are both included in all cascaded quad-trees to account for the specific spatial resolutions of the considered satellite instruments. Compared to the first method, which considers the fusion of data from generally arbitrary SAR and optical sensors, this second method focuses on a specific combination of spaceborne SAR and optical sensors, in order to investigate the synergy among the multifrequency and multiresolution information they provide.
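To make the quad-tree idea concrete, the sketch below assigns each image of a hypothetical series to a quad-tree level according to its spatial resolution, with level 0 at the leaves (finest resolution) and the pixel size doubling at each level up, as a quad-tree requires. The helper `assign_to_quadtree_levels` and the example resolutions are illustrative assumptions, not part of the methods described here.

```python
import math


def assign_to_quadtree_levels(resolutions_m, finest_m):
    """Map each image to a quad-tree level from its pixel size in meters.
    Level 0 holds the finest resolution; level k holds images whose pixel
    size is 2**k times coarser. Resolutions must be power-of-two multiples
    of the finest one, as imposed by the quad-tree topology."""
    levels = {}
    for name, res in resolutions_m.items():
        ratio = res / finest_m
        level = round(math.log2(ratio))
        if abs(2 ** level - ratio) > 1e-9:
            raise ValueError(
                f"{name}: {res} m is not a power-of-two multiple of {finest_m} m"
            )
        levels[name] = level
    return levels


# Hypothetical series (resolutions for illustration only):
# a 0.5 m panchromatic image, a 2 m multispectral image, a 4 m SAR image
levels = assign_to_quadtree_levels(
    {"pan_optical": 0.5, "ms_optical": 2.0, "sar": 4.0}, finest_m=0.5
)
# levels == {"pan_optical": 0, "ms_optical": 2, "sar": 3}
```

In the first method sketched above each sensor would feed its own such structure, while in the second method optical and SAR images would be interleaved within the same cascaded quad-trees; images that do not satisfy the power-of-two constraint would need resampling before being inserted.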
1.1.3. Previous work
The literature on remote sensing data fusion is extensive, indicating intense interest in this topic, as highlighted by the recent sharp increase in the number of papers published in the major remote sensing journals and the growing number of related sessions at international conferences. Indeed, data fusion has given rise to a continuing tradition in remote sensing, since