Graph Spectral Image Processing. Gene Cheung

labeled from 1 to N, we can simply treat a graph signal as a vector x ∈ ℝ^N.

      Denote by W ∈ ℝ^{N×N} an adjacency matrix, where the (i, j)th entry is W_{i,j} = w_{i,j}. Next, denote by D ∈ ℝ^{N×N} a diagonal degree matrix, where the (i, i)th entry is D_{i,i} = Σ_j W_{i,j}. A combinatorial graph Laplacian matrix L is L = D − W (Shuman et al. 2013). Because L is real and symmetric, one can show, via the spectral theorem, that it can be eigen-decomposed into:

      L = UΛU^⊤,   [I.2]

      where Λ = diag(λ_1, ..., λ_N) collects the eigenvalues λ_k of L, interpreted as graph frequencies.

      The set of eigenvectors of L, collected as columns of U, constitutes the GFT (Shuman et al. 2013), which can be used to decompose a graph signal x into its frequency components via α = U^⊤x. In fact, one can interpret the GFT as a generalization of known discrete transforms, such as the Discrete Cosine Transform (DCT) (see Shuman et al. 2013 for details).
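      As a toy illustration, the following minimal NumPy sketch builds a small graph, computes L and its eigen-decomposition [I.2], and applies the GFT; the 4-node path graph, its edge weights and the signal are invented purely for demonstration.

```python
import numpy as np

# Hypothetical 4-node path graph with made-up positive edge weights.
W = np.array([[0.0, 0.9, 0.0, 0.0],
              [0.9, 0.0, 0.8, 0.0],
              [0.0, 0.8, 0.0, 0.7],
              [0.0, 0.0, 0.7, 0.0]])
D = np.diag(W.sum(axis=1))            # diagonal degree matrix
L = D - W                             # combinatorial graph Laplacian

# Spectral theorem (eq. [I.2]): L = U diag(lam) U^T, eigenvalues sorted ascending.
lam, U = np.linalg.eigh(L)

x = np.array([1.0, 1.2, 1.1, 0.9])    # a graph signal, one sample per node
alpha = U.T @ x                       # GFT coefficients
x_rec = U @ alpha                     # inverse GFT recovers the signal

print("graph frequencies:", np.round(lam, 3))
print("reconstruction error:", np.linalg.norm(x - x_rec))
```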

      Note that if the multiplicity m_k of an eigenvalue λ_k is larger than 1, then the set of eigenvectors that span the corresponding eigen-subspace of dimension m_k is non-unique. In this case, it is necessary to specify the graph spectrum as the collection of eigenvectors U themselves.
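      For instance, the unit-weight 4-node cycle graph has an eigenvalue of multiplicity 2, and any rotation of the two eigenvectors spanning that subspace is equally valid; the sketch below (the cycle graph is chosen here only for illustration) verifies this.

```python
import numpy as np

# 4-node cycle graph with unit edge weights; its Laplacian has eigenvalues 0, 2, 2, 4.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W

lam, U = np.linalg.eigh(L)
print("eigenvalues:", np.round(lam, 6))          # lambda = 2 has multiplicity 2

# Rotate the two eigenvectors spanning the lambda = 2 subspace by 30 degrees;
# the rotated pair is a different, yet equally valid, set of eigenvectors.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
V = U[:, 1:3] @ R
print("rotated vectors are still eigenvectors for lambda = 2:",
      np.allclose(L @ V, 2.0 * V))
```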

      If we also consider negative edge weights w_{i,j} that reflect inter-pixel dissimilarity/anti-correlation, then the graph Laplacian L can be indefinite. We will discuss a few recent works (Su et al. 2017; Cheung et al. 2018) that employ negative edges in later chapters.
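      As a small numerical example of this indefiniteness, consider the sketch below, where an invented 3-node graph mixes one positive and one negative edge.

```python
import numpy as np

# Invented 3-node graph: edge (1,2) has weight +1, edge (2,3) has weight -1
# (the negative weight models anti-correlation between the two pixels).
W = np.array([[0.0,  1.0,  0.0],
              [1.0,  0.0, -1.0],
              [0.0, -1.0,  0.0]])
L = np.diag(W.sum(axis=1)) - W

# Both positive and negative eigenvalues appear, so L is indefinite.
print(np.round(np.linalg.eigvalsh(L), 3))   # approximately [-1.732, 0., 1.732]
```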

      Closely related to the combinatorial graph Laplacian L are other variants of Laplacian operators, each with their own unique spectral properties. A normalized graph Laplacian L_n = D^{−1/2}LD^{−1/2} is a symmetric normalized variant of L. In contrast, a random walk graph Laplacian L_r = D^{−1}L is an asymmetric normalized variant of L. A generalized graph Laplacian L_g = L + diag(d_{1,1}, ..., d_{N,N}) is a graph Laplacian with self-loops of weight d_{i,i} at nodes i – called the loopy graph Laplacian in Dörfler and Bullo (2013) – resulting in a general symmetric matrix with non-positive off-diagonal entries for a positive graph (Biyikoglu et al. 2005). Eigen-decomposition can also be performed on these operators to acquire a set of graph frequencies and graph Fourier modes. For example, the normalized variants L_n and L_r (which are similarity transforms of each other) share the same eigenvalues, all lying between 0 and 2. While L and L_n are both symmetric, L_n does not have the constant vector as an eigenvector. The asymmetric L_r can be symmetrized via left and right diagonal matrix multiplications (Milanfar 2013a). Different variation operators will be used throughout the book for different applications.
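      The sketch below (on an invented 4-node graph with made-up self-loop weights) computes these variants and checks that L_n and L_r indeed share the same eigenvalues, all lying in [0, 2].

```python
import numpy as np

# Invented 4-node graph with positive edge weights.
W = np.array([[0.0, 0.9, 0.3, 0.0],
              [0.9, 0.0, 0.8, 0.0],
              [0.3, 0.8, 0.0, 0.7],
              [0.0, 0.0, 0.7, 0.0]])
D = np.diag(W.sum(axis=1))
L = D - W

# Symmetric normalized Laplacian and random walk Laplacian.
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
Ln = D_inv_sqrt @ L @ D_inv_sqrt
Lr = np.linalg.inv(D) @ L

# Generalized (loopy) Laplacian: add made-up self-loop weights on the diagonal.
d = np.array([0.5, 0.0, 0.2, 0.1])
Lg = L + np.diag(d)

lam_n = np.sort(np.linalg.eigvalsh(Ln))
lam_r = np.sort(np.linalg.eigvals(Lr).real)   # Lr is asymmetric but similar to Ln

print("Ln and Lr share eigenvalues:", np.allclose(lam_n, lam_r))
print("eigenvalues lie in [0, 2]:", lam_n.min() > -1e-12 and lam_n.max() < 2 + 1e-12)
print("Lg is symmetric:", np.allclose(Lg, Lg.T))
```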

      Traditionally, for a graph G with positive edge weights, a signal x is considered smooth if each sample x_i on node i is similar to the samples x_j on neighboring nodes j with large w_{i,j}. In the graph frequency domain, this means that x mostly contains low graph frequency components, i.e. the coefficients α = U^⊤x are zero (or mostly zero) for high frequencies. The smoothest signal is the constant vector – the first eigenvector u_1 of L, corresponding to the smallest eigenvalue λ_1 = 0.
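      A quick numerical check of this frequency-domain view is given below (a sketch on a unit-weight path graph, comparing a slowly varying signal with a rapidly alternating one; both signals are invented for the comparison).

```python
import numpy as np

# Unit-weight path graph on 8 nodes.
N = 8
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)            # eigenvalues in ascending order

x_smooth = np.linspace(0.0, 1.0, N)   # slowly varying signal
x_rough = (-1.0) ** np.arange(N)      # rapidly alternating signal

for name, x in [("smooth", x_smooth), ("rough", x_rough)]:
    alpha = U.T @ x                   # GFT coefficients
    frac_low = np.sum(alpha[: N // 2] ** 2) / np.sum(alpha ** 2)
    print(f"{name} signal: energy fraction in lower half of spectrum = {frac_low:.3f}")
```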

      Mathematically, we can declare that a signal x is smooth if its graph Laplacian regularizer (GLR) x^⊤Lx is small (Pang and Cheung 2017). The GLR can be expressed as:

      x^⊤Lx = Σ_{(i,j)∈E} w_{i,j} (x_i − x_j)^2   [I.3]

      Because L is positive semi-definite (PSD), x^⊤Lx is lower-bounded by 0, and the bound is achieved when x = cu_1 for some scalar constant c. One can also define the GLR using the normalized graph Laplacian L_n instead of L, resulting in x^⊤L_nx. The caveat is that the constant vector u_1 – typically the most common signal in imaging – is no longer the first eigenvector of L_n, and thus x^⊤L_nx is, in general, not zero for a constant signal x.
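      The identity in [I.3] and the behavior of the constant vector under L and L_n can be verified numerically; in the sketch below, the graph and the signal are again invented for demonstration.

```python
import numpy as np

# Invented 4-node graph with positive edge weights, and an invented signal.
W = np.array([[0.0, 0.9, 0.3, 0.0],
              [0.9, 0.0, 0.8, 0.0],
              [0.3, 0.8, 0.0, 0.7],
              [0.0, 0.0, 0.7, 0.0]])
D = np.diag(W.sum(axis=1))
L = D - W
x = np.array([1.0, 1.3, 0.8, 1.1])

# GLR as a quadratic form ...
glr = x @ L @ x
# ... equals the weighted sum of squared differences over edges (eq. [I.3]);
# the factor 0.5 compensates for counting each undirected edge twice below.
edge_sum = 0.5 * sum(W[i, j] * (x[i] - x[j]) ** 2
                     for i in range(4) for j in range(4))
print("quadratic form matches edge sum:", np.isclose(glr, edge_sum))

# The constant vector has zero GLR w.r.t. L, but generally not w.r.t. Ln.
ones = np.ones(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
Ln = D_inv_sqrt @ L @ D_inv_sqrt
print("GLR of constant vector (L): ", ones @ L @ ones)    # ~0
print("GLR of constant vector (Ln):", ones @ Ln @ ones)   # > 0 for this graph
```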

      In Chen et al. (2015), the adjacency matrix W is interpreted as a shift operator, and thus graph signal smoothness is instead defined via the difference between a signal x and its shifted version Wx. Specifically, the graph total variation (GTV) based on the ℓ_p-norm is:
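      A minimal NumPy sketch of this quantity is given below, assuming the normalized form ||x − (1/|λ_max|)Wx||_p^p with λ_max the largest-magnitude eigenvalue of W, following Chen et al. (2015); the graph, the signal and the function name are illustrative assumptions.

```python
import numpy as np

def graph_total_variation(W, x, p=2):
    """l_p-based GTV: ||x - W x / |lambda_max| ||_p^p (assumed form)."""
    lam_max = np.max(np.abs(np.linalg.eigvals(W)))  # largest-magnitude eigenvalue of W
    shifted = (W @ x) / lam_max                     # normalized shifted signal
    return np.sum(np.abs(x - shifted) ** p)

# Invented 4-node graph and signal.
W = np.array([[0.0, 0.9, 0.3, 0.0],
              [0.9, 0.0, 0.8, 0.0],
              [0.3, 0.8, 0.0, 0.7],
              [0.0, 0.0, 0.7, 0.0]])
x = np.array([1.0, 1.3, 0.8, 1.1])

print("l1-based GTV:", graph_total_variation(W, x, p=1))
print("l2-based GTV:", graph_total_variation(W, x, p=2))
```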
