Industrial Data Analytics for Diagnosis and Prognosis. Yong Chen


with mean vector μ and covariance matrix Σ, the probability density function of X has the form

f(x) = (2π)^(−p/2) |Σ|^(−1/2) exp{ −(1/2)(x − μ)ᵀ Σ⁻¹ (x − μ) }.   (3.8)

      We denote the p-dimensional normal distribution by Np(μ, Σ).
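The density in (3.8) can be evaluated numerically. The sketch below implements the standard multivariate normal density directly and checks it against `scipy.stats.multivariate_normal`; the function name `mvn_pdf` and the example values of μ, Σ, and x are illustrative choices, not from the text.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mvn_pdf(x, mu, Sigma):
    """Density of N_p(mu, Sigma) at x, in the standard form of (3.8)."""
    p = len(mu)
    diff = x - mu
    # squared statistical distance (x - mu)^T Sigma^{-1} (x - mu)
    d2 = diff @ np.linalg.solve(Sigma, diff)
    return (2 * np.pi) ** (-p / 2) * np.linalg.det(Sigma) ** (-0.5) * np.exp(-0.5 * d2)

# illustrative parameters (not from the text)
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.75], [0.75, 1.0]])
x = np.array([0.5, -0.25])
print(mvn_pdf(x, mu, Sigma))
print(multivariate_normal(mean=mu, cov=Sigma).pdf(x))  # should agree
```

Using `np.linalg.solve` instead of explicitly inverting Σ is the numerically preferred way to compute the quadratic form.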

      From (3.8), the density of a p-dimensional normal distribution depends on x through the term (x − μ)ᵀ Σ⁻¹ (x − μ), which is the squared distance from x to μ, standardized by the covariance matrix. Consequently, the set of x values yielding a constant height for the density forms an ellipsoid. The set of points with the same height for the density is called a contour. The constant probability density contour of a p-dimensional normal distribution is:

{ x | (x − μ)ᵀ Σ⁻¹ (x − μ) = c² },

      Example 3.1: Consider a bivariate (p = 2) normally distributed random vector X = (X1 X2)T. Suppose the mean vector is μ = (0 0)T and the covariance matrix is

Σ = [ 1  ρ ]
    [ ρ  1 ].

      So the variance of each variable is equal to one and the covariance matrix coincides with the correlation matrix. The inverse of the covariance matrix is

Σ⁻¹ = (1 / (1 − ρ²)) [  1  −ρ ]
                     [ −ρ   1 ]

      and |Σ| = 1 − ρ². Substituting Σ⁻¹ and |Σ| in (3.8), we have

f(x1, x2) = (1 / (2π√(1 − ρ²))) exp{ −(x1² − 2ρx1x2 + x2²) / (2(1 − ρ²)) }.   (3.9)

      From (3.9), if ρ = 0, the joint density can be written as f(x1, x2) = f(x1)f(x2), where f(x) is the univariate normal density as given in (3.7), with μ = 0 and σ = 1. So in this case X1 and X2 are independent. This result holds for the general multivariate normal distribution, as discussed later in this section.
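The factorization at ρ = 0 is easy to confirm numerically. This sketch implements the bivariate density (3.9) with zero means and unit variances and checks that it equals the product of two univariate standard normal densities; the evaluation point (0.8, −1.3) is an arbitrary illustrative choice.

```python
import numpy as np

def phi(x):
    # univariate standard normal density, i.e. (3.7) with mu = 0, sigma = 1
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def bivariate_density(x1, x2, rho):
    # bivariate normal density (3.9), zero means and unit variances
    q = (x1**2 - 2 * rho * x1 * x2 + x2**2) / (1 - rho**2)
    return np.exp(-q / 2) / (2 * np.pi * np.sqrt(1 - rho**2))

x1, x2 = 0.8, -1.3          # arbitrary test point
joint = bivariate_density(x1, x2, rho=0.0)
product = phi(x1) * phi(x2)
print(joint, product)        # equal when rho = 0: the variables are independent
```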

      By solving the characteristic equation |Σ − λI| = 0, the two eigenvalues of Σ are λ1 = 1 + ρ and λ2 = 1 – ρ. Based on Σv = λv, the corresponding eigenvectors can be obtained as

v1 = ( √2/2, √2/2 )ᵀ,   v2 = ( −√2/2, √2/2 )ᵀ.
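The eigenvalues λ1 = 1 + ρ and λ2 = 1 − ρ and the eigenvectors above can be verified with `numpy.linalg.eigh`, which is the routine for symmetric matrices; ρ = 0.75 here is an illustrative value (matching the correlated case in the figures).

```python
import numpy as np

rho = 0.75  # illustrative value
Sigma = np.array([[1.0, rho], [rho, 1.0]])

# eigh returns eigenvalues of a symmetric matrix in ascending order,
# so we expect 1 - rho first, then 1 + rho
eigvals, eigvecs = np.linalg.eigh(Sigma)
print(eigvals)
# every eigenvector component has magnitude sqrt(2)/2 (signs may differ by convention)
print(eigvecs)
```

Note that numerical routines fix eigenvector signs only up to an overall ±1, so the columns may differ in sign from v1 and v2 above.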

      Figure 3.1 Two bivariate normal distributions: (a) ρ = 0; (b) ρ = 0.75

      Figure 3.2 Contour plots for the distributions in Figure 3.1

      Properties of the Multivariate Normal Distribution

      We list some of the most useful properties of the multivariate normal distribution. These properties make it convenient to manipulate normal distributions, which is one of the reasons for the popularity of the normal distribution. Suppose the random vector X follows a p-dimensional normal distribution Np(μ,Σ).

       Normality of linear combinations of the variables in X. Let c be a vector of constants. From (3.3) and (3.4), we have E(cᵀX) = cᵀμ and var(cᵀX) = cᵀΣc. This is true for any random vector X. When X follows a multivariate normal distribution, we have the additional property that cᵀX also follows a (univariate) normal distribution. That is, if X ∼ Np(μ, Σ), then cᵀX ∼ N(cᵀμ, cᵀΣc). In general, if C is a q × p matrix, CX still follows a multivariate normal distribution. From (
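The property cᵀX ∼ N(cᵀμ, cᵀΣc) can be illustrated by computing the analytic mean and variance of a linear combination and comparing them against a large Monte Carlo sample; the particular μ, Σ, c, and seed below are illustrative choices, not from the text.

```python
import numpy as np

# illustrative parameters (not from the text)
mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([1.0, -1.0])

# analytic moments of c^T X:  E = c^T mu,  var = c^T Sigma c
mean_cTX = c @ mu            # 1 - 2 = -1
var_cTX = c @ Sigma @ c      # 2 - 0.5 - 0.5 + 1 = 2
print(mean_cTX, var_cTX)

# empirical check: sample X ~ N_2(mu, Sigma) and project onto c
rng = np.random.default_rng(0)   # arbitrary seed for reproducibility
samples = rng.multivariate_normal(mu, Sigma, size=200_000) @ c
print(samples.mean(), samples.var())  # close to the analytic values
```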
