Now we can define the moments of the random vector. The first moment is a vector, the mean vector

$$\boldsymbol{\mu} = E[\mathbf{X}] = \big(E[X_1], E[X_2], \ldots, E[X_n]\big)^T.$$

The expectation applies to each component in the random vector. Expectations of functions of random vectors are computed just as with univariate random variables. We recall that the expectation of a random variable is its average value.
The second moment requires calculating all the combinations of the components. The result can be presented in matrix form. The second central moment can be presented as the covariance matrix

$$\Sigma = \mathrm{Cov}(\mathbf{X}) = E\big[(\mathbf{X} - \boldsymbol{\mu})(\mathbf{X} - \boldsymbol{\mu})^T\big], \qquad (2.1)$$

where we used the transpose matrix notation. The $(i,j)$ entry of $\Sigma$ is $\mathrm{Cov}(X_i, X_j) = E[(X_i - \mu_i)(X_j - \mu_j)]$, and since $\mathrm{Cov}(X_i, X_j) = \mathrm{Cov}(X_j, X_i)$, the covariance matrix is symmetric.
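As a quick illustration (not part of the text), the following sketch, which assumes the NumPy library and an arbitrary three-dimensional example, estimates the mean vector and the covariance matrix of Eq. (2.1) from simulated data.

import numpy as np

# Simulate n observations of a 3-dimensional random vector X (arbitrary example).
rng = np.random.default_rng(0)
n = 10_000
X = rng.multivariate_normal(mean=[1.0, 2.0, 3.0],
                            cov=[[2.0, 0.5, 0.0],
                                 [0.5, 1.0, 0.3],
                                 [0.0, 0.3, 1.5]],
                            size=n)

# First moment: the mean vector, taking the expectation component by component.
mu_hat = X.mean(axis=0)

# Second central moment: the covariance matrix E[(X - mu)(X - mu)^T],
# estimated by averaging outer products of the centered observations.
centered = X - mu_hat
sigma_hat = centered.T @ centered / (n - 1)

print(mu_hat)
print(sigma_hat)
print(np.cov(X, rowvar=False))  # NumPy's built-in estimator gives the same matrix.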
We note that the covariance matrix is positive semidefinite (nonnegative definite), i.e. for any vector $\mathbf{a} \in \mathbb{R}^n$ we have $\mathbf{a}^T \Sigma \mathbf{a} \geq 0$.
Now we explain why the covariance matrix has to be positive semidefinite. Take any vector $\mathbf{a} \in \mathbb{R}^n$. Then the linear combination

$$Y = \mathbf{a}^T \mathbf{X} = \sum_{i=1}^{n} a_i X_i \qquad (2.2)$$

is a random variable (one dimensional) and its variance must be nonnegative. This is because, in the one-dimensional case, the variance of a random variable is defined as $\mathrm{Var}(Y) = E\big[(Y - E[Y])^2\big]$, the expectation of a nonnegative quantity. A direct computation shows that $\mathrm{Var}(\mathbf{a}^T \mathbf{X}) = \mathbf{a}^T \Sigma \mathbf{a}$.
Since the variance is always nonnegative, $\mathbf{a}^T \Sigma \mathbf{a} \geq 0$ for every $\mathbf{a}$, and therefore the covariance matrix must be nonnegative definite (or positive semidefinite). We recall that a square symmetric matrix $A$ is positive semidefinite if $\mathbf{x}^T A \mathbf{x} \geq 0$ for every vector $\mathbf{x}$.
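As a numerical check (not from the text; it assumes NumPy, and the data below are an arbitrary simulated example), one can verify that the quadratic form $\mathbf{a}^T \Sigma \mathbf{a}$ coincides with the sample variance of $\mathbf{a}^T \mathbf{X}$ and that the eigenvalues of a sample covariance matrix are nonnegative.

import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5_000, 4)) @ rng.standard_normal((4, 4))  # correlated columns
sigma = np.cov(X, rowvar=False)              # sample covariance matrix

a = rng.standard_normal(4)                   # any fixed vector a
quadratic_form = a @ sigma @ a               # a^T Sigma a
var_of_projection = np.var(X @ a, ddof=1)    # sample variance of a^T X

print(quadratic_form, var_of_projection)     # the two numbers coincide (up to rounding)
print(np.linalg.eigvalsh(sigma))             # all eigenvalues are >= 0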
The covariance matrix is discussed in detail in Chapter 3.
We now present examples of multivariate distributions.
2.3.1 The Dirichlet Distribution
Before we discuss the Dirichlet distribution, we define the Beta distribution.
Definition 2.22 (Beta distribution) A random variable $X$ is said to have the Beta distribution with parameters $\alpha > 0$ and $\beta > 0$ if its probability density function is

$$f(x) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\, x^{\alpha - 1} (1 - x)^{\beta - 1}, \qquad 0 < x < 1,$$

and $f(x) = 0$ otherwise.
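A minimal sketch (not from the text; it assumes SciPy's scipy.stats.beta, and the parameter values are an arbitrary choice) of evaluating and sampling this density:

import numpy as np
from scipy.stats import beta

a_param, b_param = 2.0, 5.0               # shape parameters alpha > 0, beta > 0 (arbitrary)
x = np.linspace(0.01, 0.99, 5)            # points inside (0, 1)

print(beta.pdf(x, a_param, b_param))      # density values at the chosen points
print(beta.mean(a_param, b_param))        # theoretical mean alpha / (alpha + beta) = 2/7
samples = beta.rvs(a_param, b_param, size=1_000, random_state=0)
print(samples.mean())                     # sample mean, close to 2/7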