Applied Univariate, Bivariate, and Multivariate Statistics. Daniel J. Denis

      Since the expectation of each observation is $E(y_1) = \mu$, $E(y_2) = \mu$, $\ldots$, $E(y_n) = \mu$, we can write

$$E(\bar{y}) = E\left(\frac{y_1 + y_2 + \cdots + y_n}{n}\right) = \frac{E(y_1) + E(y_2) + \cdots + E(y_n)}{n} = \frac{n\mu}{n}$$

      We note that the $n$ in the numerator and denominator cancels, and so we end up with

$$E(\bar{y}) = \mu$$
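
      As a quick empirical check of this result, consider the following Python sketch (a simulation of my own; the population values, sample size, and seed are arbitrary illustrative choices, not from the text). It draws many samples and shows that the average of the resulting sample means is close to $\mu$.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

mu, sigma = 10.0, np.sqrt(2.0)   # population mean and standard deviation (illustrative)
n, reps = 5, 100_000             # size of each sample, and number of repeated samples

# Draw `reps` samples of size n and compute the mean of each one
sample_means = rng.normal(loc=mu, scale=sigma, size=(reps, n)).mean(axis=1)

# The average of the sample means should be close to mu, illustrating E(y-bar) = mu
print(sample_means.mean())       # approximately 10.0
```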

      We now need a measure of the dispersion of the sampling distribution of the mean. At first glance, it may seem reasonable to assume that the variance of the sampling distribution of means should equal the variance of the population from which the sample means were drawn. However, this is not the case. The variance of the sampling distribution of means is equal to only a fraction of the population variance: specifically, to $1/n$ of it, where $n$ is the size of the samples we collect for each sample mean. Hence, the variance of means of the sampling distribution is equal to

$$\sigma^2_M = \frac{1}{n}\sigma^2$$

      or simply,

$$\sigma^2_M = \frac{\sigma^2}{n}$$

      The mathematical proof of this fact can be found in most mathematical statistics texts; a version of the proof also appears in Hays (1994). The idea, however, can be understood intuitively by considering what happens as $n$ changes. We first consider a trivial and unrealistic example to drive home the point. Suppose we calculate the sample mean from a sample of size $n = 1$ drawn from a population with $\mu = 10.0$ and $\sigma^2 = 2.0$, and suppose the sample mean we obtain is equal to 4.0. The sampling variance of the corresponding sampling distribution is then equal to

$$\sigma^2_M = \frac{\sigma^2}{n} = \frac{2.0}{1} = 2.0$$

      That is, the variance in means we can expect to see if we repeatedly sampled an infinite number of means, each based on a sample of size $n = 1$ from this population, is equal to 2.0, which is exactly the original population variance. This makes sense: each "mean" here is based on only a single data point, so sampling a mean is no different from sampling a single observation.

      Consider now the case where $n > 1$. Suppose we sample a mean from the same population, this time based on a sample of size $n = 2$, yielding

$$\sigma^2_M = \frac{\sigma^2}{n} = \frac{2.0}{2} = 1.0$$
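
      Notice that doubling the sample size from $n = 1$ to $n = 2$ halved the variance of means from 2.0 to 1.0. The following Python sketch (an illustrative simulation; the normal population and seed are my own choices) makes the same point empirically by comparing the observed variance of sample means to the theoretical $\sigma^2/n$ for a few sample sizes.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

mu, var = 10.0, 2.0              # population mean and variance, as in the example above
reps = 200_000                   # number of repeated samples at each sample size

for n in (1, 2, 10):
    # Draw `reps` samples of size n and compute each sample's mean
    means = rng.normal(loc=mu, scale=np.sqrt(var), size=(reps, n)).mean(axis=1)
    # Empirical variance of the means versus the theoretical sigma^2 / n
    print(f"n={n}: observed {means.var():.3f}, theoretical {var / n:.3f}")
```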

      Analogous to how we defined the standard deviation as the square root of the variance, it is also useful to take the square root of the variance of means:

$$\sigma_M = \sqrt{\frac{\sigma^2}{n}} = \frac{\sigma}{\sqrt{n}}$$

      which we call the standard error of the mean, $\sigma_M$. The standard error of the mean is the standard deviation of the sampling distribution of the mean. Lastly, it is important to recognize that $\sigma/\sqrt{n}$ is not "the" standard error; it is merely the standard error of the mean. Other statistics have different standard errors.
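
      In practice, $\sigma$ is rarely known, and the standard error of the mean is estimated by substituting the sample standard deviation $s$, giving $s/\sqrt{n}$. A minimal sketch (the data values below are hypothetical, invented purely for illustration):

```python
import numpy as np

# Hypothetical sample; in practice sigma is unknown and is estimated by s
y = np.array([8.9, 11.2, 9.7, 10.4, 12.1, 9.3])

s = y.std(ddof=1)               # sample standard deviation (n - 1 in the denominator)
se_mean = s / np.sqrt(len(y))   # estimated standard error of the mean: s / sqrt(n)

print(f"mean = {y.mean():.3f}, estimated SE of the mean = {se_mean:.3f}")
```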

      It is not an exaggeration to say that the central limit theorem, in one form or another, is probably the most important theorem in theoretical statistics, and it is consequently of great relevance to applied statistics as well.

      We borrow our definition of the central limit theorem from Everitt (2002):

      If a random variable $y$ has a population mean $\mu$ and population variance $\sigma^2$, then the sample mean, $\bar{y}$, based on $n$ observations, has an approximate normal distribution with mean $\mu$ and variance $\sigma^2/n$, for sufficiently large $n$. (p. 64)

      The relevance and importance of the central limit theorem cannot be overstated: it tells us, at least on a theoretical level, what the distribution of a statistic (e.g., the sample mean) will look like for increasing sample size. This is especially important when one is drawing samples from a population whose shape is unknown or is known a priori to be nonnormal. For adequate sample size, normality of the sampling distribution is still assured even if samples are drawn from nonnormal populations. Why is this relevant? Because if we know what the distribution of means will look like for increasing sample size, we can compare our obtained statistic to a normal distribution in order to estimate its probability of occurrence. Normality assumptions are also typically required for assuming independence between $\bar{y}$ and $s^2$ in univariate contexts (Lukacs, 1942), and between $\bar{\mathbf{y}}$ (the mean vector) and $\mathbf{S}$ (the covariance matrix) in multivariate ones. When such estimators can be assumed to arise from normal or multivariate normal distributions (i.e., in the case of $\bar{\mathbf{y}}$ and $\mathbf{S}$), we can generally be assured that one is independent of the other.
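
      To see the theorem at work, the following sketch (an illustrative simulation of my own; the exponential population and the sample sizes are arbitrary choices) draws sample means from a strongly right-skewed population and shows their skewness shrinking toward 0, the value for a normal distribution, as $n$ increases.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
reps = 100_000                   # number of sample means computed at each sample size

for n in (2, 5, 30):
    # Exponential population: right-skewed, with mean 1 and variance 1
    means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
    # Sample skewness of the sampling distribution of the mean;
    # it approaches 0 (the normal value) as n grows
    centered = means - means.mean()
    skew = np.mean(centered**3) / means.std()**3
    print(f"n={n}: skewness of sample means = {skew:.3f}")
```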
