Probability with R. Jane M. Horgan


once more, we get

      mean(usage[3:9])
      [1] 3.375714
      median(usage[3:9])
      [1] 3.22

      Now, we see that there is not much difference between the mean and median.

      When there are extremely high values in the data, using the mean as a measure of central tendency gives the wrong impression. A classic example of this is wage statistics where there may be a few instances of very high salaries, which will grossly inflate the average, giving the impression that salaries are higher than they actually are.
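The effect can be seen with a small made-up vector of salaries (in thousands): nine modest wages and one very large one.

```r
# Hypothetical salaries (in thousands); the last value is an extreme outlier
salaries <- c(28, 30, 31, 32, 33, 34, 35, 36, 38, 250)

mean(salaries)     # pulled upwards by the single large salary
# [1] 54.7
median(salaries)   # unaffected by the extreme value
# [1] 33.5
```

The median stays with the bulk of the data, which is why it is preferred for skewed quantities such as incomes.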

      Measures of dispersion, as the name suggests, estimate the spread or variation in a data set. There are many ways of measuring spread, and we consider some of the most common.

      Range: The simplest measure of spread of data is the range, which is the difference between the maximum and the minimum values.

      rangedown <- max(downtime) - min(downtime)
      rangedown
      [1] 51

      tells us that the range in the downtime data is 51 minutes.

      rangearch1 <- max(arch1, na.rm = TRUE) - min(arch1, na.rm = TRUE)
      rangearch1
      [1] 97

      gives the range of the marks awarded in Architecture in Semester 1.

      The R function range may also be used.

      range(arch1, na.rm = TRUE)
      [1]   3 100

      which gives the minimum (3) and the maximum (100) of the marks obtained in Architecture in Semester 1.

      To get the range for all the examination subjects in results, we use the function sapply.

      sapply(results[2:5], range, na.rm = TRUE)

      gives the minimum and maximum of each subject.

           arch1 prog1 arch2 prog2
      [1,]     3    12     6     5
      [2,]   100    98    98    97
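How sapply passes extra arguments through to the applied function can be checked on a small made-up data frame (called marks here, standing in for results): na.rm = TRUE is forwarded to range for every column.

```r
# A small made-up data frame, a stand-in for the results data frame
marks <- data.frame(arch1 = c(40, 55, NA, 70),
                    prog1 = c(35, 60, 45, NA))

# range is applied column by column; na.rm = TRUE is passed on to range
sapply(marks, range, na.rm = TRUE)
#      arch1 prog1
# [1,]    40    35
# [2,]    70    60
```

Because range returns a vector of length two for each column, sapply assembles the results into a matrix with one column per subject.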

      Standard deviation: The standard deviation (sd) measures how much the data values deviate from their average. It is the square root of the average squared deviations from the mean. A small standard deviation implies most values are near the mean. A large standard deviation indicates that values are widely spread above and below the mean.

      In R

      sd(downtime)

      yields

      [1] 14.27164

      Recall that we calculated the mean to be 25.04 minutes. We might loosely describe the downtime as being “25 minutes on average give or take 14 minutes.”
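The definition can be checked by hand on a small made-up sample. One detail worth noting: R's sd divides the summed squared deviations by n - 1 (the sample standard deviation) rather than by n.

```r
x <- c(2, 4, 4, 4, 5, 5, 7, 9)   # a small made-up sample
n <- length(x)

# sd() uses the sample formula: squared deviations divided by n - 1
by.hand <- sqrt(sum((x - mean(x))^2) / (n - 1))
by.hand
# [1] 2.13809
sd(x)                             # agrees with the hand calculation
# [1] 2.13809
```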

      For the data in results,

      sapply(results[2:5], sd, na.rm = TRUE)

      gives the standard deviation of each examination subject in results:

         arch1    prog1    arch2    prog2
      24.37469 23.24012 21.99061 27.08082

      Quantiles: The quantiles divide the data into proportions, usually into quarters called quartiles, tenths called deciles, and percentages called percentiles. The default calculation in R is quartiles.

      quantile(downtime)

      gives

        0%  25%  50%  75% 100%
       0.0 16.0 25.0 31.5 51.0

      The second quartile (25.0) is the value that breaks the data so that 50% is below and 50% is above (notice that the 2nd quartile is the median).

      The third quartile (31.5) is the value that breaks the data so that 75% is below and 25% is above.

      We could say that 25% of the computer systems in the laboratory experienced less than 16 minutes of downtime, another 25% of them were down for between 16 and 25 minutes, and so on.

      Interquartile range: The difference between the first and third quartiles is called the interquartile range and is sometimes used as a rough estimate of the standard deviation. In downtime it is 31.5 - 16.0 = 15.5, not too far away from 14.27, which we calculated to be the standard deviation.
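R also provides a built-in function IQR, which returns the same value as subtracting the first quartile from the third. A sketch with hypothetical downtimes:

```r
x <- c(0, 16, 20, 25, 30, 31.5, 51)   # made-up downtimes in minutes

q <- quantile(x, c(0.25, 0.75))       # first and third quartiles
unname(q[2] - q[1])                   # interquartile range by subtraction
# [1] 12.75
IQR(x)                                # the built-in function agrees
# [1] 12.75
```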

      Deciles: Deciles divide the data into tenths. To get the deciles in R, first define the required break points

      deciles <- seq(0, 1, 0.1)

      The function seq creates a vector consisting of an equidistant series of numbers. In this case, seq assigns values in [0, 1] in intervals of 0.1 to the vector called deciles. Writing in R

      deciles

      shows what the vector contains

      [1] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0

      Adding this extra argument to the quantile function

      quantile(downtime, deciles)

      yields

        0%   10%   20%   30%   40%   50%   60%   70%   80%   90%  100%
       0.0   4.0  12.8  19.8  22.6  25.0  29.2  30.0  34.8  44.8  51.0

      Interpreting this output, we could say that 90% of the computer systems in the laboratory experienced less than 45 minutes of downtime.

      Similarly, for the percentiles, use

      percentiles <- seq(0, 1, 0.01)

      as an argument in the quantile function, and write

      quantile(downtime, percentiles)

      The R function summary calculates the most common summary statistics in a single call. For example,

      summary(downtime)

      returns

         Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
         0.00   16.00   25.00   25.04   31.50   51.00

      which are the minimum, the first quartile, the median, the mean, the third quartile, and the maximum, respectively.

      For arch1, we might write

      summary(arch1)

      which gives

         Min. 1st Qu.  Median    Mean 3rd Qu.    Max.    NA's
         3.00   46.75
