Experimental Design and Statistical Analysis for Pharmacology and the Biomedical Sciences. Paul J. Mitchell


Discrete distributions: Uniform, Bernoulli, Binomial, Poisson, Geometric, Negative binomial, Hypergeometric
Continuous distributions: Uniform, Exponential, Gamma, Normal, Chi‐square, Student‐t, F, Beta, Weibull, Gumbel

      In contrast, the discrete and continuous probability distributions described briefly below are either of theoretical interest or may inform our decisions about which statistical methods we use to describe and analyse data from our experiments. Care must be taken, however, to ensure that we map our experimental data onto the correct mathematical distribution; if we make an incorrect choice, then we risk arriving at erroneous conclusions.

      1 Discrete uniform distribution (Figure 4.2)
This is a very simple distribution in which each of the equally spaced possible integer values has the same probability. An example would be rolling a fair six‐sided die; here the probability of each face occurring is equal, i.e. there are six equally likely outcomes. In standard statistical nomenclature, p is the variable used to denote probability, so here p = 1/6 = 0.167.
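The die-rolling example above can be sketched as a short simulation. This is an illustrative sketch using Python's standard library (the seed and the number of rolls are arbitrary choices, not taken from the text): with enough rolls, the relative frequency of each face should settle close to p = 1/6 ≈ 0.167.

```python
import random
from collections import Counter

random.seed(42)  # fix the seed so the simulation is reproducible

# Simulate 60,000 rolls of a fair six-sided die (discrete uniform on 1..6)
rolls = [random.randint(1, 6) for _ in range(60_000)]
counts = Counter(rolls)

# Each face should occur with relative frequency close to 1/6 = 0.167
for face in range(1, 7):
    print(face, round(counts[face] / len(rolls), 3))
```

Each printed frequency lands near 0.167, illustrating why the probability density in Figure 4.2 is flat across the six faces.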

      2 Bernoulli distribution (Figure 4.3)
Whenever you toss a coin, the only outcomes are that the coin lands heads or tails uppermost; the question being asked here is ‘will this single trial succeed?’. This is the most basic of random events, where a single trial may have only one of two possible outcomes, each occurring with a fixed probability. The Bernoulli distribution has only one controlling parameter: the probability of success, according to whether you call heads or tails; in both cases the probabilities of success and failure in a single trial are equal, with a probability of 0.5 (i.e. p = 0.5).
Figure 4.2 The discrete uniform distribution. X‐axis values indicate the resulting number shown on the throw of a six‐sided die. Y‐axis values indicate the relative probability density.
Figure 4.3 The Bernoulli distribution. X‐axis values indicate the resulting outcome from only two possibilities, e.g. success or failure to throw a heads on the toss of a coin. Y‐axis values indicate the relative probability density.
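A single Bernoulli trial with its one controlling parameter, p, can be sketched in a few lines. This is a minimal illustration (the function name `bernoulli_trial` is our own, not from the text): it returns 1 for success with probability p and 0 for failure otherwise.

```python
import random

def bernoulli_trial(p):
    """One Bernoulli trial: 1 (success) with probability p, else 0 (failure)."""
    return 1 if random.random() < p else 0

random.seed(1)
p = 0.5  # fair coin: success (heads) and failure (tails) are equally likely
outcome = bernoulli_trial(p)
print("heads" if outcome == 1 else "tails")
```

Note that the two extreme parameter values behave as expected: `bernoulli_trial(1.0)` always succeeds and `bernoulli_trial(0.0)` always fails.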

      3 Binomial distribution (Figure 4.4)
The binomial distribution is an extension of the Bernoulli distribution to multiple success‐or‐failure trials with a fixed probability. Consequently, the binomial distribution addresses the question ‘out of a given number of trials, how many will be successful?’. So, if you tossed a coin 10 times, in how many of these trials would the coin land heads? With a fair coin you would expect five heads and five tails as the outcomes of the 10 trials. But what is the probability of only two heads, or nine heads, or no heads at all? For those who are interested (!), the probability of obtaining exactly k successes in n trials is given by the binomial probability mass function and is discussed in detail in Appendix A.1.
Figure 4.4 The binomial distribution. 250 undergraduate students were asked to toss a coin 10 times and count the number of times the coin landed heads uppermost. The X‐axis indicates the number of times the coin was successfully tossed to land heads from 10 trials. The Y‐axis indicates (a) the predicted number of students for each level of success according to the probability mass function for the binomial distribution (thin solid line) and (b) the observed number of students for each level of outcome (open bars). For further discussion and calculation of the binomial probability mass function, see Appendix A.1.
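The predicted counts in Figure 4.4 can be reproduced directly from the binomial probability mass function (the full derivation is in Appendix A.1; the code below is a sketch of the standard formula, not the book's own listing). For 250 students each tossing a fair coin 10 times, the expected number of students observing k heads is 250 × P(k).

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n trials, each with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p, students = 10, 0.5, 250
for k in range(n + 1):
    prob = binomial_pmf(k, n, p)
    print(f"{k} heads: P = {prob:.4f}, expected students = {students * prob:.1f}")
```

For example, P(exactly 5 heads in 10 tosses) = 252/1024 ≈ 0.246, so roughly 61 or 62 of the 250 students would be expected to report five heads, while no heads at all has probability 1/1024 ≈ 0.001.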

      4 Poisson distribution
The Poisson distribution (which is closely related to the binomial distribution, arising as its limit when the number of trials is large and the probability of success is small) describes how many times a discrete event will occur within a given period of time.
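As a sketch of this idea, the standard Poisson probability mass function, P(k) = λᵏe^(−λ)/k!, gives the probability of observing k events in an interval when events occur at a mean rate λ per interval. The example rate below (λ = 3 events per minute) is an illustrative value of our own, not taken from the text.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of observing exactly k events in an interval with mean rate lam."""
    return (lam**k) * exp(-lam) / factorial(k)

lam = 3.0  # illustrative: an average of 3 events per minute
for k in range(6):
    print(f"P({k} events) = {poisson_pmf(k, lam):.4f}")
```

With λ = 3, observing zero events has probability e⁻³ ≈ 0.050, and the most likely counts are 2 and 3 events, each with probability ≈ 0.224.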

      5 Continuous uniform distribution
This is a very simple distribution where (as with the discrete uniform distribution, see Figure 4.2) the probability densities for all values are equal. In this situation, however, the measured values are not limited to integers.

      6 Exponential distribution (Figure 4.5)
The exponential distribution is used to model the time between independent events that happen at a constant rate. Examples of this include the rate at which radioactive particles decay and the rate at which drugs are eliminated from the body according to first‐order pharmacokinetic principles (Figure 4.6). For calculation of the exponential probability density function, see Appendix A.2.
Figure 4.5 The exponential distribution. The probability density function of the exponential distribution for events that happen at a constant rate, λ. Curves shown are for rate values of λ = 0.5 (bold line), 1.0 (thin line), 1.5 (dashed line), and 2.0 (dotted line). X‐axis values indicate the stochastic variable, x. Y‐axis values indicate probability density (see also Appendix A.2).
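The pharmacokinetic example can be made concrete with the standard first‐order elimination equation, C(t) = C₀e^(−kₑₗt), where the rate constant is related to the half‐life by kₑₗ = ln 2 / t½. The starting concentration and half‐life below are illustrative values chosen for this sketch, not drug‐specific figures from the text.

```python
from math import exp, log

# Illustrative first-order elimination parameters (not from the text)
C0 = 100.0      # initial plasma concentration, mg/L
t_half = 4.0    # elimination half-life, hours
k_el = log(2) / t_half  # first-order elimination rate constant, per hour

def concentration(t):
    """Plasma concentration at time t (hours) under first-order elimination."""
    return C0 * exp(-k_el * t)

for t in (0, 4, 8, 12):
    print(f"t = {t:2d} h: C = {concentration(t):6.1f} mg/L")
```

The concentration halves every 4 hours (100 → 50 → 25 → 12.5 mg/L), the exponential decay that Figure 4.6 depicts for drug elimination.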

      7 Normal Distribution (Figure 4.7)
The Normal Distribution (also known as the Gaussian distribution) is the most widely used distribution, particularly in the biological sciences, where most (but not all) biologically derived data follow a Normal Distribution. A Normal Distribution assumes that all measurements/observations are tightly clustered around the population mean, μ, and that the frequency of the observations decays rapidly and equally the further the observations lie above and below the mean, thereby producing a characteristic bell shape (see Figure 4.7; for calculation of the probability density of the Normal Distribution, see Appendix A.3). The spread of the data either side of the population mean is quantified by the variance, σ², where the square root of the variance is the Standard Deviation, σ (see Chapter 5). The normal distribution with these parameters is usually denoted as N with the values of the population mean and standard deviation immediately following in parentheses, thus: N(μ, σ). Every normal distribution is a version of the simplest case, where the mean is set to zero and the standard deviation equals 1; this is denoted as N(0, 1) and is known as the Standard Normal Distribution (see Chapter 7). Furthermore, the area under the curve of the Standard Normal Distribution is equal to 1, which means that sections defined by multiples of the standard deviation either side of the mean equate to specific proportions of the total area under the curve. Because Normal Distribution curves are parameterised by their corresponding Mean and Standard Deviation values, such data are known as parametric. Consequently, parametric statistics (both Descriptive and Inferential Statistics) assume that sample data sets come from data populations that follow a fixed set of parameters. In contrast, non‐parametric data sets, and their
