Industrial Data Analytics for Diagnosis and Prognosis. Yong Chen


\boldsymbol{\mu}_n = \left(\boldsymbol{\Sigma}_0^{-1} + n\boldsymbol{\Sigma}^{-1}\right)^{-1}\left(\boldsymbol{\Sigma}_0^{-1}\boldsymbol{\mu}_0 + n\boldsymbol{\Sigma}^{-1}\bar{\mathbf{x}}\right) \quad (3.30)

\boldsymbol{\Sigma}_n^{-1} = \boldsymbol{\Sigma}_0^{-1} + n\boldsymbol{\Sigma}^{-1} \quad (3.31)

      where x̄ is the sample mean of the data, which is the MLE of μ. It is easy to see the similarity between the results for the univariate data in (3.28) and (3.29) and the results for the multivariate data in (3.30) and (3.31). The MAP estimate of μ is exactly μn. As in the univariate case, when n is large, or when the prior distribution is flat, the MAP estimate is close to the MLE.
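The correspondence between the univariate and multivariate results can be checked numerically. The sketch below is in Python/NumPy (the book's own code is R); it assumes the standard univariate conjugate-normal posterior mean μn = (σ²μ0 + nσ0²x̄)/(σ² + nσ0²), and verifies that the matrix formula (3.30), specialized to 1×1 "matrices", gives the same value. All numbers are illustrative, not from the book.

```python
import numpy as np

# Synthetic 1-D data (illustrative values only)
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=50)
n, xbar = len(x), x.mean()

mu0, sigma0_sq = 0.0, 10.0   # prior mean and prior variance
sigma_sq = 4.0               # known data variance

# Univariate posterior mean (the standard conjugate-normal result)
mu_n = (sigma_sq * mu0 + n * sigma0_sq * xbar) / (sigma_sq + n * sigma0_sq)

# Multivariate formula (3.30) with 1x1 matrices
S0inv = np.linalg.inv([[sigma0_sq]])
Sinv = np.linalg.inv([[sigma_sq]])
mu_n_mat = np.linalg.inv(S0inv + n * Sinv) @ (S0inv @ [[mu0]] + n * Sinv @ [[xbar]])

print(mu_n, mu_n_mat[0, 0])  # the two expressions agree
```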

      One advantage of Bayesian inference is that prior knowledge can be included naturally. Suppose, for example, that a randomly sampled product turns out to be defective. The MLE of the defective rate based on this single observation would be equal to 1, implying that all products are defective. By contrast, a Bayesian approach with a reasonable prior gives a much less extreme conclusion. In addition, Bayesian inference can be performed sequentially in a very natural way. To see this, we can write the posterior distribution of μ with the contribution from the last data point xn separated out as

p(\boldsymbol{\mu} \mid \mathbf{x}_1, \ldots, \mathbf{x}_n) \propto p(\mathbf{x}_n \mid \boldsymbol{\mu})\, p(\boldsymbol{\mu} \mid \mathbf{x}_1, \ldots, \mathbf{x}_{n-1}),

      where the posterior based on the first n − 1 observations plays the role of the prior when the new observation xn arrives.
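The sequential property can be verified numerically: updating the posterior one observation at a time, using each step's posterior as the next step's prior, yields exactly the same (μn, Σn) as applying the batch formulas (3.30)–(3.31) once. A Python/NumPy sketch with synthetic data (dimensions and values are illustrative, not from the book):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 3, 20
Sigma = np.diag([4.0, 9.0, 1.0])        # known data covariance
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

mu0 = np.zeros(p)
Sigma0 = 100.0 * np.eye(p)              # diffuse prior

Sinv = np.linalg.inv(Sigma)

def posterior(mu0, Sigma0, X):
    """Posterior (mu_n, Sigma_n) via (3.30)-(3.31), known data covariance Sigma."""
    n = len(X)
    S0inv = np.linalg.inv(Sigma0)
    Sigma_n = np.linalg.inv(S0inv + n * Sinv)
    mu_n = Sigma_n @ (S0inv @ mu0 + n * Sinv @ X.mean(axis=0))
    return mu_n, Sigma_n

# Batch update with all n observations at once
mu_batch, Sig_batch = posterior(mu0, Sigma0, X)

# Sequential update: each posterior becomes the prior for the next point
mu_seq, Sig_seq = mu0, Sigma0
for x in X:
    mu_seq, Sig_seq = posterior(mu_seq, Sig_seq, x[None, :])

print(np.allclose(mu_batch, mu_seq), np.allclose(Sig_batch, Sig_seq))
```

Conjugacy guarantees the two routes give identical results; only the bookkeeping differs.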

      Example 3.3: For the side_temp_defect data set from a hot rolling process, suppose the true covariance matrix of the side temperatures measured at locations 2, 40, and 78 of Stand 5 is known and given by

\mathbf{S} = \begin{pmatrix} 2547.4 & -111.0 & 133.7 \\ -111.0 & 533.1 & 300.7 \\ 133.7 & 300.7 & 562.5 \end{pmatrix}.

      We use the nominal mean temperatures as given in Example 3.2 as the mean of the prior distribution and a diagonal matrix with variance equal to 100 for each temperature variable as its covariance matrix:

\boldsymbol{\mu}_0 = \begin{pmatrix} 1926 \\ 1851 \\ 1872 \end{pmatrix}, \quad \boldsymbol{\Sigma}_0 = \begin{pmatrix} 100 & 0 & 0 \\ 0 & 100 & 0 \\ 0 & 0 & 100 \end{pmatrix}.

      Based on (3.30) and (3.31), the posterior mean and covariance matrix for μ can be computed in R using the first five (n = 5) observations in the data set.
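The book's R code is not reproduced in this extraction. The following Python/NumPy sketch carries out the same calculation; since the side_temp_defect data set is not included here, the five rows of X below are placeholder values, so the printed numbers will not match the book's μ5 and Σ5 exactly unless the actual data rows are substituted.

```python
import numpy as np

# Known data covariance matrix (Example 3.3)
Sigma = np.array([[2547.4, -111.0, 133.7],
                  [-111.0,  533.1, 300.7],
                  [ 133.7,  300.7, 562.5]])

# Prior: nominal mean temperatures and a diagonal covariance matrix
mu0 = np.array([1926.0, 1851.0, 1872.0])
Sigma0 = 100.0 * np.eye(3)

# First n = 5 observations of the three side temperatures.
# Placeholder rows -- replace with the actual rows of side_temp_defect.
X = np.array([[1953.0, 1851.0, 1823.0],
              [1932.0, 1845.0, 1841.0],
              [1941.0, 1859.0, 1838.0],
              [1948.0, 1843.0, 1852.0],
              [1941.0, 1852.0, 1836.0]])
n = len(X)
xbar = X.mean(axis=0)

# Posterior mean and covariance matrix, equations (3.30) and (3.31)
S0inv = np.linalg.inv(Sigma0)
Sinv = np.linalg.inv(Sigma)
Sigma_n = np.linalg.inv(S0inv + n * Sinv)
mu_n = Sigma_n @ (S0inv @ mu0 + n * Sinv @ xbar)

print(np.round(mu_n), "\n", np.round(Sigma_n, 2))
```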

      The posterior mean and covariance matrix are obtained as

\boldsymbol{\mu}_5 = \begin{pmatrix} 1930 \\ 1856 \\ 1854 \end{pmatrix}, \quad \boldsymbol{\Sigma}_5 = \begin{pmatrix} 83.37 & -2.61 & 2.83 \\ -2.61 & 46.85 & 15.37 \\ 2.83 & 15.37 & 48.235 \end{pmatrix}.

      Compared to the sample mean of the first five observations, which is (1943 1850 1838)^T, the posterior mean deviates somewhat from both the sample mean and the prior mean μ0. Now we use the first 100 (n = 100) observations to find the posterior mean by changing n in the R code from 5 to 100. The posterior mean and covariance matrix are

\boldsymbol{\mu}_{100} = \begin{pmatrix} 1940 \\ 1849 \\ 1865 \end{pmatrix}, \quad \boldsymbol{\Sigma}_{100} = \begin{pmatrix} 20.28 & -0.87 & 1.03 \\ -0.87 & 4.97 & 2.72 \\ 1.03 & 2.72 & 5.235 \end{pmatrix}.

      Compared to the sample mean vector of the first 100 observations, which is (1944 1849 1865)^T, the posterior mean with n = 100 observations is very close to the sample mean, while the influence of the prior mean is very small. In addition, the posterior variance for the mean temperature at each of the three locations is much smaller for n = 100 than for n = 5.
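The shrinking posterior variance follows directly from (3.31): for large n the prior precision is negligible relative to nΣ⁻¹, so Σn ≈ Σ/n and each posterior variance falls roughly in proportion to 1/n. A quick Python/NumPy check with the Example 3.3 matrices (no observations are needed, since Σn does not depend on the data values):

```python
import numpy as np

# Known data covariance and prior covariance from Example 3.3
Sigma = np.array([[2547.4, -111.0, 133.7],
                  [-111.0,  533.1, 300.7],
                  [ 133.7,  300.7, 562.5]])
Sigma0 = 100.0 * np.eye(3)

S0inv = np.linalg.inv(Sigma0)
Sinv = np.linalg.inv(Sigma)

for n in (5, 100, 10000):
    Sigma_n = np.linalg.inv(S0inv + n * Sinv)
    # As n grows, the prior term S0inv becomes negligible and
    # Sigma_n approaches Sigma / n
    print(n, np.round(np.diag(Sigma_n), 2), np.round(np.diag(Sigma) / n, 2))
```

At n = 5 the prior still matters (the first diagonal entry is about 83, far below Σ₁₁/5 ≈ 509, because the prior variance 100 dominates for that poorly informed coordinate); by n = 10000 the two diagonals nearly coincide.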

      Bibliographic Notes

      Exercises

      1 Consider two discrete random variables X and Y with joint probability mass function p(x, y) given in the following table:

x y p(x, y)
–1 –1
