Computational Statistics in Data Science. Group of authors.


The posterior distribution of $\theta$ (given $\lambda$) is

\[
\theta \mid x, \lambda \sim \mathrm{N}\!\left( \frac{\lambda x}{\lambda + 1},\; \frac{\lambda I_p}{\lambda + 1} \right)
\]

If the true value of $\lambda$ is unknown, it is often estimated from the marginal distribution of $X$, $X \sim \mathrm{N}_p\!\left(0, (\lambda + 1) I_p\right)$, via maximum-likelihood estimation as

\[
\hat{\lambda} = \left( \frac{\|x\|^2}{p} - 1 \right)^{+}
\]
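This truncated MLE can be sketched in a few lines of NumPy (the function name `lambda_hat` is ours, not from the text):

```python
import numpy as np

def lambda_hat(x):
    """Truncated MLE of lambda from the marginal X ~ N_p(0, (lambda+1) I_p):
    (||x||^2 / p - 1)^+ , i.e., negative values are clipped to zero."""
    x = np.asarray(x, dtype=float)
    p = x.size
    return max(np.sum(x**2) / p - 1.0, 0.0)
```

The positive-part truncation reflects that the prior variance $\lambda$ cannot be negative.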

Robert and Casella [4] consider estimating $h(\theta) = \|\theta\|^2$ using the posterior mean $\mathrm{E}\!\left[ \|\theta\|^2 \mid x, \hat{\lambda} \right]$. Under quadratic loss, the Bayes estimator is

\[
\hat{h}_{eb} = \left( \|x\|^2 - p \right)^{+}
\]

The risk for $\hat{h}_{eb}$,

\[
\eta_{eb}(\|\theta\|) = \mathrm{E}\!\left[ \left( \|\theta\|^2 - \left( \|X\|^2 - p \right)^{+} \right)^2 \,\middle|\, \theta \right]
\]

is difficult to obtain analytically (although not impossible; see Robert and Casella [4]). Instead, we can estimate the risk over a grid of $\|\theta\|$ values using Monte Carlo. To do this, we fix $m$ choices $\theta_1, \ldots, \theta_m$ over a grid and, for each $k = 1, \ldots, m$, generate $n$ Monte Carlo samples from $X \mid \theta_k \sim \mathrm{N}_p(\theta_k, I_p)$, yielding estimates

\[
\hat{\eta}_{eb}(\|\theta_k\|) = \frac{1}{n} \sum_{t=1}^{n} \left( \|\theta_k\|^2 - \left( \|X_t\|^2 - p \right)^{+} \right)^2
\]
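A minimal NumPy sketch of this grid-based estimator follows; the dimension $p$, the grid of $\|\theta\|$ values, and the sample size $n$ below are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

p, n, m = 5, 10_000, 20            # dimension, samples per grid point, grid size (assumed)
norms = np.linspace(0.0, 6.0, m)   # grid of ||theta|| values (assumed range)

risk_hat = np.empty(m)
for k, r in enumerate(norms):
    theta_k = np.zeros(p)
    theta_k[0] = r                                  # any theta with ||theta|| = r works
    X = rng.standard_normal((n, p)) + theta_k       # X_t ~ N_p(theta_k, I_p), t = 1..n
    h_eb = np.maximum((X**2).sum(axis=1) - p, 0.0)  # (||X_t||^2 - p)^+
    risk_hat[k] = np.mean((r**2 - h_eb) ** 2)       # eta_hat_eb(||theta_k||)
```

Because the risk depends on $\theta$ only through $\|\theta\|$, placing each $\theta_k$ along the first coordinate axis loses no generality.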

The resulting estimate of the risk is an $m$-dimensional vector of means, for which we can utilize the sampling distribution in Theorem 1 to construct large-sample confidence regions. An appropriate choice of a sequential stopping rule here is the relative-magnitude sequential stopping rule, which stops simulation when the Monte Carlo variance is small relative to the average risk over all values of $\theta$ considered. It is important to note that the risk at a particular $\theta$ could be zero, but this is unlikely.
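The exact form of the relative-magnitude rule varies by presentation; the function below is only a sketch of the idea as stated here (stop once every component's Monte Carlo standard error is small relative to the average estimated risk), with the tolerance `eps` and the function name being our assumptions:

```python
import numpy as np

def relative_magnitude_stop(samples, eps=0.05):
    """Sketch of a relative-magnitude stopping check.

    samples : (n, m) array of per-replication loss values, one column per
              grid point theta_k. Assumes the mean estimated risk is positive.
    Returns True when every column's Monte Carlo standard error falls below
    eps times the grand mean of the m risk estimates.
    """
    samples = np.asarray(samples, dtype=float)
    n, m = samples.shape
    means = samples.mean(axis=0)                    # risk estimates per theta_k
    se = samples.std(axis=0, ddof=1) / np.sqrt(n)   # Monte Carlo standard errors
    return bool(np.all(se < eps * means.mean()))
```

In a sequential setting this check would be evaluated after each batch of new Monte Carlo draws, with simulation continuing until it returns `True`.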

[Figure stat08283fgy002: estimated risk curves, panels (a) and (b), with pointwise Bonferroni-corrected confidence intervals.]

      7.3 Bayesian Nonlinear Regression
