Nonlinear Filters. Simon Haykin


time $k$; hence, the union of these two sets is called the information set, $\mathbf{I}_k = \{\mathbf{U}_k, \mathbf{Y}_k\} = \{\mathbf{u}_{0:k}, \mathbf{y}_{0:k}\}$ [47].

      A filter uses the inputs and the observations available up to time instant $k$ to estimate the state at $k$, $\hat{\mathbf{x}}_{k|k}$. In other words, a filter tries to solve an inverse problem: inferring the states (cause) from the observations (effect). Due to uncertainties, different values of the state could have led to the obtained measurement sequence, $\mathbf{y}_{0:k}$. The Bayesian framework allows us to associate a degree of belief with each of these possibly valid values of the state. The main idea is to start from an initial density for the state vector, $p(\mathbf{x}_0)$, and recursively compute the posterior PDF, $p(\mathbf{x}_k|\mathbf{u}_{0:k},\mathbf{y}_{0:k})$, based on the measurements. This can be done by a filtering algorithm that consists of two stages: prediction and update [46].

      When a new measurement $\mathbf{y}_{k+1}$ is obtained, the prediction stage is followed by the update stage, where the above prediction density plays the role of the prior. Bayes' rule is used to compute the posterior density of the state as [46, 47]:

      (4.8) $p(\mathbf{x}_{k+1}|\mathbf{u}_{0:k+1},\mathbf{y}_{0:k+1}) = \dfrac{p(\mathbf{y}_{k+1}|\mathbf{x}_{k+1},\mathbf{u}_{k+1})\, p(\mathbf{x}_{k+1}|\mathbf{u}_{0:k},\mathbf{y}_{0:k})}{p(\mathbf{y}_{k+1}|\mathbf{u}_{0:k+1},\mathbf{y}_{0:k})},$

      where the normalization constant in the denominator is obtained as:

      (4.9) $p(\mathbf{y}_{k+1}|\mathbf{u}_{0:k+1},\mathbf{y}_{0:k}) = \displaystyle\int p(\mathbf{y}_{k+1}|\mathbf{x}_{k+1},\mathbf{u}_{k+1})\, p(\mathbf{x}_{k+1}|\mathbf{u}_{0:k},\mathbf{y}_{0:k})\, \mathrm{d}\mathbf{x}_{k+1}.$
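The prediction-update recursion can be sketched numerically with a point-mass (grid) filter, in which the integrals above become sums over a discretized state space. The model below (random-walk state, Gaussian noises, and all numerical values) is an assumed toy example, not one from the text:

```python
import numpy as np

# Point-mass (grid) sketch of the Bayes recursion: prediction followed by the
# Bayes update with the normalization constant of Eq. (4.9). Assumed toy model:
# random-walk state x_{k+1} = x_k + w_k, measurement y_k = x_k + v_k, Gaussian noises.

def gauss(x, mu, sigma):
    """Gaussian density evaluated elementwise."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

grid = np.linspace(-10.0, 10.0, 2001)      # discretized state space
dx = grid[1] - grid[0]

prior = gauss(grid, 0.0, 2.0)              # p(x_k | u_{0:k}, y_{0:k})

# Prediction: integrate the transition density against the current posterior
trans = gauss(grid[:, None], grid[None, :], 1.0)   # p(x_{k+1} | x_k) as a matrix
pred = trans @ prior * dx                          # p(x_{k+1} | u_{0:k}, y_{0:k})

# Update: multiply by the likelihood p(y_{k+1} | x_{k+1}) and divide by the
# evidence, i.e. the normalization constant of Eq. (4.9)
y_new = 1.5
lik = gauss(y_new, grid, 0.5)
evidence = np.sum(lik * pred) * dx
post = lik * pred / evidence

print(np.sum(post) * dx)                   # ~ 1.0: a proper density
```

Because every density here is Gaussian, the grid result can be checked against the closed-form conjugate update, which is what makes this toy model convenient.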

       Minimum mean-square error (MMSE) estimator

      (4.10) $\hat{\mathbf{x}}_{k+1|k+1}^{\mathrm{MMSE}} = \underset{\hat{\mathbf{x}}_{k+1|k+1}}{\arg\min}\; \mathbb{E}\left[ \left\| \mathbf{x}_{k+1} - \hat{\mathbf{x}}_{k+1|k+1} \right\|_2^2 \,\middle|\, \mathbf{u}_{0:k+1}, \mathbf{y}_{0:k+1} \right]$

      This is equivalent to minimizing the trace (sum of the diagonal elements) of the estimation-error covariance matrix. The MMSE estimate is the conditional mean of $\mathbf{x}_{k+1}$:

      (4.11) $\hat{\mathbf{x}}_{k+1|k+1}^{\mathrm{MMSE}} = \mathbb{E}\left[ \mathbf{x}_{k+1} \,\middle|\, \mathbf{u}_{0:k+1}, \mathbf{y}_{0:k+1} \right],$

      where the expectation is taken with respect to the posterior, $p(\mathbf{x}_{k+1}|\mathbf{u}_{0:k+1},\mathbf{y}_{0:k+1})$.
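A quick numerical check of this property: on a discretized posterior (the right-skewed density below is an assumed toy example, not from the text), the candidate estimate that minimizes the expected squared error is exactly the posterior mean:

```python
import numpy as np

# Numerical check that the conditional mean minimizes the mean-square error.
# The right-skewed posterior below is an assumed toy density standing in for
# p(x_{k+1} | u_{0:k+1}, y_{0:k+1}), discretized on a grid.

grid = np.linspace(0.0, 10.0, 2001)
dx = grid[1] - grid[0]
post = grid ** 2 * np.exp(-grid)           # unnormalized, right-skewed
post /= np.sum(post) * dx                  # normalize to a density

mean = np.sum(grid * post) * dx            # conditional mean (MMSE estimate)

# Brute force: expected squared error for every candidate estimate on the grid
mse = [np.sum((grid - c) ** 2 * post) * dx for c in grid]
best = grid[int(np.argmin(mse))]

print(mean, best)                          # the minimizer matches the mean
```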

       Risk-sensitive (RS) estimator

      (4.12) $\hat{\mathbf{x}}_{k+1|k+1}^{\mathrm{RS}} = \underset{\hat{\mathbf{x}}_{k+1|k+1}}{\arg\min}\; \mathbb{E}\left[ \exp\left( \theta \left\| \mathbf{x}_{k+1} - \hat{\mathbf{x}}_{k+1|k+1} \right\|_2^2 \right) \,\middle|\, \mathbf{u}_{0:k+1}, \mathbf{y}_{0:k+1} \right]$

      Compared to the MMSE estimator, the RS estimator is less sensitive to uncertainties; in other words, it is a more robust estimator [49].
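One commonly used risk-sensitive criterion is the expected exponential of the squared error; assuming that form (the book's exact cost may differ, so treat this as an illustration), a grid search on an assumed skewed toy posterior shows the RS estimate hedging away from the MMSE estimate toward the heavy tail:

```python
import numpy as np

# Sketch of a risk-sensitive estimate under the assumed criterion
# E[exp(theta * (x - xhat)^2)]; this cost and the toy posterior below are
# assumptions made for illustration only.

grid = np.linspace(0.0, 10.0, 1001)
dx = grid[1] - grid[0]
post = grid ** 2 * np.exp(-grid)           # right-skewed toy posterior
post /= np.sum(post) * dx

theta = 0.2                                # risk-sensitivity parameter

def risk(c):
    """Expected exponential-of-quadratic cost for candidate estimate c."""
    return np.sum(np.exp(theta * (grid - c) ** 2) * post) * dx

rs = grid[int(np.argmin([risk(c) for c in grid]))]
mmse = np.sum(grid * post) * dx

print(mmse, rs)    # the RS estimate sits to the right of the MMSE estimate
```

The exponential cost magnifies large errors, so the minimizer guards against the heavy right tail; as theta goes to zero the criterion reduces to the MMSE one.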

       Maximum a posteriori (MAP) estimator

      (4.13) $\hat{\mathbf{x}}_{k+1|k+1}^{\mathrm{MAP}} = \underset{\mathbf{x}_{k+1}}{\arg\max}\; p(\mathbf{x}_{k+1}|\mathbf{u}_{0:k+1},\mathbf{y}_{0:k+1})$

       Minimax estimator

      (4.14) $\hat{\mathbf{x}}_{k+1|k+1}^{\mathrm{minimax}} = \underset{\hat{\mathbf{x}}_{k+1|k+1}}{\arg\min}\; \underset{\mathbf{x}_{k+1}}{\max} \left| \mathbf{x}_{k+1} - \hat{\mathbf{x}}_{k+1|k+1} \right|$

      The minimax estimate is the middle of the posterior, $p(\mathbf{x}_{k+1}|\mathbf{u}_{0:k+1},\mathbf{y}_{0:k+1})$. The minimax technique is used to achieve optimal performance under the worst-case condition [50].
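Under a worst-case absolute-error reading of the minimax criterion, the optimal estimate on a bounded support is its midpoint. A brute-force sketch over an assumed support of [1, 7]:

```python
import numpy as np

# Brute-force sketch of a worst-case (minimax) criterion: choose the estimate
# whose largest possible absolute error over the posterior's support is
# smallest. The bounded support [1, 7] is an assumed toy example.

grid = np.linspace(1.0, 7.0, 601)                    # posterior support
worst = [np.max(np.abs(grid - c)) for c in grid]     # worst-case error per candidate
minimax = grid[int(np.argmin(worst))]

print(minimax)    # the midpoint of the support
```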

       The most probable (MP) estimator

      (4.15) $\hat{\mathbf{x}}_{k+1|k+1}^{\mathrm{MP}} = \underset{\mathbf{x}_{k+1}}{\arg\max}\; p(\mathbf{x}_{k+1}|\mathbf{u}_{0:k+1},\mathbf{y}_{0:k+1})$

      The MP estimate is the mode of the posterior, $p(\mathbf{x}_{k+1}|\mathbf{u}_{0:k+1},\mathbf{y}_{0:k+1})$. For a uniform prior, this estimate will be identical to the maximum likelihood (ML) estimate:

      (4.16) $\hat{\mathbf{x}}_{k+1|k+1}^{\mathrm{ML}} = \underset{\mathbf{x}_{k+1}}{\arg\max}\; p(\mathbf{y}_{k+1}|\mathbf{x}_{k+1},\mathbf{u}_{k+1}).$
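The uniform-prior claim is easy to verify numerically: with a flat prior the posterior is proportional to the likelihood, so the two argmax estimates coincide. A sketch with an assumed Gaussian likelihood and measurement y = 2.0:

```python
import numpy as np

# With a flat prior the posterior is proportional to the likelihood, so the
# argmax of the posterior (the MP estimate) coincides with the ML estimate.
# The Gaussian likelihood and the measurement y = 2.0 are assumptions.

grid = np.linspace(-5.0, 5.0, 1001)
y, sigma = 2.0, 1.0
lik = np.exp(-0.5 * ((y - grid) / sigma) ** 2)   # p(y | x), unnormalized

prior = np.ones_like(grid)                       # uniform (flat) prior
post = lik * prior
post /= np.sum(post) * (grid[1] - grid[0])       # normalized posterior

x_mp = grid[int(np.argmax(post))]                # mode of the posterior
x_ml = grid[int(np.argmax(lik))]

print(x_mp, x_ml)    # the two estimates coincide
```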
