Position, Navigation, and Timing Technologies in the 21st Century. Group of Authors


      The applications covered include survey and mobile mapping (Chapter 55), precision agriculture (Chapter 56), wearable navigation technology (Chapter 57), driverless vehicles (Chapter 58), train control (Chapter 59), unmanned aerial systems (Chapter 60), aviation (Chapter 61), spacecraft navigation and orbit determination (Chapter 62), spacecraft formation flying and rendezvous (Chapter 63), and finally Arctic navigation (Chapter 64).

      Taken together, Volume 2 shows the incredible value of navigation systems and the variety of approaches that are available in cases where GNSS is not sufficient. Whether we realize it or not, our day‐to‐day lives depend heavily on the ability of many systems, both those we interact with directly and those operating behind the scenes, to determine time and position, and there is an increasing number of creative options and opportunities for precise navigation and timing that can meet the needs of current and future applications.

       Michael J. Veth

       Veth Research Associates, United States

      Almost immediately following its introduction in 1960, the Kalman filter, together with its nonlinear extension, the extended Kalman filter, became the primary algorithm used to solve navigation problems [1–3]. The optimal, recursive, and online characteristics of the algorithm make it well suited to a wide range of applications requiring real‐time navigation solutions.

      The traditional Kalman filter and extended Kalman filter are based on the following assumptions:

       Linear (or nearly linear) system dynamics and observations.

       All noise and error sources are Gaussian.

      While these assumptions are valid in many cases, there is increasing interest in incorporating sensors and systems that are non‐Gaussian, nonlinear, or both. Because these characteristics violate the fundamental assumptions of the Kalman filter, performance suffers when it is applied to such problems: the resulting filter estimates can be inaccurate, inconsistent, or unstable. To address this limitation, researchers have developed a number of algorithms designed to provide improved performance for nonlinear and non‐Gaussian problems [4–6].
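To see concretely how a nonlinear measurement breaks the Gaussian assumption, the following sketch (an illustration, not an example from the chapter) pushes a Gaussian state through a hypothetical quadratic measurement model z = x² and compares the true Monte Carlo mean with an EKF‐style linearized prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian prior on a scalar state: x ~ N(1.0, 0.5^2)
mean, sigma = 1.0, 0.5
samples = rng.normal(mean, sigma, size=200_000)

# Hypothetical nonlinear measurement model: z = x**2
z = samples ** 2

# EKF-style linearization propagates the mean directly through h(.),
# ignoring the curvature of the nonlinearity: z_lin = h(mean) = mean**2.
z_linearized = mean ** 2                      # 1.0 exactly

# The true mean picks up a bias of sigma^2 from the quadratic term:
# E[x^2] = mean^2 + sigma^2, i.e. about 1.25 here.
z_true_mean = z.mean()

print(f"linearized prediction: {z_linearized:.3f}")
print(f"Monte Carlo mean:      {z_true_mean:.3f}")
```

The pushed‐through density is also skewed, so no Gaussian, however its mean and covariance are chosen, describes it exactly; this is the kind of mismatch the estimators in this chapter are designed to handle.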

      In this chapter, we provide an overview of some of the most common and useful classes of nonlinear recursive estimators. The goal is to introduce the fundamental theories supporting the algorithms, identify their associated performance characteristics, and finally present their respective applicability from a navigation perspective.

      The chapter is organized as follows. First, an overview of the notation and essential concepts related to estimation and probability theory is presented as a foundation for nonlinear filtering development. These concepts include recursive estimation frameworks, the implicit assumptions and limitations of traditional estimators, and the deleterious effects on performance when these assumptions are not satisfied. Next, an overview of nonlinear estimation theory is presented with the goal of demonstrating and deriving three main classes of nonlinear recursive estimators: Gaussian sum filters, grid particle filters, and sampling particle filters. Each of these classes is demonstrated and evaluated using a simple navigation example. The chapter concludes with a discussion of the strengths and weaknesses of the approaches, with an emphasis on helping navigation engineers decide which estimation algorithm to apply to a given problem of interest.

      36.1.1 Notation

      The following notation is used in this chapter:

       State vector: The state vector at time k is represented by $\mathbf{x}_k$.

       State estimate: An estimated quantity is represented using the hat operator. For example, the estimated state vector at time k is $\hat{\mathbf{x}}_k$.

       A priori/a posteriori estimates: A priori and a posteriori estimates are represented using the − and + superscript notation, respectively. For example, the a priori state estimate at time k is $\hat{\mathbf{x}}_k^-$, and the a posteriori state estimate at time k is $\hat{\mathbf{x}}_k^+$.

       State error covariance estimates: The state error covariance matrix is represented by the matrix $\mathbf{P}$ with superscripts and subscripts as required. For example, the a priori state error covariance matrix at time k is given by $\mathbf{P}_k^-$.

       State transition matrix: The state transition matrix from time k − 1 to k is given by $\boldsymbol{\Phi}_{k-1}$. Note that the time indices may be omitted when they are clear from context.

       Process noise vector and covariance: The process noise vector at time k is $\mathbf{w}_k$. The process noise covariance matrix at time k is $\mathbf{Q}_k$.

       Observation vector: The observation vector at time k is given by $\mathbf{z}_k$.

       Observation influence matrix: The observation influence matrix at time k is given by $\mathbf{H}_k$. Note that the time index may be omitted when contextually unnecessary.

       Measurement noise vector and covariance: The measurement noise vector at time k is represented by $\mathbf{v}_k$. The measurement noise covariance is represented by $\mathbf{R}_k$.

       Probability density function: Probability density functions are expressed as p(·).
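To make the notation concrete, the following NumPy sketch (illustrative only; the function names are our own, not the chapter's) maps each symbol onto the standard linear Kalman filter predict and update steps:

```python
import numpy as np

def kf_predict(x_post, P_post, Phi, Q):
    """Propagate the a posteriori estimate at k-1 to the a priori estimate at k."""
    x_prior = Phi @ x_post                    # x_k^- = Phi_{k-1} x_{k-1}^+
    P_prior = Phi @ P_post @ Phi.T + Q        # P_k^- = Phi P_{k-1}^+ Phi^T + Q
    return x_prior, P_prior

def kf_update(x_prior, P_prior, z, H, R):
    """Incorporate the observation z_k to form the a posteriori estimate."""
    S = H @ P_prior @ H.T + R                 # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)  # x_k^+
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior  # P_k^+
    return x_post, P_post
```

In a scalar example, calling kf_predict followed by kf_update moves the estimate toward the measurement while shrinking the error covariance, which is the behavior the nonlinear filters in this chapter generalize.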

      The goal of any estimator is to estimate one (or more) parameters of interest based on a model of the system, observations from sensors, or both. Because the parameters are modeled as random vectors, they can be completely characterized by their associated probability density function (pdf). If we define our parameter vector and observation vector at time k as $\mathbf{x}_k$ and $\mathbf{z}_k$, respectively, the overarching objective of a recursive estimator is to estimate the pdf of all of the previous state vector epochs, conditioned on all observations received up to the current epoch. Mathematically, this is expressed as the following pdf:

      (36.1) $p(\mathbf{x}_{0:k} \mid \mathbf{z}_{1:k})$

      where

      (36.2) $\mathbf{x}_{0:k} = \{\mathbf{x}_0, \mathbf{x}_1, \ldots, \mathbf{x}_k\}$

      and

      (36.3) $\mathbf{z}_{1:k} = \{\mathbf{z}_1, \mathbf{z}_2, \ldots, \mathbf{z}_k\}$
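One simple way to realize this conditional-pdf recursion numerically, foreshadowing the grid particle filters discussed later in the chapter, is to represent the pdf on a fixed grid of points. The sketch below is an illustrative 1-D example with hypothetical random-walk dynamics and a direct noisy observation, not an implementation from the text:

```python
import numpy as np

# Fixed grid representing the support of the 1-D state pdf.
grid = np.linspace(-5.0, 5.0, 401)
dx = grid[1] - grid[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Prior p(x_0): a broad Gaussian, normalized on the grid.
pdf = gauss(grid, 0.0, 2.0)
pdf /= pdf.sum() * dx

def predict(pdf, q_sigma):
    """Propagation step: convolve the pdf with the process-noise kernel
    (a discrete Chapman-Kolmogorov integral for random-walk dynamics)."""
    kernel = gauss(grid, 0.0, q_sigma)
    new = np.convolve(pdf, kernel, mode="same") * dx
    return new / (new.sum() * dx)

def update(pdf, z, r_sigma):
    """Bayes step: multiply by the measurement likelihood p(z_k | x_k)
    and renormalize to obtain the a posteriori pdf."""
    posterior = pdf * gauss(z, grid, r_sigma)
    return posterior / (posterior.sum() * dx)

# Run the recursion over a few hypothetical measurements near 1.0.
for z in [1.0, 1.2, 0.9]:
    pdf = predict(pdf, q_sigma=0.3)
    pdf = update(pdf, z, r_sigma=0.5)

mmse_estimate = (grid * pdf).sum() * dx  # conditional-mean (MMSE) estimate
print(f"MMSE estimate: {mmse_estimate:.2f}")
```

Because the full pdf is carried forward rather than just a mean and covariance, this recursion handles multimodal and non-Gaussian densities directly, at the cost of computation that grows with grid resolution and state dimension.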
