Convergence of sequences of random variables

      To conclude this section on random variables, we will review some classic results of convergence for sequences of random variables. Throughout the rest of this book, the abbreviation r.v. signifies random variable.

      DEFINITION 1.17.– Let (Xn)n≥1 and X be r.v.s defined on (Ω, 𝒜, ℙ).

      1) It is assumed that there exists p > 0 such that, for any n ≥ 1, 𝔼[|Xn|p] < ∞ and 𝔼[|X|p] < ∞. The sequence of random variables (Xn)n≥1 is said to converge in mean of order p, or to converge in Lp, towards X, if

      𝔼[|Xn − X|p] → 0 when n → ∞.

      We then write Xn → X in Lp. In the specific case where p = 2, we speak of convergence in quadratic mean.

      2) The sequence of r.v.s (Xn)n≥1 is said to converge almost surely (a.s.) towards X if

      ℙ({ω ∈ Ω : Xn(ω) → X(ω) when n → ∞}) = 1.

      We then write Xn → X a.s.
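      As a minimal numerical sketch of these two modes of convergence (the sequence below is chosen purely for illustration and is not taken from the text), let Xn = X + Zn/n with X and the Zn standard Gaussian r.v.s: then Xn → X a.s. since Zn/n → 0 a.s., and the Monte Carlo estimates of 𝔼[|Xn − X|2] shrink like 1/n2, illustrating convergence in quadratic mean.

import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000                     # Monte Carlo sample size
X = rng.standard_normal(n_samples)      # the limit variable X

for n in (1, 10, 100):
    Z_n = rng.standard_normal(n_samples)
    X_n = X + Z_n / n                   # X_n = X + Z_n / n
    mse = np.mean(np.abs(X_n - X) ** 2) # Monte Carlo estimate of E[|X_n - X|^2]
    print(f"n = {n:3d}   E[|X_n - X|^2] ≈ {mse:.6f}   (exact value 1/n^2 = {1 / n**2:.6f})")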

      THEOREM 1.1 (Monotone convergence theorem).– Let (Xn)n≥1 be a sequence of non-negative, non-decreasing random variables and let X be an integrable random variable, all defined on the same probability space (Ω, 𝒜, ℙ). If (Xn) converges almost surely to X, then

      𝔼[Xn] → 𝔼[X] when n → ∞.
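      A quick numerical sketch of this theorem (with an example chosen here for illustration only): take Z exponentially distributed and Xn = min(Z, n). The Xn are non-negative, non-decreasing in n and converge a.s. to Z, and their estimated means increase towards 𝔼[Z] = 1.

import numpy as np

rng = np.random.default_rng(1)
Z = rng.exponential(scale=1.0, size=100_000)   # integrable limit with E[Z] = 1

for n in (1, 2, 5, 10):
    X_n = np.minimum(Z, n)                     # non-negative, non-decreasing in n, X_n -> Z a.s.
    print(f"n = {n:2d}   E[X_n] ≈ {X_n.mean():.4f}")   # increases towards E[Z] = 1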

      THEOREM 1.2 (Dominated convergence theorem).– Let (Xn)n≥1 be a sequence of random variables and let X be another random variable, all defined on the same probability space (Ω, 𝒜, ℙ). If the sequence (Xn) converges to X a.s. and, for any n ≥ 1, |Xn| ≤ Z, where Z is an integrable random variable, then

      𝔼[|Xn − X|] → 0 when n → ∞ and, in particular, 𝔼[Xn] → 𝔼[X] when n → ∞.
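      A similar sketch for the dominated case (again with an arbitrarily chosen example): Xn = Z cos(Z/n) satisfies |Xn| ≤ Z with Z exponential, hence integrable, and Xn → Z a.s., so both estimated quantities below behave as the theorem predicts.

import numpy as np

rng = np.random.default_rng(2)
Z = rng.exponential(scale=1.0, size=100_000)   # integrable dominating variable, E[Z] = 1

for n in (1, 10, 100):
    X_n = Z * np.cos(Z / n)                    # |X_n| <= Z and X_n -> Z a.s.
    print(f"n = {n:3d}   E[|X_n - Z|] ≈ {np.mean(np.abs(X_n - Z)):.4f}   E[X_n] ≈ {X_n.mean():.4f}")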

      THEOREM 1.3 (Strong law of large numbers).– Let (Xn)n≥1 be a sequence of integrable, independent and identically distributed random variables. Then,

      (X1 + ... + Xn)/n → 𝔼[X1] a.s. when n → ∞.
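      The short Python sketch below (with a fair Bernoulli distribution chosen for the illustration) shows the empirical means (X1 + ... + Xn)/n approaching 𝔼[X1] as n grows.

import numpy as np

rng = np.random.default_rng(3)
p = 0.5                                        # probability of tails, so E[X_1] = p
tosses = rng.binomial(1, p, size=100_000)      # i.i.d. Bernoulli(p) sample X_1, ..., X_N

for n in (10, 100, 1_000, 100_000):
    print(f"n = {n:6d}   (X_1 + ... + X_n)/n = {tosses[:n].mean():.4f}   (E[X_1] = {p})")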

      The main objective of this book is to study certain families of stochastic (or random) processes in discrete time. There are two ways of seeing such objects:

       – as a sequence (Xn)n∈ℕ of real random variables;

       – as a single random variable X taking values in the set of real sequences.

      The index n represents time. Since n ∈ ℕ, we speak of processes in discrete time. In the rest of this book, unless indicated otherwise, we will only consider processes taking discrete real values. The notation E thus denotes a finite or countable subset of ℝ and ε = 𝒫(E), the set of all subsets of E.

      DEFINITION 1.18.– A stochastic process is a sequence X = (Xn)n∈ℕ of random variables taking values in (E, ε). The process X is then a random variable taking values in (Eℕ, ε⊗ℕ).

      EXAMPLE 1.22.– A coin is tossed an infinite number of times. This experiment is modeled by Ω = {T, H}ℕ∗. For n ∈ ℕ∗, consider the mappings Xn on Ω defined by

      Xn(ω) = 1 if ωn = T and Xn(ω) = 0 otherwise, for ω = (ωk)k∈ℕ∗ ∈ Ω,

      that is, the number of tails at the nth toss. Therefore, the Xn, n ∈ ℕ∗, are discrete, real random variables and the sequence X = (Xn)n∈ℕ∗ is a stochastic process.
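      The finite-horizon Python sketch below (array sizes chosen arbitrarily) simulates this experiment and makes the two points of view above concrete: each row of the array is one outcome ω, that is, one realization of the sequence-valued random variable X, while each column collects realizations of a single random variable Xn.

import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps = 5, 10                       # finitely many outcomes and tosses for the illustration

# omega[i, k] encodes the (k+1)-th toss of the i-th simulated outcome: 1 for tails, 0 for heads
omega = rng.integers(0, 2, size=(n_paths, n_steps))

# First point of view: column n - 1 gives realizations of the single random variable X_n
print("values of X_1 over the 5 simulated outcomes:", omega[:, 0])

# Second point of view: row i is (a truncation of) one trajectory n -> X_n(omega) of the process
print("first simulated trajectory:", omega[0, :])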

      DEFINITION 1.19.– Let X = (Xn)n∈ℕ be a stochastic process. For all n ∈ ℕ, the distribution of the vector (X0, X1,..., Xn) is denoted by μn. The probability distributions (μn)n∈ℕ are called finite-dimensional distributions or finite-dimensional marginal distributions of the process X = (Xn)n∈ℕ.
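      For instance, for a process whose coordinates are i.i.d. fair coin values (a sketch with parameters chosen here for illustration), the finite-dimensional distribution μ1 of the pair (X0, X1) can be estimated by the empirical frequencies of the four possible pairs, each close to 1/4.

import numpy as np
from collections import Counter

rng = np.random.default_rng(5)
pairs = rng.integers(0, 2, size=(100_000, 2))        # realizations of (X_0, X_1) for an i.i.d. fair coin

counts = Counter(map(tuple, pairs.tolist()))         # empirical frequencies of the four possible pairs
for pair in sorted(counts):
    print(f"mu_1({pair}) ≈ {counts[pair] / len(pairs):.4f}")   # each is close to 1/4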

      PROPOSITION 1.10.– Let X = (Xn)n∈ℕ be a stochastic process and let (μn)n∈ℕ be its finite-dimensional distributions. Then, for all n ∈ ℕ∗ and (A0,..., An−1) ∈ εn, we have

      μn−1(A0 × ... × An−1) = μn(A0 × ... × An−1 × E).

      In other words, the marginal distribution of the first n coordinates of the vector (X0,..., Xn) is exactly the distribution of the vector (X0,..., Xn−1).

      PROOF.– This proof directly follows from the definition of the objects. We have

      μn(A0 × ... × An−1 × E) = ℙ(X0 ∈ A0,..., Xn−1 ∈ An−1, Xn ∈ E) = ℙ(X0 ∈ A0,..., Xn−1 ∈ An−1) = μn−1(A0 × ... × An−1),

      and hence, the desired equality.

      □
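      This relation can also be checked by hand on a small example, say the finite-dimensional distributions of an i.i.d. Bernoulli(p) process on E = {0, 1} (a sketch with a parameter p chosen arbitrarily); the snippet below verifies the equality on all products of singletons, which generate the rectangles of εn.

from itertools import product

E = (0, 1)
p = 0.3                                              # Bernoulli parameter chosen for the sketch

def mu(points):
    """mu_n on the singleton {(x_0, ..., x_n)} for an i.i.d. Bernoulli(p) process (product measure)."""
    prob = 1.0
    for x in points:
        prob *= p if x == 1 else 1 - p
    return prob

n = 3
for prefix in product(E, repeat=n):                  # prefix = (x_0, ..., x_{n-1})
    lhs = mu(prefix)                                 # mu_{n-1}({x_0} x ... x {x_{n-1}})
    rhs = sum(mu(prefix + (x,)) for x in E)          # mu_n({x_0} x ... x {x_{n-1}} x E)
    assert abs(lhs - rhs) < 1e-12
print("consistency relation verified on all products of singletons for n =", n)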

      In fact, this property completely characterizes the distribution of the process X, as the following theorem shows.

      THEOREM 1.4 (Kolmogorov).– The canonical space (Ω, 𝒜) is defined in the following manner. Let Ω = Eℕ. The coordinate mappings (Xn)n∈ℕ are defined by Xn(ω) = ωn for any ω = (ωn)n∈ℕ ∈ Ω, and we write 𝒜 = σ(Xn, n ∈ ℕ). Let (μn)n∈ℕ be a family of probability distributions such that

      1) for any n ∈ ℕ, μn is defined on (En+1, ε⊗(n+1)),

      2) for any n ∈ ℕ∗ and (A0,..., An−1) ∈ εn, we have μn−1(A0 × ... × An−1) = μn(A0 × ... × An−1 × E).

      Therefore, there exists a unique probability distribution μ over the canonical space (Ω, 𝒜) such that the coordinate process X = (Xn)n∈ℕ admits (μn)n∈ℕ as its family of finite-dimensional distributions.
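      For example (a sketch with parameters chosen here for illustration), the product formulas defining the finite-dimensional distributions of a two-state Markov chain satisfy conditions 1) and 2), so the theorem provides a unique distribution μ on the canonical space under which the coordinate process is precisely that chain; the snippet below simply simulates one trajectory of this coordinate process.

import numpy as np

rng = np.random.default_rng(6)
E = (0, 1)
initial = np.array([0.6, 0.4])                       # initial distribution, chosen for the sketch
P = np.array([[0.9, 0.1],                            # transition matrix with rows summing to 1,
              [0.2, 0.8]])                           # which guarantees condition 2) above

def sample_path(n_steps):
    """Simulate (X_0, ..., X_{n_steps}) for the coordinate process under the distribution mu."""
    path = [int(rng.choice(E, p=initial))]
    for _ in range(n_steps):
        path.append(int(rng.choice(E, p=P[path[-1]])))
    return path

print(sample_path(10))                               # one truncated trajectory of the coordinate process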
