Numerical Methods in Computational Finance. Daniel J. Duffy

      An inner product (a generalisation of the dot product from elementary vector calculus) on a real vector space V is a scalar-valued function $(\cdot, \cdot)$ on the Cartesian product of V with itself satisfying the following axioms:

       $(x, y) = (y, x)$ for all $x, y \in V$ (symmetry).

       $(\alpha x + \beta y, z) = \alpha (x, z) + \beta (y, z)$ for all $x, y, z \in V$ and all scalars $\alpha, \beta$ (linearity in the first argument).

       $(x, x) \geq 0$ for all $x \in V$, with $(x, x) = 0$ if and only if $x = 0$ (positive definiteness).

      Inner products on complex vector spaces are also possible, but a discussion is outside the scope of this chapter. We also note that inner products are sometimes written as $\langle x \mid y \rangle$ (for example, in physics) instead of $(x, y)$, where x and y are vectors. For the specific case of n-dimensional vectors we usually write the inner product as follows:

      (5.5) $x^{\top} y \equiv (x, y) = \sum_{j=1}^{n} x_j y_j, \quad x, y \in \mathbb{R}^n$

      where $x^{\top}$ is the transpose of the vector x.
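Equation (5.5) can be sketched in a few lines of Python. This is an illustrative implementation, not taken from the book; the function name `inner` and the plain-list vector representation are assumptions.

```python
# Sketch of the Euclidean inner product (5.5): x^T y = sum_{j=1}^n x_j y_j.
def inner(x, y):
    """Euclidean inner product of two equal-length real vectors."""
    if len(x) != len(y):
        raise ValueError("vectors must have the same dimension")
    return sum(xj * yj for xj, yj in zip(x, y))

print(inner([1.0, 2.0], [3.0, 4.0]))  # 1*3 + 2*4 = 11.0
```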

      An inner product space is a vector space on which an inner product is defined. A finite-dimensional real inner product space is known as a Euclidean space, and a complex inner product space is known as a unitary space. The length of a vector x in Euclidean space is defined to be $\sqrt{(x, x)} = (x, x)^{1/2}$, and the angle between two vectors x and y is given by:

      (5.6) $\cos \theta = \dfrac{(x, y)}{(x, x)^{1/2} (y, y)^{1/2}}.$

      We say that two vectors x and y are orthogonal if $(x, y) = 0$. We immediately see that the zero vector is orthogonal to every other vector. Another example is $x = (2, 3, 1)$, $y = (4, -2, -2)$; then $(x, y) = 8 - 6 - 2 = 0$.
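The angle formula (5.6) and the orthogonality example above can be checked numerically. The sketch below is illustrative; the helper names `inner` and `angle` are assumptions, and the test vectors are the ones from the text.

```python
import math

def inner(x, y):
    # Euclidean inner product (x, y) = sum_j x_j y_j
    return sum(a * b for a, b in zip(x, y))

def angle(x, y):
    # Angle between nonzero vectors from (5.6): cos(theta) = (x,y) / ((x,x)^{1/2} (y,y)^{1/2})
    return math.acos(inner(x, y) / (math.sqrt(inner(x, x)) * math.sqrt(inner(y, y))))

x, y = (2.0, 3.0, 1.0), (4.0, -2.0, -2.0)
print(inner(x, y))  # 8 - 6 - 2 = 0.0, so x and y are orthogonal
print(angle(x, y))  # pi/2: orthogonal vectors meet at a right angle
```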

      Definition 5.4 The set $\{x_1, \ldots, x_n\}$ in an inner product space is orthonormal if:

      (5.7) $(x_i, x_j) = \begin{cases} 1, & i = j \\ 0, & i \neq j. \end{cases}$

      Finding an orthonormal set in an inner product space is analogous to choosing a set of mutually perpendicular unit vectors in elementary vector analysis.

      5.3.1 Orthonormal Basis

      Let X be an inner product space. The set $\{e_1, \ldots, e_n\} \subset X$ is orthonormal if $\|e_j\| = \sqrt{(e_j, e_j)} = 1$ for $1 \leq j \leq n$ and $(e_j, e_k) = 0$ for $1 \leq j, k \leq n$, $j \neq k$. If, in addition, this set spans X, it is called an orthonormal basis.

      Then $x = \sum_{j=1}^{n} (x, e_j)\, e_j \quad \forall x \in X$.
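The expansion above can be verified numerically in $\mathbb{R}^3$. In the sketch below, the orthonormal basis (a rotation of the standard basis) and the test vector are illustrative choices, not taken from the book: each coefficient is the inner product $(x, e_j)$, and summing $(x, e_j)\, e_j$ recovers x.

```python
import math

def inner(x, y):
    # Euclidean inner product
    return sum(a * b for a, b in zip(x, y))

# An orthonormal basis of R^3 (a rotation of the standard basis; chosen for illustration)
s = 1.0 / math.sqrt(2.0)
e = [(s, s, 0.0), (s, -s, 0.0), (0.0, 0.0, 1.0)]

x = (3.0, 1.0, 2.0)
# Expansion coefficients (x, e_j), then reconstruction x = sum_j (x, e_j) e_j
coeffs = [inner(x, ej) for ej in e]
recon = [sum(c * ej[i] for c, ej in zip(coeffs, e)) for i in range(3)]
print(recon)  # recovers x = (3.0, 1.0, 2.0) up to rounding
```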

      The inner product space $C[-\pi, \pi]$ (continuous functions on $[-\pi, \pi]$, with inner product $(f, g) = \int_{-\pi}^{\pi} f(t) g(t)\, dt$) has:

       Orthonormal sequence $e_n(t) = \dfrac{\sin nt}{\sqrt{\pi}}, \quad n = 1, 2, \ldots$

       Orthogonality because $\int_{-\pi}^{\pi} \sin mt \sin nt \, dt = 0$ for $m \neq n$.
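Orthonormality in $C[-\pi, \pi]$ can be checked by approximating the integral inner product with quadrature. The sketch below uses a composite midpoint rule on the standard orthonormal sine sequence $e_n(t) = \sin(nt)/\sqrt{\pi}$ for this space; the function names and the grid size are illustrative assumptions.

```python
import math

def l2_inner(f, g, a=-math.pi, b=math.pi, n=20000):
    # Composite midpoint rule for (f, g) = integral_a^b f(t) g(t) dt
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n)) * h

def e(n):
    # e_n(t) = sin(n t) / sqrt(pi); normalized since the integral of sin^2(nt)
    # over [-pi, pi] equals pi
    return lambda t: math.sin(n * t) / math.sqrt(math.pi)

print(l2_inner(e(1), e(1)))  # approximately 1 (normalization)
print(l2_inner(e(1), e(2)))  # approximately 0 (orthogonality)
```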

      An interesting application of inner products is kernel methods in statistical learning; see Learning with Kernels, Schölkopf and Smola (2002). In this case we do not work in the original (let us say n-dimensional) space X but in a feature space H. To this end, consider the map:

      (5.8) $\Phi: X \rightarrow H, \quad y = \Phi(x), \quad x \in X.$

      We embed data into H, and this approach offers several advantages, one of which is that we can define a similarity measure from the inner product in H:

      (5.9) $k(x, x') := (\Phi(x), \Phi(x')),$
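A concrete sketch of (5.9), assuming the standard quadratic-kernel illustration (not necessarily the book's choice of feature map): with $X = \mathbb{R}^2$, $H = \mathbb{R}^3$ and $\Phi(x) = (x_1^2, \sqrt{2}\, x_1 x_2, x_2^2)$, the inner product in H reproduces the squared Euclidean inner product in X, $k(x, x') = (x, x')^2$.

```python
import math

def inner(x, y):
    # Euclidean inner product in the ambient space
    return sum(a * b for a, b in zip(x, y))

def phi(x):
    # Feature map Phi: R^2 -> R^3 (standard quadratic-kernel illustration)
    x1, x2 = x
    return (x1 * x1, math.sqrt(2.0) * x1 * x2, x2 * x2)

def k(x, xp):
    # Similarity measure (5.9): k(x, x') = (Phi(x), Phi(x'))
    return inner(phi(x), phi(xp))

x, xp = (1.0, 2.0), (3.0, 0.5)
print(k(x, xp))           # equals (x . x')^2 = (3 + 1)^2 = 16.0
print(inner(x, xp) ** 2)  # 16.0: the kernel avoids forming Phi explicitly
```

Computing $k$ directly from $(x, x')^2$ without ever forming $\Phi(x)$ is the idea behind the "kernel trick".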