The Geometric Tools of Hilbert Spaces


Erlendur Karlsson

February 2009

© 2009 by Erlendur Karlsson. All rights reserved. No part of this manuscript is to be reproduced without the written consent of the author.

Contents

1 Orientation
2 Definition of Inner Product Spaces
3 Examples of Inner Product Spaces
4 Basic Objects and Their Properties
  4.1 Vectors and Tuples
  4.2 Sequences
  4.3 Subspaces
5 Inner Product and Norm Properties
6 The Projection Operator
7 Orthogonalizations of Subspace Sequences
  7.1 Different Orthogonalizations of $S^y_{t,p}$
    7.1.1 The Backward Orthogonalization of $S^y_{t,p}$
    7.1.2 The Forward Orthogonalization of $S^y_{t,p}$
    7.1.3 The Eigenorthogonalization of $S^y_{t,p}$
  7.2 Different Orthogonalizations of $S^Z_{t,p}$
    7.2.1 The Backward Block Orthogonalization of $S^Z_{t,p}$
    7.2.2 The Forward Block Orthogonalization of $S^Z_{t,p}$
8 Procedures for Orthogonalizing $S^y_{t,p}$ and $S^Z_{t,p}$
  8.1 The Gram-Schmidt Orthogonalization Procedure
  8.2 The Schur Orthogonalization Procedure
9 The Dynamic Levinson Orthogonalization Procedure
  9.1 The Basic Levinson Orthogonalization Procedure
    9.1.1 The Basic Levinson Orthogonalization Procedure for $S^y_{t,p}$
    9.1.2 The Basic Block Levinson Orthogonalization Procedure for $S^Z_{t,p}$
  9.2 Relation Between the Projection Coefficients of Different Bases
    9.2.1 Relation Between Direct Form and Lattice Realizations on $S^y_{t,i}$
    9.2.2 Relation Between Direct Form and Lattice Realizations on $S^Z_{t,i}$
  9.3 Evaluation of Inner Products
    9.3.1 Lattice Inner Products for $S^y_{t,p}$
    9.3.2 Lattice Inner Products for $S^Z_{t,p}$
  9.4 The Complete Levinson Orthogonalization Procedure

1 Orientation

Our goal in this course is to study and develop model-based methods for adaptive systems. Although it is possible to do this without explicit use of the geometric tools of Hilbert spaces, there are great advantages to be gained from a geometric formulation. These are largely derived from our familiarity with Euclidean geometry, and in particular with the concepts of orthogonality and orthogonal projections. The intuition gained from Euclidean geometry will serve as our guide to the constructive development of complicated algorithms and will often help us to understand and interpret complicated algebraic results in a geometrically obvious way. These notes are devoted to a study of those aspects of Hilbert space theory which are needed for the constructive geometric development. For the reader who wishes to go deeper into the theory of Hilbert spaces we can recommend the books by Kreyszig [1] and Máté [2].

In two- and three-dimensional Euclidean spaces we worked with geometric objects such as vectors and subspaces. We studied various geometric properties of these objects, such as the norm of a vector, the angle between two vectors, and the orthogonality of one vector with respect to another. We also learned about geometric operations on these objects, such as projecting a vector onto a subspace and orthogonalizing a subspace. These powerful geometric concepts and tools were also intuitive, natural and easy to grasp, as we could actually see them or visualize them in two- and three-dimensional drawings. These geometric concepts and tools were easily extended to the n-dimensional Euclidean space (n > 3) through the scalar product. Even here simple two- and three-dimensional drawings can be useful in illustrating or visualizing geometric phenomena.

One proof of how essential the geometric machinery is in mathematics is that it can also be extended to infinite-dimensional vector spaces, this time using an extension of the scalar product, namely the inner product. Going to an infinite-dimensional vector space makes things more complicated. The lack of a finite basis is one such complication that creates difficulties in applications. Fortunately, many infinite-dimensional normed spaces contain an infinite sequence of vectors called a fundamental sequence, such that every vector in the space can be approximated to any accuracy by a linear combination of vectors belonging to the fundamental sequence. Such infinite-dimensional spaces occur frequently in engineering, and one can work with them in pretty much the same way one works with finite-dimensional spaces. Another complication is that of completeness. When an inner product space is not complete, it has holes, meaning that there exist elements of finite norm that are not in the inner product space but can be approximated to any desired accuracy by an element of the inner product space. The good news is that every incomplete inner product space can be constructively extended to a complete inner product space. A Hilbert space is a complete inner product space.

In these notes we go through the following moments:

2. Definition of Inner Product Spaces
3. Examples of Inner Product Spaces
4. Basic Objects and Their Properties
5. Inner Product and Norm Properties
6. The Projection Operator
7. Orthogonalizations of the Subspaces $S^y_{t,p}$ and $S^Z_{t,p}$
8. Procedures for Orthogonalizing $S^y_{t,p}$ and $S^Z_{t,p}$
9. The Dynamic Levinson Orthogonalization Procedure

2 Definition of Inner Product Spaces

Definition 2-1 (An Inner Product Space). An inner product space $\{H, \langle\cdot,\cdot\rangle\}$ is a vector space H equipped with an inner product $\langle\cdot,\cdot\rangle$, which is a mapping from $H \times H$ into the scalar field of H (real or complex). For all vectors x, y and z in H and every scalar α, the inner product satisfies the four axioms

IP1: $\langle x + y, z\rangle = \langle x, z\rangle + \langle y, z\rangle$
IP2: $\langle \alpha x, y\rangle = \alpha\,\langle x, y\rangle$
IP3: $\langle x, y\rangle = \langle y, x\rangle^*$
IP4: $\langle x, x\rangle \ge 0$ and $\langle x, x\rangle = 0 \iff x = 0$.

An inner product on H defines a norm on H given by $\|x\| = \sqrt{\langle x, x\rangle}$ and a metric on H given by $d(x, y) = \|x - y\| = \sqrt{\langle x - y, x - y\rangle}$.

Remark 2-1 (Think Euclidean). The inner product is a natural generalization of the scalar product of two vectors in an n-dimensional Euclidean space. Since many of the properties of the Euclidean space carry over to inner product spaces, it will be helpful to keep the Euclidean space in mind in all that follows.

3 Examples of Inner Product Spaces

Example 3-1 (The Euclidean space $\{\mathbb{C}^n, \langle x, y\rangle = \sum_{i=1}^n x_i y_i^*\}$). The n-dimensional vector space over the complex field, $\mathbb{C}^n$, with the inner product between the two vectors $x = (x_1, \ldots, x_n)^T$ and $y = (y_1, \ldots, y_n)^T$ defined by
$$\langle x, y\rangle = \sum_{i=1}^n x_i y_i^*,$$
where $y_i^*$ is the complex conjugate of $y_i$, is an inner product space. This gives the norm
$$\|x\| = \langle x, x\rangle^{1/2} = \Big(\sum_{i=1}^n |x_i|^2\Big)^{1/2}$$
and the Euclidean metric
$$d(x, y) = \|x - y\| = \langle x - y, x - y\rangle^{1/2} = \Big(\sum_{i=1}^n |x_i - y_i|^2\Big)^{1/2}.$$

Example 3-2 (The space of square summable sequences $\{\ell^2, \langle x, y\rangle = \sum_{i=1}^\infty x_i y_i^*\}$). The vector space of square summable sequences with the inner product between the two sequences $x = \{x_i\}$ and $y = \{y_i\}$ defined by
$$\langle x, y\rangle = \sum_{i=1}^\infty x_i y_i^*,$$
where $y_i^*$ is the complex conjugate of $y_i$, is an inner product space. This gives the norm
$$\|x\| = \langle x, x\rangle^{1/2} = \Big(\sum_{i=1}^\infty x_i x_i^*\Big)^{1/2}$$
and the metric
$$d(x, y) = \|x - y\| = \langle x - y, x - y\rangle^{1/2} = \Big(\sum_{i=1}^\infty |x_i - y_i|^2\Big)^{1/2}.$$

Example 3-3 (The space of square integrable functions $\{L^2, \langle f, g\rangle = \int f g^*\}$). The vector space of square integrable functions defined on the interval [a, b], with the inner product defined by
$$\langle f, g\rangle = \int_a^b f(\tau)\, g^*(\tau)\, d\tau,$$

where $g^*(\tau)$ is the complex conjugate of $g(\tau)$, is an inner product space. This gives the norm
$$\|f\| = \langle f, f\rangle^{1/2} = \Big(\int_a^b f(\tau)\, f^*(\tau)\, d\tau\Big)^{1/2}$$
and the metric
$$d(f, g) = \|f - g\| = \langle f - g, f - g\rangle^{1/2} = \Big(\int_a^b |f(\tau) - g(\tau)|^2\, d\tau\Big)^{1/2}.$$

Example 3-4 (The continuous functions over [0, 1], $\{C([0,1]), \langle f, g\rangle = \int f g^*\}$). The vector space of continuous functions defined on the interval [0, 1], with the inner product defined by
$$\langle f, g\rangle = \int_0^1 f(\tau)\, g^*(\tau)\, d\tau,$$
where $g^*(\tau)$ is the complex conjugate of $g(\tau)$, is an inner product space. This gives the norm
$$\|f\| = \langle f, f\rangle^{1/2} = \Big(\int_0^1 f(\tau)\, f^*(\tau)\, d\tau\Big)^{1/2}$$
and the metric
$$d(f, g) = \|f - g\| = \langle f - g, f - g\rangle^{1/2} = \Big(\int_0^1 |f(\tau) - g(\tau)|^2\, d\tau\Big)^{1/2}.$$

Example 3-5 (Random variables with finite second order moments, $\{L^2(\Omega, \mathcal{B}, P), \langle x, y\rangle = E(x y^*) = \int_\Omega x(\omega)\, y^*(\omega)\, dP(\omega)\}$). The space of random variables with finite second moments, $L^2(\Omega, \mathcal{B}, P)$, with the inner product defined by
$$\langle x, y\rangle = E(x y^*) = \int_\Omega x(\omega)\, y^*(\omega)\, dP(\omega),$$
where E is the expectation operator, is an inner product space. Here a random variable denotes the equivalence class of random variables on $(\Omega, \mathcal{B}, P)$, where y is equivalent to x if $E|x - y|^2 = 0$. This gives the norm
$$\|x\| = \langle x, x\rangle^{1/2} = \big(E|x|^2\big)^{1/2}$$
and the metric
$$d(x, y) = \|x - y\| = \langle x - y, x - y\rangle^{1/2} = \big(E|x - y|^2\big)^{1/2}.$$
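As a concrete illustration of Examples 3-1 and 3-5, the sketch below (Python/NumPy, with arbitrarily chosen data) evaluates the Euclidean inner product, norm and metric on $\mathbb{C}^n$, and approximates the random-variable inner product $E(xy^*)$ by a sample mean. The specific vectors, distributions and sample size are hypothetical and only serve to make the definitions tangible.

```python
import numpy as np

# Example 3-1: the Euclidean inner product on C^n, <x, y> = sum_i x_i * conj(y_i),
# together with the norm and metric it induces.
x = np.array([1 + 2j, 0.5, -1j])
y = np.array([2 - 1j, 1j, 3.0])

inner = np.sum(x * np.conj(y))                                 # <x, y>
norm_x = np.sqrt(np.real(np.sum(x * np.conj(x))))              # ||x|| = <x, x>^(1/2)
dist_xy = np.sqrt(np.real(np.sum((x - y) * np.conj(x - y))))   # d(x, y) = ||x - y||

# Example 3-5: for random variables <x, y> = E(x y*); with samples from a
# distribution this expectation can only be approximated, e.g. by a sample mean.
rng = np.random.default_rng(0)
xs = rng.standard_normal(100_000)
ys = 0.5 * xs + rng.standard_normal(100_000)   # a random variable correlated with xs
inner_rv = np.mean(xs * ys)                    # approximates E(x y*) for real-valued data
print(inner, norm_x, dist_xy, inner_rv)
```

The point of the second half is that in the stochastic setting the "vectors" are random variables, so inner products are expectations and in practice have to be estimated from data.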

4 Basic Objects and Their Properties

Here we will get acquainted with the main geometrical objects of the space and their properties. We begin by looking at the most basic objects, vectors and tuples. We then proceed with sequences and subspaces.

4.1 Vectors and Tuples

Definition 4.1-1 (Vector). The basic element of a vector space is called a vector. In the case of the n-dimensional Euclidean vector space this is the standard n-dimensional vector, in the space of square summable sequences it is a square summable sequence, and in the space of square integrable functions it is a square integrable function.

Definition 4.1-2 (Tuple). An n-tuple is an ordered set of n elements. An n-tuple made up of the n vectors $x_1, x_2, \ldots, x_n$ is denoted by $\{x_1, x_2, \ldots, x_n\}$. We will mostly be using n-tuples of vectors and of parameter coefficients (scalars).

Definition 4.1-3 (Norm of a Vector). The norm of a vector x, denoted by $\|x\|$, is a measure of its size. More precisely, it is a mapping from the vector space into the nonnegative reals which, for all vectors x and y in H and every scalar α, satisfies the following axioms:

N1: $\|x\| \ge 0$ and $\|x\| = 0 \iff x = 0$
N2: $\|x + y\| \le \|x\| + \|y\|$
N3: $\|\alpha x\| = |\alpha|\,\|x\|$.

In an inner product space the norm is always derived from the inner product as follows:
$$\|x\| = \langle x, x\rangle^{1/2}.$$
It is an easy exercise to show that this definition satisfies the norm axioms.

Definition 4.1-4 (Orthogonality). Two vectors x and y of an inner product space H are said to be orthogonal if
$$\langle x, y\rangle = 0,$$
and we write $x \perp y$. An n-tuple of vectors $\{x_1, x_2, \ldots, x_n\}$ is said to be orthogonal if $\langle x_i, x_j\rangle = 0$ for all $i \ne j$.

Definition 4.1-5 (Angle Between Two Vectors). In a real inner product space the angle θ between the two vectors x and y is defined by the formula
$$\cos(\theta) = \frac{\langle x, y\rangle}{\|x\|\,\|y\|}.$$
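A tiny numerical illustration of Definitions 4.1-4 and 4.1-5, using two arbitrarily chosen vectors in $\mathbb{R}^3$:

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0])
y = np.array([-2.0, 1.0, 3.0])

inner = x @ y                                   # <x, y> = 0, so x is orthogonal to y
cos_theta = inner / (np.linalg.norm(x) * np.linalg.norm(y))
theta = np.arccos(cos_theta)                    # pi/2 radians for orthogonal vectors
print(inner, np.degrees(theta))                 # 0.0, 90.0
```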

Definition 4.1-6 (Linear Combination of Vectors and Tuples of Vectors). A linear combination of the vectors $x_1, x_2, \ldots, x_n$ is an expression of the form
$$\alpha(1)x_1 + \alpha(2)x_2 + \cdots + \alpha(n)x_n = \sum_{i=1}^n \alpha(i)\, x_i,$$
where the coefficients $\alpha(1), \alpha(2), \ldots, \alpha(n)$ are any scalars. Sometimes we will also want to express this sum in the form of a matrix multiplication,
$$\sum_{i=1}^n \alpha(i)\, x_i = \{x_1, x_2, \ldots, x_n\} \begin{pmatrix} \alpha(1) \\ \vdots \\ \alpha(n) \end{pmatrix} = \{x_1, x_2, \ldots, x_n\}\,\boldsymbol{\alpha},$$
where we consider the n-tuple $\{x_1, x_2, \ldots, x_n\}$ as being a form of a row vector multiplying the column vector $\boldsymbol{\alpha}$.

Similarly, a linear combination of the m-tuples $X_1, X_2, \ldots, X_n$, where $X_k$ is the m-tuple of vectors $\{x_{k,1}, x_{k,2}, \ldots, x_{k,m}\}$, is actually a linear combination of the vectors $x_{1,1}, \ldots, x_{1,m}, x_{2,1}, \ldots, x_{2,m}, \ldots, x_{n,1}, \ldots, x_{n,m}$ given by
$$\sum_{i=1}^n \sum_{j=1}^m \alpha(i, j)\, x_{i,j},$$
which can be expressed in matrix notation as
$$\sum_{i=1}^n \sum_{j=1}^m \alpha(i, j)\, x_{i,j} = \sum_{i=1}^n \{x_{i,1}, \ldots, x_{i,m}\} \begin{pmatrix} \alpha(i, 1) \\ \vdots \\ \alpha(i, m) \end{pmatrix} = \sum_{i=1}^n X_i\, \boldsymbol{\alpha}_i = \{X_1, \ldots, X_n\} \begin{pmatrix} \boldsymbol{\alpha}_1 \\ \vdots \\ \boldsymbol{\alpha}_n \end{pmatrix}.$$

Still another linear combination is when an m-tuple $Z = \{z_1, z_2, \ldots, z_m\}$ is formed from linear combinations of the n-tuple $X = \{x_1, x_2, \ldots, x_n\}$, where
$$z_i = \sum_{j=1}^n \alpha_i(j)\, x_j = \{x_1, x_2, \ldots, x_n\} \begin{pmatrix} \alpha_i(1) \\ \vdots \\ \alpha_i(n) \end{pmatrix} = \{x_1, x_2, \ldots, x_n\}\,\boldsymbol{\alpha}_i.$$
We can then write
$$Z = \{z_1, z_2, \ldots, z_m\} = \{x_1, x_2, \ldots, x_n\}\,(\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_m) = X\boldsymbol{\alpha}.$$
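When the underlying space is $\mathbb{C}^k$ (or $\mathbb{R}^k$), an n-tuple of vectors can be stored as a matrix whose columns are the vectors, and the linear combinations above become ordinary matrix products. A small NumPy sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))      # n-tuple {x1, x2, x3} of vectors in R^5, stored column-wise
alpha = rng.standard_normal((3, 2))  # coefficient matrix (alpha_1, alpha_2)

Z = X @ alpha                        # m-tuple {z1, z2} with z_i = sum_j alpha_i(j) x_j
z1_check = X @ alpha[:, 0]           # z_1 as a single linear combination
print(np.allclose(Z[:, 0], z1_check))  # True
```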

Definition 4.1-7 (Linear Independence and Dependence). The set of vectors $x_1, x_2, \ldots, x_n$ is said to be linearly independent if the only n-tuple of scalars $\alpha_1, \alpha_2, \ldots, \alpha_n$ that makes
$$\alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_n x_n = 0 \qquad (4.1)$$
is the all-zero n-tuple with $\alpha_1 = \alpha_2 = \cdots = \alpha_n = 0$. The set of vectors is said to be linearly dependent if it is not linearly independent. In that case there exists an n-tuple of scalars, different from the all-zero n-tuple, that satisfies equation (4.1).

Definition 4.1-8 (Inner Product Between Tuples of Vectors). Given the p-tuple of vectors $X = \{x_1, \ldots, x_p\}$ and the q-tuple of vectors $Y = \{y_1, \ldots, y_q\}$, the inner product between the two is defined as the matrix
$$\langle X, Y\rangle = \begin{pmatrix} \langle x_1, y_1\rangle & \cdots & \langle x_p, y_1\rangle \\ \vdots & & \vdots \\ \langle x_1, y_q\rangle & \cdots & \langle x_p, y_q\rangle \end{pmatrix}.$$
This inner product between tuples of vectors is often used to simplify notation. For the vector tuples X, Y and Z and the coefficient matrices α and β the following properties hold:

IP1: $\langle X + Y, Z\rangle = \langle X, Z\rangle + \langle Y, Z\rangle$
IP2: $\langle X\alpha, Y\rangle = \langle X, Y\rangle\,\alpha$ and $\langle X, Y\beta\rangle = \beta^H \langle X, Y\rangle$
IP3: $\langle X, Y\rangle = \langle Y, X\rangle^H$
IP4: $\langle X, X\rangle \ge 0$ and $\langle X, X\rangle = 0 \iff X = 0$.

4.2 Sequences

Definition 4.2-1 (Sequence). A sequence is an infinite ordered set of elements. We will denote sequences of vectors by $\{x_n\}$ or $\{x_n : n = 1, 2, \ldots\}$.

Definition 4.2-2 (Convergence of Sequences). A sequence $\{x_n\}$ in a normed space X is said to converge to x if $x \in X$ and
$$\lim_{n\to\infty} \|x_n - x\| = 0.$$

Definition 4.2-3 (Cauchy Sequence). A sequence $\{x_n\}$ in a normed space X is said to be Cauchy if for every $\epsilon > 0$ there is a positive integer N such that $\|x_n - x_m\| < \epsilon$ for all m, n > N.

4.3 Subspaces

Definition 4.3-1 (Linear Subspace). A subset S of H is called a linear subspace if for all $x, y \in S$ and all scalars α, β in the scalar field of H it holds that $\alpha x + \beta y \in S$. S is therefore a linear space over the same field of scalars. S is said to be a proper subspace of H if $S \ne H$.

Definition 4.3-2 (Finite and Infinite Dimensional Vector Spaces). A vector space H is said to be finite dimensional if there is a positive integer n such that H contains a linearly independent set of n vectors whereas any set of n + 1 or more vectors is linearly dependent. n is called the dimension of H, written dim H = n. By definition, H = {0} is finite dimensional and dim H = 0. If H is not finite dimensional, it is said to be infinite dimensional.

Definition 4.3-3 (Span and Set of Spanning Vectors). For any nonempty subset $M \subset H$ the set S of all linear combinations of vectors in M is called the span of M, written S = Sp{M}. Obviously S is a subspace of H, and we say that S is spanned or generated by M. We also say that M is the set of spanning vectors of S.

Definition 4.3-4 (Basis for Vector Spaces). For an n-dimensional vector space S, an n-tuple of linearly independent vectors in S is called a basis for S. If $\{x_1, x_2, \ldots, x_n\}$ is a basis for S, every $x \in S$ has a unique representation as a linear combination of the basis vectors,
$$x = \alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_n x_n.$$
This idea of a basis can be extended to infinite-dimensional vector spaces with the so-called Schauder basis. Not all infinite-dimensional vector spaces have a Schauder basis; however, the ones of interest for engineers do. For an infinite-dimensional vector space H, a sequence of vectors in H, $\{x_n\}$, is a Schauder basis for H if for every $x \in H$ there is a unique sequence of scalars $\{\alpha_n\}$ such that
$$\lim_{n\to\infty} \Big\|x - \sum_{k=1}^n \alpha_k x_k\Big\| = 0.$$
The sum $\sum_{k=1}^\infty \alpha_k x_k$ is then called the expansion of x with respect to $\{x_n\}$, and we write
$$x = \sum_{k=1}^\infty \alpha_k x_k.$$

Proposition 4.3-1 (Tuples of Vectors Spanning the Same Subspace). Given an n-dimensional subspace S = Sp{X}, where X is an n-tuple of the linearly independent vectors $x_1, \ldots, x_n$, then X is a basis for S and each vector in S is uniquely written as a linear combination of the vectors in X. If we now pick the n vectors $y_1, \ldots, y_n$ in S, where
$$y_k = \sum_{i=1}^n \alpha_k(i)\, x_i$$

$$\phantom{y_k} = \{x_1, \ldots, x_n\} \begin{pmatrix} \alpha_k(1) \\ \vdots \\ \alpha_k(n) \end{pmatrix} = X\boldsymbol{\alpha}_k,$$
then $Y = \{y_1, \ldots, y_n\}$ is also a basis for S if the parameter vectors $\boldsymbol{\alpha}_k$ for $k = 1, 2, \ldots, n$ are linearly independent. A convenient way of expressing this linear mapping from X to Y is to write $Y = X\boldsymbol{\alpha}$, where $\boldsymbol{\alpha}$ is the matrix of parameter vectors
$$\boldsymbol{\alpha} = \begin{pmatrix} \alpha_1(1) & \cdots & \alpha_n(1) \\ \vdots & & \vdots \\ \alpha_1(n) & \cdots & \alpha_n(n) \end{pmatrix} = (\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_n).$$
Here it helps to look at $X\boldsymbol{\alpha}$ as the matrix multiplication between the n-tuple X (considered as a row vector) and the matrix $\boldsymbol{\alpha}$. Then $y_k$ can be seen as $X\boldsymbol{\alpha}_k = \sum_{i=1}^n \alpha_k(i)\, x_i$. When the parameter vectors in $\boldsymbol{\alpha}$ are linearly independent, $\boldsymbol{\alpha}$ has full rank and is invertible. Then it is also possible to write X as a linear mapping from Y,
$$X = Y\boldsymbol{\alpha}^{-1},$$
where $\boldsymbol{\alpha}^{-1}$ is the inverse matrix of $\boldsymbol{\alpha}$.

Definition 4.3-5 (Orthogonality). A vector y is said to be orthogonal to a finite subspace $S_n = Sp\{x_1, \ldots, x_n\}$ if y is orthogonal to all the spanning vectors of $S_n$, $x_1, \ldots, x_n$, and we write $y \perp S_n$.

Definition 4.3-6 (Completeness). In some inner product spaces there exist sequences that converge to a vector that is not a member of the inner product space. It is, for example, easy to construct a sequence of continuous functions that converges to a discontinuous function. These types of inner product spaces have holes and are said to be incomplete. Inner product spaces where every Cauchy sequence converges to a member of the inner product space are said to be complete. A Hilbert space is a complete inner product space.

5 Inner Product and Norm Properties

Theorem 5-1 (The Cauchy-Schwarz inequality). In an inner product space H there holds
$$|\langle x, y\rangle| \le \|x\|\,\|y\| \quad \text{for all } x, y \in H. \qquad (5.1)$$
Equality holds in (5.1) if and only if x and y are linearly dependent or either of the two is zero.

Proof: The equality obviously holds if x = 0 or y = 0. If $x \ne 0$ and $y \ne 0$, then for any scalar λ we have
$$0 \le \langle x - \lambda y, x - \lambda y\rangle = \langle x, x\rangle + |\lambda|^2 \langle y, y\rangle - \langle \lambda y, x\rangle - \langle x, \lambda y\rangle = \langle x, x\rangle + |\lambda|^2 \langle y, y\rangle - 2\,\mathrm{Re}\big(\lambda\,\langle y, x\rangle\big). \qquad (5.2)$$
For $\lambda = \langle x, y\rangle / \langle y, y\rangle$ we have
$$0 \le \langle x, x\rangle - \frac{|\langle x, y\rangle|^2}{\langle y, y\rangle},$$
and the Cauchy-Schwarz inequality is obtained by suitable rearrangement. From the properties of the inner product it follows at once that if $x = \lambda y$, then equality holds. Conversely, if equality holds, then either x = 0, y = 0, or else it follows from (5.2) that there exists a λ such that
$$x - \lambda y = 0,$$
i.e. x and y are linearly dependent.

Proposition 5-1 (The triangle inequality). In an inner product space H there holds
$$\|x + y\|^2 \le (\|x\| + \|y\|)^2 \quad \text{for all } x, y \in H. \qquad (5.3)$$
Equality holds in (5.3) if and only if one of the two vectors is a nonnegative real multiple of the other or either of the two is zero.

Proof: This follows directly from the Cauchy-Schwarz inequality:
$$\|x + y\|^2 = \langle x + y, x + y\rangle = \langle x, x\rangle + 2\,\mathrm{Re}(\langle x, y\rangle) + \langle y, y\rangle \le \|x\|^2 + 2\,|\langle x, y\rangle| + \|y\|^2 \le \|x\|^2 + 2\,\|x\|\,\|y\| + \|y\|^2 = (\|x\| + \|y\|)^2.$$

Proposition 5-2 (The parallelogram equality). In an inner product space H there holds
$$\|x + y\|^2 + \|x - y\|^2 = 2\big(\|x\|^2 + \|y\|^2\big) \quad \text{for all } x, y \in H. \qquad (5.4)$$
Proof: There holds
$$\|x \pm y\|^2 = \|x\|^2 \pm 2\,\mathrm{Re}(\langle x, y\rangle) + \|y\|^2.$$
Adding the two identities gives the parallelogram equality.
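A quick numerical sanity check of Theorem 5-1 and Proposition 5-2 on randomly chosen complex vectors (a sketch, not a proof; the data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

inner = lambda a, b: np.sum(a * np.conj(b))
norm = lambda a: np.sqrt(np.real(inner(a, a)))

# Cauchy-Schwarz: |<x, y>| <= ||x|| ||y||
print(abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12)

# Parallelogram equality: ||x+y||^2 + ||x-y||^2 = 2(||x||^2 + ||y||^2)
lhs = norm(x + y) ** 2 + norm(x - y) ** 2
rhs = 2 * (norm(x) ** 2 + norm(y) ** 2)
print(np.isclose(lhs, rhs))
```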

6 The Projection Operator

[Figure 1: The Projection Operator. A vector y in H, its projection $s = P_S(y)$ in the subspace S, and the projection error $y - P_S(y)$, which is orthogonal to S.]

The projection operator is an operator that maps a given vector y to a vector s in a given subspace S in a way that minimizes the norm of the difference between the two vectors, $\|y - s\|$, and what characterizes the projection is that the projection error vector is orthogonal to the subspace S. This is easily visualized in the simple two-dimensional plot in Figure 1. Now for a formal statement and proof of this.

Theorem 6-1 (The Orthogonal Projection Theorem). Let S be a closed proper subspace of a Hilbert space H. Then for every $y \in H$ there is a unique vector $P_S(y) \in S$ that minimizes $\|y - s\|$ over all $s \in S$, i.e. there is a unique vector $P_S(y) \in S$ that satisfies
$$\|y - P_S(y)\| = \inf_{s\in S} \|y - s\| = d(y, S). \qquad (6.1)$$
$P_S(y)$ is called the projection of y onto the subspace S. A unique property of the projection $P_S(y)$ is that the projection error $y - P_S(y)$ is orthogonal to S, i.e.
$$\langle y - P_S(y), x\rangle = 0 \quad \text{for all } x \in S.$$

Proof: The theorem has two main components. First there is the existence and uniqueness of the projection, then there is the unique orthogonality property of the projection error. Let us begin by proving the existence and uniqueness.

Existence and uniqueness: We begin by noting that $\inf_{s\in S} \|y - s\|$ must be nonnegative and hence there exists a sequence $\{s_n\} \subset S$ such that
$$\|y - s_n\| \to d = \inf_{s\in S} \|y - s\| \quad \text{as } n \to \infty.$$
Since S is a linear subspace, $\tfrac{1}{2}(s_n + s_m) \in S$ and thus $\|\tfrac{1}{2}(s_n + s_m) - y\| \ge d$. From the parallelogram equality it then follows that
$$\|s_n - s_m\|^2 = \|(s_n - y) - (s_m - y)\|^2 = 2\|s_n - y\|^2 + 2\|s_m - y\|^2 - 4\Big\|\tfrac{1}{2}(s_n + s_m) - y\Big\|^2 \le 2\|s_n - y\|^2 + 2\|s_m - y\|^2 - 4d^2. \qquad (6.2)$$

The right-hand side of (6.2) tends to zero as $m, n \to \infty$, and thus $\{s_n\}$ is a Cauchy sequence. Since H is complete and S is closed, $s_n \to P_S(y) \in S$ and $\|y - P_S(y)\| = d$. If we substitute in the expression (6.2) $s_n = z$ and $s_m = P_S(y)$, where z is any other vector in S with $\|y - z\| = d$, we obtain $z = P_S(y)$, from which the uniqueness follows.

Unique orthogonality property of the projection error: What we need to show is that $z = P_S(y)$ if and only if $y - z$ is orthogonal to every $s \in S$, or in other words if and only if $y - z \perp S$.

We begin by showing that $e = y - P_S(y) \perp S$. Let $w \in S$ with $\|w\| = 1$. Then for any scalar λ, $P_S(y) + \lambda w \in S$. The minimal property of $P_S(y)$ then implies that
$$\|e\|^2 = \|y - P_S(y)\|^2 \le \|y - P_S(y) - \lambda w\|^2 = \|e - \lambda w\|^2 = \langle e - \lambda w, e - \lambda w\rangle, \qquad (6.3)$$
that is,
$$0 \le -\lambda^*\langle e, w\rangle - \lambda\langle w, e\rangle + |\lambda|^2 \quad \text{for all scalars } \lambda.$$
Choosing $\lambda = \langle e, w\rangle$ then gives $0 \le -|\langle e, w\rangle|^2$. Consequently $\langle e, w\rangle = 0$ for all $w \in S$ of unit norm, and it follows that this orthogonality relation holds for all $w \in S$.

Next we show that if $\langle y - z, w\rangle = 0$ for some $z \in S$ and all $w \in S$, then $z = P_S(y)$. Let $z \in S$ and assume that $\langle y - z, w\rangle = 0$ for all $w \in S$. Then for any $x \in S$ we have $z - x \in S$, so $y - z \perp z - x$, and consequently
$$\|y - x\|^2 = \|(y - z) + (z - x)\|^2 = \|y - z\|^2 + \|z - x\|^2 \ge \|y - z\|^2 \quad \text{for all } x \in S,$$
with equality only if $x = z$. Hence z minimizes $\|y - x\|$ over S, i.e. $z = P_S(y)$, as required.

Corollary 6-1 (Projecting a Vector onto a Finite Subspace). Given a finite subspace spanned by the linearly independent vectors $x_1, \ldots, x_n$, $S = Sp\{x_1, \ldots, x_n\}$, the projection of a vector y onto S is given by
$$P_S(y) = \sum_{i=1}^n \alpha(i)\, x_i = \{x_1, \ldots, x_n\} \begin{pmatrix} \alpha(1) \\ \vdots \\ \alpha(n) \end{pmatrix} = \{x_1, \ldots, x_n\}\,\boldsymbol{\alpha},$$
where the projection coefficient vector $\boldsymbol{\alpha}$ satisfies the normal equation
$$\begin{pmatrix} \langle x_1, x_1\rangle & \cdots & \langle x_n, x_1\rangle \\ \vdots & & \vdots \\ \langle x_1, x_n\rangle & \cdots & \langle x_n, x_n\rangle \end{pmatrix} \begin{pmatrix} \alpha(1) \\ \vdots \\ \alpha(n) \end{pmatrix} = \begin{pmatrix} \langle y, x_1\rangle \\ \vdots \\ \langle y, x_n\rangle \end{pmatrix}.$$
Defining the n-tuple of vectors $X = \{x_1, \ldots, x_n\}$ and using the definition of the inner product between tuples of vectors, the normal equation can be written in the more compact form
$$\langle X, X\rangle\,\boldsymbol{\alpha} = \langle y, X\rangle.$$

For a one-dimensional subspace $S = Sp\{x\}$, the projection of y onto S simply becomes
$$P_S(y) = \alpha x, \quad \text{where } \alpha = \frac{\langle y, x\rangle}{\langle x, x\rangle} = \frac{\langle y, x\rangle}{\|x\|^2}.$$

Proof: The normal equation comes directly from the orthogonality property of the projection error, namely
$$y - P_{S_n}(y) \perp x_1, \ldots, x_n$$
$$\Longleftrightarrow \quad \langle y - P_{S_n}(y), x_q\rangle = 0 \quad \text{for } q = 1, \ldots, n$$
$$\Longleftrightarrow \quad \langle y, x_q\rangle = \langle P_{S_n}(y), x_q\rangle \quad \text{for } q = 1, \ldots, n$$
$$\Longleftrightarrow \quad \sum_{i=1}^n \alpha(i)\,\langle x_i, x_q\rangle = \langle y, x_q\rangle \quad \text{for } q = 1, \ldots, n.$$
Writing this up in matrix format gives the above normal equation.

Corollary 6-2 (Projecting a Tuple of Vectors onto a Finite Subspace). Given a finite subspace spanned by the linearly independent n-tuple $X = \{x_1, \ldots, x_n\}$, $S = Sp\{X\} = Sp\{x_1, \ldots, x_n\}$, we have seen in Corollary 6-1 that the projection of a vector $y_k$ onto S is given by
$$P_S(y_k) = \sum_{i=1}^n \alpha_k(i)\, x_i = \{x_1, \ldots, x_n\} \begin{pmatrix} \alpha_k(1) \\ \vdots \\ \alpha_k(n) \end{pmatrix} = \{x_1, \ldots, x_n\}\,\boldsymbol{\alpha}_k,$$
where the projection coefficient vector $\boldsymbol{\alpha}_k$ satisfies the normal equation
$$\begin{pmatrix} \langle x_1, x_1\rangle & \cdots & \langle x_n, x_1\rangle \\ \vdots & & \vdots \\ \langle x_1, x_n\rangle & \cdots & \langle x_n, x_n\rangle \end{pmatrix} \begin{pmatrix} \alpha_k(1) \\ \vdots \\ \alpha_k(n) \end{pmatrix} = \begin{pmatrix} \langle y_k, x_1\rangle \\ \vdots \\ \langle y_k, x_n\rangle \end{pmatrix}.$$
It is then easily seen that the projection of the m-tuple $Y = \{y_1, \ldots, y_m\}$ onto S is given by
$$P_S(Y) = \{P_S(y_1), \ldots, P_S(y_m)\} = \Big\{\sum_{i=1}^n \alpha_1(i)\, x_i, \ldots, \sum_{i=1}^n \alpha_m(i)\, x_i\Big\} = \{x_1, \ldots, x_n\} \begin{pmatrix} \alpha_1(1) & \cdots & \alpha_m(1) \\ \vdots & & \vdots \\ \alpha_1(n) & \cdots & \alpha_m(n) \end{pmatrix} = X\boldsymbol{\alpha},$$

where the projection coefficient matrix $\boldsymbol{\alpha} = (\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_m)$ satisfies the normal equation
$$\begin{pmatrix} \langle x_1, x_1\rangle & \cdots & \langle x_n, x_1\rangle \\ \vdots & & \vdots \\ \langle x_1, x_n\rangle & \cdots & \langle x_n, x_n\rangle \end{pmatrix} \begin{pmatrix} \alpha_1(1) & \cdots & \alpha_m(1) \\ \vdots & & \vdots \\ \alpha_1(n) & \cdots & \alpha_m(n) \end{pmatrix} = \begin{pmatrix} \langle y_1, x_1\rangle & \cdots & \langle y_m, x_1\rangle \\ \vdots & & \vdots \\ \langle y_1, x_n\rangle & \cdots & \langle y_m, x_n\rangle \end{pmatrix},$$
which is more compactly written as
$$\langle X, X\rangle\,\boldsymbol{\alpha} = \langle Y, X\rangle.$$

Corollary 6-3 (Projecting an m-tuple Y onto the span of the m-tuples $X_1, \ldots, X_n$). Given a finite subspace S spanned by the linearly independent set of m-tuples $X_1, \ldots, X_n$, with $X_i = \{x_{i,1}, \ldots, x_{i,m}\}$, we have that
$$S = Sp\{X_1, X_2, \ldots, X_n\} = Sp\{x_{1,1}, \ldots, x_{1,m}, x_{2,1}, \ldots, x_{2,m}, \ldots, x_{n,1}, \ldots, x_{n,m}\}.$$
Then from Corollary 6-1 we have that the projection of a vector $y_k$ onto S is given by
$$P_S(y_k) = \sum_{i=1}^n \sum_{j=1}^m \alpha_k(i, j)\, x_{i,j} = \{X_1, \ldots, X_n\} \begin{pmatrix} \boldsymbol{\alpha}_k(1) \\ \vdots \\ \boldsymbol{\alpha}_k(n) \end{pmatrix} = \{X_1, \ldots, X_n\}\,\boldsymbol{\alpha}_k, \qquad \boldsymbol{\alpha}_k(i) = \begin{pmatrix} \alpha_k(i, 1) \\ \vdots \\ \alpha_k(i, m) \end{pmatrix},$$
where the projection coefficient vector $\boldsymbol{\alpha}_k$ satisfies the normal equation whose coefficient matrix has the pairwise inner products $\langle x_{i,j}, x_{r,s}\rangle$ as entries and whose right-hand side has the entries $\langle y_k, x_{r,s}\rangle$. Using the inner product between tuples of vectors, this is again written in the more compact form
$$\begin{pmatrix} \langle X_1, X_1\rangle & \cdots & \langle X_n, X_1\rangle \\ \vdots & & \vdots \\ \langle X_1, X_n\rangle & \cdots & \langle X_n, X_n\rangle \end{pmatrix} \begin{pmatrix} \boldsymbol{\alpha}_k(1) \\ \vdots \\ \boldsymbol{\alpha}_k(n) \end{pmatrix} = \begin{pmatrix} \langle y_k, X_1\rangle \\ \vdots \\ \langle y_k, X_n\rangle \end{pmatrix}.$$

It is then easily seen that the projection of the m-tuple $Y = \{y_1, \ldots, y_m\}$ onto S is given by
$$P_S(Y) = \{P_S(y_1), \ldots, P_S(y_m)\} = \Big\{\sum_{i=1}^n X_i\,\boldsymbol{\alpha}_1(i), \ldots, \sum_{i=1}^n X_i\,\boldsymbol{\alpha}_m(i)\Big\} = \{X_1, \ldots, X_n\} \begin{pmatrix} \boldsymbol{\alpha}_1(1) & \cdots & \boldsymbol{\alpha}_m(1) \\ \vdots & & \vdots \\ \boldsymbol{\alpha}_1(n) & \cdots & \boldsymbol{\alpha}_m(n) \end{pmatrix} = \{X_1, \ldots, X_n\}\,\boldsymbol{\alpha},$$
where the projection coefficient matrix $\boldsymbol{\alpha}$ satisfies the normal equation
$$\begin{pmatrix} \langle X_1, X_1\rangle & \cdots & \langle X_n, X_1\rangle \\ \vdots & & \vdots \\ \langle X_1, X_n\rangle & \cdots & \langle X_n, X_n\rangle \end{pmatrix} \begin{pmatrix} \boldsymbol{\alpha}_1(1) & \cdots & \boldsymbol{\alpha}_m(1) \\ \vdots & & \vdots \\ \boldsymbol{\alpha}_1(n) & \cdots & \boldsymbol{\alpha}_m(n) \end{pmatrix} = \begin{pmatrix} \langle y_1, X_1\rangle & \cdots & \langle y_m, X_1\rangle \\ \vdots & & \vdots \\ \langle y_1, X_n\rangle & \cdots & \langle y_m, X_n\rangle \end{pmatrix},$$
which can be written slightly more compactly as
$$\begin{pmatrix} \langle X_1, X_1\rangle & \cdots & \langle X_n, X_1\rangle \\ \vdots & & \vdots \\ \langle X_1, X_n\rangle & \cdots & \langle X_n, X_n\rangle \end{pmatrix} \begin{pmatrix} \boldsymbol{\alpha}_1(1) & \cdots & \boldsymbol{\alpha}_m(1) \\ \vdots & & \vdots \\ \boldsymbol{\alpha}_1(n) & \cdots & \boldsymbol{\alpha}_m(n) \end{pmatrix} = \begin{pmatrix} \langle Y, X_1\rangle \\ \vdots \\ \langle Y, X_n\rangle \end{pmatrix}.$$

Remark 6-1 (Subspace Based Orthogonal Decomposition of H). From the above we also see that, given a subspace $S \subset H$, H can be decomposed into the two orthogonal subspaces S and $S^\perp$, where $S^\perp$ is the subspace of all vectors in H that are orthogonal to S, and every vector in H can be uniquely written as the sum of a vector in S and a vector in $S^\perp$:
$$y = P_S(y) + P_{S^\perp}(y), \qquad \langle P_S(y), P_{S^\perp}(y)\rangle = 0.$$

Corollary 6-4 (Projections onto an Orthogonally Decomposed Subspace). The projection of a vector onto a finite subspace that is orthogonally decomposed into other sub-subspaces equals the sum of the projections of the vector onto each of the orthogonal sub-subspaces.
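In $\mathbb{C}^k$ with the Euclidean inner product, the normal equations of Corollaries 6-1 and 6-2 can be solved directly: with the spanning vectors stored as columns of a matrix, the Gram matrix $\langle X, X\rangle$ and the cross terms $\langle y, X\rangle$ are ordinary matrix products. A minimal NumPy sketch with made-up data (the variable names and sizes are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 3))      # spanning vectors x1..x3 of S, stored as columns
y = rng.standard_normal(6)           # vector to be projected onto S = Sp{x1, x2, x3}

G = X.conj().T @ X                   # Gram matrix <X, X>
rhs = X.conj().T @ y                 # <y, X>
alpha = np.linalg.solve(G, rhs)      # projection coefficients from the normal equation
p = X @ alpha                        # P_S(y)

# The projection error is orthogonal to every spanning vector (Theorem 6-1):
print(np.allclose(X.conj().T @ (y - p), 0))
```

In numerical practice one would typically prefer a least-squares routine (e.g. `np.linalg.lstsq`) over forming the Gram matrix explicitly, but the sketch mirrors the normal equations as written above.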

7 Orthogonalizations of Subspace Sequences

One of the most important subspaces that shows up in adaptive systems is the subspace
$$S^y_{t,p} = Sp\{y_{t-1}, y_{t-2}, \ldots, y_{t-p}\}, \qquad (7.1)$$
where $\{y_t\}_{t=0}^\infty$ is a sequence of vectors $y_t$ in some Hilbert space H, which can be either deterministic or stochastic. This kind of subspace comes up when modelling single input single output (SISO) moving average (MA) and autoregressive (AR) systems. Another important subspace is
$$S^Z_{t,p} = Sp\{Z_{t-1}, Z_{t-2}, \ldots, Z_{t-p}\}, \qquad (7.2)$$
where $\{Z_t\}_{t=0}^\infty$ is a sequence of q-tuples of vectors, $Z_t = \{z_{t,1}, \ldots, z_{t,q}\}$, in some Hilbert space H. This subspace can be seen as a block version of the former and comes up when modelling single input single output ARX systems, certain types of time-varying systems and multi input multi output (MIMO) systems. The adaptive algorithms we will be studying in this course are based on orthogonalizations of these two subspaces. Let us therefore take a closer look at the main orthogonalizations.

7.1 Different Orthogonalizations of $S^y_{t,p}$

There exist infinitely many orthogonalizations of $S^y_{t,p}$, but there are three commonly used orthogonalizations that are of interest in adaptive systems. These are the backward orthogonalization, the forward orthogonalization and the eigenorthogonalization.

7.1.1 The Backward Orthogonalization of $S^y_{t,p}$

The most common orthogonalization of $S^y_{t,p} = Sp\{y_{t-1}, y_{t-2}, \ldots, y_{t-p}\}$ is the backward orthogonalization, where the latest spanning vector $y_{t-1}$ is chosen as the first basis vector and one then moves backwards in time, orthogonalizing each of the remaining vectors with respect to the vectors in their future domain,
$$b_{t,i} = y_{t-(i+1)} - P_{t,i}(y_{t-(i+1)}), \quad \text{where } P_{t,i} \text{ denotes the projection onto } S^y_{t,i} = Sp\{y_{t-1}, \ldots, y_{t-i}\},$$
as shown in Figure 2.

[Figure 2: Backward orthogonalization of $S^y_{t,p} = Sp\{y_{t-1}, y_{t-2}, \ldots, y_{t-p}\}$.]

The vectors $b_{t,i}$ are called the backward projection errors, and with this orthogonalization we can write $S^y_{t,p}$ as
$$S^y_{t,p} = Sp\{b_{t,0}, b_{t,1}, \ldots, b_{t,p-1}\}.$$

7.1.2 The Forward Orthogonalization of $S^y_{t,p}$

The forward orthogonalization of $S^y_{t,p}$ is a dual orthogonalization to the backward orthogonalization. There the oldest spanning vector $y_{t-p}$ is chosen as the first basis vector and one then moves forward in time, orthogonalizing each of the remaining vectors with respect to the vectors in their past domain. The orthogonalization will then be the one shown below.

[Figure 3: Forward orthogonalization of $S^y_{t,p} = Sp\{y_{t-1}, y_{t-2}, \ldots, y_{t-p}\}$.]
$$e_{t-(p-i),i} = y_{t-(p-i)} - P_{t-(p-i),i}(y_{t-(p-i)}), \quad \text{where } P_{t-(p-i),i} \text{ denotes the projection onto } S^y_{t-(p-i),i} = Sp\{y_{t-(p-i+1)}, \ldots, y_{t-p}\}.$$
The vectors $e_{t-(p-i),i}$ are called the forward projection errors, and with this orthogonalization we can write $S^y_{t,p}$ as
$$S^y_{t,p} = Sp\{e_{t-1,p-1}, \ldots, e_{t-(p-1),1}, e_{t-p,0}\}.$$

7.1.3 The Eigenorthogonalization of $S^y_{t,p}$

The eigenorthogonalization is the most decoupled orthogonalization there is, in that it orthogonalizes $S^y_{t,p}$ in such a way that the projection coefficient vectors describing the orthogonal basis vectors in terms of the given spanning vectors also form an orthogonal set of vectors. The name eigenorthogonalization comes from the close coupling of this orthogonalization to the eigendecomposition of the related Gram matrix. This is probably too abstract to follow, so let us go through it more slowly.

As we mentioned in the beginning of this section, there are infinitely many ways of orthogonalizing the subspace $S^y_{t,p} = Sp\{y_{t-1}, y_{t-2}, y_{t-3}, \ldots, y_{t-p}\}$. An interesting question is, therefore, whether there exists an orthogonalization of $S^y_{t,p}$ where the orthogonal basis vectors are given by
$$x_{t,k} = \sum_{i=1}^p \alpha_k(i)\, y_{t-i} \quad \text{for } k = 1, 2, \ldots, p,$$
and the parameter vectors describing the basis vectors in terms of the given spanning vectors,
$$\boldsymbol{\alpha}_k = \begin{pmatrix} \alpha_k(1) \\ \alpha_k(2) \\ \vdots \\ \alpha_k(p) \end{pmatrix},$$
also form an orthogonal basis in $\mathbb{C}^p$. The answer turns out to be yes. To obtain a unique solution, the parameter vector basis $\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_p$ is made orthonormal (orthogonal and of unit norm). We call the orthogonalization the eigenorthogonalization because the parameter vectors $\boldsymbol{\alpha}_k$ satisfy the eigenequation
$$\langle Y_t, Y_t\rangle\,\boldsymbol{\alpha}_k = \|x_{t,k}\|^2\,\boldsymbol{\alpha}_k, \qquad (7.3)$$
where $Y_t$ is the p-tuple $\{y_{t-1}, y_{t-2}, \ldots, y_{t-p}\}$. To see this we first need the relation between $\langle X_t, X_t\rangle$ and $\langle Y_t, Y_t\rangle$; the result then follows from looking at the inner product $\langle x_{t,k}, Y_t\rangle$. Now, as $X_t = Y_t(\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_p) = Y_t A$ and the tuple $X_t$ is orthogonal, we have that
$$\langle X_t, X_t\rangle = \mathrm{diag}\big(\|x_{t,1}\|^2, \ldots, \|x_{t,p}\|^2\big)$$

and
$$\langle X_t, X_t\rangle = \langle Y_t A, Y_t A\rangle = A^H \langle Y_t, Y_t\rangle A \quad\Longrightarrow\quad \langle Y_t, Y_t\rangle = A^{-H}\langle X_t, X_t\rangle A^{-1}.$$
Then, as the parameter vector basis $\{\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_p\}$ is orthonormal, we have that $A^H A = I$, i.e. $A^{-1} = A^H$ and $A^{-H} = A$, which again gives
$$\langle Y_t, Y_t\rangle = A\,\langle X_t, X_t\rangle\,A^H.$$
With these two relations between $\langle X_t, X_t\rangle$ and $\langle Y_t, Y_t\rangle$ we can move on to the inner product $\langle x_{t,k}, Y_t\rangle$:
$$\langle x_{t,k}, Y_t\rangle = \langle Y_t\boldsymbol{\alpha}_k, Y_t\rangle = \langle Y_t, Y_t\rangle\,\boldsymbol{\alpha}_k = A\,\langle X_t, X_t\rangle\,\underbrace{A^H\boldsymbol{\alpha}_k}_{e_k} = A\,\|x_{t,k}\|^2 e_k = \|x_{t,k}\|^2\,\boldsymbol{\alpha}_k,$$
which gives us the eigenequation (7.3). Assuming that the spanning vectors $y_{t-1}, \ldots, y_{t-p}$ are linearly independent, $\langle Y_t, Y_t\rangle$ is a positive definite matrix, which has real and positive eigenvalues and orthogonal eigenvectors. We see here that the eigenvalues will be the squared norms of the eigenorthogonal basis vectors, and the parameter vectors describing those basis vectors in terms of the given spanning vectors will be the eigenvectors. From this we see that the eigenorthogonalization actually exists and that it can be evaluated with the use of the eigendecomposition of the Gram matrix $\langle Y_t, Y_t\rangle$ or the singular value decomposition of $Y_t$.

7.2 Different Orthogonalizations of $S^Z_{t,p}$

The difference between $S^Z_{t,p}$ and $S^y_{t,p}$ is that $S^Z_{t,p}$ is the span of the p q-tuples $Z_{t-1}, \ldots, Z_{t-p}$, while $S^y_{t,p}$ is the span of the p vectors $y_{t-1}, \ldots, y_{t-p}$. Instead of fully orthogonalizing $S^Z_{t,p}$ into pq one-dimensional subspaces, we can orthogonalize it into p q-dimensional subspaces in pretty much the same way we orthogonalized $S^y_{t,p}$, working with q-tuples instead of vectors. Let us look at the backward block orthogonalization and the forward block orthogonalization of $S^Z_{t,p}$.

7.2.1 The Backward Block Orthogonalization of $S^Z_{t,p}$

The most common orthogonalization of $S^Z_{t,p} = Sp\{Z_{t-1}, Z_{t-2}, \ldots, Z_{t-p}\}$ is the backward block orthogonalization, where the latest spanning tuple $Z_{t-1}$ is chosen as the first basis tuple and one then moves backwards in time, orthogonalizing each of the remaining vector tuples with respect to the vector tuples in their future domain, as shown in Figure 4. The vector tuples $B_{t,i}$ are called the backward projection errors, and with this orthogonalization we can write $S^Z_{t,p}$ as
$$S^Z_{t,p} = Sp\{B_{t,0}, B_{t,1}, \ldots, B_{t,p-1}\}.$$

[Figure 4: Backward block orthogonalization of $S^Z_{t,p} = Sp\{Z_{t-1}, Z_{t-2}, \ldots, Z_{t-p}\}$, with $B_{t,i} = Z_{t-(i+1)} - P_{t,i}(Z_{t-(i+1)})$, where $P_{t,i}$ denotes the projection onto $S^Z_{t,i} = Sp\{Z_{t-1}, \ldots, Z_{t-i}\}$.]

7.2.2 The Forward Block Orthogonalization of $S^Z_{t,p}$

The forward block orthogonalization of $S^Z_{t,p}$ is a dual orthogonalization to the backward orthogonalization. There the oldest spanning vector tuple $Z_{t-p}$ is chosen as the first basis tuple and one then moves forward in time, orthogonalizing each of the remaining tuples with respect to the ones in their past domain. The orthogonalization will then be the one shown in Figure 5.

[Figure 5: Forward block orthogonalization of $S^Z_{t,p} = Sp\{Z_{t-1}, Z_{t-2}, \ldots, Z_{t-p}\}$, with $E_{t-(p-i),i} = Z_{t-(p-i)} - P_{t-(p-i),i}(Z_{t-(p-i)})$.]

The vector tuples $E_{t-(p-i),i}$ are called the forward projection errors, and with this orthogonalization we can write $S^Z_{t,p}$ as
$$S^Z_{t,p} = Sp\{E_{t-1,p-1}, \ldots, E_{t-(p-1),1}, E_{t-p,0}\}.$$

8 Procedures for Orthogonalizing $S^y_{t,p}$ and $S^Z_{t,p}$

We have often been confronted with the problem of orthogonalizing a given subspace, and are well familiar with the Gram-Schmidt orthogonalization procedure. A familiar example is the subspace of n-th order polynomials defined on the interval [-1, 1], where the inner product between the functions f and g is defined by
$$\langle f, g\rangle = \int_{-1}^1 f(t)\, g(t)\, dt.$$
This space is spanned by the functions $f_0, f_1, \ldots, f_n$, where $f_k(t) = t^k$ for $t \in [-1, 1]$ and $k = 0, 1, \ldots, n$. Orthogonalizing this subspace with the Gram-Schmidt orthogonalization procedure results in the well-known Legendre polynomial basis functions. In this section we will study that and the related Schur orthogonalization procedure for orthogonalizing the subspaces $S^y_{t,p}$ and $S^Z_{t,p}$.

8.1 The Gram-Schmidt Orthogonalization Procedure

The Gram-Schmidt orthogonalization procedure is an efficient orthogonalization procedure for orthogonalizations such as the backward and forward orthogonalizations. The main idea behind the procedure is that the subspace is orthogonalized in an order-recursive manner so that the subspaces being projected onto are always orthogonal or block orthogonal. This allows each projection to be evaluated in terms of orthogonal or block orthogonal basis vectors, thus making the procedure relatively efficient. The orthogonalization procedure for the backward orthogonalization of $S^y_{t,p}$ is shown below, where $P_{t,k}(x)$ denotes the projection of the vector x onto $S^y_{t,k}$:

1st orthogonal basis vector: $b_{t,0} = y_{t-1}$

2nd orthogonal basis vector:
$$b_{t,1} = y_{t-2} - P_{t,1}(y_{t-2}) = y_{t-2} - \frac{\langle y_{t-2}, b_{t,0}\rangle}{\|b_{t,0}\|^2}\, b_{t,0}$$

3rd orthogonal basis vector:
$$b_{t,2} = y_{t-3} - P_{t,2}(y_{t-3}) = y_{t-3} - \sum_{i=0}^{1} \frac{\langle y_{t-3}, b_{t,i}\rangle}{\|b_{t,i}\|^2}\, b_{t,i}$$

...

p-th orthogonal basis vector:
$$b_{t,p-1} = y_{t-p} - P_{t,p-1}(y_{t-p}) = y_{t-p} - \sum_{i=0}^{p-2} \frac{\langle y_{t-p}, b_{t,i}\rangle}{\|b_{t,i}\|^2}\, b_{t,i}$$

Counting the operations in the evaluation procedure we see that it involves $1 + 2 + \cdots + (p-1) = p(p-1)/2$ projections onto one-dimensional subspaces, which is relatively efficient.

The orthogonalization procedure for the backward block orthogonalization of $S^Z_{t,p}$ is very similar and is shown below, where $P_{t,k}(X)$ denotes the projection of the vector tuple X onto $S^Z_{t,k}$:

1st orthogonal basis tuple: $B_{t,0} = Z_{t-1}$

2nd orthogonal basis tuple:
$$B_{t,1} = Z_{t-2} - P_{t,1}(Z_{t-2}) = Z_{t-2} - B_{t,0}\,\langle B_{t,0}, B_{t,0}\rangle^{-1}\,\langle Z_{t-2}, B_{t,0}\rangle$$

3rd orthogonal basis tuple:
$$B_{t,2} = Z_{t-3} - P_{t,2}(Z_{t-3}) = Z_{t-3} - \sum_{i=0}^{1} B_{t,i}\,\langle B_{t,i}, B_{t,i}\rangle^{-1}\,\langle Z_{t-3}, B_{t,i}\rangle$$

...

p-th orthogonal basis tuple:
$$B_{t,p-1} = Z_{t-p} - P_{t,p-1}(Z_{t-p}) = Z_{t-p} - \sum_{i=0}^{p-2} B_{t,i}\,\langle B_{t,i}, B_{t,i}\rangle^{-1}\,\langle Z_{t-p}, B_{t,i}\rangle$$

Counting the operations in the evaluation procedure we see that it involves $1 + 2 + \cdots + (p-1) = p(p-1)/2$ projections onto q-dimensional subspaces, which is relatively efficient.
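A minimal NumPy sketch of the backward Gram-Schmidt procedure for vectors in $\mathbb{C}^k$ is given below. The data vectors $y_{t-1}, \ldots, y_{t-p}$ are assumed to be stored as columns of a matrix, newest first; this is an illustration of the order-recursive idea, not an optimized implementation.

```python
import numpy as np

def backward_gram_schmidt(Y):
    """Y has columns y_{t-1}, ..., y_{t-p} (newest first).
    Returns B with columns b_{t,0}, ..., b_{t,p-1}, the backward projection errors."""
    k, p = Y.shape
    B = np.zeros_like(Y)
    for j in range(p):
        b = Y[:, j].copy()
        for i in range(j):
            # subtract the projection of y_{t-(j+1)} onto the span of b_{t,i}
            b -= (np.vdot(B[:, i], Y[:, j]) / np.vdot(B[:, i], B[:, i])) * B[:, i]
        B[:, j] = b
    return B

rng = np.random.default_rng(4)
Y = rng.standard_normal((8, 4))
B = backward_gram_schmidt(Y)
G = B.conj().T @ B
print(np.allclose(G - np.diag(np.diag(G)), 0))   # columns are mutually orthogonal
```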

8.2 The Schur Orthogonalization Procedure

The Schur orthogonalization procedure is another efficient orthogonalization procedure for orthogonalizations such as the backward and forward orthogonalizations. It too, like the Gram-Schmidt orthogonalization procedure, accomplishes the job with $1 + 2 + \cdots + (p-1) = p(p-1)/2$ projections onto one- or q-dimensional subspaces. The approach is, however, somewhat different. It is best explained with a concrete example, so let us begin by looking at the Schur algorithm for evaluating the backward orthogonalization of $S^y_{t,p}$. First we define the following projection errors:
$$x^j_{t,i} = y_{t-j} - P_{t,i}(y_{t-j}) \quad \text{for } i = 0, \ldots, p-1 \text{ and } j = i+1, \ldots, p,$$
where $P_{t,i}(y_{t-j})$ denotes the projection of $y_{t-j}$ onto $S^y_{t,i}$.

As in the Gram-Schmidt procedure, the first orthogonal basis vector in the backward orthogonalization, $b_{t,0}$, is chosen as $b_{t,0} = y_{t-1}$. To obtain the second orthogonal basis vector, $b_{t,1}$, the rest of the spanning vectors in $S^y_{t,p} = Sp\{y_{t-1}, y_{t-2}, \ldots, y_{t-p}\}$ are orthogonalized with respect to $y_{t-1}$. This gives $b_{t,1}$ as the first vector in the resulting subspace, which has dimension one less than that of $S^y_{t,p}$. This operation is shown below:
$$Sp\{y_{t-2}, y_{t-3}, \ldots, y_{t-p}\} \;\xrightarrow{\;(I - P)\text{ onto } Sp\{y_{t-1}\}\;}\; Sp\{x^2_{t,1}, x^3_{t,1}, \ldots, x^p_{t,1}\} = S^x_{t,1},$$
where $x^j_{t,1}$ is the projection error from projecting $y_{t-j}$ onto the span of $y_{t-1}$, and $b_{t,1} = x^2_{t,1}$.

The third orthogonal basis vector, $b_{t,2}$, is similarly obtained by orthogonalizing the rest of the spanning vectors in $S^x_{t,1}$ with respect to $x^2_{t,1}$. This gives $b_{t,2}$ as the first vector in the resulting subspace, which has dimension one less than that of $S^x_{t,1}$. This operation is shown below:
$$Sp\{x^3_{t,1}, x^4_{t,1}, \ldots, x^p_{t,1}\} \;\xrightarrow{\;(I - P)\text{ onto } Sp\{x^2_{t,1}\}\;}\; Sp\{x^3_{t,2}, x^4_{t,2}, \ldots, x^p_{t,2}\} = S^x_{t,2},$$
where $x^j_{t,2}$ is the projection error from projecting $x^j_{t,1}$ onto the span of $x^2_{t,1}$, and $b_{t,2} = x^3_{t,2}$.

Iterating this operation until there is only a one-dimensional subspace left will then give the complete orthogonal basis of the backward orthogonalization. Defining the original subspace $S^y_{t,p}$ as $S^x_{t,0} = Sp\{x^1_{t,0}, x^2_{t,0}, \ldots, x^p_{t,0}\}$, where $x^j_{t,0} = y_{t-j}$ for $j = 1, \ldots, p$, the complete orthogonalization procedure can be summarized as shown below:

1st orthogonal basis vector: $Sp\{x^1_{t,0}, x^2_{t,0}, \ldots, x^p_{t,0}\} = S^x_{t,0}$, with $b_{t,0} = x^1_{t,0}$.

2nd orthogonal basis vector: $Sp\{x^2_{t,0}, x^3_{t,0}, \ldots, x^p_{t,0}\} \xrightarrow{(I-P)\text{ onto the span of } x^1_{t,0}} Sp\{x^2_{t,1}, x^3_{t,1}, \ldots, x^p_{t,1}\} = S^x_{t,1}$, with $b_{t,1} = x^2_{t,1}$.

3rd orthogonal basis vector: $Sp\{x^3_{t,1}, x^4_{t,1}, \ldots, x^p_{t,1}\} \xrightarrow{(I-P)\text{ onto the span of } x^2_{t,1}} Sp\{x^3_{t,2}, x^4_{t,2}, \ldots, x^p_{t,2}\} = S^x_{t,2}$, with $b_{t,2} = x^3_{t,2}$.

...

p-th orthogonal basis vector: $Sp\{x^p_{t,p-2}\} \xrightarrow{(I-P)\text{ onto the span of } x^{p-1}_{t,p-2}} Sp\{x^p_{t,p-1}\} = S^x_{t,p-1}$, with $b_{t,p-1} = x^p_{t,p-1}$.

Counting the operations in the evaluation procedure we see that it involves $(p-1) + (p-2) + \cdots + 1 = p(p-1)/2$ projections onto one-dimensional subspaces, which is exactly the same number of operations as in the Gram-Schmidt procedure.
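A NumPy sketch of this Schur recursion, using the same storage convention as the Gram-Schmidt sketch above (columns newest first; an illustrative sketch rather than an optimized routine). Mathematically it produces the same backward basis vectors, only the order in which the one-dimensional projections are carried out differs.

```python
import numpy as np

def backward_schur(Y):
    """Y has columns y_{t-1}, ..., y_{t-p} (newest first).
    Returns B with columns b_{t,0}, ..., b_{t,p-1}."""
    X = Y.astype(complex).copy()       # X[:, j-1] plays the role of x^j_{t,i} at stage i
    p = X.shape[1]
    B = np.zeros_like(X)
    for i in range(p):
        B[:, i] = X[:, i]              # b_{t,i} is the first remaining vector
        pivot = X[:, i]
        denom = np.vdot(pivot, pivot)
        for j in range(i + 1, p):      # orthogonalize the remaining vectors against the pivot
            X[:, j] -= (np.vdot(pivot, X[:, j]) / denom) * pivot
    return B

rng = np.random.default_rng(5)
Y = rng.standard_normal((8, 4))
B = backward_schur(Y)
G = B.conj().T @ B
print(np.allclose(G - np.diag(np.diag(G)), 0))   # columns are mutually orthogonal
```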

The corresponding Schur algorithm for the backward block orthogonalization of $S^Z_{t,p}$ is summarized below, where $S^X_{t,0} = S^Z_{t,p}$ and $X^j_{t,0} = Z_{t-j}$:

1st orthogonal basis tuple: $Sp\{X^1_{t,0}, X^2_{t,0}, \ldots, X^p_{t,0}\} = S^X_{t,0}$, with $B_{t,0} = X^1_{t,0}$.

2nd orthogonal basis tuple: $Sp\{X^2_{t,0}, X^3_{t,0}, \ldots, X^p_{t,0}\} \xrightarrow{(I-P)\text{ onto the span of } X^1_{t,0}} Sp\{X^2_{t,1}, X^3_{t,1}, \ldots, X^p_{t,1}\} = S^X_{t,1}$, with $B_{t,1} = X^2_{t,1}$.

3rd orthogonal basis tuple: $Sp\{X^3_{t,1}, X^4_{t,1}, \ldots, X^p_{t,1}\} \xrightarrow{(I-P)\text{ onto the span of } X^2_{t,1}} Sp\{X^3_{t,2}, X^4_{t,2}, \ldots, X^p_{t,2}\} = S^X_{t,2}$, with $B_{t,2} = X^3_{t,2}$.

...

p-th orthogonal basis tuple: $Sp\{X^p_{t,p-2}\} \xrightarrow{(I-P)\text{ onto the span of } X^{p-1}_{t,p-2}} Sp\{X^p_{t,p-1}\} = S^X_{t,p-1}$, with $B_{t,p-1} = X^p_{t,p-1}$.

Again, counting the operations in the evaluation procedure we see that it involves $(p-1) + (p-2) + \cdots + 1 = p(p-1)/2$ projections onto q-dimensional subspaces, which is exactly the same number of operations as in the Gram-Schmidt procedure.

9 The Dynamic Levinson Orthogonalization Procedure

In the previous two sections we have studied orthogonalizations and orthogonalization procedures for the two subspaces $S^y_{t,p}$ and $S^Z_{t,p}$. When dealing with adaptive systems, however, the problem is not to orthogonalize the single subspaces $S^y_{t,p}$ and $S^Z_{t,p}$, but to orthogonalize the sequences of subspaces $\{S^y_{t,p}\}_{t=0}^\infty$ and $\{S^Z_{t,p}\}_{t=0}^\infty$ in a time-recursive manner. We can of course always use the methods we have just studied to orthogonalize these sequences of subspaces, but it is not unlikely that we will be able to do this more efficiently if we use the special structure of these sequences of subspaces. A method that accomplishes this is the Levinson orthogonalization procedure. What characterizes the subspace sequence $\{S^y_{t,p}\}_{t=0}^\infty$ (respectively $\{S^Z_{t,p}\}_{t=0}^\infty$) is that $S^y_{t+1,p}$ ($S^Z_{t+1,p}$) is obtained from $S^y_{t,p}$ ($S^Z_{t,p}$) by removing the span of the vector $y_{t-p}$ (vector tuple $Z_{t-p}$) and adding the span of $y_t$ ($Z_t$). The Levinson procedure uses this structure to efficiently evaluate the backward orthogonal basis vectors at time t+1 from the ones at time t using only one projection onto the span of a single vector (vector tuple).

9.1 The Basic Levinson Orthogonalization Procedure

We begin by studying the basic Levinson orthogonalization procedure for orthogonalizing $S^y_{t,p}$, following with the corresponding Levinson block orthogonalization procedure for $S^Z_{t,p}$.

9.1.1 The Basic Levinson Orthogonalization Procedure for $S^y_{t,p}$

We begin by looking at the backward orthogonal basis vectors at time t+1. The (i+1)-th orthogonal basis vector is given by
$$b_{t+1,i+1} = y_{t+1-(i+2)} - P_{t+1,i+1}(y_{t+1-(i+2)}) = y_{t-(i+1)} - P_{t+1,i+1}(y_{t-(i+1)}), \qquad (9.1)$$
which is the projection error from projecting $y_{t-(i+1)}$ onto
$$S^y_{t+1,i+1} = Sp\{y_t, y_{t-1}, \ldots, y_{t-i}\}.$$
The orthogonal basis vector at time t involving the same vector $y_{t-(i+1)}$ is
$$b_{t,i} = y_{t-(i+1)} - P_{t,i}(y_{t-(i+1)}),$$
which is the projection error from projecting $y_{t-(i+1)}$ onto
$$S^y_{t,i} = Sp\{y_{t-1}, \ldots, y_{t-i}\}.$$
Comparing the two projections we see that at time t+1 we are projecting onto the subspace $S^y_{t+1,i+1}$, which contains the subspace we projected onto at time t, $S^y_{t,i}$, but has one additional dimension, namely the span of $y_t$. We can easily orthogonalize this one-dimensional subspace with respect to $S^y_{t,i}$ and obtain
$$S^y_{t+1,i+1} = Sp\{y_t\} + Sp\{y_{t-1}, \ldots, y_{t-i}\} = Sp\{e_{t,i}\} \oplus Sp\{y_{t-1}, \ldots, y_{t-i}\} = Sp\{e_{t,i}\} \oplus S^y_{t,i}, \qquad (9.2)$$
where $e_{t,i}$ is the projection error from projecting $y_t$ onto $S^y_{t,i}$, $e_{t,i} = y_t - P_{t,i}(y_t)$, and $\oplus$ denotes that we have a sum of two orthogonal subspaces. With $S^y_{t+1,i+1}$ now decomposed into two orthogonal subspaces, the projection of $y_{t-(i+1)}$ onto $S^y_{t+1,i+1}$ is simply obtained as the sum of the projections of $y_{t-(i+1)}$ onto each of the two orthogonal subspaces, namely
$$P_{t+1,i+1}(y_{t-(i+1)}) = P_{t,i}(y_{t-(i+1)}) + P_{e_{t,i}}(y_{t-(i+1)}), \qquad (9.3)$$
where $P_{e_{t,i}}(y_{t-(i+1)})$ denotes the projection of $y_{t-(i+1)}$ onto the span of $e_{t,i}$. With this relation between the projections at time t+1 and t we obtain the following relation between the orthogonal projection errors:
$$b_{t+1,i+1} = y_{t-(i+1)} - P_{t+1,i+1}(y_{t-(i+1)}) = y_{t-(i+1)} - \big(P_{t,i}(y_{t-(i+1)}) + P_{e_{t,i}}(y_{t-(i+1)})\big) = b_{t,i} - P_{e_{t,i}}(y_{t-(i+1)}). \qquad (9.4)$$
Now, as we can write $y_{t-(i+1)}$ as the sum of its projection onto $S^y_{t,i}$ and the resulting projection error, $y_{t-(i+1)} = P_{t,i}(y_{t-(i+1)}) + b_{t,i}$, and the projection is orthogonal to $e_{t,i}$, we have that $P_{e_{t,i}}(y_{t-(i+1)}) = P_{e_{t,i}}(b_{t,i})$, which then gives us the relation
$$b_{t+1,i+1} = b_{t,i} - P_{e_{t,i}}(b_{t,i}) = b_{t,i} - \frac{\langle b_{t,i}, e_{t,i}\rangle}{\|e_{t,i}\|^2}\, e_{t,i}. \qquad (9.5)$$

From this we see that we have a very efficient way of updating the orthogonal basis vectors at time t+1 from the ones at time t, if we can efficiently generate the projection errors $e_{t,i}$.

In the signal processing literature, the projection errors $e_{t,i}$ and $b_{t,i}$ are respectively called the forward and the backward projection errors. This is related to the fact that both involve projections onto the same subspace $S^y_{t,i}$: the forward projection involves the projection of a vector that is timewise in front of $S^y_{t,i}$, while the backward projection involves the projection of a vector that is timewise in back of it,
$$P_{t,i}(y_t): \; y_t \text{ projected onto } S^y_{t,i} = Sp\{y_{t-1}, \ldots, y_{t-i}\}, \qquad P_{t,i}(y_{t-(i+1)}): \; y_{t-(i+1)} \text{ projected onto the same } S^y_{t,i}.$$
Due to the structure of $S^y_{t,i}$, the forward projections can be evaluated in pretty much the same way as the backward projections. The (i+1)-th forward projection is a projection of $y_t$ onto $S^y_{t,i+1}$, which is easily orthogonally decomposed as follows:
$$S^y_{t,i+1} = Sp\{y_{t-1}, \ldots, y_{t-i}\} + Sp\{y_{t-(i+1)}\} = Sp\{y_{t-1}, \ldots, y_{t-i}\} \oplus Sp\{b_{t,i}\} = S^y_{t,i} \oplus Sp\{b_{t,i}\}. \qquad (9.6)$$
This orthogonal decomposition allows the projection of $y_t$ onto $S^y_{t,i+1}$ to be evaluated as the sum of the projections of $y_t$ onto each orthogonal subspace, namely
$$P_{t,i+1}(y_t) = P_{t,i}(y_t) + P_{b_{t,i}}(y_t), \qquad (9.7)$$
where $P_{b_{t,i}}(y_t)$ denotes the projection of $y_t$ onto the span of $b_{t,i}$. Then, following the exact same line of reasoning that was applied to the backward projections, we obtain the recursion
$$e_{t,i+1} = e_{t,i} - P_{b_{t,i}}(e_{t,i}) = e_{t,i} - \frac{\langle e_{t,i}, b_{t,i}\rangle}{\|b_{t,i}\|^2}\, b_{t,i}, \qquad (9.8)$$
which is a very efficient way of generating the forward projection error vectors given the availability of the backward projection error vectors. Combining the two projection error recursions (equations (9.5) and (9.8)) we obtain the following lattice section:
$$e_{t,i+1} = e_{t,i} - \underbrace{\frac{\langle e_{t,i}, b_{t,i}\rangle}{\|b_{t,i}\|^2}}_{k^e_t(i)}\, b_{t,i}, \qquad b_{t+1,i+1} = b_{t,i} - \underbrace{\frac{\langle b_{t,i}, e_{t,i}\rangle}{\|e_{t,i}\|^2}}_{k^b_t(i)}\, e_{t,i}.$$
This block takes the two input vectors $e_{t,i}$ and $b_{t,i}$ (the latter obtained from $b_{t+1,i}$ through the delay element D) and outputs the projection errors from projecting one vector onto the other. To evaluate the basis vectors $b_{t+1,0}, b_{t+1,1}, \ldots, b_{t+1,p-1}$ all we need to do is cascade p-1 of these blocks together, and we have the complete orthogonalization procedure:

[Figure: the Levinson lattice. The input $y_t$ gives $e_{t,0} = b_{t+1,0} = y_t$; a delay D produces $b_{t,0} = y_{t-1}$, and p-1 cascaded lattice sections with coefficients $k^e_t(i)$, $k^b_t(i)$ produce $e_{t,i+1}$ and $b_{t+1,i+1}$ for $i = 0, \ldots, p-2$.]

Evaluating the $b_{t,i}$'s in this manner only requires the evaluation of 2(p-1) projections onto one-dimensional subspaces, which is much more efficient than the p(p-1)/2 operations needed by the Gram-Schmidt and Schur procedures.
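To make the recursions (9.5) and (9.8) concrete, here is a small NumPy sketch of one time update of the lattice for vectors in $\mathbb{R}^k$. It assumes the backward errors $b_{t,i}$ at time t are available as columns of an array and produces the forward errors at time t and the backward errors at time t+1; the storage convention and function name are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def levinson_time_update(y_t, B_prev):
    """One time step of the Levinson lattice in R^k.

    y_t    : new vector y_t (shape (k,)).
    B_prev : columns b_{t,0}, ..., b_{t,p-1}, backward errors at time t (shape (k, p)).
    Returns (E, B_new): forward errors e_{t,0..p-1} and backward errors b_{t+1,0..p-1}.
    """
    k, p = B_prev.shape
    E = np.zeros((k, p))
    B_new = np.zeros((k, p))
    E[:, 0] = y_t          # e_{t,0} = y_t
    B_new[:, 0] = y_t      # b_{t+1,0} = y_t
    for i in range(p - 1):
        e, b = E[:, i], B_prev[:, i]
        ke = (e @ b) / (b @ b)          # k^e_t(i) = <e_{t,i}, b_{t,i}> / ||b_{t,i}||^2
        kb = (b @ e) / (e @ e)          # k^b_t(i) = <b_{t,i}, e_{t,i}> / ||e_{t,i}||^2
        E[:, i + 1] = e - ke * b        # eq. (9.8)
        B_new[:, i + 1] = b - kb * e    # eq. (9.5)
    return E, B_new
```

In an adaptive filtering application the Hilbert space elements are random variables, so the inner products above would be (estimates of) expectations rather than finite-dimensional dot products; the recursion structure is the same.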

9.1.2 The Basic Block Levinson Orthogonalization Procedure for $S^Z_{t,p}$

We begin by looking at the backward block orthogonal basis tuples at time t+1. The (i+1)-th block orthogonal basis tuple is given by
$$B_{t+1,i+1} = Z_{t+1-(i+2)} - P_{t+1,i+1}(Z_{t+1-(i+2)}) = Z_{t-(i+1)} - P_{t+1,i+1}(Z_{t-(i+1)}), \qquad (9.9)$$
which is the projection error from projecting $Z_{t-(i+1)}$ onto
$$S^Z_{t+1,i+1} = Sp\{Z_t, Z_{t-1}, \ldots, Z_{t-i}\}.$$
The orthogonal basis tuple at time t involving the same tuple $Z_{t-(i+1)}$ is
$$B_{t,i} = Z_{t-(i+1)} - P_{t,i}(Z_{t-(i+1)}),$$
which is the projection error from projecting $Z_{t-(i+1)}$ onto
$$S^Z_{t,i} = Sp\{Z_{t-1}, \ldots, Z_{t-i}\}.$$
Comparing the two projections we see that at time t+1 we are projecting onto the subspace $S^Z_{t+1,i+1}$, which contains the subspace we projected onto at time t, $S^Z_{t,i}$, but has the additional span of $Z_t$. We can easily orthogonalize the additional subspace with respect to $S^Z_{t,i}$ and obtain
$$S^Z_{t+1,i+1} = Sp\{Z_t\} + Sp\{Z_{t-1}, \ldots, Z_{t-i}\} = Sp\{E_{t,i}\} \oplus Sp\{Z_{t-1}, \ldots, Z_{t-i}\} = Sp\{E_{t,i}\} \oplus S^Z_{t,i}, \qquad (9.10)$$
where $E_{t,i}$ is the projection error from projecting $Z_t$ onto $S^Z_{t,i}$, $E_{t,i} = Z_t - P_{t,i}(Z_t)$, and $\oplus$ denotes that we have a sum of two orthogonal subspaces. With $S^Z_{t+1,i+1}$ now decomposed into two orthogonal subspaces, the projection of $Z_{t-(i+1)}$ onto $S^Z_{t+1,i+1}$ is simply obtained as the sum of the projections of $Z_{t-(i+1)}$ onto each of the two orthogonal subspaces, namely
$$P_{t+1,i+1}(Z_{t-(i+1)}) = P_{t,i}(Z_{t-(i+1)}) + P_{E_{t,i}}(Z_{t-(i+1)}), \qquad (9.11)$$
where $P_{E_{t,i}}(Z_{t-(i+1)})$ denotes the projection of $Z_{t-(i+1)}$ onto the span of $E_{t,i}$. With this relation between the projections at time t+1 and t we obtain the following relation between the orthogonal projection errors:
$$B_{t+1,i+1} = Z_{t-(i+1)} - P_{t+1,i+1}(Z_{t-(i+1)}) = Z_{t-(i+1)} - \big(P_{t,i}(Z_{t-(i+1)}) + P_{E_{t,i}}(Z_{t-(i+1)})\big) = B_{t,i} - P_{E_{t,i}}(Z_{t-(i+1)}). \qquad (9.12)$$

Now, as we can write $Z_{t-(i+1)}$ as the sum of its projection onto $S^Z_{t,i}$ and the resulting projection error, $Z_{t-(i+1)} = P_{t,i}(Z_{t-(i+1)}) + B_{t,i}$, and the projection is orthogonal to $E_{t,i}$, we have that $P_{E_{t,i}}(Z_{t-(i+1)}) = P_{E_{t,i}}(B_{t,i})$, which then gives us the relation
$$B_{t+1,i+1} = B_{t,i} - P_{E_{t,i}}(B_{t,i}) = B_{t,i} - E_{t,i}\,\langle E_{t,i}, E_{t,i}\rangle^{-1}\,\langle B_{t,i}, E_{t,i}\rangle. \qquad (9.13)$$
From this we see that we have a very efficient way of updating the orthogonal basis tuples at time t+1 from the ones at time t, if we can efficiently generate the projection error tuples $E_{t,i}$.

Due to the structure of $S^Z_{t,i}$, the forward projections can be evaluated in pretty much the same way as the backward projections. The (i+1)-th forward projection is a projection of $Z_t$ onto $S^Z_{t,i+1}$, which is easily orthogonally decomposed as follows:
$$S^Z_{t,i+1} = Sp\{Z_{t-1}, \ldots, Z_{t-i}\} + Sp\{Z_{t-(i+1)}\} = Sp\{Z_{t-1}, \ldots, Z_{t-i}\} \oplus Sp\{B_{t,i}\} = S^Z_{t,i} \oplus Sp\{B_{t,i}\}. \qquad (9.14)$$
This orthogonal decomposition allows the projection of $Z_t$ onto $S^Z_{t,i+1}$ to be evaluated as the sum of the projections of $Z_t$ onto each orthogonal subspace, namely
$$P_{t,i+1}(Z_t) = P_{t,i}(Z_t) + P_{B_{t,i}}(Z_t), \qquad (9.15)$$
where $P_{B_{t,i}}(Z_t)$ denotes the projection of $Z_t$ onto the span of $B_{t,i}$. Then, following the exact same line of reasoning that was applied to the backward projections, we obtain the recursion
$$E_{t,i+1} = E_{t,i} - P_{B_{t,i}}(E_{t,i}) = E_{t,i} - B_{t,i}\,\langle B_{t,i}, B_{t,i}\rangle^{-1}\,\langle E_{t,i}, B_{t,i}\rangle, \qquad (9.16)$$
which is a very efficient way of generating the forward projection error tuples given the availability of the backward projection error tuples. Combining the two projection error recursions (equations (9.13) and (9.16)) we obtain the block lattice section
$$E_{t,i+1} = E_{t,i} - B_{t,i}\,\underbrace{\langle B_{t,i}, B_{t,i}\rangle^{-1}\,\langle E_{t,i}, B_{t,i}\rangle}_{k^E_t(i)}, \qquad B_{t+1,i+1} = B_{t,i} - E_{t,i}\,\underbrace{\langle E_{t,i}, E_{t,i}\rangle^{-1}\,\langle B_{t,i}, E_{t,i}\rangle}_{k^B_t(i)}.$$
This block takes the two input tuples $E_{t,i}$ and $B_{t,i}$ (the latter obtained from $B_{t+1,i}$ through the delay element D) and outputs the projection error tuples from projecting one tuple onto the other. To evaluate the basis tuples $B_{t+1,0}, B_{t+1,1}, \ldots, B_{t+1,p-1}$ all we need to do is cascade p-1 of these blocks together, and we have the complete orthogonalization procedure:

[Figure: the block Levinson lattice. The input $Z_t$ gives $E_{t,0} = B_{t+1,0} = Z_t$; a delay D produces $B_{t,0} = Z_{t-1}$, and p-1 cascaded block lattice sections with coefficient matrices $k^E_t(i)$, $k^B_t(i)$ produce $E_{t,i+1}$ and $B_{t+1,i+1}$ for $i = 0, \ldots, p-2$.]

Evaluating the $B_{t,i}$'s in this manner only requires the evaluation of 2(p-1) projections onto q-dimensional subspaces, which is much more efficient than the p(p-1)/2 operations needed by the Gram-Schmidt and Schur procedures.
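The block recursions (9.13) and (9.16) differ from the scalar case only in that the reflection coefficients become small q-by-q matrices. A NumPy sketch of one time update, with each q-tuple stored as a k-by-q array (an illustrative storage convention and hypothetical function name, not the author's implementation):

```python
import numpy as np

def block_levinson_time_update(Z_t, B_prev):
    """Z_t: new q-tuple as a (k, q) array.  B_prev: list of p arrays B_{t,0..p-1}, each (k, q).
    Returns (E, B_new), the forward and backward projection error tuples."""
    p = len(B_prev)
    E = [None] * p
    B_new = [None] * p
    E[0] = Z_t.copy()          # E_{t,0} = Z_t
    B_new[0] = Z_t.copy()      # B_{t+1,0} = Z_t
    for i in range(p - 1):
        Ei, Bi = E[i], B_prev[i]
        kE = np.linalg.solve(Bi.conj().T @ Bi, Bi.conj().T @ Ei)   # <B,B>^{-1} <E,B>
        kB = np.linalg.solve(Ei.conj().T @ Ei, Ei.conj().T @ Bi)   # <E,E>^{-1} <B,E>
        E[i + 1] = Ei - Bi @ kE        # eq. (9.16)
        B_new[i + 1] = Bi - Ei @ kB    # eq. (9.13)
    return E, B_new
```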

9.2 Relation Between the Projection Coefficients of Different Bases

We have seen that the projection of a vector onto a given subspace is parameterized as a linear combination of the basis vectors of the subspace. Different sets of basis vectors will, therefore, lead to different parameterizations. For us it is important to be able to map one set of projection coefficients to another. A case of special interest is the parameterization of the forward and backward projections onto the spaces $S^y_{t,i}$ and $S^Z_{t,i}$, which we have studied in some detail in this section. These projections can be realized in terms of the given spanning vectors, $y_{t-1}, \ldots, y_{t-i}$ for $S^y_{t,i}$ and $Z_{t-1}, \ldots, Z_{t-i}$ for $S^Z_{t,i}$, which we will call the direct form realization. Another important realization is the one in terms of the backward orthogonalized basis vectors, $b_{t,0}, \ldots, b_{t,i-1}$ for $S^y_{t,i}$ and $B_{t,0}, \ldots, B_{t,i-1}$ for $S^Z_{t,i}$, which we call the lattice realization. The relationship between the two sets of projection coefficients is embedded in the projection order recursions that have been the key components in the development of the Levinson orthogonalization procedure in Sections 9.1.1 and 9.1.2. The derivation of this relationship is thus simply a matter of

i) writing down the projection order recursions,
ii) writing the individual projections out in terms of the direct form realization vectors, and
iii) matching terms.

We begin by looking at the relationship for $S^y_{t,i}$ and then summarize it for $S^Z_{t,i}$, as it follows in exactly the same manner.

9.2.1 Relation Between Direct Form and Lattice Realizations on $S^y_{t,i}$

Please recall the sequence of steps taken in developing the projection error recursions in Section 9.1.1. It began with the forward and backward orthogonal decompositions of the space $S^y_{t,i} = Sp\{y_{t-1}, \ldots, y_{t-i}\}$,
$$S^y_{t,i+1} = S^y_{t,i} \oplus Sp\{b_{t,i}\}, \qquad S^y_{t+1,i+1} = Sp\{e_{t,i}\} \oplus S^y_{t,i},$$
where the backward projection error $b_{t,i} = y_{t-(i+1)} - P_{t,i}(y_{t-(i+1)})$ is the i-th backward orthogonal spanning vector at time t and $e_{t,i}$ is the forward projection error $e_{t,i} = y_t - P_{t,i}(y_t)$.


(v, w) = arccos( < v, w > MA322 Sathaye Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: For all v

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

Inner Product and Orthogonality

Inner Product and Orthogonality Inner Product and Orthogonality P. Sam Johnson October 3, 2014 P. Sam Johnson (NITK) Inner Product and Orthogonality October 3, 2014 1 / 37 Overview In the Euclidean space R 2 and R 3 there are two concepts,

More information

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define HILBERT SPACES AND THE RADON-NIKODYM THEOREM STEVEN P. LALLEY 1. DEFINITIONS Definition 1. A real inner product space is a real vector space V together with a symmetric, bilinear, positive-definite mapping,

More information

Fourier and Wavelet Signal Processing

Fourier and Wavelet Signal Processing Ecole Polytechnique Federale de Lausanne (EPFL) Audio-Visual Communications Laboratory (LCAV) Fourier and Wavelet Signal Processing Martin Vetterli Amina Chebira, Ali Hormati Spring 2011 2/25/2011 1 Outline

More information

4 Hilbert spaces. The proof of the Hilbert basis theorem is not mathematics, it is theology. Camille Jordan

4 Hilbert spaces. The proof of the Hilbert basis theorem is not mathematics, it is theology. Camille Jordan The proof of the Hilbert basis theorem is not mathematics, it is theology. Camille Jordan Wir müssen wissen, wir werden wissen. David Hilbert We now continue to study a special class of Banach spaces,

More information

Chapter 2 Linear Transformations

Chapter 2 Linear Transformations Chapter 2 Linear Transformations Linear Transformations Loosely speaking, a linear transformation is a function from one vector space to another that preserves the vector space operations. Let us be more

More information

(v, w) = arccos( < v, w >

(v, w) = arccos( < v, w > MA322 F all206 Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: Commutativity:

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

5 Compact linear operators

5 Compact linear operators 5 Compact linear operators One of the most important results of Linear Algebra is that for every selfadjoint linear map A on a finite-dimensional space, there exists a basis consisting of eigenvectors.

More information

Functional Analysis Review

Functional Analysis Review Outline 9.520: Statistical Learning Theory and Applications February 8, 2010 Outline 1 2 3 4 Vector Space Outline A vector space is a set V with binary operations +: V V V and : R V V such that for all

More information

Mathematical Methods wk 1: Vectors

Mathematical Methods wk 1: Vectors Mathematical Methods wk : Vectors John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

Mathematical Methods wk 1: Vectors

Mathematical Methods wk 1: Vectors Mathematical Methods wk : Vectors John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

(v, w) = arccos( < v, w >

(v, w) = arccos( < v, w > MA322 F all203 Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: For all v,

More information

I teach myself... Hilbert spaces

I teach myself... Hilbert spaces I teach myself... Hilbert spaces by F.J.Sayas, for MATH 806 November 4, 2015 This document will be growing with the semester. Every in red is for you to justify. Even if we start with the basic definition

More information

Algebra II. Paulius Drungilas and Jonas Jankauskas

Algebra II. Paulius Drungilas and Jonas Jankauskas Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive

More information

1 Invariant subspaces

1 Invariant subspaces MATH 2040 Linear Algebra II Lecture Notes by Martin Li Lecture 8 Eigenvalues, eigenvectors and invariant subspaces 1 In previous lectures we have studied linear maps T : V W from a vector space V to another

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction MAT4 : Introduction to Applied Linear Algebra Mike Newman fall 7 9. Projections introduction One reason to consider projections is to understand approximate solutions to linear systems. A common example

More information

FUNCTIONAL ANALYSIS-NORMED SPACE

FUNCTIONAL ANALYSIS-NORMED SPACE MAT641- MSC Mathematics, MNIT Jaipur FUNCTIONAL ANALYSIS-NORMED SPACE DR. RITU AGARWAL MALAVIYA NATIONAL INSTITUTE OF TECHNOLOGY JAIPUR 1. Normed space Norm generalizes the concept of length in an arbitrary

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Logistics Notes for 2016-08-29 General announcement: we are switching from weekly to bi-weekly homeworks (mostly because the course is much bigger than planned). If you want to do HW but are not formally

More information

Chapter 4 Euclid Space

Chapter 4 Euclid Space Chapter 4 Euclid Space Inner Product Spaces Definition.. Let V be a real vector space over IR. A real inner product on V is a real valued function on V V, denoted by (, ), which satisfies () (x, y) = (y,

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

Recall that any inner product space V has an associated norm defined by

Recall that any inner product space V has an associated norm defined by Hilbert Spaces Recall that any inner product space V has an associated norm defined by v = v v. Thus an inner product space can be viewed as a special kind of normed vector space. In particular every inner

More information

Math Linear algebra, Spring Semester Dan Abramovich

Math Linear algebra, Spring Semester Dan Abramovich Math 52 0 - Linear algebra, Spring Semester 2012-2013 Dan Abramovich Fields. We learned to work with fields of numbers in school: Q = fractions of integers R = all real numbers, represented by infinite

More information

Linear Algebra. Paul Yiu. Department of Mathematics Florida Atlantic University. Fall A: Inner products

Linear Algebra. Paul Yiu. Department of Mathematics Florida Atlantic University. Fall A: Inner products Linear Algebra Paul Yiu Department of Mathematics Florida Atlantic University Fall 2011 6A: Inner products In this chapter, the field F = R or C. We regard F equipped with a conjugation χ : F F. If F =

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Chapter 8 Integral Operators

Chapter 8 Integral Operators Chapter 8 Integral Operators In our development of metrics, norms, inner products, and operator theory in Chapters 1 7 we only tangentially considered topics that involved the use of Lebesgue measure,

More information

A Primer in Econometric Theory

A Primer in Econometric Theory A Primer in Econometric Theory Lecture 1: Vector Spaces John Stachurski Lectures by Akshay Shanker May 5, 2017 1/104 Overview Linear algebra is an important foundation for mathematics and, in particular,

More information

Linear Algebra. and

Linear Algebra. and Instructions Please answer the six problems on your own paper. These are essay questions: you should write in complete sentences. 1. Are the two matrices 1 2 2 1 3 5 2 7 and 1 1 1 4 4 2 5 5 2 row equivalent?

More information

Elements of Convex Optimization Theory

Elements of Convex Optimization Theory Elements of Convex Optimization Theory Costis Skiadas August 2015 This is a revised and extended version of Appendix A of Skiadas (2009), providing a self-contained overview of elements of convex optimization

More information

There are two things that are particularly nice about the first basis

There are two things that are particularly nice about the first basis Orthogonality and the Gram-Schmidt Process In Chapter 4, we spent a great deal of time studying the problem of finding a basis for a vector space We know that a basis for a vector space can potentially

More information

Projection Theorem 1

Projection Theorem 1 Projection Theorem 1 Cauchy-Schwarz Inequality Lemma. (Cauchy-Schwarz Inequality) For all x, y in an inner product space, [ xy, ] x y. Equality holds if and only if x y or y θ. Proof. If y θ, the inequality

More information

1 Vectors. Notes for Bindel, Spring 2017 Numerical Analysis (CS 4220)

1 Vectors. Notes for Bindel, Spring 2017 Numerical Analysis (CS 4220) Notes for 2017-01-30 Most of mathematics is best learned by doing. Linear algebra is no exception. You have had a previous class in which you learned the basics of linear algebra, and you will have plenty

More information

Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems

Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 45 Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems Peter J. Hammond latest revision 2017 September

More information

2 so Q[ 2] is closed under both additive and multiplicative inverses. a 2 2b 2 + b

2 so Q[ 2] is closed under both additive and multiplicative inverses. a 2 2b 2 + b . FINITE-DIMENSIONAL VECTOR SPACES.. Fields By now you ll have acquired a fair knowledge of matrices. These are a concrete embodiment of something rather more abstract. Sometimes it is easier to use matrices,

More information

I. Multiple Choice Questions (Answer any eight)

I. Multiple Choice Questions (Answer any eight) Name of the student : Roll No : CS65: Linear Algebra and Random Processes Exam - Course Instructor : Prashanth L.A. Date : Sep-24, 27 Duration : 5 minutes INSTRUCTIONS: The test will be evaluated ONLY

More information

Introduction to Signal Spaces

Introduction to Signal Spaces Introduction to Signal Spaces Selin Aviyente Department of Electrical and Computer Engineering Michigan State University January 12, 2010 Motivation Outline 1 Motivation 2 Vector Space 3 Inner Product

More information

Math 341: Convex Geometry. Xi Chen

Math 341: Convex Geometry. Xi Chen Math 341: Convex Geometry Xi Chen 479 Central Academic Building, University of Alberta, Edmonton, Alberta T6G 2G1, CANADA E-mail address: xichen@math.ualberta.ca CHAPTER 1 Basics 1. Euclidean Geometry

More information

LAKELAND COMMUNITY COLLEGE COURSE OUTLINE FORM

LAKELAND COMMUNITY COLLEGE COURSE OUTLINE FORM LAKELAND COMMUNITY COLLEGE COURSE OUTLINE FORM ORIGINATION DATE: 8/2/99 APPROVAL DATE: 3/22/12 LAST MODIFICATION DATE: 3/28/12 EFFECTIVE TERM/YEAR: FALL/ 12 COURSE ID: COURSE TITLE: MATH2800 Linear Algebra

More information

INNER PRODUCT SPACE. Definition 1

INNER PRODUCT SPACE. Definition 1 INNER PRODUCT SPACE Definition 1 Suppose u, v and w are all vectors in vector space V and c is any scalar. An inner product space on the vectors space V is a function that associates with each pair of

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

Definitions and Properties of R N

Definitions and Properties of R N Definitions and Properties of R N R N as a set As a set R n is simply the set of all ordered n-tuples (x 1,, x N ), called vectors. We usually denote the vector (x 1,, x N ), (y 1,, y N ), by x, y, or

More information

Orthonormal Bases; Gram-Schmidt Process; QR-Decomposition

Orthonormal Bases; Gram-Schmidt Process; QR-Decomposition Orthonormal Bases; Gram-Schmidt Process; QR-Decomposition MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 205 Motivation When working with an inner product space, the most

More information

CS 143 Linear Algebra Review

CS 143 Linear Algebra Review CS 143 Linear Algebra Review Stefan Roth September 29, 2003 Introductory Remarks This review does not aim at mathematical rigor very much, but instead at ease of understanding and conciseness. Please see

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Spectral Theory, with an Introduction to Operator Means. William L. Green

Spectral Theory, with an Introduction to Operator Means. William L. Green Spectral Theory, with an Introduction to Operator Means William L. Green January 30, 2008 Contents Introduction............................... 1 Hilbert Space.............................. 4 Linear Maps

More information

96 CHAPTER 4. HILBERT SPACES. Spaces of square integrable functions. Take a Cauchy sequence f n in L 2 so that. f n f m 1 (b a) f n f m 2.

96 CHAPTER 4. HILBERT SPACES. Spaces of square integrable functions. Take a Cauchy sequence f n in L 2 so that. f n f m 1 (b a) f n f m 2. 96 CHAPTER 4. HILBERT SPACES 4.2 Hilbert Spaces Hilbert Space. An inner product space is called a Hilbert space if it is complete as a normed space. Examples. Spaces of sequences The space l 2 of square

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

Section 7.5 Inner Product Spaces

Section 7.5 Inner Product Spaces Section 7.5 Inner Product Spaces With the dot product defined in Chapter 6, we were able to study the following properties of vectors in R n. ) Length or norm of a vector u. ( u = p u u ) 2) Distance of

More information

Review of Some Concepts from Linear Algebra: Part 2

Review of Some Concepts from Linear Algebra: Part 2 Review of Some Concepts from Linear Algebra: Part 2 Department of Mathematics Boise State University January 16, 2019 Math 566 Linear Algebra Review: Part 2 January 16, 2019 1 / 22 Vector spaces A set

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

REAL ANALYSIS II HOMEWORK 3. Conway, Page 49

REAL ANALYSIS II HOMEWORK 3. Conway, Page 49 REAL ANALYSIS II HOMEWORK 3 CİHAN BAHRAN Conway, Page 49 3. Let K and k be as in Proposition 4.7 and suppose that k(x, y) k(y, x). Show that K is self-adjoint and if {µ n } are the eigenvalues of K, each

More information

MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors.

MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors. MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors. Orthogonal sets Let V be a vector space with an inner product. Definition. Nonzero vectors v 1,v

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008. This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

LINEAR ALGEBRA KNOWLEDGE SURVEY

LINEAR ALGEBRA KNOWLEDGE SURVEY LINEAR ALGEBRA KNOWLEDGE SURVEY Instructions: This is a Knowledge Survey. For this assignment, I am only interested in your level of confidence about your ability to do the tasks on the following pages.

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem.

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Dot Products K. Behrend April 3, 008 Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Contents The dot product 3. Length of a vector........................

More information

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra Foundations of Numerics from Advanced Mathematics Linear Algebra Linear Algebra, October 23, 22 Linear Algebra Mathematical Structures a mathematical structure consists of one or several sets and one or

More information

Spanning and Independence Properties of Finite Frames

Spanning and Independence Properties of Finite Frames Chapter 1 Spanning and Independence Properties of Finite Frames Peter G. Casazza and Darrin Speegle Abstract The fundamental notion of frame theory is redundancy. It is this property which makes frames

More information

Exercises * on Linear Algebra

Exercises * on Linear Algebra Exercises * on Linear Algebra Laurenz Wiskott Institut für Neuroinformatik Ruhr-Universität Bochum, Germany, EU 4 February 7 Contents Vector spaces 4. Definition...............................................

More information

Vector spaces. DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis.

Vector spaces. DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis. Vector spaces DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Vector space Consists of: A set V A scalar

More information

August 23, 2017 Let us measure everything that is measurable, and make measurable everything that is not yet so. Galileo Galilei. 1.

August 23, 2017 Let us measure everything that is measurable, and make measurable everything that is not yet so. Galileo Galilei. 1. August 23, 2017 Let us measure everything that is measurable, and make measurable everything that is not yet so. Galileo Galilei 1. Vector spaces 1.1. Notations. x S denotes the fact that the element x

More information

Your first day at work MATH 806 (Fall 2015)

Your first day at work MATH 806 (Fall 2015) Your first day at work MATH 806 (Fall 2015) 1. Let X be a set (with no particular algebraic structure). A function d : X X R is called a metric on X (and then X is called a metric space) when d satisfies

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

Basic Elements of Linear Algebra

Basic Elements of Linear Algebra A Basic Review of Linear Algebra Nick West nickwest@stanfordedu September 16, 2010 Part I Basic Elements of Linear Algebra Although the subject of linear algebra is much broader than just vectors and matrices,

More information

Linear Normed Spaces (cont.) Inner Product Spaces

Linear Normed Spaces (cont.) Inner Product Spaces Linear Normed Spaces (cont.) Inner Product Spaces October 6, 017 Linear Normed Spaces (cont.) Theorem A normed space is a metric space with metric ρ(x,y) = x y Note: if x n x then x n x, and if {x n} is

More information

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson

More information

1 Inner Product Space

1 Inner Product Space Ch - Hilbert Space 1 4 Hilbert Space 1 Inner Product Space Let E be a complex vector space, a mapping (, ) : E E C is called an inner product on E if i) (x, x) 0 x E and (x, x) = 0 if and only if x = 0;

More information

SUMMARY OF MATH 1600

SUMMARY OF MATH 1600 SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You

More information

j=1 [We will show that the triangle inequality holds for each p-norm in Chapter 3 Section 6.] The 1-norm is A F = tr(a H A).

j=1 [We will show that the triangle inequality holds for each p-norm in Chapter 3 Section 6.] The 1-norm is A F = tr(a H A). Math 344 Lecture #19 3.5 Normed Linear Spaces Definition 3.5.1. A seminorm on a vector space V over F is a map : V R that for all x, y V and for all α F satisfies (i) x 0 (positivity), (ii) αx = α x (scale

More information

Vector Spaces. Addition : R n R n R n Scalar multiplication : R R n R n.

Vector Spaces. Addition : R n R n R n Scalar multiplication : R R n R n. Vector Spaces Definition: The usual addition and scalar multiplication of n-tuples x = (x 1,..., x n ) R n (also called vectors) are the addition and scalar multiplication operations defined component-wise:

More information