CHILE LECTURES: MULTIPLICATIVE ERGODIC THEOREM, FRIENDS AND APPLICATIONS

1. Motivation

Context: (Ω, P) is a probability space; σ: Ω → Ω is a measure-preserving transformation: P(σ^{-1}A) = P(A) for all measurable sets A. Generally, P will be ergodic as well: σ^{-1}A = A implies P(A) is 0 or 1. This is an irreducibility condition. It is sufficient for the following form of Birkhoff's theorem (ergodic theory's strong law of large numbers):

Theorem 1 (Birkhoff). Let σ be an ergodic measure-preserving transformation of (Ω, P). Let f ∈ L^1(Ω) (in fact f^+ ∈ L^1 suffices). Then for P-a.e. ω,

    (1/N) Σ_{n=0}^{N-1} f(σ^n ω) → ∫ f(x) dP(x)  as N → ∞.

The Kingman sub-additive ergodic theorem is an important generalization. A sequence of functions (f_n) is sub-additive if f_1^+ ∈ L^1(P) and f_{n+m}(ω) ≤ f_n(ω) + f_m(σ^n ω) for each ω.

Theorem 2 (Kingman sub-additive ergodic theorem (1968)). Let σ be an ergodic measure-preserving transformation of (Ω, P) and let (f_n) be a sub-additive sequence of functions satisfying ∫ f_1^+ dP < ∞. Then for P-a.e. ω,

    f_n(ω)/n → lim_k (1/k) ∫ f_k(x) dP(x)  as n → ∞.

Example 3 (First passage site percolation). Let Ω be a probability space, and consider a family of measure-preserving maps (σ_n)_{n ∈ Z^d} from Ω to Ω such that σ_{n+m} = σ_n ∘ σ_m. Let f: Ω → (0, ∞) be a weight function and define

    T_{a,b}(ω) = inf_{a = v_0 ∼ v_1 ∼ ⋯ ∼ v_n = b} Σ_{i=0}^{n-1} f(σ_{v_i} ω),

the time of the shortest path from a to b (think of water spreading through the sites of Z^d, taking time f(σ_v ω) to pass through site v).
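In d = 1 the sub-additivity is degenerate in a useful way: any nearest-neighbour path from 0 to n must pass through every intermediate site, so with positive weights T_{0,n} is just a Birkhoff sum and the Kingman limit reduces to the law of large numbers. A minimal numerical sketch (the i.i.d. uniform weights and the shift dynamics are assumptions of this illustration, not part of the lectures):

```python
import numpy as np

# Toy first-passage model on Z^1: the weights f(sigma_v^i omega) are i.i.d.
# uniform on (1, 2), so T_{0,n} is the sum of the first n weights and
# T_{0,n}/n -> E[f] = 1.5 = 1/s_v.
rng = np.random.default_rng(0)
f = rng.uniform(1.0, 2.0, size=100_000)   # site weights along the ray

def passage_time(n):
    """Time T_{0,n} of the (unique shortest) path 0 -> 1 -> ... -> n."""
    return f[:n].sum()

rates = [passage_time(n) / n for n in (100, 10_000, 100_000)]
print(rates)   # -> approaches E[f] = 1.5, i.e. speed s_v = 1/1.5
```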

Now certainly T_{a,c}(ω) ≤ T_{a,b}(ω) + T_{b,c}(ω). We also have T_{a,b}(ω) = T_{0,b-a}(σ_a ω). In particular, for a fixed v ∈ Z^d,

    T_{0,(n+m)v}(ω) ≤ T_{0,nv}(ω) + T_{0,mv}(σ_{nv} ω) = T_{0,nv}(ω) + T_{0,mv}(σ_v^n ω).

Hence t_n(ω) = T_{0,nv}(ω) is a sub-additive sequence of functions (with respect to the transformation σ_v). Hence (1/n) T_{0,nv}(ω) converges to a constant, which we will call 1/s_v, for all v ∈ Z^d and almost all ω ∈ Ω. Here s_v is the rate at which the water spreads in direction v. You can also make sense of s_v for non-integer v by looking at the time to reach the lattice point nearest to nv. A calculation shows s_{tv} = (1/t) s_v, so it is natural to look at s_v for v with ‖v‖ = 1. The set S = {x v : ‖v‖ = 1, 0 ≤ x ≤ s_v} is called the shape of the percolation process.

Corollary 4 (Shape Theorem). For a.e. ω, Occ_t(ω)/t → S, where Occ_t(ω) is the set of sites that are occupied by time t.

Example 5 (Matrix cocycles). Let σ, P and Ω be as above. Let A: Ω → M_d(R) and write A_ω for A(ω). Then the matrix cocycle generated by A is the map A: N × Ω → M_d(R) given by

    A^(n)_ω = A_{σ^{n-1}ω} ⋯ A_{σω} A_ω.

These matrices satisfy the cocycle relation A^(n+m)_ω = A^(m)_{σ^n ω} A^(n)_ω. Equipping the matrices with the norm ‖A‖ = max_{‖x‖=1} ‖Ax‖, it's immediate that ‖AB‖ ≤ ‖A‖ ‖B‖, so the functions ω ↦ ‖A^(n)_ω‖, as n runs over N, form a sub-multiplicative sequence.
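The sub-multiplicativity ‖AB‖ ≤ ‖A‖ ‖B‖ underlying this is easy to test numerically; a small sketch (the random test matrices are an arbitrary choice):

```python
import numpy as np

# Check sub-multiplicativity of the operator norm, ||AB|| <= ||A||·||B||,
# which is what makes f_n = log||A^(n)|| sub-additive.
rng = np.random.default_rng(1)
op_norm = lambda M: np.linalg.norm(M, 2)   # largest singular value

for _ in range(100):
    A, B = rng.normal(size=(2, 3, 3))      # two random 3x3 matrices
    assert op_norm(A @ B) <= op_norm(A) * op_norm(B) + 1e-9
print("sub-multiplicativity holds on all samples")
```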

In particular, f_n(ω) = log‖A^(n)_ω‖ is a sub-additive sequence of functions.

Corollary 6 (Furstenberg–Kesten (1960)). Let σ be an ergodic measure-preserving transformation of (Ω, P). Let A: Ω → M_d(R) satisfy ∫ log^+‖A_ω‖ dP(ω) < ∞. Then there is an α such that for P-a.e. ω, (1/n) log‖A^(n)_ω‖ → α.

For a constant matrix cocycle, A_ω = M for each ω, we get (1/n) log‖M^n‖ → log ρ(M), where ρ(M), the spectral radius, is the absolute value of the largest eigenvalue. The statement (1/n) log‖A^(n)_ω‖ → α is equivalent to: for each ε > 0, for all sufficiently large n, e^{n(α-ε)} ≤ ‖A^(n)‖ ≤ e^{n(α+ε)}. We will informally write ‖A^(n)‖ ≈ e^{nα}, so that α is the exponential growth rate of the matrix cocycle.

For a single matrix M, and for any v, ‖M^n v‖ grows roughly like a^n poly(n), where a is the absolute value of some eigenvalue. One might hope for a refinement of Furstenberg–Kesten giving analogues of the other eigenvalues.

Question. What can be said about the matrix A^(n)_ω for typical ω?

It is helpful to think of the matrix cocycle as a vector bundle map from Ω × R^d to itself, σ̂(ω, v) = (σ(ω), A_ω v).

[Figure: A_ω maps the fibre over ω to the fibre over σω, and so on over σ^2 ω.]

Since different matrices are being applied at each ω, there's no reason to hope for the same vector space decomposition over each point.
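The constant-cocycle case can be watched directly: for A_ω = M, the growth rate (1/n) log‖M^n‖ should approach log ρ(M). A short sketch (the particular matrix M is an arbitrary choice for illustration):

```python
import numpy as np

# Furstenberg-Kesten for a constant cocycle reduces to the spectral radius
# formula: (1/n) log||M^n|| -> log rho(M).
M = np.array([[2.0, 1.0],
              [0.0, 0.5]])
log_rho = np.log(max(abs(np.linalg.eigvals(M))))   # = log 2

P = np.eye(2)
for n in range(1, 61):
    P = M @ P                                      # P = M^n
    alpha_n = np.log(np.linalg.norm(P, 2)) / n     # (1/n) log||M^n||
print(alpha_n, log_rho)   # alpha_n is close to log 2 by n = 60
```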

[Figure 1: Oseledets subspaces. The decompositions of the fibres over ω, σω, σ^2 ω are carried onto one another by A_ω.]

Theorem 7 (Oseledets Multiplicative Ergodic Theorem (invertible) (1965)). Let σ be an invertible ergodic measure-preserving transformation of (Ω, P). Let A: Ω → GL_d(R) be a cocycle of invertible matrices with ∫ log^+‖A_ω^{±1}‖ dP(ω) < ∞. Then there exist ∞ > λ_1 > λ_2 > ⋯ > λ_k > −∞ and subspaces V_1(ω), ..., V_k(ω) satisfying
(1) Decomposition: R^d = V_1(ω) ⊕ ⋯ ⊕ V_k(ω);
(2) Equivariance: A_ω(V_i(ω)) = V_i(σ(ω));
(3) Growth: if v ∈ V_i(ω) \ {0}, then (1/n) log‖A^(n)_ω v‖ → λ_i as n → ±∞.

This may be thought of as something like a dynamical Jordan normal form. The V_i(ω) are the Oseledets subspaces and the λ_i are the Lyapunov exponents. There is also a non-invertible version.

Theorem 8 (Oseledets Multiplicative Ergodic Theorem (non-invertible)). Let σ be an ergodic measure-preserving transformation of (Ω, P). Let A: Ω → M_d(R) be a cocycle of matrices with ∫ log^+‖A_ω‖ dP(ω) < ∞. Then there exist ∞ > λ_1 > λ_2 > ⋯ > λ_k ≥ −∞ and subspaces U_1(ω), ..., U_k(ω) satisfying:
(1) Filtration: R^d = U_1(ω) ⊃ U_2(ω) ⊃ ⋯ ⊃ U_k(ω);

[Figure 2: a flag of subspaces.] A decreasing collection of subspaces U_1 ⊃ U_2 ⊃ ⋯ ⊃ U_k is called a filtration or a flag. A physical flag has a distinguished 1-dimensional subspace (the direction of the pole) that is a subspace of a distinguished 2-dimensional subspace (the plane of the flag) sitting in R^3.

(2) Equivariance: A_ω(U_i(ω)) ⊆ U_i(σ(ω));
(3) Growth: if v ∈ U_i(ω) \ U_{i+1}(ω), then (1/n) log‖A^(n)_ω v‖ → λ_i.

I sometimes call the U_i(ω) slow spaces: they are the vectors whose exponential growth rate is at most λ_i:

    U_i(ω) = {v : limsup_n (1/n) log‖A^(n)_ω v‖ ≤ λ_i}.

Notice: just like for a single matrix, the vectors growing at rate λ or less form a linear space; the vectors growing at rate λ or more do not form a linear space. I may also write V_{≤λ_i}(ω) for the slow spaces. Fast spaces, V_{≥λ_i}(ω), also play a role. Importantly,

    V_{≥λ_i}(ω) ≠ {v : liminf_n (1/n) log‖A^(n)_ω v‖ ≥ λ_i}.

Rather, a fast space can be defined to be the largest equivariant measurable family of subspaces such that each vector expands at a rate at least λ_i.

1.1. Oseledets theorem motivation. Suppose that Ω is a manifold, σ is an invertible smooth map from Ω to itself and P is a (nice) ergodic σ-invariant measure. Then the multiplicative ergodic theorem, applied to the derivative cocycle Dσ, gives for P-a.e. ω subspaces of the tangent space, V_1(ω), ..., V_k(ω), each with a different exponent λ_1, ..., λ_k. If λ_1, ..., λ_j are positive, while λ_{j+1}, ..., λ_k are negative, then the directions E^u(ω) = V_1(ω) ⊕ ⋯ ⊕ V_j(ω) are unstable directions at ω: perturbations in these directions tend to grow; while E^s(ω) = V_{j+1}(ω) ⊕ ⋯ ⊕ V_k(ω) are stable directions. They are also equivariant: Dσ(E^u(ω)) = E^u(σ(ω)) and Dσ(E^s(ω)) = E^s(σ(ω)). If you can integrate these subspaces (patch them together), you obtain stable and unstable manifolds at ω for P-a.e. ω.

1.2. Non-invertible to invertible I. Notice that the hypotheses for the non-invertible theorem are satisfied in the invertible case. You can deduce the invertible theorem from the non-invertible theorem as follows. Let σ be an ergodic invertible measure-preserving transformation; let (A_ω) be a cocycle of invertible matrices (with ∫ log^+‖A_ω‖ dP < ∞ and ∫ log^+‖A_ω^{-1}‖ dP < ∞).
(1) Form the slow spaces V_{≤λ_i}(ω) from the non-invertible version of the theorem.
(2) Form a new cocycle with base dynamical system σ^{-1} and generator B_ω = A^{-1}_{σ^{-1}ω}. Notice that B^(n)_ω = (A^(n)_{σ^{-n}ω})^{-1}. Hence the exponents of the B cocycle are −λ_k > ⋯ > −λ_1. Form spaces W_{≤−λ_i}(ω) consisting of vectors which, when iterated backwards, have expansion rate at most −λ_i. These same vectors (using equivariance here), when iterated forwards, have expansion rate at least λ_i, so that W_{≤−λ_i}(ω) is the fast space V_{≥λ_i}(ω).
(3) Finally, V_i(ω) = V_{≤λ_i}(ω) ∩ V_{≥λ_i}(ω).

Corollary 9. The slow spaces are determined by the future: V_{≤λ_i}(ω) is determined by (A_{σ^n ω})_{n≥0}; the fast spaces are determined by the past: V_{≥λ_i}(ω) is determined by (A_{σ^n ω})_{n<0}.

I'll mention two other ways to obtain the invertible from the non-invertible case later.

2. Tricks of the trade

2.1. Singular value decomposition (SVD). Given a square (or not square) matrix A, there are essentially unique orthonormal bases v_1, ..., v_d and w_1, ..., w_d and non-negative µ_1, ..., µ_d such that Av_i = µ_i w_i. I like to write the µ_i in decreasing order. These are the singular values of A. The non-uniqueness is in the choice of signs in the v_i, or the choice of orthonormal basis if a singular value is repeated. The v_i are the singular vectors of A.
As an alternative formulation of SVD, A may be expressed as O_1 D O_2, where O_1 and O_2 are orthogonal matrices and D is diagonal. Here the rows of O_2 are the v_i and the columns of O_1 are the w_i. Geometrically, a linear map A maps a sphere to an ellipsoid. The w_i point along the semi-principal axes and the v_i are their pre-images.
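numpy's SVD matches the O_1 D O_2 formulation, so the basic identities can be checked in a few lines (the random test matrix is an arbitrary choice):

```python
import numpy as np

# np.linalg.svd returns A = U @ diag(s) @ Vt with s in decreasing order:
# in the notation above O_1 = U, D = diag(s), O_2 = Vt, so the rows of Vt
# are the v_i, the columns of U are the w_i, and A v_i = mu_i w_i.
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
U, s, Vt = np.linalg.svd(A)

for i in range(4):
    assert np.allclose(A @ Vt[i], s[i] * U[:, i])     # A v_i = mu_i w_i
assert np.isclose(np.linalg.norm(A, 2), s[0])         # ||A|| = mu_1
assert np.isclose(abs(np.linalg.det(A)), s.prod())    # |det A| = prod mu_i
print("SVD checks pass")
```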

The proof of the singular value decomposition is straightforward, using diagonalization of AᵀA. The eigenvectors are the v_i; the eigenvalues are the µ_i². One can see from the O_1 D O_2 formulation that |det A| = ∏_i µ_i. Also Aᵀ has the same singular values as A, while the singular values of A^{-1} are the reciprocals of the singular values of A. A simple calculation shows that ‖A‖ = µ_1; the vector achieving the maximum in the norm is just v_1. A generalization of this is the following:

Proposition 10. Let A be a d × d matrix. Let the singular values be µ_1 ≥ µ_2 ≥ ⋯ ≥ µ_d and the singular vectors be v_1, ..., v_d. Then
(1) µ_k = max_{dim V = k} min_{v ∈ V ∩ S} ‖Av‖ (where S denotes the unit sphere); the maximum is attained by V = lin{v_1, ..., v_k}.
(2) µ_k = min_{dim V = d-k+1} max_{v ∈ V ∩ S} ‖Av‖; the minimum is attained by V = lin{v_k, ..., v_d}.
(3) µ_1 ⋯ µ_k = max_{dim V = k} det(A|_V), where det(A|_V) is the k-dimensional volume growth of A restricted to V.

The proof (of the first two parts) is fairly straightforward. Since orthogonal matrices are isometries, it suffices to consider the case where A is diagonal. The last part can be proved using the Cauchy–Binet theorem (more on this below).

2.2. Exterior algebra (user's guide; apologies to Bourbaki). If V is a vector space, define an abstract vector space ∧^k V spanned by elements v_1 ∧ ⋯ ∧ v_k satisfying the relations:
(Multi-linearity) v_1 ∧ ⋯ ∧ (a v_i) ∧ ⋯ ∧ v_k = a (v_1 ∧ ⋯ ∧ v_i ∧ ⋯ ∧ v_k);
(Antisymmetry) v_1 ∧ ⋯ ∧ v_i ∧ ⋯ ∧ v_j ∧ ⋯ ∧ v_k = −(v_1 ∧ ⋯ ∧ v_j ∧ ⋯ ∧ v_i ∧ ⋯ ∧ v_k).
If {v_1, ..., v_d} forms a basis for V, then {v_{i_1} ∧ ⋯ ∧ v_{i_k} : i_1 < ⋯ < i_k} forms a basis for ∧^k V. In particular, ∧^k V is (d choose k)-dimensional. If V has an inner product, it turns out that ∧^k V inherits a natural inner product structure. A crude way to do this is to take an orthonormal basis e_1, ..., e_d for V and declare {e_{i_1} ∧ ⋯ ∧ e_{i_k}} to be an orthonormal basis for ∧^k V. This turns out to define an inner product that does not depend on the choice of orthonormal basis (the proof is due to a clever determinant equality, the Cauchy–Binet theorem).
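One concrete coordinate realization of ∧²A is the second compound matrix, whose entries are the 2×2 minors of A; by Cauchy–Binet it respects products, and its singular values are the pairwise products µ_i µ_j (this is the statement about singular values of ∧^k A used below). A sketch checking this numerically (the construction and the random matrix are illustrative):

```python
import numpy as np
from itertools import combinations

# 2nd compound matrix of a d x d matrix A: rows/columns indexed by pairs
# i < j, with entry the 2x2 minor of A on rows (i, j) and columns (k, l).
def compound2(A):
    d = A.shape[0]
    pairs = list(combinations(range(d), 2))
    C = np.empty((len(pairs), len(pairs)))
    for a, (i, j) in enumerate(pairs):
        for b, (k, l) in enumerate(pairs):
            C[a, b] = A[i, k] * A[j, l] - A[i, l] * A[j, k]
    return C

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
mu = np.linalg.svd(A, compute_uv=False)
wedge_sv = np.linalg.svd(compound2(A), compute_uv=False)
expected = sorted((mu[i] * mu[j] for i, j in combinations(range(4), 2)),
                  reverse=True)
assert np.allclose(wedge_sv, expected)
print("singular values of wedge^2 A are the pairwise products mu_i mu_j")
```

In particular the top singular value of the compound matrix is µ_1 µ_2, the maximal 2-dimensional volume growth of Proposition 10(3).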
If A is a linear map on V, then there is a natural action on ∧^k V, given by (∧^k A)(v_1 ∧ ⋯ ∧ v_k) = (Av_1) ∧ ⋯ ∧ (Av_k) (you can check that the relations above are preserved). Singular value decomposition plays extremely nicely with exterior algebras. This is the basis of the so-called Raghunathan trick, which we will use below in our proof of the (non-invertible) multiplicative ergodic theorem.

Suppose that v_1, ..., v_d are the singular vectors for A with singular values µ_1, ..., µ_d. Also, let Av_i = µ_i w_i, so that the w_i are orthonormal. Then ∧^k A maps v_{i_1} ∧ ⋯ ∧ v_{i_k} to µ_{i_1} ⋯ µ_{i_k} w_{i_1} ∧ ⋯ ∧ w_{i_k}. The singular vectors of ∧^k A are the k-fold wedge products of the singular vectors of A; the singular values of ∧^k A are the k-fold products of the singular values of A. The top singular value of ∧^k A is µ_1 ⋯ µ_k: we can interpret ‖∧^k A‖ as the rate of growth of k-dimensional volume (see Proposition 10 above). We sometimes think of v_1 ∧ ⋯ ∧ v_k as representing a normal (with magnitude) to the subspace spanned by v_1, ..., v_k (but what does v_1 ∧ ⋯ ∧ v_k + w_1 ∧ ⋯ ∧ w_k represent?).

2.3. Grassmannians. The Grassmannian, G_k(V), of a vector space V is the collection of all k-dimensional subspaces of V. It has a lot of structure: compact manifold, homogeneous space, scheme, ... Unlike exterior algebras, there's no additive structure though. Mostly we'll take V = R^d and think of G_k(V) as a compact (and hence complete) metric space. The distance between two k-dimensional subspaces W and W′ is the Hausdorff distance d_H(W ∩ S, W′ ∩ S), where S is the unit sphere. Recall that

    d_H(W ∩ S, W′ ∩ S) = max( max_{v ∈ W ∩ S} d(v, W′ ∩ S), max_{v′ ∈ W′ ∩ S} d(v′, W ∩ S) ).

It turns out that there is a symmetry of closeness for subspaces of the same dimension: each of the terms in the maximum is bounded by a constant multiple of the other, where the constant depends only on k (even in Banach spaces). In particular, to show that two k-dimensional subspaces are close, it suffices to show that either of the two quantities in the maximum is small. The manifold G_k(R^d) turns out to be k(d−k)-dimensional. I made use of a system of coordinates in a recent paper.

2.4. Inverse cocycle. We mentioned this one above. Here we require invertibility both of σ and of the matrices defining the cocycle: A_ω ∈ GL(d, R).
The inverse cocycle is a cocycle over σ^{-1} generated by B_ω = A^{-1}_{σ^{-1}ω}.
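The identity B^(n)_ω = (A^(n)_{σ^{-n}ω})^{-1} can be checked on a toy model; here a single two-sided orbit is modelled as a list of matrices indexed by time, with σ the shift (this finite stand-in for the abstract system, and the particular random matrices, are assumptions of the sketch):

```python
import numpy as np

# Model: A at time t is mats[t]; sigma shifts t -> t + 1.  Then the inverse
# cocycle over sigma^{-1} has generator B_t = mats[t - 1]^{-1}.
rng = np.random.default_rng(4)
mats = {t: rng.normal(size=(3, 3)) + 5 * np.eye(3) for t in range(-10, 10)}

def A_n(t, n):
    """A^(n) at time t: mats[t + n - 1] ... mats[t + 1] mats[t]."""
    P = np.eye(3)
    for k in range(n):
        P = mats[t + k] @ P
    return P

def B_n(t, n):
    """B^(n) at time t: inv(mats[t - n]) ... inv(mats[t - 1])."""
    P = np.eye(3)
    for k in range(n):
        P = np.linalg.inv(mats[t - k - 1]) @ P
    return P

n = 6
assert np.allclose(B_n(0, n), np.linalg.inv(A_n(-n, n)))
print("B^(n)_omega = (A^(n)_{sigma^{-n} omega})^{-1} verified")
```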

As mentioned above, B^(n)_ω = (A^(n)_{σ^{-n}ω})^{-1}. A little work with the sub-additive ergodic theorem shows that the exponents of the inverse cocycle are −µ_d ≥ ⋯ ≥ −µ_1.

2.5. Dual cocycle. This cocycle requires invertibility of σ, but not invertibility of the matrices A_ω. Again it's a cocycle over σ^{-1}. The generator is C_ω = Aᵀ_{σ^{-1}ω}. We then calculate

    C^(n)_ω = C_{σ^{-(n-1)}ω} ⋯ C_ω = Aᵀ_{σ^{-n}ω} ⋯ Aᵀ_{σ^{-1}ω} = (A_{σ^{-1}ω} ⋯ A_{σ^{-n}ω})ᵀ = (A^(n)_{σ^{-n}ω})ᵀ.

Since the singular values of A^(n)_{σ^{-n}ω} are approximately e^{nµ_1}, ..., e^{nµ_d}, we see that the singular values of the dual are the same, and hence the dual cocycle has the same Lyapunov exponents as the primal cocycle.

2.6. Temperedness. If σ is a measure-preserving transformation and f ∈ L^1, then for P-a.e. ω, f(σ^n ω)/n → 0. The cheapest proof uses the Birkhoff ergodic theorem, but there is a more elementary proof using the first Borel–Cantelli lemma.

3. Multiplicative Ergodic Theorem: proof sketch

We'll give a rough sketch of the proof of the multiplicative ergodic theorem (the non-invertible version). The framework for the proof is from Raghunathan's proof of the MET (based on singular values). We'll deviate from it slightly, giving an inductive proof that we have modified to work in Banach spaces. We've already indicated how to deduce the invertible version from the non-invertible version.

Let σ be an ergodic measure-preserving transformation of a probability space (Ω, P). Let A: Ω → M_{d×d}(R) generate a matrix cocycle and assume that A satisfies ∫ log^+‖A_ω‖ dP(ω) < ∞.

3.1. Finding the exponents. For each k, define a new cocycle ∧^k A_ω. This satisfies ∫ log^+‖∧^k A_ω‖ dP(ω) ≤ k ∫ log^+‖A_ω‖ dP(ω) < ∞. Now f^k_n(ω) = log‖∧^k A^(n)_ω‖ is a sub-additive sequence by sub-multiplicativity of norms, so that f^k_n(ω)/n is convergent P-a.e. to lim_n (1/n) ∫ f^k_n dP by the sub-additive ergodic theorem (or the Furstenberg–Kesten theorem).
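The recipe "µ_k = limit of (1/n) log s_k(A^(n)_ω)" can be watched in a case where the answer is known: for a constant cocycle A_ω = Q diag(3, 1, 1/4) Qᵀ (Q a fixed orthogonal matrix, chosen arbitrarily here), the singular values of A^(n) are exactly 3^n, 1, 4^{-n}. A sketch:

```python
import numpy as np

# Constant normal cocycle: A^(n) = Q diag(3^n, 1, 4^{-n}) Q^T, so
# (1/n) log s_k(A^(n)) recovers mu_1 = log 3, mu_2 = 0, mu_3 = -log 4.
rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
A = Q @ np.diag([3.0, 1.0, 0.25]) @ Q.T

n = 10
P = np.eye(3)
for _ in range(n):
    P = A @ P                                   # P = A^(n)
mus = np.log(np.linalg.svd(P, compute_uv=False)) / n
print(mus)   # approximately [log 3, 0, -log 4]
```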

We let this limit be µ_1 + ⋯ + µ_k; that is, we define

    µ_k = lim_n (1/n) ∫ (f^k_n(ω) − f^{k-1}_n(ω)) dP(ω),

where f^0_n ≡ 0. From our comments before, µ_1 + ⋯ + µ_k measures the long-term k-dimensional volume growth rate of A^(n)_ω. These µ_k's will be the Lyapunov exponents. Recalling that ‖∧^k A^(n)_ω‖ = ∏_{j=1}^k s_j(A^(n)_ω), where s_j denotes the jth singular value, we see

    e^{µ_k} = lim_n s_k(A^(n)_ω)^{1/n}.

In particular, the µ_k are non-increasing, as required. Notice also that

    µ_1 + ⋯ + µ_d = lim_n (1/n) log( s_1(A^(n)_ω) ⋯ s_d(A^(n)_ω) )
                  = lim_n (1/n) log|det A^(n)_ω|
                  = lim_n (1/n) Σ_{i=0}^{n-1} log|det A_{σ^i ω}|
                  = ∫ log|det A_ω| dP(ω).

In particular, if A_ω ∈ SL(d, R) for each ω, then µ_1 + ⋯ + µ_d = 0. We let λ_1 > ⋯ > λ_l be the Lyapunov exponents without repetition, where λ_j occurs m_j times among the µ_i's; m_j is called the multiplicity of λ_j.

3.2. Finding the slow spaces. Let η < (1/6) min_i (λ_i − λ_{i+1}). By the sub-additive ergodic theorem, there exists for P-a.e. ω an n_0 such that for each n > n_0 and each i,

    e^{(µ_i − η)n} < s_i(A^(n)_ω) < e^{(µ_i + η)n}.

Let j ≤ l and let U^(n)_j(ω) be the (m_j + m_{j+1} + ⋯ + m_l)-dimensional space spanned by the singular vectors of A^(n)_ω with the smallest singular values (these are all e^{(λ_j + η)n} or smaller).

Claim 1. For P-a.e. ω, the U^(n)_j(ω) form a Cauchy sequence in G_k(R^d). We define the limit of the U^(n)_j(ω) as n → ∞ to be U_j(ω).

3.3. Finishing the proof.

Claim 2. U_j(ω) is equivariant.

Claim 3. For P-a.e. ω, if v ∉ U_j(ω), then (1/n) log‖A^(n)_ω v‖ ≥ λ_{j-1} − η for all large n.

Claim 4. For P-a.e. ω, if v ∈ U_j(ω), then (1/n) log‖A^(n)_ω v‖ ≤ λ_j + η for all large n.

3.4. Proof sketches.

Claim 1 proof sketch. We want to show that for each j and for P-a.e. ω, U^(n)_j(ω) is a Cauchy sequence of subspaces of R^d. The idea is quite simple: take v ∈ U^(n)_j(ω) ∩ S and write v = u + w, where u ∈ U^(n+1)_j(ω) and w ⊥ U^(n+1)_j(ω). Notice that the orthogonal complement of U^(n+1)_j(ω) is the fast space for A^(n+1)_ω, spanned by the top singular vectors, which all expand at exponential rate at least λ_{j-1} − η. We have ‖A^(n+1)_ω v‖ ≤ e^{(λ_j + η)n} ‖A_{σ^n ω}‖; but also ‖A^(n+1)_ω v‖ ≥ e^{(λ_{j-1} − η)(n+1)} ‖w‖. By temperedness, ‖A_{σ^n ω}‖ ≤ e^{ηn} for large n, so that we end up with an estimate like ‖w‖ ≤ e^{−(λ_{j-1} − λ_j − 3η)n}. Thus any vector v ∈ U^(n)_j(ω) ∩ S is exponentially close to a vector, u, in U^(n+1)_j(ω). The symmetry of closeness described above implies

    d(U^(n)_j(ω), U^(n+1)_j(ω)) ≲ e^{−(λ_{j-1} − λ_j − 3η)n}.

Since these distances are summable, the sequence of subspaces is Cauchy and

    (1)    d(U^(n)_j(ω), U_j(ω)) ≲ e^{−(λ_{j-1} − λ_j − 3η)n}.

Claim 2 is proved similarly: we take an element v of U^(n+1)_j(ω) ∩ S and write A_ω v as u + w with u ∈ U^(n)_j(σω) and w ⊥ U^(n)_j(σω). Since ‖A^(n+1)_ω v‖ isn't too large, one deduces that w is exponentially small.

Claim 3 is straightforward. If v ∉ U_j(ω), then it has some distance δ from U_j(ω). It then has distance at least δ/2 from U^(n)_j(ω) for large n. Writing v as u + w with u ∈ U^(n)_j(ω) and w ⊥ U^(n)_j(ω), we have ‖w‖ ≥ δ/2, so that ‖A^(n)_ω v‖ ≥ ‖A^(n)_ω w‖ ≥ (δ/2) e^{(λ_{j-1} − η)n}.

Claim 4 is tricky, but is straightforward for j = 2. If v ∈ U_2(ω) ∩ S, then v can be written as u + w with u ∈ U^(n)_2(ω) and w ⊥ U^(n)_2(ω), where ‖w‖ ≤ e^{−(λ_1 − λ_2 − 3η)n} by (1). Now applying A^(n)_ω to u and w separately, both images are of size roughly e^{λ_2 n} (up to a factor e^{O(η)n}). For j > 2, Raghunathan splits w into pieces with different growth rates and proves stronger estimates on the faster growing pieces. A technically simpler (but less pleasing) proof is obtained by working inductively down from the top, giving a priori estimates on the growth of the fast part.
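The convergence in Claim 1 can be watched numerically for a constant hyperbolic cocycle, where U^(n) is the span of the bottom singular vector of M^n (the particular non-symmetric matrix M is an arbitrary choice; for a symmetric M the singular vectors would not move at all):

```python
import numpy as np

# The bottom-singular-vector direction of M^n settles down exponentially
# fast, illustrating that the spaces U^(n)_j form a Cauchy sequence.
M = np.array([[2.0, 1.0],
              [0.0, 0.5]])

def slow_direction(n):
    P = np.linalg.matrix_power(M, n)
    _, _, Vt = np.linalg.svd(P)
    return Vt[-1]                       # bottom singular vector of M^n

def subspace_gap(u, v):
    """|sin(angle)| between the lines spanned by unit vectors u, v."""
    return np.sqrt(max(0.0, 1.0 - (u @ v) ** 2))

gaps = [subspace_gap(slow_direction(n), slow_direction(n + 1))
        for n in (2, 5, 10, 15)]
print(gaps)   # shrinking toward 0 at an exponential rate
```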

All of the above can be made to work in separable Banach spaces. First, a sub-multiplicative notion of k-dimensional volume growth is needed. From this, we can define the exponents as above. The spaces are obtained in essentially the same way. This gives:

Theorem 11 (Multiplicative ergodic theorem for Banach spaces; non-invertible version). Let σ be an ergodic measure-preserving transformation of a probability space (Ω, P). Let X be a Banach space with separable dual and let (A_ω) be a cocycle of quasi-compact operators on X. Then there exist λ_1 > λ_2 > ⋯, multiplicities m_1, m_2, ... and (m_1 + ⋯ + m_{i-1})-codimensional spaces U_i(ω) that are equivariant and have the correct growth conditions.

We come back to the deduction of invertible-type METs from non-invertible METs. Taking a cue from Corollary 9, we insist on invertibility of the base transformation. We see what we can get without making any invertibility assumptions on the matrices.

3.5. Non-invertible to invertible II: Duality. Assume that X is reflexive: X** = X (this is certainly true in the finite-dimensional case). Then X* is separable and the dual cocycle C_ω described above satisfies the non-invertible MET with the same exponents. For matrices, a left eigenvector and a right eigenvector with different eigenvalues are orthogonal. Analogously, a dual Oseledets subspace annihilates a primal Oseledets subspace with a different exponent. One can then show that the annihilator of U*_{j+1}(ω) is the fast space, V_{≥λ_j}(ω). From here, you intersect with the slow space V_{≤λ_j}(ω) to get V_j(ω) as previously.

3.6. Non-invertible to invertible III: Power method. To find the largest eigenvector of a matrix (assuming that the largest eigenvalue has multiplicity 1), you can just take (almost) any vector and iterate the matrix. There's an analogous method to build the V_j(ω). I'll describe it for the case of R^d, but there is a Banach space version also:

    V_j(ω) = lim_n A^(n)_{σ^{-n}ω} ( U_j(σ^{-n}ω) ∩ U_{j+1}(σ^{-n}ω)^⊥ ).
Start from the orthogonal complement of V_{≤λ_{j+1}}(σ^{-n}ω) inside V_{≤λ_j}(σ^{-n}ω). This has some component of the fast space inside it (you can make sense of this using exterior algebra). When it is iterated forwards, the fast part dominates and the sequence converges to the true fast space. The following version of the MET (that we call semi-invertible) follows from either of these approaches.
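For a constant cocycle this is literally the power method: push a generic direction forward from σ^{-n}ω and it aligns with the top eigendirection. A minimal sketch (the matrix and the starting vector are arbitrary choices; the starting vector just needs a component in the fast space):

```python
import numpy as np

# Power-method construction of the fast space for a constant cocycle
# A_omega = M: iterate M on a generic vector, renormalizing each step.
M = np.array([[2.0, 1.0],
              [0.0, 0.5]])

def fast_direction(n, v0=(0.3, 1.0)):
    v = np.array(v0, dtype=float)
    for _ in range(n):
        v = M @ v
        v /= np.linalg.norm(v)          # renormalize to avoid overflow
    return v

eigvals, eigvecs = np.linalg.eig(M)
top = eigvecs[:, np.argmax(np.abs(eigvals))]   # top eigendirection
v = fast_direction(50)
print(abs(v @ top))   # -> 1, i.e. v spans the fast (top) direction
```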

Theorem 12 (Multiplicative Ergodic Theorem (semi-invertible); Froyland, Lloyd, Quas). Let σ be an invertible ergodic measure-preserving transformation of (Ω, P). Let A: Ω → M_d(R) be a cocycle of not necessarily invertible matrices with ∫ log^+‖A_ω‖ dP(ω) < ∞. Then there exist ∞ > λ_1 > λ_2 > ⋯ > λ_k ≥ −∞ and subspaces V_1(ω), ..., V_k(ω) satisfying
(1) Decomposition: R^d = V_1(ω) ⊕ ⋯ ⊕ V_k(ω);
(2) Equivariance: A_ω(V_i(ω)) ⊆ V_i(σ(ω));
(3) Growth: if v ∈ V_i(ω) \ {0}, then (1/n) log‖A^(n)_ω v‖ → λ_i as n → +∞.

There is also a version for quasi-compact operators on separable Banach spaces.

4. Last lecture

To be decided: either METs on Banach spaces and applications to oceans (lighter weight), or sensitivity of Lyapunov exponents.


Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

CS168: The Modern Algorithmic Toolbox Lecture #8: How PCA Works

CS168: The Modern Algorithmic Toolbox Lecture #8: How PCA Works CS68: The Modern Algorithmic Toolbox Lecture #8: How PCA Works Tim Roughgarden & Gregory Valiant April 20, 206 Introduction Last lecture introduced the idea of principal components analysis (PCA). The

More information

Question: Given an n x n matrix A, how do we find its eigenvalues? Idea: Suppose c is an eigenvalue of A, then what is the determinant of A-cI?

Question: Given an n x n matrix A, how do we find its eigenvalues? Idea: Suppose c is an eigenvalue of A, then what is the determinant of A-cI? Section 5. The Characteristic Polynomial Question: Given an n x n matrix A, how do we find its eigenvalues? Idea: Suppose c is an eigenvalue of A, then what is the determinant of A-cI? Property The eigenvalues

More information

(x, y) = d(x, y) = x y.

(x, y) = d(x, y) = x y. 1 Euclidean geometry 1.1 Euclidean space Our story begins with a geometry which will be familiar to all readers, namely the geometry of Euclidean space. In this first chapter we study the Euclidean distance

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

Compact operators on Banach spaces

Compact operators on Banach spaces Compact operators on Banach spaces Jordan Bell jordan.bell@gmail.com Department of Mathematics, University of Toronto November 12, 2017 1 Introduction In this note I prove several things about compact

More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

l(y j ) = 0 for all y j (1)

l(y j ) = 0 for all y j (1) Problem 1. The closed linear span of a subset {y j } of a normed vector space is defined as the intersection of all closed subspaces containing all y j and thus the smallest such subspace. 1 Show that

More information

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p. LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

Notes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T.

Notes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T. Notes on singular value decomposition for Math 54 Recall that if A is a symmetric n n matrix, then A has real eigenvalues λ 1,, λ n (possibly repeated), and R n has an orthonormal basis v 1,, v n, where

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

Recall the convention that, for us, all vectors are column vectors.

Recall the convention that, for us, all vectors are column vectors. Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists

More information

Differential Topology Final Exam With Solutions

Differential Topology Final Exam With Solutions Differential Topology Final Exam With Solutions Instructor: W. D. Gillam Date: Friday, May 20, 2016, 13:00 (1) Let X be a subset of R n, Y a subset of R m. Give the definitions of... (a) smooth function

More information

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Steven J. Miller June 19, 2004 Abstract Matrices can be thought of as rectangular (often square) arrays of numbers, or as

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Representations of moderate growth Paul Garrett 1. Constructing norms on groups

Representations of moderate growth Paul Garrett 1. Constructing norms on groups (December 31, 2004) Representations of moderate growth Paul Garrett Representations of reductive real Lie groups on Banach spaces, and on the smooth vectors in Banach space representations,

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

5 Compact linear operators

5 Compact linear operators 5 Compact linear operators One of the most important results of Linear Algebra is that for every selfadjoint linear map A on a finite-dimensional space, there exists a basis consisting of eigenvectors.

More information

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define HILBERT SPACES AND THE RADON-NIKODYM THEOREM STEVEN P. LALLEY 1. DEFINITIONS Definition 1. A real inner product space is a real vector space V together with a symmetric, bilinear, positive-definite mapping,

More information

are Banach algebras. f(x)g(x) max Example 7.4. Similarly, A = L and A = l with the pointwise multiplication

are Banach algebras. f(x)g(x) max Example 7.4. Similarly, A = L and A = l with the pointwise multiplication 7. Banach algebras Definition 7.1. A is called a Banach algebra (with unit) if: (1) A is a Banach space; (2) There is a multiplication A A A that has the following properties: (xy)z = x(yz), (x + y)z =

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

More information

Review of Some Concepts from Linear Algebra: Part 2

Review of Some Concepts from Linear Algebra: Part 2 Review of Some Concepts from Linear Algebra: Part 2 Department of Mathematics Boise State University January 16, 2019 Math 566 Linear Algebra Review: Part 2 January 16, 2019 1 / 22 Vector spaces A set

More information

Determinant lines and determinant line bundles

Determinant lines and determinant line bundles CHAPTER Determinant lines and determinant line bundles This appendix is an exposition of G. Segal s work sketched in [?] on determinant line bundles over the moduli spaces of Riemann surfaces with parametrized

More information

REPRESENTATION THEORY WEEK 7

REPRESENTATION THEORY WEEK 7 REPRESENTATION THEORY WEEK 7 1. Characters of L k and S n A character of an irreducible representation of L k is a polynomial function constant on every conjugacy class. Since the set of diagonalizable

More information

(2) E M = E C = X\E M

(2) E M = E C = X\E M 10 RICHARD B. MELROSE 2. Measures and σ-algebras An outer measure such as µ is a rather crude object since, even if the A i are disjoint, there is generally strict inequality in (1.14). It turns out to

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond Measure Theory on Topological Spaces Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond May 22, 2011 Contents 1 Introduction 2 1.1 The Riemann Integral........................................ 2 1.2 Measurable..............................................

More information

Spectral Theorem for Self-adjoint Linear Operators

Spectral Theorem for Self-adjoint Linear Operators Notes for the undergraduate lecture by David Adams. (These are the notes I would write if I was teaching a course on this topic. I have included more material than I will cover in the 45 minute lecture;

More information

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Linear Algebra A Brief Reminder Purpose. The purpose of this document

More information

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,

More information

Math 291-2: Lecture Notes Northwestern University, Winter 2016

Math 291-2: Lecture Notes Northwestern University, Winter 2016 Math 291-2: Lecture Notes Northwestern University, Winter 2016 Written by Santiago Cañez These are lecture notes for Math 291-2, the second quarter of MENU: Intensive Linear Algebra and Multivariable Calculus,

More information

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013. The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational

More information

Short note on compact operators - Monday 24 th March, Sylvester Eriksson-Bique

Short note on compact operators - Monday 24 th March, Sylvester Eriksson-Bique Short note on compact operators - Monday 24 th March, 2014 Sylvester Eriksson-Bique 1 Introduction In this note I will give a short outline about the structure theory of compact operators. I restrict attention

More information

MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors.

MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors. MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors. Orthogonal sets Let V be a vector space with an inner product. Definition. Nonzero vectors v 1,v

More information

. = V c = V [x]v (5.1) c 1. c k

. = V c = V [x]v (5.1) c 1. c k Chapter 5 Linear Algebra It can be argued that all of linear algebra can be understood using the four fundamental subspaces associated with a matrix Because they form the foundation on which we later work,

More information

Homework set 4 - Solutions

Homework set 4 - Solutions Homework set 4 - Solutions Math 407 Renato Feres 1. Exercise 4.1, page 49 of notes. Let W := T0 m V and denote by GLW the general linear group of W, defined as the group of all linear isomorphisms of W

More information

Eigenvalues and Eigenvectors A =

Eigenvalues and Eigenvectors A = Eigenvalues and Eigenvectors Definition 0 Let A R n n be an n n real matrix A number λ R is a real eigenvalue of A if there exists a nonzero vector v R n such that A v = λ v The vector v is called an eigenvector

More information

Periodic approximation of exceptional Lyapunov exponents for sem

Periodic approximation of exceptional Lyapunov exponents for sem Periodic approximation of exceptional Lyapunov exponents for semi-invertible operator cocycles Department of Mathematics, University of Rijeka, Croatia (joint work with Lucas Backes, Porto Alegre, Brazil)

More information

Overview of normed linear spaces

Overview of normed linear spaces 20 Chapter 2 Overview of normed linear spaces Starting from this chapter, we begin examining linear spaces with at least one extra structure (topology or geometry). We assume linearity; this is a natural

More information

Math 110 Linear Algebra Midterm 2 Review October 28, 2017

Math 110 Linear Algebra Midterm 2 Review October 28, 2017 Math 11 Linear Algebra Midterm Review October 8, 17 Material Material covered on the midterm includes: All lectures from Thursday, Sept. 1st to Tuesday, Oct. 4th Homeworks 9 to 17 Quizzes 5 to 9 Sections

More information

Math 4153 Exam 3 Review. The syllabus for Exam 3 is Chapter 6 (pages ), Chapter 7 through page 137, and Chapter 8 through page 182 in Axler.

Math 4153 Exam 3 Review. The syllabus for Exam 3 is Chapter 6 (pages ), Chapter 7 through page 137, and Chapter 8 through page 182 in Axler. Math 453 Exam 3 Review The syllabus for Exam 3 is Chapter 6 (pages -2), Chapter 7 through page 37, and Chapter 8 through page 82 in Axler.. You should be sure to know precise definition of the terms we

More information

Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1)

Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1) Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1) Travis Schedler Thurs, Nov 17, 2011 (version: Thurs, Nov 17, 1:00 PM) Goals (2) Polar decomposition

More information

The Lorentz group provides another interesting example. Moreover, the Lorentz group SO(3, 1) shows up in an interesting way in computer vision.

The Lorentz group provides another interesting example. Moreover, the Lorentz group SO(3, 1) shows up in an interesting way in computer vision. 190 CHAPTER 3. REVIEW OF GROUPS AND GROUP ACTIONS 3.3 The Lorentz Groups O(n, 1), SO(n, 1) and SO 0 (n, 1) The Lorentz group provides another interesting example. Moreover, the Lorentz group SO(3, 1) shows

More information

Topological vectorspaces

Topological vectorspaces (July 25, 2011) Topological vectorspaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ Natural non-fréchet spaces Topological vector spaces Quotients and linear maps More topological

More information

Linear Algebra in Actuarial Science: Slides to the lecture

Linear Algebra in Actuarial Science: Slides to the lecture Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations

More information

5 Quiver Representations

5 Quiver Representations 5 Quiver Representations 5. Problems Problem 5.. Field embeddings. Recall that k(y,..., y m ) denotes the field of rational functions of y,..., y m over a field k. Let f : k[x,..., x n ] k(y,..., y m )

More information

7 Planar systems of linear ODE

7 Planar systems of linear ODE 7 Planar systems of linear ODE Here I restrict my attention to a very special class of autonomous ODE: linear ODE with constant coefficients This is arguably the only class of ODE for which explicit solution

More information

SPECTRAL THEORY EVAN JENKINS

SPECTRAL THEORY EVAN JENKINS SPECTRAL THEORY EVAN JENKINS Abstract. These are notes from two lectures given in MATH 27200, Basic Functional Analysis, at the University of Chicago in March 2010. The proof of the spectral theorem for

More information

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT Abstract. These are the letcure notes prepared for the workshop on Functional Analysis and Operator Algebras to be held at NIT-Karnataka,

More information

Background Mathematics (2/2) 1. David Barber

Background Mathematics (2/2) 1. David Barber Background Mathematics (2/2) 1 David Barber University College London Modified by Samson Cheung (sccheung@ieee.org) 1 These slides accompany the book Bayesian Reasoning and Machine Learning. The book and

More information

Problem Set 6: Solutions Math 201A: Fall a n x n,

Problem Set 6: Solutions Math 201A: Fall a n x n, Problem Set 6: Solutions Math 201A: Fall 2016 Problem 1. Is (x n ) n=0 a Schauder basis of C([0, 1])? No. If f(x) = a n x n, n=0 where the series converges uniformly on [0, 1], then f has a power series

More information

INNER PRODUCT SPACE. Definition 1

INNER PRODUCT SPACE. Definition 1 INNER PRODUCT SPACE Definition 1 Suppose u, v and w are all vectors in vector space V and c is any scalar. An inner product space on the vectors space V is a function that associates with each pair of

More information

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS TSOGTGEREL GANTUMUR Abstract. After establishing discrete spectra for a large class of elliptic operators, we present some fundamental spectral properties

More information

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient

More information

An introduction to some aspects of functional analysis

An introduction to some aspects of functional analysis An introduction to some aspects of functional analysis Stephen Semmes Rice University Abstract These informal notes deal with some very basic objects in functional analysis, including norms and seminorms

More information

7. Symmetric Matrices and Quadratic Forms

7. Symmetric Matrices and Quadratic Forms Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

Math 3191 Applied Linear Algebra

Math 3191 Applied Linear Algebra Math 9 Applied Linear Algebra Lecture 9: Diagonalization Stephen Billups University of Colorado at Denver Math 9Applied Linear Algebra p./9 Section. Diagonalization The goal here is to develop a useful

More information

av 1 x 2 + 4y 2 + xy + 4z 2 = 16.

av 1 x 2 + 4y 2 + xy + 4z 2 = 16. 74 85 Eigenanalysis The subject of eigenanalysis seeks to find a coordinate system, in which the solution to an applied problem has a simple expression Therefore, eigenanalysis might be called the method

More information

COMMON COMPLEMENTS OF TWO SUBSPACES OF A HILBERT SPACE

COMMON COMPLEMENTS OF TWO SUBSPACES OF A HILBERT SPACE COMMON COMPLEMENTS OF TWO SUBSPACES OF A HILBERT SPACE MICHAEL LAUZON AND SERGEI TREIL Abstract. In this paper we find a necessary and sufficient condition for two closed subspaces, X and Y, of a Hilbert

More information

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem.

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Dot Products K. Behrend April 3, 008 Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Contents The dot product 3. Length of a vector........................

More information

1.2 Fundamental Theorems of Functional Analysis

1.2 Fundamental Theorems of Functional Analysis 1.2 Fundamental Theorems of Functional Analysis 15 Indeed, h = ω ψ ωdx is continuous compactly supported with R hdx = 0 R and thus it has a unique compactly supported primitive. Hence fφ dx = f(ω ψ ωdy)dx

More information