CHILE LECTURES: MULTIPLICATIVE ERGODIC THEOREM, FRIENDS AND APPLICATIONS

1. Motivation

Context: $(\Omega, \mathbb{P})$ a probability space; $\sigma\colon \Omega \to \Omega$ is a measure-preserving transformation: $\mathbb{P}(\sigma^{-1}A) = \mathbb{P}(A)$ for all measurable sets $A$. Generally, $\mathbb{P}$ will be ergodic as well: $\sigma^{-1}A = A$ implies $\mathbb{P}(A)$ is 0 or 1. This is an irreducibility condition, and it is sufficient for the following form of Birkhoff's theorem (ergodic theory's strong law of large numbers):

Theorem 1 (Birkhoff). Let $\sigma$ be an ergodic measure-preserving transformation of $(\Omega, \mathbb{P})$. Let $f \in L^1(\Omega)$ (in fact $f^+ \in L^1$ suffices). Then for $\mathbb{P}$-a.e. $\omega$,
$$\frac{1}{N}\sum_{n=0}^{N-1} f(\sigma^n \omega) \to \int f(x)\,d\mathbb{P}(x) \quad\text{as } N \to \infty.$$

The Kingman sub-additive ergodic theorem is an important generalization. A sequence of functions $(f_n)$ is sub-additive if $f_1^+ \in L^1(\mathbb{P})$ and $f_{n+m}(\omega) \le f_n(\omega) + f_m(\sigma^n \omega)$ for each $\omega$.

Theorem 2 (Kingman sub-additive ergodic theorem (1968)). Let $\sigma$ be an ergodic measure-preserving transformation of $(\Omega, \mathbb{P})$ and let $(f_n)$ be a sub-additive sequence of functions satisfying $\int f_1^+\,d\mathbb{P} < \infty$. Then for $\mathbb{P}$-a.e. $\omega$,
$$\frac{f_n(\omega)}{n} \to \lim_k \frac{1}{k}\int f_k(x)\,d\mathbb{P}(x) \quad\text{as } n \to \infty.$$

Example 3 (First Passage Site Percolation). Let $\Omega$ be a probability space, and consider a family of measure-preserving maps $(\sigma_n)_{n \in \mathbb{Z}^d}$ from $\Omega$ to $\Omega$ such that $\sigma_{n+m} = \sigma_n \circ \sigma_m$. Let $f\colon \Omega \to (0, \infty)$ be a weight function and define
$$T_{a,b}(\omega) = \inf_{a = v_0 \sim v_1 \sim \cdots \sim v_n = b} \sum_{i=0}^{n-1} f(\sigma_{v_i}\omega),$$
the time of the shortest path from $a$ to $b$.
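To make the definition concrete, here is a minimal numerical sketch in Python (everything in it is illustrative: the i.i.d. exponential weights, the finite window, and the helper `first_passage_time` are my assumptions, not part of the lectures). Dijkstra's algorithm computes $T_{a,b}$ exactly on the window, and the printed ratios $T_{0,kv}/k$ illustrate the convergence to $1/s_v$ derived just below; note that restricting paths to a finite window can only over-estimate $T_{a,b}$.

```python
import heapq
import numpy as np

def first_passage_time(weights, a, b):
    # Dijkstra with edge cost w(u -> v) = weights[u], so a path
    # a = v_0 ~ ... ~ v_n = b costs sum_{i=0}^{n-1} f(sigma_{v_i} omega),
    # matching the definition of T_{a,b} above.
    n, m = weights.shape
    dist = np.full((n, m), np.inf)
    dist[a] = 0.0
    heap = [(0.0, a)]
    while heap:
        d, (x, y) = heapq.heappop(heap)
        if (x, y) == b:
            return d
        if d > dist[x, y]:
            continue
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u, v = x + dx, y + dy
            if 0 <= u < n and 0 <= v < m and d + weights[x, y] < dist[u, v]:
                dist[u, v] = d + weights[x, y]
                heapq.heappush(heap, (dist[u, v], (u, v)))
    return dist[b]

rng = np.random.default_rng(0)
w = rng.exponential(size=(200, 200))        # i.i.d. site weights f > 0
for k in (20, 50, 100, 150):
    print(k, first_passage_time(w, (0, 0), (k, 0)) / k)   # T_{0,kv}/k, v = (1,0)
```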

Now certainly $T_{a,c}(\omega) \le T_{a,b}(\omega) + T_{b,c}(\omega)$. We also have $T_{a,b}(\omega) = T_{0,b-a}(\sigma_a \omega)$. In particular, for a fixed $v \in \mathbb{Z}^d$,
$$T_{0,(n+m)v}(\omega) \le T_{0,nv}(\omega) + T_{0,mv}(\sigma_{nv}\omega) = T_{0,nv}(\omega) + T_{0,mv}(\sigma_v^n \omega).$$
Hence $t_n(\omega) = T_{0,nv}(\omega)$ is a sub-additive sequence of functions (with respect to the transformation $\sigma_v$). Hence $\frac{1}{n} T_{0,nv}(\omega)$ converges to a constant, which we will call $1/s_v$, for all $v \in \mathbb{Z}^d$ and almost all $\omega \in \Omega$. $s_v$ is the rate at which the water spreads in direction $v$. You can also make sense of $s_v$ for non-integer $v$'s by looking at the time to get to the nearest lattice point to $nv$. A calculation shows $s_{tv} = (1/t)s_v$, so it is natural to look at $s_v$ for $v$ with $\|v\| = 1$. The set $S = \{xv : \|v\| = 1,\ 0 \le x \le s_v\}$ is called the shape of the percolation process.

Corollary 4 (Shape Theorem). For a.e. $\omega$, $\mathrm{Occ}_t(\omega)/t \to S$, where $\mathrm{Occ}_t(\omega)$ is the set of sites that are occupied by time $t$.

Example 5 (Matrix cocycles). Let $\sigma$, $\mathbb{P}$ and $\Omega$ be as above. Let $A\colon \Omega \to M_d(\mathbb{R})$ and write $A_\omega$ for $A(\omega)$. Then a matrix cocycle is given by $A\colon \mathbb{N} \times \Omega \to M_d(\mathbb{R})$,
$$A^{(n)}_\omega = A_{\sigma^{n-1}\omega} \cdots A_{\sigma\omega} A_\omega.$$
These matrices satisfy the cocycle relation: $A^{(n+m)}_\omega = A^{(m)}_{\sigma^n\omega} A^{(n)}_\omega$. Equipping the matrices with the norm $\|A\| = \max_{\|x\|=1} \|Ax\|$, it's immediate that $\|AB\| \le \|A\|\,\|B\|$, so the functions $\omega \mapsto \|A^{(n)}_\omega\|$ as $n$ runs over $\mathbb{N}$ form a sub-multiplicative sequence.
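A quick numpy sanity check of the cocycle relation and of sub-multiplicativity (a sketch in which i.i.d. random matrices stand in for $A_{\sigma^k\omega}$; the helper `cocycle` is hypothetical bookkeeping, not notation from the lectures):

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 3, 12
mats = rng.normal(size=(N, d, d))           # mats[k] stands in for A_{sigma^k omega}

def cocycle(n, k=0):
    # A^{(n)} at the point sigma^k omega: compose in time order, newest on the left.
    P = np.eye(d)
    for j in range(k, k + n):
        P = mats[j] @ P
    return P

n, m = 5, 4
lhs = cocycle(n + m)                        # A^{(n+m)}_omega
rhs = cocycle(m, k=n) @ cocycle(n)          # A^{(m)}_{sigma^n omega} A^{(n)}_omega
print(np.allclose(lhs, rhs))                # True: the cocycle relation

# Sub-multiplicativity: ||A^{(n+m)}_omega|| <= ||A^{(m)}_{sigma^n omega}|| ||A^{(n)}_omega||
ok = all(
    np.linalg.norm(cocycle(n + m), 2)
    <= np.linalg.norm(cocycle(m, k=n), 2) * np.linalg.norm(cocycle(n), 2) * (1 + 1e-12)
    for n in range(1, N) for m in range(1, N - n + 1)
)
print(ok)
```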

In particular, $f_n(\omega) = \log\|A^{(n)}_\omega\|$ is a sub-additive sequence of functions.

Corollary 6 (Furstenberg-Kesten (1960)). Let $\sigma$ be an ergodic measure-preserving transformation of $(\Omega, \mathbb{P})$. Let $A\colon \Omega \to M_d(\mathbb{R})$ satisfy $\int \log^+\|A_\omega\|\,d\mathbb{P}(\omega) < \infty$. Then there is an $\alpha$ such that for $\mathbb{P}$-a.e. $\omega$, $\frac{1}{n}\log\|A^{(n)}_\omega\| \to \alpha$.

For a constant matrix cocycle, $A_\omega = M$ for each $\omega$, we have $\frac{1}{n}\log\|M^n\| \to \log\rho(M)$, the log of the spectral radius (equal to the logarithm of the absolute value of the largest eigenvalue).

The statement $\frac{1}{n}\log\|A^{(n)}_\omega\| \to \alpha$ is equivalent to: for each $\epsilon > 0$, for all sufficiently large $n$,
$$e^{n(\alpha-\epsilon)} \le \|A^{(n)}_\omega\| \le e^{n(\alpha+\epsilon)}.$$
We will informally write $\|A^{(n)}_\omega\| \approx e^{n\alpha}$, so that $\alpha$ is the exponential growth rate of the matrix cocycle.

For a single matrix $M$, and for any $v$, $\|M^n v\|$ grows roughly like $a^n\,\mathrm{poly}(n)$, where $a$ is the absolute value of some eigenvalue. One might hope for a refinement of Furstenberg-Kesten giving analogues of the other eigenvalues.

Question. What can be said about the matrix $A^{(n)}_\omega$ for typical $\omega$?

It is helpful to think of the matrix cocycle as a vector bundle map from $\Omega \times \mathbb{R}^d$ to itself, $\sigma(\omega, v) = (\sigma(\omega), A_\omega v)$.

[Diagram: fibres $\mathbb{R}^d$ over $\omega$, $\sigma\omega$, $\sigma^2\omega$, with $A_\omega$ mapping each fibre to the next.]

Since different matrices are being applied at each $\omega$, there's no reason to hope for the same vector space decomposition over each point.
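Corollary 6 and the constant-cocycle example are easy to explore numerically. The sketch below is illustrative only (i.i.d. Gaussian matrices are an assumption, and `growth_rate` is a hypothetical helper); the renormalization at each step keeps the product from overflowing while tracking $\log\|A^{(n)}\|$ exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_steps = 3, 2000
M = rng.normal(size=(d, d))

def growth_rate(next_matrix, n):
    # Accumulates (1/n) log ||A^{(n)}||.  If the product is rescaled to
    # norm 1 at every step, log ||A^{(n)}|| is the sum of the step logs.
    P, total = np.eye(d), 0.0
    for _ in range(n):
        P = next_matrix() @ P
        s = np.linalg.norm(P, 2)
        P, total = P / s, total + np.log(s)
    return total / n

# Constant cocycle: the limit is log rho(M), the log spectral radius.
print(growth_rate(lambda: M, n_steps),
      np.log(np.abs(np.linalg.eigvals(M)).max()))   # close for large n

# I.i.d. random cocycle: a Monte Carlo estimate of alpha from Corollary 6.
print(growth_rate(lambda: rng.normal(size=(d, d)), n_steps))
```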

[Figure 1: Oseledets subspaces over $\omega$, $\sigma\omega$, $\sigma^2\omega$, transported by $A_\omega$.]

Theorem 7 (Oseledets Multiplicative Ergodic Theorem (invertible) (1965)). Let $\sigma$ be an invertible ergodic measure-preserving transformation of $(\Omega, \mathbb{P})$. Let $A\colon \Omega \to GL_d(\mathbb{R})$ be a cocycle of invertible matrices with $\int \log^+\|A^{\pm 1}_\omega\|\,d\mathbb{P}(\omega) < \infty$. Then there exist $\infty > \lambda_1 > \lambda_2 > \cdots > \lambda_k > -\infty$ and subspaces $V_1(\omega), \ldots, V_k(\omega)$ satisfying

(1) Decomposition: $\mathbb{R}^d = V_1(\omega) \oplus \cdots \oplus V_k(\omega)$;
(2) Equivariance: $A_\omega(V_i(\omega)) = V_i(\sigma(\omega))$;
(3) Growth: If $v \in V_i(\omega) \setminus \{0\}$, then $\frac{1}{n}\log\|A^{(n)}_\omega v\| \to \lambda_i$ as $n \to \pm\infty$.

This may be thought of as something like a dynamical Jordan normal form. The $V_i(\omega)$ are the Oseledets subspaces and the $\lambda_i$ are the Lyapunov exponents. There is also a non-invertible version.

Theorem 8 (Oseledets Multiplicative Ergodic Theorem (non-invertible)). Let $\sigma$ be an ergodic measure-preserving transformation of $(\Omega, \mathbb{P})$. Let $A\colon \Omega \to M_d(\mathbb{R})$ be a cocycle of matrices with $\int \log^+\|A_\omega\|\,d\mathbb{P}(\omega) < \infty$. Then there exist $\infty > \lambda_1 > \lambda_2 > \cdots > \lambda_k \ge -\infty$ and subspaces $U_1(\omega), \ldots, U_k(\omega)$ satisfying:

(1) Filtration: $\mathbb{R}^d = U_1(\omega) \supset U_2(\omega) \supset \cdots \supset U_k(\omega)$;
(2) Equivariance: $A_\omega(U_i(\omega)) \subseteq U_i(\sigma(\omega))$;
(3) Growth: If $v \in U_i(\omega) \setminus U_{i+1}(\omega)$, then $\frac{1}{n}\log\|A^{(n)}_\omega v\| \to \lambda_i$.

[Figure 2.] A decreasing collection of subspaces $U_1 \supset U_2 \supset \cdots \supset U_k$ is called a filtration or a flag. A physical flag has a distinguished 1-dimensional subspace (the direction of the pole) that is a subspace of a distinguished 2-dimensional subspace (the plane of the flag) sitting in $\mathbb{R}^3$.

I sometimes call the $U_i(\omega)$ slow spaces: they consist of the vectors whose exponential growth rate is at most $\lambda_i$:
$$U_i(\omega) = \{v : \limsup_n \tfrac{1}{n}\log\|A^{(n)}_\omega v\| \le \lambda_i\}.$$
Notice: just like for a single matrix, the vectors growing at rate $\lambda$ or less form a linear space; the vectors growing at rate $\lambda$ or more do not form a linear space. I may also write $V_{\le\lambda_i}(\omega)$ for the slow spaces. Fast spaces also play a role: $V_{\ge\lambda_i}(\omega)$. Importantly,
$$V_{\ge\lambda_i}(\omega) \ne \{v : \limsup_n \tfrac{1}{n}\log\|A^{(n)}_\omega v\| \ge \lambda_i\}.$$
Rather, a fast space can be defined to be the largest equivariant measurable family of subspaces such that each vector in it expands at a rate at least $\lambda_i$.

1.1. Oseledets theorem motivation. Suppose that $\Omega$ is a manifold, $\sigma$ is an invertible smooth map from $\Omega$ to itself and $\mathbb{P}$ is a (nice) ergodic $\sigma$-invariant measure. Then the multiplicative ergodic theorem gives, for $\mathbb{P}$-a.e. $\omega$, subspaces of the tangent space, $V_1(\omega), \ldots, V_k(\omega)$, each with a different exponent $\lambda_1, \ldots, \lambda_k$. If $\lambda_1, \ldots, \lambda_j$ are positive, while $\lambda_{j+1}, \ldots, \lambda_k$ are negative, then the directions $E^u(\omega) = V_1(\omega) \oplus \cdots \oplus V_j(\omega)$ are unstable directions at $\omega$: perturbations in these directions tend to grow; while $E^s(\omega) = V_{j+1}(\omega) \oplus \cdots \oplus V_k(\omega)$ are stable directions. They are also equivariant: $D\sigma(E^u(\omega)) = E^u(\sigma(\omega))$ and $D\sigma(E^s(\omega)) = E^s(\sigma(\omega))$. If you can integrate these subspaces (patch them together), you obtain stable and unstable manifolds at $\omega$ for $\mathbb{P}$-a.e. $\omega$.

1.2. Non-invertible to invertible I. Notice that the hypotheses for the non-invertible theorem are satisfied in the invertible case. You can deduce the invertible theorem from the non-invertible theorem as follows. Let $\sigma$ be an ergodic invertible measure-preserving transformation; let $(A_\omega)$ be a cocycle of invertible matrices (with $\int \log^+\|A_\omega\|\,d\mathbb{P} < \infty$ and $\int \log^+\|A^{-1}_\omega\|\,d\mathbb{P} < \infty$).

(1) Form the spaces $V_{\le\lambda_i}(\omega)$ from the non-invertible version of the theorem.
(2) Form a new cocycle with base dynamical system $\sigma^{-1}$ and generator $B_\omega = A^{-1}_{\sigma^{-1}\omega}$. Notice that $B^{(n)}_\omega = (A^{(n)}_{\sigma^{-n}\omega})^{-1}$. Hence the exponents of the $B$ cocycle are $-\lambda_k > \cdots > -\lambda_1$. Form spaces $W_{\ge\lambda_i}(\omega)$ consisting of vectors which, when iterated backwards, have expansion rate at most $-\lambda_i$. These same vectors (using equivariance here), when iterated forwards, have expansion rate at least $\lambda_i$, so that $W_{\ge\lambda_i}(\omega)$ is the fast space $V_{\ge\lambda_i}(\omega)$.
(3) Finally $V_i(\omega) = V_{\le\lambda_i}(\omega) \cap W_{\ge\lambda_i}(\omega)$.

Corollary 9. The slow spaces are determined by the future: $V_{\le\lambda_i}(\omega)$ is determined by $(A_{\sigma^n\omega})_{n \ge 0}$; the fast spaces are determined by the past: $V_{\ge\lambda_i}(\omega)$ is determined by $(A_{\sigma^n\omega})_{n < 0}$.

I'll mention two other ways to obtain the invertible from the non-invertible case later.

2. Tricks of the trade

2.1. Singular value decomposition (SVD). Given a square (or not square) matrix $A$, there are essentially unique orthonormal bases $v_1, \ldots, v_d$ and $w_1, \ldots, w_d$ and non-negative $\mu_i$ such that $Av_i = \mu_i w_i$. I like to write the $\mu_i$ in decreasing order. These are the singular values of $A$. The non-uniqueness is in the choice of signs in the $v_i$, or in the choice of orthonormal basis if a singular value is repeated. The $v_i$ are the singular vectors of $A$. As an alternative formulation of SVD, $A$ may be expressed as $O_1 D O_2$ where $O_1$ and $O_2$ are orthogonal matrices and $D$ is diagonal. Here the rows of $O_2$ are the $v_i$ and the columns of $O_1$ are the $w_i$. Geometrically, a linear map $A$ maps a sphere to an ellipsoid. The $w_i$ are the semi-principal axes and the $v_i$ are their pre-images.
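numpy's `svd` returns precisely the $O_1 D O_2$ formulation, so the statements above can be checked directly (a throwaway verification script, not part of the lecture):

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.normal(size=(4, 4))
O1, mu, O2 = np.linalg.svd(A)                       # A = O1 @ diag(mu) @ O2, mu decreasing

print(np.allclose(A, O1 @ np.diag(mu) @ O2))        # the O1 D O2 formulation
v, w = O2.T, O1                                     # columns v_i (rows of O2), columns w_i
print(all(np.allclose(A @ v[:, i], mu[i] * w[:, i]) for i in range(4)))  # A v_i = mu_i w_i
print(np.isclose(np.linalg.norm(A, 2), mu[0]))      # ||A|| = mu_1, attained at v_1
```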

The proof of the singular value decomposition is straightforward, using diagonalization of $A^T A$. The eigenvectors are the $v_i$; the eigenvalues are the $\mu_i^2$. One can see from the $O_1 D O_2$ formulation that $|\det A| = \prod_i \mu_i$. Also $A^T$ has the same singular values as $A$, while the singular values of $A^{-1}$ are the reciprocals of the singular values of $A$. A simple calculation shows that $\|A\| = \mu_1$; the vector achieving the maximum in the norm is just $v_1$. A generalization of this is the following:

Proposition 10. Let $A$ be a $d \times d$ matrix. Let the singular values be $\mu_1 \ge \mu_2 \ge \cdots \ge \mu_d$ and the singular vectors be $v_1, \ldots, v_d$. Then

(1) $\mu_k = \max_{\dim V = k} \min_{v \in V \cap S} \|Av\|$ (where $S$ denotes the unit sphere); the maximum is attained by $V = \mathrm{lin}\{v_1, \ldots, v_k\}$.
(2) $\mu_k = \min_{\dim V = d-k+1} \max_{v \in V \cap S} \|Av\|$; the minimum is attained by $V = \mathrm{lin}\{v_k, \ldots, v_d\}$.
(3) $\mu_1 \cdots \mu_k = \max_{\dim V = k} \det(A|_V)$, where $\det(A|_V)$ is the $k$-dimensional volume growth of $A$ restricted to $V$.

The proof (of the first two parts) is fairly straightforward. Since orthogonal matrices are isometries, it suffices to consider the case where $A$ is diagonal. The last part can be proved using the Cauchy-Binet theorem (more on this below).

2.2. Exterior algebra (user's guide; apologies to Bourbaki). If $V$ is a vector space, define an abstract vector space $\bigwedge^k V$ spanned by elements $v_1 \wedge \cdots \wedge v_k$ satisfying the relations:

(Multi-linearity) $v_1 \wedge \cdots \wedge (a v_i) \wedge \cdots \wedge v_k = a\, v_1 \wedge \cdots \wedge v_i \wedge \cdots \wedge v_k$;
(Antisymmetry) $v_1 \wedge \cdots \wedge v_i \wedge \cdots \wedge v_j \wedge \cdots \wedge v_k = -\, v_1 \wedge \cdots \wedge v_j \wedge \cdots \wedge v_i \wedge \cdots \wedge v_k$.

If $\{v_1, \ldots, v_d\}$ forms a basis for $V$, then $\{v_{i_1} \wedge \cdots \wedge v_{i_k} : i_1 < \cdots < i_k\}$ forms a basis for $\bigwedge^k V$. In particular, $\bigwedge^k V$ is $\binom{d}{k}$-dimensional. If $V$ has an inner product, it turns out that $\bigwedge^k V$ inherits a natural inner product structure. A crude way to do this is to take an orthonormal basis $e_1, \ldots, e_d$ for $V$ and declare $\{e_{i_1} \wedge \cdots \wedge e_{i_k}\}$ to be an orthonormal basis for $\bigwedge^k V$. This turns out to define an inner product that does not depend on the choice of orthonormal basis (the proof is due to a clever determinant equality, the Cauchy-Binet theorem). If $A$ is a linear map on $V$, then there is a natural action on $\bigwedge^k V$:
$$A^{\wedge k}(v_1 \wedge \cdots \wedge v_k) = (Av_1) \wedge \cdots \wedge (Av_k)$$
(you can check that the relations above are preserved). Singular value decomposition plays extremely nicely with exterior algebras.
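As a preview of how nicely they interact (made precise in the next paragraph), one can represent $\bigwedge^2 A$ concretely as the matrix of $2 \times 2$ minors and check that its singular values are the pairwise products $\mu_i \mu_j$. The helper `wedge` below is hypothetical scaffolding, not notation from the lectures:

```python
import numpy as np
from itertools import combinations

def wedge(A, k):
    # Matrix of A^{wedge k} in the basis {e_{i_1} ^ ... ^ e_{i_k}}:
    # its entries are the k x k minors of A (Cauchy-Binet in coordinates).
    idx = list(combinations(range(A.shape[0]), k))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in idx] for I in idx])

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
mu = np.linalg.svd(A, compute_uv=False)
sv = np.linalg.svd(wedge(A, 2), compute_uv=False)
pairs = sorted((mu[i] * mu[j] for i, j in combinations(range(4), 2)), reverse=True)
print(np.allclose(sv, pairs))                       # singular values multiply in pairs
print(np.isclose(np.linalg.norm(wedge(A, 2), 2), mu[0] * mu[1]))  # top one is mu_1 mu_2
```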

This is the basis of the so-called Raghunathan trick, which we will use below in our proof of the (non-invertible) multiplicative ergodic theorem. Suppose that $v_1, \ldots, v_d$ are the singular vectors for $A$ with singular values $\mu_1, \ldots, \mu_d$. Also, let $Av_i = \mu_i w_i$, so that the $w_i$ are orthonormal. Then $A^{\wedge k}$ maps $v_{i_1} \wedge \cdots \wedge v_{i_k}$ to $\mu_{i_1} \cdots \mu_{i_k}\, w_{i_1} \wedge \cdots \wedge w_{i_k}$. The singular vectors of $\bigwedge^k A$ are the $k$-fold wedge products of the singular vectors of $A$; the singular values of $\bigwedge^k A$ are the $k$-fold products of the singular values of $A$. The top singular value of $\bigwedge^k A$ is $\mu_1 \cdots \mu_k$: we can interpret $\|\bigwedge^k A\|$ as the rate of growth of $k$-dimensional volume (see Proposition 10 above). We sometimes think of $v_1 \wedge \cdots \wedge v_k$ as representing a normal (with magnitude) to the subspace spanned by $v_1, \ldots, v_k$ (but what does $v_1 \wedge \cdots \wedge v_k + w_1 \wedge \cdots \wedge w_k$ represent?).

2.3. Grassmannians. The Grassmannian, $G_k(V)$, of a vector space $V$ is the collection of all $k$-dimensional subspaces of $V$. It has a lot of structure! Compact manifold, homogeneous space, scheme, ... Unlike exterior algebras, there's no additive structure though. Mostly we'll take $V = \mathbb{R}^d$ and think of $G_k(V)$ as a compact (and hence complete) metric space. The distance between two $k$-dimensional subspaces $W$ and $W'$ is the Hausdorff distance $d_H(W \cap S, W' \cap S)$, where $S$ is the unit sphere. Recall that
$$d_H(W \cap S, W' \cap S) = \max\Big(\max_{v \in W \cap S} d(v, W' \cap S),\ \max_{v' \in W' \cap S} d(v', W \cap S)\Big).$$
It turns out that there is a symmetry of closeness for subspaces of the same dimension: each of the terms in the maximum is bounded by a constant multiple of the other, where the constant depends only on $k$ (even in Banach spaces). In particular, to show that two $k$-dimensional subspaces are close, it suffices to show that either of the two quantities in the maximum is small. The manifold $G_k(\mathbb{R}^d)$ turns out to be $k(d-k)$-dimensional. I made use of a system of coordinates in a recent paper.
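One convenient way to compute with this metric is via principal angles (a sketch; for equal-dimensional subspaces the sine of the largest principal angle equals $\|P_W - P_{W'}\|$ and is comparable, up to constants, to the Hausdorff distance above):

```python
import numpy as np

def grass_dist(B1, B2):
    # sin of the largest principal angle between the column spaces of B1, B2.
    Q1, _ = np.linalg.qr(B1)
    Q2, _ = np.linalg.qr(B2)
    cos = np.linalg.svd(Q1.T @ Q2, compute_uv=False)   # cosines of principal angles
    return np.sqrt(max(0.0, 1.0 - cos.min() ** 2))

rng = np.random.default_rng(4)
B = rng.normal(size=(5, 2))
print(grass_dist(B, B @ rng.normal(size=(2, 2))))    # ~0: same subspace, new basis
print(grass_dist(B, rng.normal(size=(5, 2))))        # generic positive distance
```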

2.4. Inverse cocycle. We mentioned this one above. Here we require invertibility both of $\sigma$ and of the matrices defining the cocycle: $A_\omega \in GL(d, \mathbb{R})$. The inverse cocycle is a cocycle over $\sigma^{-1}$ generated by $B_\omega = A^{-1}_{\sigma^{-1}\omega}$. As mentioned above, $B^{(n)}_\omega = (A^{(n)}_{\sigma^{-n}\omega})^{-1}$. A little work with the sub-additive ergodic theorem shows that the exponents of the inverse cocycle are $-\mu_d \ge \cdots \ge -\mu_1$.

2.5. Dual cocycle. This cocycle requires invertibility of $\sigma$, but not invertibility of the matrices $A_\omega$. Again it's a cocycle over $\sigma^{-1}$. The generator is $C_\omega = A^*_{\sigma^{-1}\omega}$. We then calculate
$$C^{(n)}_\omega = C_{\sigma^{-(n-1)}\omega} \cdots C_\omega = A^*_{\sigma^{-n}\omega} \cdots A^*_{\sigma^{-1}\omega} = (A_{\sigma^{-1}\omega} \cdots A_{\sigma^{-n}\omega})^* = (A^{(n)}_{\sigma^{-n}\omega})^*.$$
Since the singular values of $A^{(n)}_{\sigma^{-n}\omega}$ are approximately $e^{n\mu_1}, \ldots, e^{n\mu_d}$, and taking adjoints preserves singular values, the singular values of the dual are the same, and hence the dual cocycle has the same Lyapunov exponents as the primal cocycle.

2.6. Temperedness. If $\sigma$ is a measure-preserving transformation and $f \in L^1$, then for $\mathbb{P}$-a.e. $\omega$, $f(\sigma^n\omega)/n \to 0$. The cheapest proof uses the Birkhoff ergodic theorem, but there is a more elementary proof using the first Borel-Cantelli lemma.
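A finite-window sanity check of the dual-cocycle computation in 2.5, with `A[k]` standing in for $A_{\sigma^{-k}\omega}$ (hypothetical scaffolding):

```python
import numpy as np

rng = np.random.default_rng(5)
d, n = 3, 6
A = [rng.normal(size=(d, d)) for _ in range(n + 1)]   # A[k] plays A_{sigma^{-k} omega}

primal = np.eye(d)
for k in range(1, n + 1):                 # A^{(n)}_{sigma^{-n} omega}
    primal = primal @ A[k]                #   = A[1] @ A[2] @ ... @ A[n]

dual = np.eye(d)
for k in range(1, n + 1):                 # C^{(n)}_omega with C_omega = A[1].T:
    dual = A[k].T @ dual                  #   = A[n].T @ ... @ A[1].T

print(np.allclose(dual, primal.T))        # C^{(n)}_omega = (A^{(n)}_{sigma^{-n} omega})^*
print(np.allclose(np.linalg.svd(dual, compute_uv=False),
                  np.linalg.svd(primal, compute_uv=False)))   # same singular values
```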

3. Multiplicative Ergodic Theorem: proof sketch

We'll give a rough sketch of the proof of the multiplicative ergodic theorem (the non-invertible version). The framework for the proof is from Raghunathan's proof of the MET (based on singular values). We'll deviate from it slightly, giving an inductive proof that we have modified to work in Banach spaces. We've already indicated how to deduce the invertible version from the non-invertible version.

Let $\sigma$ be an ergodic measure-preserving transformation of a probability space $(\Omega, \mathbb{P})$. Let $A\colon \Omega \to M_{d \times d}(\mathbb{R})$ generate a matrix cocycle and assume that $A$ satisfies $\int \log^+\|A_\omega\|\,d\mathbb{P}(\omega) < \infty$.

3.1. Finding the exponents. For each $k$, define a new cocycle $A^{\wedge k}_\omega$. This satisfies $\int \log^+\|A^{\wedge k}_\omega\|\,d\mathbb{P}(\omega) \le k \int \log^+\|A_\omega\|\,d\mathbb{P}(\omega) < \infty$. Now, $f^k_n(\omega) = \log\|(A^{\wedge k})^{(n)}_\omega\|$ is a sub-additive sequence by sub-multiplicativity of norms, so that $\frac{f^k_n(\omega)}{n}$ converges $\mathbb{P}$-a.e. to $\lim_n \frac{1}{n}\int f^k_n\,d\mathbb{P}$ by the sub-additive ergodic theorem (or the Furstenberg-Kesten theorem).

We let this limit be $\mu_1 + \cdots + \mu_k$; that is, we define
$$\mu_k = \lim_n \frac{1}{n}\int \big(f^k_n(\omega) - f^{k-1}_n(\omega)\big)\,d\mathbb{P}(\omega),$$
where $f^0_n \equiv 0$. From our comments before, $\mu_1 + \cdots + \mu_k$ measures the long-term $k$-dimensional volume growth rate of $A^{(n)}_\omega$. These $\mu_k$'s will be the Lyapunov exponents. Recalling that $\|(A^{\wedge k})^{(n)}_\omega\| = \prod_{j=1}^k s_j(A^{(n)}_\omega)$, we see
$$e^{\mu_k} = \lim_n s_k(A^{(n)}_\omega)^{1/n}.$$
In particular, the $\mu_k$ are non-increasing as required. Notice also that
$$\mu_1 + \cdots + \mu_d = \lim_n \frac{1}{n}\log\big(s_1(A^{(n)}_\omega) \cdots s_d(A^{(n)}_\omega)\big) = \lim_n \frac{1}{n}\log|\det A^{(n)}_\omega| = \lim_n \frac{1}{n}\sum_{i=0}^{n-1}\log|\det A_{\sigma^i\omega}| = \int \log|\det A_\omega|\,d\mathbb{P}(\omega).$$
In particular, if $A_\omega \in SL(d, \mathbb{R})$ for each $\omega$, then $\mu_1 + \cdots + \mu_d = 0$. We let $\lambda_1 > \cdots > \lambda_l$ be the Lyapunov exponents without repetition, where $\lambda_j$ occurs $m_j$ times among the $\mu_i$'s; $m_j$ is called the multiplicity of $\lambda_j$.
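The characterization $e^{\mu_k} = \lim_n s_k(A^{(n)}_\omega)^{1/n}$ suggests a practical estimator: re-orthogonalize the product with QR at every step and average the logs of the diagonal of $R$. This is a standard numerical recipe, not part of the proof, and the i.i.d. Gaussian model is an assumption; the last line checks the identity $\mu_1 + \cdots + \mu_d = \int \log|\det A_\omega|\,d\mathbb{P}$ from above by Monte Carlo:

```python
import numpy as np

def lyapunov_spectrum(next_matrix, d, n=5000):
    # Estimates (mu_1, ..., mu_d) via e^{mu_k} = lim s_k(A^{(n)})^{1/n},
    # re-orthogonalizing with QR at each step for numerical stability.
    Q, acc = np.eye(d), np.zeros(d)
    for _ in range(n):
        Q, R = np.linalg.qr(next_matrix() @ Q)
        acc += np.log(np.abs(np.diag(R)))
    return np.sort(acc / n)[::-1]

rng = np.random.default_rng(6)
d = 3
mu = lyapunov_spectrum(lambda: rng.normal(size=(d, d)), d)
print(mu)

dets = [np.log(abs(np.linalg.det(rng.normal(size=(d, d))))) for _ in range(20000)]
print(mu.sum(), np.mean(dets))          # these two should be close
```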

3.2. Finding the slow spaces. Let $\eta < \frac{1}{6}\min_i(\lambda_i - \lambda_{i+1})$. By the sub-additive ergodic theorem, there exists for $\mathbb{P}$-a.e. $\omega$ an $n_0$ such that for each $n > n_0$ and each $i$,
$$e^{(\mu_i - \eta)n} < s_i(A^{(n)}_\omega) < e^{(\mu_i + \eta)n}.$$
Let $j \le l$ and let $U^{(n)}_j(\omega)$ be the $(m_j + m_{j+1} + \cdots + m_l)$-dimensional space spanned by the singular vectors of $A^{(n)}_\omega$ with the smallest singular values (these are all $e^{(\lambda_j + \eta)n}$ or smaller).

Claim 1. For $\mathbb{P}$-a.e. $\omega$, the $U^{(n)}_j(\omega)$ form a Cauchy sequence in $G_k(\mathbb{R}^d)$.

We define the limit of the $U^{(n)}_j(\omega)$ as $n \to \infty$ to be $U_j(\omega)$.

3.3. Finishing the proof.

Claim 2. $U_j(\omega)$ is equivariant.

Claim 3. For $\mathbb{P}$-a.e. $\omega$, if $v \notin U_j(\omega)$, then $\frac{1}{n}\log\|A^{(n)}_\omega v\| \ge \lambda_{j-1} - \eta$ for all large $n$.

Claim 4. For $\mathbb{P}$-a.e. $\omega$, if $v \in U_j(\omega)$, then $\frac{1}{n}\log\|A^{(n)}_\omega v\| \le \lambda_j + \eta$ for all large $n$.

3.4. Proof sketches.

Claim 1 proof sketch. We want to show that for each $j$ and for $\mathbb{P}$-a.e. $\omega$, $U^{(n)}_j(\omega)$ is a Cauchy sequence of subspaces of $\mathbb{R}^d$. The idea is quite simple: take $v \in U^{(n)}_j(\omega) \cap S$ and write $v = u + w$ where $u \in U^{(n+1)}_j(\omega)$ and $w \perp U^{(n+1)}_j(\omega)$. Notice that the orthogonal complement of $U^{(n+1)}_j(\omega)$ is the fast space for $A^{(n+1)}_\omega$, spanned by the top singular vectors, which all expand at exponential rate at least $\lambda_{j-1} - \eta$. We have $\|A^{(n+1)}_\omega v\| \le e^{(\lambda_j + \eta)n}\|A_{\sigma^n\omega}\|$; but also $\|A^{(n+1)}_\omega v\| \ge e^{(\lambda_{j-1} - \eta)(n+1)}\|w\|$. By temperedness, $\|A_{\sigma^n\omega}\| \le e^{\eta n}$ for large $n$, so that we end up with an estimate like $\|w\| \le e^{-(\lambda_{j-1} - \lambda_j - 3\eta)n}$. Thus any vector $v$ in $U^{(n)}_j(\omega) \cap S$ is exponentially close to a vector, $u$, in $U^{(n+1)}_j(\omega)$. The symmetry of closeness described above implies
$$d\big(U^{(n)}_j(\omega), U^{(n+1)}_j(\omega)\big) \lesssim e^{-(\lambda_{j-1} - \lambda_j - 3\eta)n}.$$
Since these distances are summable, the sequence of subspaces is Cauchy and
$$(1)\qquad d\big(U^{(n)}_j(\omega), U_j(\omega)\big) \lesssim e^{-(\lambda_{j-1} - \lambda_j - 3\eta)n}.$$

Claim 2 is proved similarly: we take an element $v$ of $U^{(n+1)}_j(\omega) \cap S$ and write $A_\omega v$ as $u + w$ with $u$ in $U^{(n)}_j(\sigma\omega)$ and $w \perp U^{(n)}_j(\sigma\omega)$. Since $\|A^{(n+1)}_\omega v\|$ isn't too large, one deduces that $w$ is exponentially small.

Claim 3 is straightforward. If $v \notin U_j(\omega)$, then it has some distance $\delta$ from $U_j(\omega)$. It then has distance at least $\delta/2$ from $U^{(n)}_j(\omega)$ for large $n$. Writing $v$ as $u + w$ with $u \in U^{(n)}_j(\omega)$ and $w \perp U^{(n)}_j(\omega)$, we have $\|w\| \ge \delta/2$, so that $\|A^{(n)}_\omega v\| \ge \|A^{(n)}_\omega w\| \ge \frac{\delta}{2} e^{(\lambda_{j-1} - \eta)n}$.

Claim 4 is tricky, but is straightforward for $j = 2$. If $v \in U_2(\omega) \cap S$, then $v$ can be written as $u + w$ with $u \in U^{(n)}_2(\omega)$ and $w \perp U^{(n)}_2(\omega)$ with $\|w\| \lesssim e^{-(\lambda_1 - \lambda_2 - 3\eta)n}$ by (1). Now applying $A^{(n)}_\omega$ to $u$ and $w$ separately, both images are of size roughly $e^{\lambda_2 n}$. For $j > 2$, Raghunathan splits $w$ into pieces with different growth rates and proves stronger estimates on the faster growing pieces. A technically simpler (but less pleasing) proof is obtained by working inductively down from the top, giving a priori estimates on the growth of the fast part.
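Claim 1 can be watched happening numerically: take the SVD of a (renormalized) product and track the span of the $k$ smallest right singular vectors. The distances between successive spaces decay exponentially, as in the sketch below (i.i.d. Gaussian matrices are an assumption, guaranteeing a.s. distinct exponents; `sin_angle` is the principal-angle distance from the Grassmannian snippet above):

```python
import numpy as np

def slow_space(P, k):
    # Orthonormal basis for the span of the k right singular vectors of P
    # with the smallest singular values: this is U_j^{(n)} when P = A^{(n)}.
    _, _, Vt = np.linalg.svd(P)
    return Vt[-k:].T

def sin_angle(Q1, Q2):
    c = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.sqrt(max(0.0, 1.0 - c.min() ** 2))

rng = np.random.default_rng(7)
d, k = 4, 2
P, prev = np.eye(d), None
for n in range(1, 41):
    P = rng.normal(size=(d, d)) @ P
    P /= np.linalg.norm(P, 2)         # scalar renormalization: subspaces unchanged
    U = slow_space(P, k)
    if prev is not None and n % 5 == 0:
        print(n, sin_angle(prev, U))  # successive distances shrink exponentially
    prev = U
```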

All of the above can be made to work in separable Banach spaces. First, a sub-multiplicative notion of $k$-dimensional volume growth is needed. From this, we can define the exponents as above. The spaces are obtained in essentially the same way. This gives:

Theorem 11 (Multiplicative ergodic theorem for Banach spaces; non-invertible version). Let $\sigma$ be an ergodic measure-preserving transformation of a probability space $(\Omega, \mathbb{P})$. Let $X$ be a Banach space with separable dual and let $(A_\omega)$ be a cocycle of quasi-compact operators on $X$. Then there exist $\lambda_1 > \lambda_2 > \cdots$, multiplicities $m_1, m_2, \ldots$ and $(m_1 + \cdots + m_i)$-codimensional spaces $U_{i+1}(\omega)$ that are equivariant and have the correct growth conditions.

We come back to the deduction of invertible-type METs from non-invertible METs. Taking a cue from Corollary 9, we insist on invertibility of the base transformation, and we see what we can get without making any invertibility assumptions on the matrices.

3.5. Non-invertible to invertible II: Duality. Assume that $X$ is reflexive: $X^{**} = X$ (this is certainly true in the finite-dimensional case). Then $X^*$ is separable and the dual cocycle $C_\omega$ described above satisfies the non-invertible MET with the same exponents. For matrices, a left eigenvector and a right eigenvector with different eigenvalues are orthogonal. Analogously, a dual Oseledets subspace annihilates a primal Oseledets subspace with a different exponent. One can then show that the annihilator of $U^*_{j+1}(\omega)$ is the fast space $V_{\ge\lambda_j}(\omega)$. From here, you intersect with $V_{\le\lambda_j}(\omega)$ to get $V_j(\omega)$ as previously.

3.6. Non-invertible to invertible III: Power method. To find the largest eigenvector of a matrix (assuming that the largest eigenvalue has multiplicity 1), you can just take (almost) any vector and iterate the matrix. There's an analogous method to build the $V_j(\omega)$. I'll describe it for the case of $\mathbb{R}^d$, but there is a Banach space version also:
$$V_j(\omega) = \lim_{n \to \infty} A^{(n)}_{\sigma^{-n}\omega}\big(U_j(\sigma^{-n}\omega) \cap U_{j+1}(\sigma^{-n}\omega)^\perp\big).$$
Start from the orthogonal complement of $V_{\le\lambda_{j+1}}(\sigma^{-n}\omega)$ inside $V_{\le\lambda_j}(\sigma^{-n}\omega)$. This has some component of the fast space inside it (you can make sense of this using exterior algebra). When it is iterated, the fast part dominates and the sequence converges to the true fast space.
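For the top space ($j = 1$, assuming $m_1 = 1$, which holds a.s. in the i.i.d. Gaussian model used here), the construction reduces to pushing a fixed vector forward from further and further in the past. A hedged sketch, with `past[k-1]` standing in for $A_{\sigma^{-k}\omega}$:

```python
import numpy as np

rng = np.random.default_rng(8)
d = 3
past = [rng.normal(size=(d, d)) for _ in range(60)]   # past[k-1] plays A_{sigma^{-k} omega}

def push_forward(u, n):
    # A^{(n)}_{sigma^{-n} omega} u = A_{sigma^{-1} omega} ... A_{sigma^{-n} omega} u,
    # normalized at each step so nothing overflows.
    for k in reversed(range(n)):
        u = past[k] @ u
        u /= np.linalg.norm(u)
    return u

prev = None
for n in (10, 20, 30, 40, 50, 60):
    v = push_forward(np.ones(d), n)
    if prev is not None:
        # directions are defined up to sign
        print(n, min(np.linalg.norm(v - prev), np.linalg.norm(v + prev)))
    prev = v
# The printed gaps shrink: the pushed-forward direction stabilizes to (an
# approximation of) V_1(omega), determined by the past, as in Corollary 9.
```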

The following version of the MET (that we call semi-invertible) follows from either of these approaches.

Theorem 12 (Multiplicative Ergodic Theorem (semi-invertible); Froyland, Lloyd, Q.). Let $\sigma$ be an invertible ergodic measure-preserving transformation of $(\Omega, \mathbb{P})$. Let $A\colon \Omega \to M_d(\mathbb{R})$ be a cocycle of not necessarily invertible matrices with $\int \log^+\|A_\omega\|\,d\mathbb{P}(\omega) < \infty$. Then there exist $\infty > \lambda_1 > \lambda_2 > \cdots > \lambda_k \ge -\infty$ and subspaces $V_1(\omega), \ldots, V_k(\omega)$ satisfying

(1) Decomposition: $\mathbb{R}^d = V_1(\omega) \oplus \cdots \oplus V_k(\omega)$;
(2) Equivariance: $A_\omega(V_i(\omega)) \subseteq V_i(\sigma(\omega))$;
(3) Growth: If $v \in V_i(\omega) \setminus \{0\}$, then $\frac{1}{n}\log\|A^{(n)}_\omega v\| \to \lambda_i$ as $n \to +\infty$.

There is also a version for quasi-compact operators on separable Banach spaces.

4. Last lecture

To be decided: either METs on Banach spaces and applications to oceans (lighter weight); or sensitivity of Lyapunov exponents.