Karhunen-Loève decomposition of Gaussian measures on Banach spaces


Karhunen-Loève decomposition of Gaussian measures on Banach spaces. Jean-Charles Croix (jean-charles.croix@emse.fr), Génie Mathématique et Industriel (GMI). First workshop on Gaussian processes at Saint-Etienne, November 7th, 2017.

Contents
1 Motivation
2 Karhunen-Loève decomposition in Hilbert spaces
  - Gaussian vectors in R^n
  - Gaussian elements in Hilbert spaces
3 Karhunen-Loève decomposition in Banach spaces
4 Particular case of (R^n, ‖·‖_∞)
5 Particular case of C(K), K metric and compact
  - Continuous Gaussian processes
  - Illustration on Brownian motion
  - Summability of (λ_k)
6 Conclusion and open questions
7 References

Motivation

Motivation

The problem of representing Gaussian elements as linear series arises in:
1 Simulation (e.g. truncated Karhunen-Loève series),
2 Approximation and dimension reduction (e.g. PCA or POD),
3 Optimal quantization,
4 Bayesian inverse problems.
While the existence of an optimal basis is well known in Hilbert spaces, it is not always explicit (an eigenvalue problem must be solved), and it is not available in Banach spaces. What if we are interested in non-Hilbertian norms?

Motivation

In the case of a continuous Gaussian process on [0, 1], we may inject it into L^2([0, 1], dx) and use Hilbertian geometry...

Figure: Karhunen-Loève basis of Brownian motion in L^2([0, 1], dx).

Motivation

...or tackle the problem directly in C([0, 1])...

Figure: Brownian motion basis functions in C([0, 1]) (Paul Lévy's construction).

...in the hope of better approximation w.r.t. the supremum norm!

Karhunen-Loève decomposition in Hilbert spaces

Gaussian vectors in Euclidean spaces

Consider the Euclidean space R^n with its canonical inner product ⟨·,·⟩.

Gaussian random vector
Let (Ω, F, P) be a probability space and X : (Ω, F, P) → (R^n, B(R^n)) a measurable mapping; then X is a Gaussian vector if and only if, for all y ∈ R^n, ⟨X, y⟩ is a Gaussian random variable.

Covariance
Given a (centered) Gaussian random vector X, define the bilinear form:
∀x, y ∈ R^n, Cov(x, y) = E[⟨X, x⟩⟨X, y⟩], (1)
which uniquely defines Σ : R^n → R^n such that:
∀x, y ∈ R^n, ⟨Σx, y⟩ = Cov(x, y). (2)
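As a quick numerical sanity check (our own sketch, not from the slides; all variable names are ours), the defining identity Cov(x, y) = E[⟨X, x⟩⟨X, y⟩] = ⟨Σx, y⟩ can be verified by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed symmetric positive-definite covariance matrix Sigma.
A = rng.standard_normal((3, 3))
Sigma = A @ A.T

# Draw many centered Gaussian vectors X ~ N(0, Sigma).
X = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)

x = np.array([1.0, -2.0, 0.5])
y = np.array([0.3, 1.0, -1.0])

# Empirical E[<X,x><X,y>] versus the bilinear form <Sigma x, y>.
emp = np.mean((X @ x) * (X @ y))
exact = x @ Sigma @ y
print(emp, exact)  # close for large sample sizes
```

The agreement improves at the usual Monte Carlo rate O(1/√N).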

Cameron-Martin and Gaussian spaces

Considering the Gaussian vector X gives a fundamental structure:
- the Gaussian space Vect(⟨X, e_k⟩, k ∈ [1, n]) ⊂ L^2(P),
- the Cameron-Martin space H_X = Range(Σ), equipped with ⟨·,·⟩_X = ⟨Σ^{-1}·,·⟩.

Loève isometry
The following map is an isometry:
⟨X, x⟩ ∈ L^2(P) ↦ Σx ∈ H_X. (3)

Representation of Gaussian vectors in R^n

For any orthonormal basis (e_k) of R^n, we can write:
X(ω) = ∑_{k=1}^n ⟨X(ω), e_k⟩ e_k a.s. (4)
⟨X(ω), e_i⟩ and ⟨X(ω), e_j⟩ are independent if and only if Cov(e_i, e_j) = ⟨Σe_i, e_j⟩ = ⟨Σe_i, Σe_j⟩_X = 0.
For any orthonormal basis (x_k) of H_X:
X(ω) = ∑_{k=1}^{dim(H_X)} ⟨X(ω), x_k⟩_X x_k. (5)
The spectral theorem (applied to the covariance operator) exhibits a particular (Karhunen-Loève) basis, bi-orthogonal and satisfying Σh_k = λ_k h_k:
X(ω) = ∑_{k=1}^{dim(H_X)} √λ_k ξ_k(ω) h_k, with ξ_k(ω) = ⟨X(ω), h_k⟩/√λ_k. (6)
Denote by P_k the linear projector onto the eigenvectors with the k largest eigenvalues; then:
∀k ≤ n, min_{rank(P)=k} E[‖X − PX‖^2] = E[‖X − P_k X‖^2] = λ_{k+1} + ... + λ_{dim(H_X)}.

Representation of Gaussian vectors in R^n

Figure: Karhunen-Loève basis in dimension 2 (unit balls B_{H_X}(0, 1) and B_E(0, 1), with semi-axes √λ_1 and √λ_2 along e_1 and e_2).

Gaussian elements in Hilbert spaces

Consider a (real) Hilbert space H with inner product ⟨·,·⟩.

Gaussian random element
Let (Ω, F, P) be a probability space and X : (Ω, F, P) → (H, B(H)) a measurable mapping; then X is a Gaussian element if and only if, for all y ∈ H, ⟨X, y⟩ is a Gaussian random variable.

Covariance
Given a Gaussian random element X, define the bilinear form:
∀x, y ∈ H, Cov(x, y) = E[⟨X, x⟩⟨X, y⟩], (8)
and the associated covariance operator C : H → H such that:
∀x, y ∈ H, ⟨Cx, y⟩ = Cov(x, y). (9)

Representation of Gaussian elements in Hilbert spaces

- The covariance operator C : H → H is positive, symmetric and trace-class (see Vakhania 1987).
- The Cameron-Martin space H_X is the completion of Range(C) w.r.t. ⟨x, y⟩_C = ⟨C^{-1}x, y⟩ (it is a proper subspace of H, H_X ⊂ H, see Vakhania 1987).
- For any Hilbert basis (e_k) of H we can write:
X(ω) = ∑_{k≥0} ⟨X(ω), e_k⟩ e_k a.s. (10)
- For any basis (x_k) of H_X and (ξ_k) i.i.d. N(0, 1):
X(ω) =_d ∑_{k≥0} ξ_k(ω) x_k. (11)
- The spectral theorem applies and exhibits a (Karhunen-Loève) bi-orthogonal basis (h_k):
X(ω) = ∑_{k≥0} √λ_k ξ_k(ω) h_k a.s.
- The Eckart-Young theorem is still valid (functional PCA, ...):
∀k > 0, min_{rank(P)=k} E[‖X − PX‖_H^2] = E[‖X − P_k X‖_H^2] = ∑_{i>k} λ_i. (12)
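For intuition, the Hilbert-space case can be explored numerically: discretizing the Brownian covariance k(s, t) = min(s, t) on a grid and diagonalizing approximates the classical KL eigenvalues λ_j = 1/((j − 1/2)^2 π^2). A sketch under our own discretization choices (midpoint grid, Nyström scaling by the step size):

```python
import numpy as np

m = 1000                               # grid size (an arbitrary choice)
t = (np.arange(1, m + 1) - 0.5) / m    # midpoint grid on (0, 1)
h = 1.0 / m

# Discretized covariance operator of Brownian motion on L^2([0, 1], dx).
K = np.minimum.outer(t, t)

# Eigenvalues of the integral operator ~ eigenvalues of K, scaled by h.
lam = np.linalg.eigvalsh(K)[::-1] * h

exact = 1.0 / ((np.arange(1, 6) - 0.5) ** 2 * np.pi ** 2)
print(lam[:5])
print(exact)  # lam[:5] should match to a few decimals
```

The corresponding eigenvectors approximate the sine basis sin((j − 1/2)πt) shown in the earlier L^2 figure.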

Karhunen-Loève decomposition in Banach spaces

Gaussian elements in Banach spaces

Consider a (real) Banach space E with duality pairing ⟨·,·⟩_{E,E*}.

Gaussian random element
Let (Ω, F, P) be a probability space and X : (Ω, F, P) → (E, B(E)) a measurable mapping; then X is a Gaussian element if and only if, for all f ∈ E*, ⟨X, f⟩_{E,E*} is a Gaussian random variable.

Covariance
Given a Gaussian random element X, define the bilinear form:
∀f, g ∈ E*, Cov(f, g) = E[⟨X, f⟩_{E,E*} ⟨X, g⟩_{E,E*}], (13)
and the associated covariance operator C : E* → E (see Vakhania 1987 or Bogachev 1998) such that:
∀f, g ∈ E*, ⟨Cf, g⟩_{E,E*} = Cov(f, g). (14)

Representation of Gaussian elements in Banach spaces

- The covariance operator C : E* → E is positive, symmetric and nuclear (see Vakhania 1987).
- The space H_X is the completion of Range(C) w.r.t. ⟨x, y⟩_X = ⟨y, C^{-1}x⟩_{E,E*} (it is a proper subspace of E, H_X ⊂ E, see Vakhania 1987, Bogachev 1998).
- For any Hilbert basis (x_k) of H_X and (ξ_k) i.i.d. N(0, 1), we have:
X =_d ∑_{k≥1} ξ_k x_k. (15)
- C : E* → E: no spectral theorem.
- We know that (Bogachev 1998, Vakhania 1991):
  - there exists a Hilbert basis (x_k) of H_X with (x_k) ⊂ Range(C),
  - there exists (x_k) ⊂ H_X such that ∑_{k≥0} ‖x_k‖_E^2 < +∞.
- If the basis (x_k) is in Range(C), with x_k = C x*_k, then:
X(ω) = ∑_{k≥0} ⟨X(ω), x*_k⟩_{E,E*} x_k. (16)

Representation of Gaussian elements in Banach spaces

Factorization of covariance operators (Vakhania 1987, Bogachev 1998)
Let C be the covariance operator of a Gaussian element; then:
C = SS*, (17)
where S : H → E is a bounded operator and H a Hilbert space.

Examples:
- in the Hilbert case, S = C^{1/2},
- S : H_X → E the inclusion map,
- S : f ∈ L^2(P) ↦ E[f(X) X] ∈ E (S* being the injection from E* into L^2(P)).

(Luschgy and Pagès 2009) Let C = SS* with S : H → E. Then for any Hilbert basis (h_n) of H:
X =_d ∑_{k≥0} ξ_k S h_k, (18)
where the (ξ_k) are i.i.d. N(0, 1).
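The factorization C = SS* is far from unique, and by the Luschgy-Pagès result every choice of S yields a valid (generally non-KL) expansion of the same Gaussian law. A small finite-dimensional sketch (our own), comparing two square roots of the same covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
C = A @ A.T

# Two factorizations C = S S^T: symmetric square root vs Cholesky.
w, V = np.linalg.eigh(C)
w = np.clip(w, 0.0, None)             # guard against tiny negative round-off
S1 = V @ np.diag(np.sqrt(w)) @ V.T    # S = C^{1/2} (the "KL-flavoured" choice)
S2 = np.linalg.cholesky(C)            # lower-triangular factor

# Both X = S xi with xi ~ N(0, I) have covariance C, although the
# individual terms xi_k S e_k of the two expansions differ.
print(np.allclose(S1 @ S1.T, C), np.allclose(S2 @ S2.T, C))
```

Different factorizations give different series X =_d ∑_k ξ_k S h_k with identical laws; the construction of the next slides singles out one adapted to the Banach norm.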

Representation of Gaussian elements in Banach spaces

This methodology has been widely used: when E = C([0, 1]), S : f ∈ L^2([0, 1], dx) ↦ ...

We will now give a different methodology, mimicking the Hilbert case, to construct a Hilbert basis of H_X. We proceed as follows:
1 Find directions of maximum variance,
2 Choose the right notion of orthogonality to iterate,
3 Study the asymptotics.

Decomposition of the Cameron-Martin space

1 For any (Gaussian) covariance operator, f ∈ E* ↦ Cov(f, f) ∈ R_+:
  - may be interpreted as a Rayleigh quotient,
  - is weakly sequentially continuous,
  - is quadratic.
2 There exists f_0 ∈ B_{E*}(0, 1) (non-unique) such that Cov(f_0, f_0) = max_{‖f‖_{E*}≤1} Cov(f, f) = λ_0 (Banach-Alaoglu),
3 If λ_0 > 0, let x_0 ∈ E be such that Cf_0 = λ_0 x_0; then P_0 : x ∈ E ↦ ⟨x, f_0⟩_{E,E*} x_0 is a projector of unit norm.
4 X = (P_0 X, (I − P_0)X).
5 Iterate on X_1 = (I − P_0)X.

Bay & Croix 2017
(λ_n) is non-increasing, λ_n → 0, and (√λ_k x_k) is a Hilbert basis of H_X.

We will write x*_0 = f_0 and x*_n = f_n − ∑_{k=0}^{n−1} ⟨x_k, f_n⟩_{E,E*} x*_k, so that P_n = ∑_{k=0}^n x_k ⊗ x*_k (similar to Gram-Schmidt).

Properties of the decomposition

A few comments about the previous construction...

Properties
This construction gives the following:
- X(ω) = ∑_{k≥0} ⟨X(ω), x*_k⟩_{E,E*} x_k a.s.
- C = SS* with S : f ∈ E* ↦ ∑_{k≥0} λ_k ⟨x_k, f⟩_{E,E*} x_k.
- Writing C_n = ∑_{k=0}^n λ_k x_k ⊗ x*_k, we have ‖C − C_n‖_{L(E*,E)} = λ_{n+1}.
- It recovers the Karhunen-Loève basis in the Hilbert case.
- (x_k, x*_k) ∈ E × E* is bi-orthogonal.
- The closure of Vect(x_k, k ≥ 0) in E equals the closure of H_X in E, and ∀k ∈ N, ‖f_k‖_{E*} = ‖x_k‖_E = 1.

Remarks
- ∑_{k≥0} λ_k need not be finite (not a nuclear representation of C).
- Optimality seems out of reach (rate optimality?).

Particular case of (R^n, ‖·‖_∞)

Application in R^n

Here, E = (R^n, ‖·‖_∞) and, for (x, f) ∈ E × E*, ⟨x, f⟩_{E,E*} = ∑_{i=1}^n x_i f_i.
- Suppose a centered dataset (y_i)_{i∈[1,p]} ∈ (R^n)^p,
- Form the usual covariance matrix Σ ∈ M_n(R),
- Find the direction of maximal variance f_0 ∈ R^n:
λ_0 = f_0^T Σ f_0 = max_{i∈[1,n]} Σ_{(i,i)}, with f_0 = (0, ..., 0, 1, 0, ..., 0), (19)
- If λ_0 > 0, then x_0 = Σf_0 / λ_0,
- Σ_1 = Σ − λ_0 x_0 x_0^T,
- Iterate.
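The loop above (pick the largest diagonal entry, deflate, iterate) fits in a few lines of NumPy. This is our own sketch with a random covariance matrix; `banach_pca` is a hypothetical helper name:

```python
import numpy as np

def banach_pca(Sigma, tol=1e-12):
    """Greedy decomposition of a covariance matrix w.r.t. the sup norm:
    pivot on the largest diagonal entry, deflate, iterate."""
    Sigma = Sigma.copy()
    lams, xs = [], []
    for _ in range(Sigma.shape[0]):
        i = int(np.argmax(np.diag(Sigma)))
        lam = Sigma[i, i]                    # lambda_k = max variance
        if lam <= tol:
            break
        x = Sigma[:, i] / lam                # x_k = Sigma f_k / lambda_k
        Sigma = Sigma - lam * np.outer(x, x) # deflation step of the slide
        lams.append(lam)
        xs.append(x)
    return np.array(lams), np.array(xs)

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
lams, xs = banach_pca(A @ A.T)
print(lams)                        # non-increasing variances
print(np.max(np.abs(xs), axis=1))  # each x_k has sup-norm 1 (at its pivot)
```

Since each deflation subtracts a rank-one term, ∑_k λ_k x_k x_k^T reconstructs Σ exactly once the diagonal is exhausted, mirroring C = ∑ λ_k x_k ⊗ x*_k.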

Application in R^n

Define the residual norm r_k = (1/n) ∑_{i=1}^n ‖x_i − ∑_{j=0}^k ⟨x_i, x*_j⟩_{E,E*} x_j‖ and the projection norm p_k = (1/n) ∑_{i=1}^n ‖∑_{j=0}^k ⟨x_i, x*_j⟩_{E,E*} x_j‖.

Figure: Residual (decreasing) and projection (increasing) norms (blue for L2-PCA, red for Banach PCA).

Particular case of C(K), K metric and compact

E = C(K) with K metric compact

Let X be a Gaussian element such that X ∈ C(K) almost surely, and suppose that
k : (s, t) ∈ K^2 ↦ ⟨Cδ_s, δ_t⟩_{E,E*} = Cov(X_s, X_t) ∈ R (20)
is continuous. The Cameron-Martin space H_X coincides here with the Reproducing Kernel Hilbert Space (RKHS). The previous decomposition becomes:
1 Set n = 0 and k_n = k,
2 Find x_n ∈ K such that k_n(x_n, x_n) = max_{x∈K} k_n(x, x),
3 If λ_n = k_n(x_n, x_n) > 0, then k_{n+1}(s, t) = k_n(s, t) − k_n(s, x_n) k_n(x_n, t) / λ_n,
4 n ← n + 1, return to 2.
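On a discretized K this procedure is a pivoted deflation of the kernel matrix (in matrix terms, a pivoted Cholesky-type sweep). A sketch of our own implementation for Brownian motion k(s, t) = min(s, t), which recovers the pivots t = 1, 1/2, ... and variances 1, 1/4, 1/8, 1/8 of the Lévy construction shown on the next slides:

```python
import numpy as np

def ck_decomposition(k, grid, steps):
    """Greedy decomposition of a continuous kernel on a discretized K:
    pivot at the max-variance point, deflate the kernel, iterate."""
    Kn = k(grid[:, None], grid[None, :])
    pivots, lams, funcs = [], [], []
    for _ in range(steps):
        i = int(np.argmax(np.diag(Kn)))
        lam = Kn[i, i]                          # lambda_n = k_n(x_n, x_n)
        pivots.append(grid[i])
        lams.append(lam)
        funcs.append(Kn[:, i] / lam)            # x_n = k_n(., x_n) / lambda_n
        Kn = Kn - np.outer(Kn[:, i], Kn[i, :]) / lam
    return np.array(pivots), np.array(lams), np.array(funcs)

grid = np.linspace(0.0, 1.0, 1025)              # dyadic grid on [0, 1]
brownian = lambda s, t: np.minimum(s, t)
pivots, lams, funcs = ck_decomposition(brownian, grid, 4)
print(pivots)  # 1.0, 0.5, then the two quarter points 0.25, 0.75
print(lams)    # 1.0, 0.25, 0.125, 0.125
```

After the first deflation the kernel becomes the Brownian bridge covariance min(s, t) − st, whose maximal variance 1/4 sits at t = 1/2, exactly as in the Step 2 figure.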

Illustration on Brownian motion: Step 1

Figure: Left: Brownian motion samples and point-wise (symmetrized) variance. Right: first basis function x_0(t) = t, associated with f_0 = δ_1.

Illustration on Brownian motion: Step 2

Figure: Left: Brownian bridge samples and point-wise (symmetrized) variance. Right: second basis function x_1(t) = min(t, 0.5) − 0.5t, associated with f_1 = δ_{1/2}.

Illustration on Brownian motion: Step 3

Figure: Left: conditional Brownian motion samples and point-wise (symmetrized) variance. Right: third and fourth basis functions.

A word on summability

The proposed decomposition need not be a nuclear representation of C, i.e. ∑_{k≥0} λ_k ∈ [0, +∞]. Indeed, the Brownian case shows:
∑_{k≥0} λ_k = 1 + 1/4 + 2·(1/8) + 4·(1/16) + 8·(1/32) + ... = +∞. (21)
However, a simple transformation gives a nuclear representation in this case. Combining the two level-two functionals:
f̃_{2,1} = (f_{2,1} − f_{2,2})/√2, with Cov(f̃_{2,1}, f̃_{2,1}) = 1/8, (22)
f̃_{2,2} = (f_{2,1} + f_{2,2})/√2, with Cov(f̃_{2,2}, f̃_{2,2}) = 1/8. (23)
Doing similar transformations at each step gives:
∑_{k≥0} λ̃_k = 1 + 1/2 + 2·(1/8) + 4·(1/32) + 8·(1/128) + ... = 2. (24)
Remark that E[‖∑_{k≥n} ξ_k h_k‖^2] ≤ ∑_{k≥n} λ_k, so the approximation error is exponentially decreasing!

Conclusion and open questions

Conclusion and open questions

Conclusions
- A new practical method to represent Gaussian elements, based on the covariance operator.
- A numerical solution in the case E = C(K) (K metric and compact).
- It rediscovers Paul Lévy's construction of Brownian motion.

Open questions & future work
- Other properties of (x_k, x*_k) ∈ E × E*? Rate optimality of finite-dimensional approximations in C(K)? Multiplicity of decomposition elements?
- Links with approximation numbers (e.g. l(X) = E[‖X‖^2]^{1/2}), approximation theory (s-numbers)? Properties of Banach spaces?
- Optimality for the projection norm? Dual problem?
- Interpretation of ∑_{k≥0} λ_k when it is finite? Evolution of ‖x_k‖_E?

References

References

- X. Bay, J.-C. Croix. Karhunen-Loève decomposition of Gaussian measures on Banach spaces. arXiv, 2017.
- H. Luschgy, G. Pagès. Expansions for Gaussian processes and Parseval frames. Electronic Journal of Probability, 14, 2009, 1198-1221.
- N. Vakhania. Canonical factorization of Gaussian covariance operators and some of its applications. Theory of Probability and Its Applications, 1991.
- V. Bogachev. Gaussian Measures. Mathematical Surveys and Monographs vol. 62, AMS, 1998.
- N. Vakhania, V. Tarieladze, S. Chobanyan. Probability Distributions on Banach Spaces. Mathematics and Its Applications, Springer Netherlands, 1987.

Appendix

Examples with different kernels

Figure: Wiener measure basis functions.

Figure: Wiener measure basis variances.

Figure: k(s, t) = exp(−(s − t)^2 / 2) basis functions.

Figure: k(s, t) = exp(−(s − t)^2 / 2) basis variances.

Figure: Matérn 3/2 kernel basis functions.

Figure: Matérn 3/2 kernel basis variances.

Figure: Fractional Brownian motion (H = 75%) basis functions.

Figure: Fractional Brownian motion (H = 25%) basis functions.