THE HILBERT SPACE $L^2$

Definition: Let $(\Omega, \mathcal{A}, P)$ be a probability space. The set of all random variables $X:\Omega\to\mathbb{R}$ satisfying
$$EX^2 < \infty$$
is denoted by $\mathcal{L}^2$.

Remark: $EX^2 < \infty$ implies that $E|X| < \infty$ (or equivalently, that $EX$ exists and is finite), because
$$|X| \le X^2 + 1 \implies E|X| \le EX^2 + 1 < \infty.$$

Proposition: The set $\mathcal{L}^2$, together with the pointwise scalar multiplication defined for $X\in\mathcal{L}^2$ and $\lambda\in\mathbb{R}$ by
$$(\lambda X)(\omega) = \lambda\,(X(\omega)), \qquad \omega\in\Omega,$$
and the pointwise addition defined for $X,Y\in\mathcal{L}^2$ by
$$(X+Y)(\omega) = X(\omega) + Y(\omega), \qquad \omega\in\Omega,$$
is a vector space.

Proof: (i) The two operations are closed, because
$$X\in\mathcal{L}^2,\ \lambda\in\mathbb{R} \implies EX^2 < \infty \implies E(\lambda X)^2 = \lambda^2 EX^2 < \infty \implies \lambda X\in\mathcal{L}^2$$
and, since $(X+Y)^2 \le 2X^2 + 2Y^2$,
$$X,Y\in\mathcal{L}^2 \implies EX^2, EY^2 < \infty \implies E(X+Y)^2 \le 2E(X^2+Y^2) < \infty \implies X+Y\in\mathcal{L}^2.$$

(ii) The associative, commutative, and distributive properties
$$(X+Y)+Z = X+(Y+Z), \qquad (\lambda\mu)X = \lambda(\mu X), \qquad X+Y = Y+X,$$
$$\lambda(X+Y) = (\lambda X)+(\lambda Y), \qquad (\lambda+\mu)X = (\lambda X)+(\mu X)$$
follow immediately from the pointwise definitions of the two operations. For example, if $X,Y,Z\in\mathcal{L}^2$, then for all $\omega\in\Omega$
$$((X+Y)+Z)(\omega) = (X+Y)(\omega) + Z(\omega) = (X(\omega)+Y(\omega)) + Z(\omega) = X(\omega) + (Y(\omega)+Z(\omega)) = X(\omega) + (Y+Z)(\omega) = (X+(Y+Z))(\omega).$$

(iii) The random variable $0$, which is identically zero on $\Omega$, satisfies the property of a zero vector:
$$X + 0 = X \qquad \forall X\in\mathcal{L}^2.$$

(iv) For every $X\in\mathcal{L}^2$ there exists an additive inverse $-X$, defined by $(-X)(\omega) = -(X(\omega))$, $\omega\in\Omega$, satisfying $-X + X = 0$.

(v) $1\cdot X = X$.
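The closure properties in part (i) can be illustrated numerically by representing random variables as arrays of Monte Carlo samples. The following is a minimal sketch (the sample size and the chosen distributions are arbitrary assumptions), not part of the formal development:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 200_000  # Monte Carlo sample size; each entry plays the role of one omega

# Two square-integrable random variables, represented by samples.
X = rng.standard_normal(n)
Y = rng.exponential(scale=2.0, size=n)
lam = 3.0

# Closure under scalar multiplication: E(lam*X)^2 = lam^2 * EX^2 < infinity.
print(np.mean((lam * X) ** 2), lam**2 * np.mean(X**2))

# Closure under addition: E(X+Y)^2 <= 2*E(X^2 + Y^2), since (X+Y)^2 <= 2X^2 + 2Y^2.
print(np.mean((X + Y) ** 2), 2 * np.mean(X**2 + Y**2))
```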

Exercise: Show that a function $\langle\cdot,\cdot\rangle : \mathcal{L}^2\times\mathcal{L}^2\to\mathbb{R}$ can be defined by
$$\langle X,Y\rangle = EXY,$$
which satisfies, for $X,Y,Z\in\mathcal{L}^2$ and $\lambda\in\mathbb{R}$,
$$\langle X+Y,Z\rangle = \langle X,Z\rangle + \langle Y,Z\rangle, \qquad \langle\lambda X,Y\rangle = \lambda\langle X,Y\rangle, \qquad \langle X,Y\rangle = \langle Y,X\rangle, \qquad \langle X,X\rangle \ge 0.$$

Solution: $EXY$ is well defined and finite, because $2|XY| \le X^2 + Y^2$ and therefore
$$-\infty < -\tfrac{1}{2}E(X^2+Y^2) \le EXY \le \tfrac{1}{2}E(X^2+Y^2) < \infty.$$
Moreover,
$$\langle X+Y,Z\rangle = E(X+Y)Z = EXZ + EYZ = \langle X,Z\rangle + \langle Y,Z\rangle,$$
$$\langle\lambda X,Y\rangle = E(\lambda X)Y = \lambda EXY = \lambda\langle X,Y\rangle,$$
$$\langle X,Y\rangle = EXY = EYX = \langle Y,X\rangle,$$
$$\langle X,X\rangle = EXX = EX^2 \ge 0.$$
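The inner-product identities can likewise be checked with sample means standing in for expectations; a small sketch under the same Monte Carlo conventions as before (all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 500_000
X, Y, Z = rng.standard_normal((3, n))
lam = -1.5

def inner(U, V):
    return np.mean(U * V)  # sample analogue of <U,V> = E(UV)

print(np.isclose(inner(X + Y, Z), inner(X, Z) + inner(Y, Z)))  # additivity
print(np.isclose(inner(lam * X, Y), lam * inner(X, Y)))        # homogeneity
print(np.isclose(inner(X, Y), inner(Y, X)))                    # symmetry
print(inner(X, X) >= 0)                                        # non-negativity
```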

The function $\langle\cdot,\cdot\rangle$ satisfies all the properties of an inner product except for $\langle X,X\rangle = 0 \implies X = 0$, because $EX^2 = 0$ implies only that $P(X=0) = 1$, but not that $X(\omega) = 0$ for all $\omega\in\Omega$. Analogously, the function $\|X\| = \sqrt{\langle X,X\rangle} = \sqrt{EX^2}$ satisfies all the properties of a norm except for $\|X\| = 0 \implies X = 0$. To circumvent this problem we identify two random variables if they are equal almost surely, i.e., we switch from the individual random variables $X\in\mathcal{L}^2$ to the equivalence classes
$$[X] = \{Y\in\mathcal{L}^2 : P(Y=X) = 1\}$$
of random variables which agree almost everywhere.

Definition: Defining, for equivalence classes $[X]$, $[Y]$ of almost surely equal elements of $\mathcal{L}^2$ and $\lambda\in\mathbb{R}$,
$$[X]+[Y] = [X+Y], \qquad \lambda[X] = [\lambda X], \qquad \langle[X],[Y]\rangle = \langle X,Y\rangle,$$
we obtain an inner product space, which is denoted by $L^2$.

Proposition: The inner product space $L^2$ of equivalence classes of almost surely equal random variables with finite variances is complete, i.e.,
$$X_n\in L^2 \text{ for all } n, \quad \|X_n - X_m\| \to 0 \text{ as } m,n\to\infty \implies \exists\, X\in L^2 : \|X_n - X\| \to 0.$$
Thus $L^2$ is a Hilbert space.

Remark: Norm convergence $\|X_n - X\| \to 0$ is equivalent to mean square convergence
$$\|X_n - X\|^2 = E(X_n - X)^2 \to 0.$$

Exercise: Show that the relation $\sim$ defined by
$$X\sim Y \iff P(X=Y) = 1$$
is indeed an equivalence relation by verifying the reflexive, symmetric, and transitive properties
$$X\sim X, \qquad X\sim Y \implies Y\sim X, \qquad X\sim Y,\ Y\sim Z \implies X\sim Z \qquad \forall X,Y,Z\in\mathcal{L}^2.$$

Solution: The transitive property is satisfied, because
$$\{\omega: X(\omega)=Z(\omega)\} \supseteq \{\omega: X(\omega)=Y(\omega)=Z(\omega)\}$$
$$\implies \{\omega: X(\omega)=Z(\omega)\}^C \subseteq \{\omega: X(\omega)=Y(\omega)=Z(\omega)\}^C = \big(\{\omega: X(\omega)=Y(\omega)\}\cap\{\omega: Y(\omega)=Z(\omega)\}\big)^C = \{\omega: X(\omega)=Y(\omega)\}^C \cup \{\omega: Y(\omega)=Z(\omega)\}^C$$
$$\implies P\big(\{\omega: X(\omega)=Z(\omega)\}^C\big) \le P\big(\{\omega: X(\omega)=Y(\omega)\}^C\big) + P\big(\{\omega: Y(\omega)=Z(\omega)\}^C\big) = 0.$$

Proposition: If $E(X_n - X)^2 \to 0$ and $E(Y_n - Y)^2 \to 0$, then
(i) $EX_n \to EX$,
(ii) $EX_nY_n \to EXY$,
(iii) $\operatorname{Cov}(X_n,Y_n) \to \operatorname{Cov}(X,Y)$,
(iv) $\operatorname{Var}(X_n) \to \operatorname{Var}(X)$.

Proof: Norm convergence $\|X_n - X\| \to 0$ and $\|Y_n - Y\| \to 0$ implies $\langle X_n,Y_n\rangle \to \langle X,Y\rangle$, by the continuity of the inner product (a consequence of the Cauchy-Schwarz inequality). Hence:
(i) $EX_n = EX_n\cdot 1 = \langle X_n, 1\rangle \to \langle X, 1\rangle = EX\cdot 1 = EX$,
(ii) $EX_nY_n = \langle X_n,Y_n\rangle \to \langle X,Y\rangle = EXY$,
(iii) $\operatorname{Cov}(X_n,Y_n) = EX_nY_n - EX_nEY_n \to EXY - EX\,EY = \operatorname{Cov}(X,Y)$,
(iv) $\operatorname{Var}(X_n) = \operatorname{Cov}(X_n,X_n) \to \operatorname{Cov}(X,X) = \operatorname{Var}(X)$.
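Both mean square convergence and the continuity statement just proved can be illustrated with a toy sequence: perturb $X$ and $Y$ by noise of size $1/n$, so that $E(X_n - X)^2 = EU^2/n^2 \to 0$, and watch the means, cross moments, and covariances converge. A minimal Monte Carlo sketch (the sequence and all distributions are assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
m = 500_000
X = rng.standard_normal(m)
Y = 0.5 * X + rng.standard_normal(m)   # correlated with X: Cov(X,Y) = 0.5
U, V = rng.standard_normal((2, m))     # perturbations

for n in (1, 10, 100):
    X_n, Y_n = X + U / n, Y + V / n
    print(n,
          np.mean((X_n - X) ** 2),     # mean square error E(X_n - X)^2 -> 0
          np.mean(X_n),                # (i)   EX_n     -> EX  = 0
          np.mean(X_n * Y_n),          # (ii)  EX_n Y_n -> EXY = 0.5
          np.cov(X_n, Y_n)[0, 1])      # (iii) Cov      -> Cov(X,Y) = 0.5
```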

Definition: The conditional expectation of $X\in L^2$ given a closed subspace $S\subseteq L^2$ which contains the constant functions is defined to be the projection of $X$ onto $S$, i.e.,
$$E(X|S) = P_S(X).$$

Remark: The conditional expectation satisfies
$$\|X - E(X|S)\| < \|X - Y\|$$
for all other elements $Y$ of $S$, i.e., it is the best approximation of $X$ within $S$ in the mean square sense.

Definition: The conditional expectation of $X\in L^2$ given $X_1,\dots,X_n\in L^2$ is defined to be the projection of $X$ onto the closed subspace $M(X_1,\dots,X_n)$ spanned by all random variables of the form $g(X_1,\dots,X_n)$, where $g:\mathbb{R}^n\to\mathbb{R}$ is some measurable function, i.e.,
$$E(X|X_1,\dots,X_n) = P_{M(X_1,\dots,X_n)}(X).$$

Remarks: (i) It follows from $\operatorname{span}(1,X_1,\dots,X_n) \subseteq M(X_1,\dots,X_n)$ that
$$\|X - E(X|X_1,\dots,X_n)\| \le \|X - E(X|\operatorname{span}(1,X_1,\dots,X_n))\|,$$
i.e., the conditional expectation predicts $X$ at least as well as the best linear predictor based on $1,X_1,\dots,X_n$ (see the sketch below).

(ii) For elements of $L^2$ the definition of $E(X|X_1,\dots,X_n)$ above coincides with the more general definition of the conditional expectation as the mean of the conditional distribution.
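The gap in Remark (i) is strict whenever the conditional mean is genuinely nonlinear. A sketch under an assumed toy model ($X = X_1^2 + \varepsilon$, so $E(X|X_1) = X_1^2$), comparing the projection onto $\operatorname{span}(1,X_1)$, computed by least squares, with the projection onto $M(X_1)$:

```python
import numpy as np

rng = np.random.default_rng(seed=4)
m = 200_000
X1 = rng.standard_normal(m)
eps = rng.standard_normal(m)
X = X1**2 + eps                        # E(X | X1) = X1^2, a nonlinear function

# Projection onto span(1, X1): least squares on the design matrix (1, X1).
A = np.column_stack([np.ones(m), X1])
coef, *_ = np.linalg.lstsq(A, X, rcond=None)
linear_pred = A @ coef

cond_exp = X1**2                       # projection onto M(X1) in this toy model

print(np.mean((X - cond_exp) ** 2))    # ~1: ||X - E(X|X1)||^2 = Var(eps)
print(np.mean((X - linear_pred) ** 2)) # ~3: the best linear predictor does worse
```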

Exercise: Show that the bivariate normal density
$$f(\mathbf{x}) = f(x_1,x_2) = \frac{1}{2\pi\sqrt{\det\Sigma}}\,\exp\!\Big(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\Big)$$
with mean vector $\boldsymbol{\mu} = (\mu_1,\mu_2)^T$ and covariance matrix
$$\Sigma = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}$$
factors into two univariate normal densities: the marginal density $f_{X_1}$ with mean $\mu_1$ and variance $\sigma_1^2$, and the conditional density $f_{X_2|X_1}$ with mean $\mu_2 + \rho\frac{\sigma_2}{\sigma_1}(x_1-\mu_1)$ and variance $\sigma_2^2(1-\rho^2)$.

Solution: Putting $z_1 = \frac{x_1-\mu_1}{\sigma_1}$, $z_2 = \frac{x_2-\mu_2}{\sigma_2}$, we obtain
$$(\mathbf{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}) = \frac{z_1^2 - 2\rho z_1 z_2 + z_2^2}{1-\rho^2}$$
and, completing squares,
$$\frac{z_1^2 - 2\rho z_1 z_2 + z_2^2}{1-\rho^2} = \frac{z_1^2(1-\rho^2) + \rho^2 z_1^2 - 2\rho z_1 z_2 + z_2^2}{1-\rho^2} = z_1^2 + \frac{(z_2 - \rho z_1)^2}{1-\rho^2}.$$
Thus, since $\det\Sigma = \sigma_1^2\sigma_2^2(1-\rho^2)$,
$$f(x_1,x_2) = \frac{1}{\sqrt{2\pi}\,\sigma_1}\exp\!\Big(-\frac{z_1^2}{2}\Big)\cdot\frac{1}{\sqrt{2\pi}\,\sigma_2\sqrt{1-\rho^2}}\exp\!\Big(-\frac{(z_2-\rho z_1)^2}{2(1-\rho^2)}\Big).$$
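The factorization can be verified numerically, e.g. with scipy; a sketch in which the particular parameter values and the evaluation point are arbitrary assumptions:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

mu1, mu2, s1, s2, rho = 1.0, -2.0, 1.5, 0.8, 0.6
Sigma = np.array([[s1**2, rho * s1 * s2],
                  [rho * s1 * s2, s2**2]])

x1, x2 = 0.7, -1.1  # an arbitrary evaluation point

joint = multivariate_normal(mean=[mu1, mu2], cov=Sigma).pdf([x1, x2])
marginal = norm(loc=mu1, scale=s1).pdf(x1)
conditional = norm(loc=mu2 + rho * (s2 / s1) * (x1 - mu1),
                   scale=s2 * np.sqrt(1 - rho**2)).pdf(x2)

print(joint, marginal * conditional)  # should agree up to rounding
```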

Remark: The last exercise shows that in the case of a bivariate normal random vector $(X_1,X_2)$, the mean of the conditional distribution of $X_2$ given $X_1$ is a linear function of $1$ and $X_1$. More generally, if $(X,X_1,\dots,X_n)^T$ has a multivariate normal distribution, then
$$E(X|X_1,\dots,X_n) = E(X|\operatorname{span}(1,X_1,\dots,X_n)).$$
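For jointly Gaussian data this means that ordinary least squares already recovers the conditional expectation. A Monte Carlo sketch (the covariance matrix is an arbitrary assumption): the least squares coefficients agree with the theoretical coefficients $\Sigma_{12}\Sigma_{22}^{-1}$ from the conditional mean formula.

```python
import numpy as np

rng = np.random.default_rng(seed=5)
m = 500_000

# (X, X1, X2) jointly normal, zero mean, with an arbitrary covariance matrix.
Sigma = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.0, 0.2],
                  [0.3, 0.2, 1.0]])
samples = rng.multivariate_normal(mean=np.zeros(3), cov=Sigma, size=m)
X, X1, X2 = samples.T

# Least squares projection of X onto span(1, X1, X2).
A = np.column_stack([np.ones(m), X1, X2])
coef, *_ = np.linalg.lstsq(A, X, rcond=None)

# Theoretical coefficients of E(X | X1, X2) = Sigma_12 @ inv(Sigma_22) @ (X1, X2)^T.
beta = Sigma[0, 1:] @ np.linalg.inv(Sigma[1:, 1:])
print(coef[1:], beta)  # should agree up to Monte Carlo error
```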