Stable Process. 2. Multivariate Stable Distributions. July, 2006


Outline:

1. Stable random vectors.
2. Characteristic functions.
3. Strictly stable and symmetric stable random vectors.
4. Sub-Gaussian random vectors.
5. Covariation.
6. James orthogonality.
7. Codifference.

1 Stable random vectors

Definition. A random vector X = (X_1, ..., X_d) is said to be a stable random vector in R^d if for any positive numbers A and B there exist a positive number C and a vector D ∈ R^d such that

    A X^(1) + B X^(2) =_d C X + D,                                                    (1)

where X^(1) and X^(2) are independent copies of X and =_d denotes equality in distribution. X is called strictly stable if (1) holds with D = 0. It is called symmetric stable if for any Borel set S of R^d,

    P(X ∈ S) = P(−X ∈ S).

NOTE: Any symmetric stable random vector is also strictly stable, but not vice versa.

Theorem 1.1. Let X = (X_1, ..., X_d) be a stable (respectively strictly stable, symmetric stable) vector in R^d. Then there is a constant α ∈ (0, 2] such that in equation (1), C = (A^α + B^α)^{1/α}. Moreover, any linear combination of the components of X of the type Y = Σ_{i=1}^d b_i X_i = (b, X) is also α-stable (respectively strictly stable, symmetric stable).

Proof. Applying equation (1) and characteristic functions, we get (b, A X^(1) + B X^(2)) =_d (b, C X + D), that is,

    A Y^(1) + B Y^(2) =_d C Y + (b, D).
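The scaling relation C = (A^α + B^α)^{1/α} of Theorem 1.1 can be checked numerically through characteristic functions: a univariate SαS variable with scale σ has the standard characteristic function exp(−σ^α |t|^α), so the characteristic function of A X^(1) + B X^(2) factorizes into a product that must match that of CX. A minimal sketch in NumPy (the values of α, σ, A, B are arbitrary illustrative choices):

```python
import numpy as np

def sas_cf(t, alpha, sigma):
    """Characteristic function of a symmetric alpha-stable variable
    with scale sigma: exp(-sigma^alpha * |t|^alpha)."""
    return np.exp(-(sigma ** alpha) * np.abs(t) ** alpha)

alpha, sigma = 1.5, 2.0
A, B = 0.7, 1.3
C = (A ** alpha + B ** alpha) ** (1.0 / alpha)

t = np.linspace(-5, 5, 201)
# cf of A*X1 + B*X2 (independent copies) is the product of scaled cfs
lhs = sas_cf(A * t, alpha, sigma) * sas_cf(B * t, alpha, sigma)
# cf of C*X
rhs = sas_cf(C * t, alpha, sigma)
assert np.allclose(lhs, rhs)  # the two distributions coincide
```

The assertion holds identically because σ^α(A^α + B^α)|t|^α = σ^α C^α |t|^α term by term.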

From the univariate theory, there is a constant α ∈ (0, 2] such that C = (A^α + B^α)^{1/α}. Moreover, this α is unique: if the relation also held for another exponent α', then

    (A^α + B^α)^{1/α} = C = (A^{α'} + B^{α'})^{1/α'}

for all A > 0 and B > 0, which forces α = α'. The rest is easy to see.

Similar to the one-dimensional case, the previous definition is equivalent to the following.

Definition. A random vector X is stable if and only if for any n ≥ 2 there exist an α ∈ (0, 2] and a vector D_n such that

    X^(1) + X^(2) + ... + X^(n) =_d n^{1/α} X + D_n,

where X^(1), X^(2), ..., X^(n) are independent copies of X.

Definition. A random vector X in R^d is called α-stable if equation (1) holds with C = (A^α + B^α)^{1/α} or, equivalently, if the relation in the previous definition holds with this α. The index α is called the index of stability or the characteristic exponent of the vector X.

Theorem 1.2. Let X be a random vector in R^d.
(a) If all linear combinations Y = Σ_{k=1}^d b_k X_k have strictly stable distributions, then X is a strictly stable random vector.
(b) If all linear combinations are symmetric stable, then X is a symmetric stable random vector.
(c) If all linear combinations are stable with index of stability greater than or equal to 1, then X is a stable vector.

HINT: Denote Y_b = Σ_{k=1}^d b_k X_k, and first show that if all Y_b are stable, then they share the same index of stability, by reductio ad absurdum (proof by contradiction). The key in this proof is comparing tail behaviour: consider the limit behaviour of Z_n = A_n Y_b + Y_c, where α_b < α_c and A_n goes to zero.

NOTE: There is a counterexample due to David J. Marcus which demonstrates that there exists a non-stable vector X = (X_1, X_2) whose linear combinations are all α-stable random variables with α < 1. The key step in the proof of the counterexample is examining the characteristic functions.

Theorem 1.3. Let X be a random vector in R^d such that all linear combinations of its components are stable. If X is also infinitely divisible, then X is stable.
2 Characteristic functions

Let X = (X_1, ..., X_d) be an α-stable random vector, and let Φ_α(θ) = E exp{i(θ, X)} denote its characteristic function. Also let S_d = {s : ||s|| = 1} denote the unit sphere in R^d, which is a (d − 1)-dimensional surface.

Theorem 2.1. Let 0 < α < 2. Then X is an α-stable random vector in R^d if and only if there exist a finite measure Γ on the unit sphere S_d and a vector µ^0 in R^d such that

(a) if α ≠ 1,

    Φ_α(θ) = exp{ −∫_{S_d} |(θ, s)|^α (1 − i sign((θ, s)) tan(πα/2)) Γ(ds) + i(θ, µ^0) };      (2)

(b) if α = 1,

    Φ_1(θ) = exp{ −∫_{S_d} |(θ, s)| (1 + i (2/π) sign((θ, s)) ln|(θ, s)|) Γ(ds) + i(θ, µ^0) }.  (3)

The pair (Γ, µ^0) is unique.

NOTE: In the case α = 1, the components of µ^0 are not equal to the shift parameters of the components X_1, ..., X_d of X.

Definition. The vector X in the previous theorem is said to have spectral representation (Γ, µ^0). The measure Γ is called the spectral measure of the α-stable random vector X.

Example. Suppose d = 1; then S_1 = {−1, 1}. If X ~ S_α(σ, β, µ) with α ≠ 1, then

    σ = (Γ({1}) + Γ({−1}))^{1/α},   β = (Γ({1}) − Γ({−1})) / (Γ({1}) + Γ({−1})),   µ = µ^0.

The skewness parameter β is zero if the spectral measure Γ is symmetric. Similar results hold for α = 1.

Example. Let X be an α-stable random vector with characteristic function given in Theorem 2.1. Then the linear combination Y_b = (b, X) has an α-stable distribution S_α(σ_b, β_b, µ_b). Moreover,

    σ_b = ( ∫_{S_d} |(b, s)|^α Γ(ds) )^{1/α},                                         (4)

    β_b = ∫_{S_d} |(b, s)|^α sign((b, s)) Γ(ds) / ∫_{S_d} |(b, s)|^α Γ(ds),           (5)

    µ_b = (b, µ^0)                                          if α ≠ 1,
          (b, µ^0) − (2/π) ∫_{S_d} (b, s) ln|(b, s)| Γ(ds)   if α = 1.                (6)

Proposition 2.2. The spectral measure Γ of an α-stable vector X is concentrated on a finite number of points on the unit sphere if and only if (X_1, ..., X_d) can be expressed as a linear transformation of independent α-stable random variables, say X = AY, where Y_1, ..., Y_d are independent α-stable and A is a d × d matrix.

Suppose X is an α-stable random vector; then the corresponding spectral measure Γ is defined on the unit sphere with respect to the Euclidean norm in R^d. However, there are many other norms on R^d, say ||·||_*, each of which defines another unit sphere S_d^*; therefore we need a corresponding spectral measure on this new unit sphere.
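When the spectral measure Γ is concentrated on finitely many points, the integral in (2) becomes a finite sum and Φ_α can be evaluated directly. A sketch for α ≠ 1; the atoms `points` and masses `weights` below are made-up illustrative values (a symmetric measure on the unit circle in R^2):

```python
import numpy as np

def stable_cf(theta, alpha, points, weights, mu0):
    """Joint characteristic function (2), alpha != 1, for a discrete
    spectral measure: points = unit vectors s_k, weights = Gamma({s_k})."""
    ts = points @ theta                      # (theta, s_k) for each atom
    integrand = np.abs(ts) ** alpha * (
        1 - 1j * np.sign(ts) * np.tan(np.pi * alpha / 2))
    return np.exp(-np.sum(weights * integrand) + 1j * (theta @ mu0))

alpha = 1.5
# hypothetical symmetric spectral measure: equal mass on +/- each atom
points = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
weights = np.array([0.5, 0.5, 0.3, 0.3])
mu0 = np.zeros(2)

print(stable_cf(np.zeros(2), alpha, points, weights, mu0))  # -> (1+0j)
```

Because the measure is symmetric, the tan(πα/2) terms cancel pairwise and the characteristic function is real, consistent with β = 0 in the one-dimensional example above.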

Proposition 2.3. Let Γ' be the finite Borel measure on S_d equivalent to Γ with Γ'(ds) = ||s||_*^α Γ(ds), and let T : S_d → S_d^* be given by T s = s / ||s||_*. Define

    Γ^* = Γ' ∘ T^{−1}

and

    µ^{0*} = µ^0         if α ≠ 1,
             µ^0 + µ̄     if α = 1,

where

    (µ̄)_j = −(2/π) ∫_{S_d} s_j ln||s||_* Γ(ds),   j = 1, ..., d.

Then the joint characteristic function Φ_α(θ) of the α-stable random vector X in R^d is also given by (2) and (3) with (S_d, Γ, µ^0) replaced by (S_d^*, Γ^*, µ^{0*}) and (θ, s) = Σ_{j=1}^d θ_j s_j.

3 Strictly stable and symmetric stable random vectors

Theorem 3.1. X is a strictly α-stable random vector in R^d with 0 < α ≤ 2 if and only if
(a) for α ≠ 1:  µ^0 = 0;
(b) for α = 1:  ∫_{S_d} s_k Γ(ds) = 0 for k = 1, ..., d.

As a result, we have

Corollary 3.2. X is a strictly α-stable random vector in R^d with 0 < α ≤ 2 if and only if all its components X_k, k = 1, ..., d, are strictly α-stable.

Theorem 3.3. X is a symmetric α-stable random vector in R^d with 0 < α < 2 if and only if there exists a unique symmetric finite measure Γ on the sphere S_d such that

    E exp{i(θ, X)} = exp{ −∫_{S_d} |(θ, s)|^α Γ(ds) }.                                (7)

Γ is the spectral measure of the symmetric α-stable random vector X.

NOTE 1: Not every strictly 1-stable random vector in R^d with d > 1 can be made symmetric by shifting.

NOTE 2: The symmetry of an α-stable random vector cannot be regarded as a component-wise property. For example, let X_1, X_2, X_3 be i.i.d. S_1(1, 1, 0), and set Y_1 = X_1 − X_2, Y_2 = X_2 − X_3. Then Y is component-wise symmetric 1-stable, but the vector itself is not symmetric. However, if X_1, ..., X_d are jointly SαS with spectral measure Γ_d, then X_1, ..., X_n, n ≤ d, are jointly SαS with spectral measure Γ_n obtained from Γ_d by a suitable transformation.

NOTE 3: This theorem also holds in the Gaussian case α = 2, but then Γ is no longer unique.

4 Sub-Gaussian random vectors

Recall Proposition 3.1: a symmetric α-stable random variable can be constructed by multiplying a normal random variable G by an α/2-stable random variable that is totally skewed to the right and independent of G. The d-dimensional extension can be stated as follows. Choose

    A ~ S_{α/2}( (cos(πα/4))^{2/α}, 1, 0 ),   with α < 2,                             (8)

so that the Laplace transform is E e^{−γA} = exp{−γ^{α/2}}. Let G = (G_1, ..., G_d) be a zero-mean Gaussian vector in R^d independent of A. Then the random vector

    X = (A^{1/2} G_1, ..., A^{1/2} G_d)                                               (9)

has an SαS distribution, since (b, X) is SαS for all b.

Definition. Any vector X distributed as in equation (9) is called a sub-Gaussian SαS random vector with underlying Gaussian vector G. It is also said to be subordinated to G.

Proposition 4.1. The sub-Gaussian symmetric α-stable random vector X has characteristic function

    E exp{ i Σ_{k=1}^d θ_k X_k } = exp{ −| (1/2) Σ_{i=1}^d Σ_{j=1}^d θ_i θ_j R_{ij} |^{α/2} },   (10)

where R_{ij} = E G_i G_j is the covariance of the underlying Gaussian vector G.

HINT: Condition on A and use iterated expectations; combined with the Laplace transform of A, it is easy to show that the proposition is valid.

NOTE: The distributions of G and X are in one-to-one correspondence.

Example. The characteristic function of a multivariate Cauchy distribution in R^d is

    φ(θ) = exp{ −(θ^T Σ θ)^{1/2} + i(θ, µ^0) }.

That is to say, the multivariate Cauchy distribution is a shifted SαS sub-Gaussian distribution (with α = 1).

Proposition 4.2. Let X be an SαS, α < 2, random vector in R^d. Then the following three statements are equivalent:
(a) X is sub-Gaussian with an underlying Gaussian vector having i.i.d. N(0, σ²) components.
(b) The characteristic function of X has the form

    E exp{ i Σ_{k=1}^d θ_k X_k } = exp{ −( (σ²/2) Σ_{i=1}^d θ_i² )^{α/2} } = exp{ −2^{−α/2} σ^α ||θ||^α }.

In other words, the characteristic function depends only on the magnitude of θ.
(c) The spectral measure of X is uniform on S_d.
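The construction (8)-(9) can be simulated directly. One standard way to draw the positive α/2-stable subordinator A with Laplace transform E e^{−γA} = exp{−γ^{α/2}} (i.e. with the normalization in (8)) is Kanter's method, which is not discussed in these notes but matches this normalization; the dimension, covariance, and sample size below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def positive_stable(a, size, rng):
    """Kanter's method: S > 0 with E exp(-g*S) = exp(-g**a), 0 < a < 1."""
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    z = (np.sin(a * u) ** a * np.sin((1 - a) * u) ** (1 - a)
         / np.sin(u)) ** (1 / (1 - a))
    return (z / e) ** ((1 - a) / a)

alpha, d, n = 1.5, 2, 200_000
A = positive_stable(alpha / 2, n, rng)                  # subordinator, eq. (8)
G = rng.multivariate_normal(np.zeros(d), np.eye(d), n)  # underlying Gaussian
X = np.sqrt(A)[:, None] * G                             # sub-Gaussian SaS, eq. (9)

# sanity check: Laplace transform of A at gamma = 1 should be exp(-1)
assert abs(np.exp(-A).mean() - np.exp(-1.0)) < 0.01
```

The Monte Carlo check at the end verifies the normalization claimed after (8) up to sampling error.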

Generally speaking, the components of the underlying Gaussian vector G are not i.i.d., but, being Gaussian, they can always be expressed as linear combinations of i.i.d. N(0, 1) random variables. Therefore:

Proposition 4.3. Let Z be an SαS sub-Gaussian random vector in R^d with underlying Gaussian vector having i.i.d. N(0, 1) components. Then for any SαS sub-Gaussian random vector X in R^d there is a lower-triangular d × d matrix Λ such that X =_d ΛZ. The matrix Λ has full rank if the components of X are linearly independent.

Proposition 4.4. The spectral measure Γ of a sub-Gaussian SαS random vector in R^d has the form Γ = Γ_0 ∘ h^{−1}, i.e. the image of the uniform measure Γ_0 on S_d under a particular mapping h from S_d onto itself.

NOTE: Not all symmetric α-stable random vectors are sub-Gaussian. Moreover, the components of a sub-Gaussian SαS random vector are strongly dependent.

5 Covariation

The covariance function is extremely powerful in studying Gaussian random vectors; however, it does not exist for α-stable random variables when α < 2. Therefore a less powerful (but still useful) tool called the covariation is defined for 1 < α < 2, and its details are discussed here.

Definition. Let a and p be real numbers with p ≥ 0. The signed power a^{<p>} equals

    a^{<p>} = |a|^p sign(a).                                                          (11)

Definition. Let X_1 and X_2 be jointly SαS with α > 1, and let Γ be the spectral measure of the random vector (X_1, X_2). The covariation of X_1 on X_2 is the real number

    [X_1, X_2]_α = ∫_{S_2} s_1 s_2^{<α−1>} Γ(ds).                                     (12)

NOTE 1: The covariation is not symmetric in its arguments.

NOTE 2: When α = 2, this definition leads to [X_1, X_2]_2 = (1/2) Cov(X_1, X_2).

More intuitively, let (X_1, X_2) be jointly SαS with 1 < α < 2; then Y = θ_1 X_1 + θ_2 X_2 is also SαS. Let σ(θ_1, θ_2) be the scale parameter of the random variable Y.

Definition. The covariation can equivalently be defined as

    [X_1, X_2]_α = (1/α) ∂σ^α(θ_1, θ_2)/∂θ_1 |_{θ_1 = 0, θ_2 = 1}.                    (13)

Proof. From the previous examples we know that

    σ^α(θ_1, θ_2) = ∫_{S_2} |θ_1 s_1 + θ_2 s_2|^α Γ(ds).
Take the derivative; the rest is easy to show.

Example. Let G be a mean-zero Gaussian vector with covariance matrix R. For fixed 1 < α < 2, let A ~ S_{α/2}( (cos(πα/4))^{2/α}, 1, 0 ) be independent of G. Then X = (A^{1/2} G_1, ..., A^{1/2} G_d) is sub-Gaussian. Notice that the scale parameter σ(θ_i, θ_j) of the random variable Y = θ_i X_i + θ_j X_j satisfies

    σ^α(θ_i, θ_j) = 2^{−α/2} (θ_i² R_{ii} + 2θ_i θ_j R_{ij} + θ_j² R_{jj})^{α/2}.

Hence the covariation is

    [X_i, X_j]_α = 2^{−α/2} R_{ij} R_{jj}^{α/2 − 1}.                                  (14)
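Formula (14) follows from (13) by differentiating the sub-Gaussian scale function above. A quick numerical check of (13) against the closed form (14), with an arbitrary illustrative covariance matrix R:

```python
import numpy as np

alpha = 1.7
R = np.array([[2.0, 0.6],
              [0.6, 1.5]])   # covariance of the underlying Gaussian (illustrative)

def sigma_alpha(t1, t2):
    """sigma^alpha of t1*X1 + t2*X2 for a sub-Gaussian SaS vector."""
    q = t1 * t1 * R[0, 0] + 2 * t1 * t2 * R[0, 1] + t2 * t2 * R[1, 1]
    return 2 ** (-alpha / 2) * q ** (alpha / 2)

# covariation via (13): (1/alpha) * d/dt1 sigma^alpha(t1, 1) at t1 = 0
h = 1e-6
numeric = (sigma_alpha(h, 1.0) - sigma_alpha(-h, 1.0)) / (2 * h) / alpha
# closed form (14)
closed = 2 ** (-alpha / 2) * R[0, 1] * R[1, 1] ** (alpha / 2 - 1)
assert abs(numeric - closed) < 1e-6
```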

NOTE: [X_i, X_j]_α = [X_j, X_i]_α if R_{ii} = R_{jj}; and [X_i, X_j]_α = 0 if R_{ij} = 0.

Lemma 5.1. Let X be an SαS random vector in R^n with α > 1 and spectral measure Γ_X, and let Y = (a, X) and Z = (b, X). Then

    [Y, Z]_α = ∫_{S_n} (a, s) (b, s)^{<α−1>} Γ_X(ds).

Corollary 5.2. Let X be an SαS random vector with α > 1 and spectral measure Γ_X. Then

    [X_1, X_2]_α = ∫_{S_n} s_1 s_2^{<α−1>} Γ_X(ds),

and

    [X_1, X_1]_α = ∫_{S_n} |s_1|^α Γ_X(ds) = σ_{X_1}^α,

where σ_{X_1} is the scale parameter of the SαS random variable X_1.

Proposition 5.3 (Additivity in the first argument). Suppose (X_1, X_2, Y) are jointly SαS. Then

    [X_1 + X_2, Y]_α = [X_1, Y]_α + [X_2, Y]_α.

Proposition 5.4 (Scaling). Suppose (X, Y) are jointly SαS and a, b are real numbers. Then

    [aX, bY]_α = a b^{<α−1>} [X, Y]_α.

NOTE 1: Although the covariation is linear in its first argument, it is in general not linear in its second argument.

NOTE 2: Generally speaking, the covariation is not symmetric in its arguments.

Proposition 5.5. If X and Y are jointly SαS and independent, then [X, Y]_α = 0.

HINT: Notice that X being independent of Y implies that the spectral measure Γ is discrete and concentrated on the intersection of the axes with the sphere.

NOTE: When 1 < α < 2, it is possible to have [X, Y]_α = 0 while X and Y are dependent. For example, take a sub-Gaussian random vector X with non-degenerate G_1 independent of G_2.

Proposition 5.6. Let (X, Y_1, Y_2) be jointly SαS, α > 1, with Y_1 and Y_2 independent. Then

    [X, Y_1 + Y_2]_α = [X, Y_1]_α + [X, Y_2]_α.

Lemma 5.7. Let (X, Y) be jointly SαS with α > 1. Then for all 1 < p < α,

    E X Y^{<p−1>} / E|Y|^p = [X, Y]_α / ||Y||_α^α,

where ||Y||_α denotes the scale parameter of the random variable Y.

NOTE: ||Y||_α is also equal to [Y, Y]_α^{1/α}; hence it is called the covariation norm of Y ∈ S_α, where S_α is the linear space of jointly SαS random variables. The norm is well defined when α > 1.

Proposition 5.8. ||·||_α is a norm on S_α.
Convergence in ||·||_α is equivalent to convergence in probability, and it is also equivalent to convergence in L^p for all p < α.

NOTE: When 0 < α ≤ 1 the norm does not exist, but if we keep using the notation ||X||_α = σ_X, the part of this proposition about convergence still holds.

Proposition 5.9. Let (X, Y) be jointly SαS, 1 < α < 2. Then

    |[X, Y]_α| ≤ ||X||_α ||Y||_α^{α−1}.

HINT: Hölder inequality.
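In the sub-Gaussian case Proposition 5.9 can be verified in closed form: with [X, Y]_α = 2^{−α/2} R_{XY} R_{YY}^{α/2−1} (formula (14)) and ||X||_α = [X, X]_α^{1/α} = 2^{−1/2} R_{XX}^{1/2}, the inequality reduces to |R_{XY}| ≤ (R_{XX} R_{YY})^{1/2}, i.e. Cauchy-Schwarz for the underlying Gaussian covariance. A sketch over randomly generated covariance matrices (the sampling scheme is just for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.5

for _ in range(100):
    # random 2x2 covariance matrix M @ M.T (illustrative)
    M = rng.normal(size=(2, 2))
    R = M @ M.T
    cov_xy = 2 ** (-alpha / 2) * R[0, 1] * R[1, 1] ** (alpha / 2 - 1)
    norm_x = 2 ** (-0.5) * R[0, 0] ** 0.5
    norm_y = 2 ** (-0.5) * R[1, 1] ** 0.5
    # Proposition 5.9 for sub-Gaussian X, Y
    assert abs(cov_xy) <= norm_x * norm_y ** (alpha - 1) + 1e-12
```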

6 James orthogonality

Recall the Gaussian case and consider all mean-zero normal random variables. Their collection L²_0(Ω, F, P) is a Hilbert space with inner product (X, Y) = Cov(X, Y) = E(XY). Two such random variables are independent if and only if they are orthogonal. In the stable case, although the norm ||·||_α is well defined on S_α, the covariation is not an inner product, so the notion of orthogonality is not immediately available. One alternative is to introduce James orthogonality (James, 1947).

Definition. Let E be a normed vector space. A vector x ∈ E is said to be James orthogonal to a vector y ∈ E, written x ⊥_J y, if for any real λ,

    ||x + λy|| ≥ ||x||.

NOTE 1: ⊥_J is not symmetric.

NOTE 2: If E is a Hilbert space, then ⊥_J reduces to ordinary orthogonality.

Proposition 6.1. If X and Y are jointly SαS with α > 1, then [X, Y]_α = 0 if and only if Y ⊥_J X.

Proposition 6.2. Let 1 < α < 2 and let S_α be a linear space of jointly SαS random variables with dim S_α ≥ 3. Then the following statements are equivalent:
(a) S_α has the property: X, Y ∈ S_α, [X, Y]_α = 0 ⟹ [Y, X]_α = 0.
(b) S_α has the property: X, Y, Z ∈ S_α, [X, Y]_α = 0 and [X, Z]_α = 0 ⟹ [X, Y + Z]_α = 0.
(c) There is an inner product (·, ·) on S_α such that ||X||_α = (X, X)^{1/2} for all X ∈ S_α.
(d) S_α consists of jointly sub-Gaussian SαS random variables.

NOTE: When dim S_α = 2, (b) is in general not equivalent to (a), (c), or (d).

Proposition 6.3. Let 1 < α < 2 and let S_α be a linear space of jointly SαS random variables. Then the following statements are equivalent:
(a) S_α has the property: X, Y ∈ S_α, ||X||_α = ||Y||_α ⟹ [X, Y]_α = [Y, X]_α.
(b) S_α consists of jointly sub-Gaussian SαS random variables.

Proposition 6.4. Let 1 < α ≤ 2 and let S_α be a linear space of jointly SαS random variables with dim S_α ≥ 2. Then the following statements are equivalent:
(a) S_α has the property: X, Y ∈ S_α, [X, Y]_α = 0 ⟹ X and Y are independent.
(b) α = 2, i.e. S_α consists of mean-zero Gaussian random variables.

7 Codifference

Definition. The codifference of two jointly SαS, 0 < α ≤ 2, random variables X and Y equals

    τ_{X,Y} = ||X||_α^α + ||Y||_α^α − ||X − Y||_α^α.                                  (15)

NOTE 1: τ is symmetric in its arguments.

NOTE 2: When α = 2, τ reduces to the regular covariance function.

Proposition 7.1. If X and Y are independent, then τ_{X,Y} = 0. Conversely, if τ_{X,Y} = 0 and 0 < α < 1, then X and Y are independent.

HINT: If X and Y are independent, then s_1 s_2 = 0 Γ-almost everywhere. Conversely, notice that for 0 < α < 1 we have the inequality |s_1 − s_2|^α ≤ |s_1|^α + |s_2|^α, with equality only when s_1 s_2 = 0.
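Using the spectral representation, the scale parameters in (15) can be computed explicitly when Γ is discrete: ||θ_1 X + θ_2 Y||_α^α = Σ_k γ_k |θ_1 s_{k,1} + θ_2 s_{k,2}|^α. For independent X and Y the atoms sit on the coordinate axes and τ_{X,Y} = 0; an off-axis atom makes τ nonzero. The atoms and weights below are hypothetical values illustrating both cases:

```python
import numpy as np

def scale_alpha(b, alpha, points, weights):
    """||b1*X + b2*Y||_alpha^alpha from a discrete spectral measure."""
    return np.sum(weights * np.abs(points @ b) ** alpha)

def codifference(alpha, points, weights):
    return (scale_alpha(np.array([1.0, 0.0]), alpha, points, weights)
            + scale_alpha(np.array([0.0, 1.0]), alpha, points, weights)
            - scale_alpha(np.array([1.0, -1.0]), alpha, points, weights))

alpha = 0.8
# independent case: mass only on the coordinate axes -> tau = 0
axes = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
w_axes = np.array([0.4, 0.4, 0.7, 0.7])
assert abs(codifference(alpha, axes, w_axes)) < 1e-12

# dependent case: off-axis atoms -> tau > 0 (here alpha < 1)
diag = np.array([[2 ** -0.5, 2 ** -0.5], [-(2 ** -0.5), -(2 ** -0.5)]])
assert codifference(alpha, diag, np.array([0.5, 0.5])) > 0
```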


Introduction to Geometry Introduction to Geometry it is a draft of lecture notes of H.M. Khudaverdian. Manchester, 18 May 211 Contents 1 Euclidean space 3 1.1 Vector space............................ 3 1.2 Basic example of n-dimensional

More information

Statistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation

Statistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation Statistics 62: L p spaces, metrics on spaces of probabilites, and connections to estimation Moulinath Banerjee December 6, 2006 L p spaces and Hilbert spaces We first formally define L p spaces. Consider

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

Notes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed

Notes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed 18.466 Notes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed 1. MLEs in exponential families Let f(x,θ) for x X and θ Θ be a likelihood function, that is, for present purposes,

More information

Multiple Random Variables

Multiple Random Variables Multiple Random Variables Joint Probability Density Let X and Y be two random variables. Their joint distribution function is F ( XY x, y) P X x Y y. F XY ( ) 1, < x

More information

Gaussian vectors and central limit theorem

Gaussian vectors and central limit theorem Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables

More information

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche Lecture Notes for Inf-Mat 3350/4350, 2007 Tom Lyche August 5, 2007 2 Contents Preface vii I A Review of Linear Algebra 1 1 Introduction 3 1.1 Notation............................... 3 2 Vectors 5 2.1 Vector

More information

S chauder Theory. x 2. = log( x 1 + x 2 ) + 1 ( x 1 + x 2 ) 2. ( 5) x 1 + x 2 x 1 + x 2. 2 = 2 x 1. x 1 x 2. 1 x 1.

S chauder Theory. x 2. = log( x 1 + x 2 ) + 1 ( x 1 + x 2 ) 2. ( 5) x 1 + x 2 x 1 + x 2. 2 = 2 x 1. x 1 x 2. 1 x 1. Sep. 1 9 Intuitively, the solution u to the Poisson equation S chauder Theory u = f 1 should have better regularity than the right hand side f. In particular one expects u to be twice more differentiable

More information

5.1 Consistency of least squares estimates. We begin with a few consistency results that stand on their own and do not depend on normality.

5.1 Consistency of least squares estimates. We begin with a few consistency results that stand on their own and do not depend on normality. 88 Chapter 5 Distribution Theory In this chapter, we summarize the distributions related to the normal distribution that occur in linear models. Before turning to this general problem that assumes normal

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

Math 302 Outcome Statements Winter 2013

Math 302 Outcome Statements Winter 2013 Math 302 Outcome Statements Winter 2013 1 Rectangular Space Coordinates; Vectors in the Three-Dimensional Space (a) Cartesian coordinates of a point (b) sphere (c) symmetry about a point, a line, and a

More information

MAT 2037 LINEAR ALGEBRA I web:

MAT 2037 LINEAR ALGEBRA I web: MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear

More information

arxiv: v1 [math.na] 9 Feb 2013

arxiv: v1 [math.na] 9 Feb 2013 STRENGTHENED CAUCHY-SCHWARZ AND HÖLDER INEQUALITIES arxiv:1302.2254v1 [math.na] 9 Feb 2013 J. M. ALDAZ Abstract. We present some identities related to the Cauchy-Schwarz inequality in complex inner product

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

MATH FINAL EXAM REVIEW HINTS

MATH FINAL EXAM REVIEW HINTS MATH 109 - FINAL EXAM REVIEW HINTS Answer: Answer: 1. Cardinality (1) Let a < b be two real numbers and define f : (0, 1) (a, b) by f(t) = (1 t)a + tb. (a) Prove that f is a bijection. (b) Prove that any

More information

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence)

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) David Glickenstein December 7, 2015 1 Inner product spaces In this chapter, we will only consider the elds R and C. De nition 1 Let V be a vector

More information

Regularity for Poisson Equation

Regularity for Poisson Equation Regularity for Poisson Equation OcMountain Daylight Time. 4, 20 Intuitively, the solution u to the Poisson equation u= f () should have better regularity than the right hand side f. In particular one expects

More information

Best approximations in normed vector spaces

Best approximations in normed vector spaces Best approximations in normed vector spaces Mike de Vries 5699703 a thesis submitted to the Department of Mathematics at Utrecht University in partial fulfillment of the requirements for the degree of

More information

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7 Linear Algebra and its Applications-Lab 1 1) Use Gaussian elimination to solve the following systems x 1 + x 2 2x 3 + 4x 4 = 5 1.1) 2x 1 + 2x 2 3x 3 + x 4 = 3 3x 1 + 3x 2 4x 3 2x 4 = 1 x + y + 2z = 4 1.4)

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

A Probability Review

A Probability Review A Probability Review Outline: A probability review Shorthand notation: RV stands for random variable EE 527, Detection and Estimation Theory, # 0b 1 A Probability Review Reading: Go over handouts 2 5 in

More information

where r n = dn+1 x(t)

where r n = dn+1 x(t) Random Variables Overview Probability Random variables Transforms of pdfs Moments and cumulants Useful distributions Random vectors Linear transformations of random vectors The multivariate normal distribution

More information

ESTIMATION THEORY. Chapter Estimation of Random Variables

ESTIMATION THEORY. Chapter Estimation of Random Variables Chapter ESTIMATION THEORY. Estimation of Random Variables Suppose X,Y,Y 2,...,Y n are random variables defined on the same probability space (Ω, S,P). We consider Y,...,Y n to be the observed random variables

More information

Math 321: Linear Algebra

Math 321: Linear Algebra Math 32: Linear Algebra T. Kapitula Department of Mathematics and Statistics University of New Mexico September 8, 24 Textbook: Linear Algebra,by J. Hefferon E-mail: kapitula@math.unm.edu Prof. Kapitula,

More information

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1 Random Walks and Brownian Motion Tel Aviv University Spring 011 Lecture date: May 0, 011 Lecture 9 Instructor: Ron Peled Scribe: Jonathan Hermon In today s lecture we present the Brownian motion (BM).

More information

Lecture 5. Ch. 5, Norms for vectors and matrices. Norms for vectors and matrices Why?

Lecture 5. Ch. 5, Norms for vectors and matrices. Norms for vectors and matrices Why? KTH ROYAL INSTITUTE OF TECHNOLOGY Norms for vectors and matrices Why? Lecture 5 Ch. 5, Norms for vectors and matrices Emil Björnson/Magnus Jansson/Mats Bengtsson April 27, 2016 Problem: Measure size of

More information

Linear Ordinary Differential Equations

Linear Ordinary Differential Equations MTH.B402; Sect. 1 20180703) 2 Linear Ordinary Differential Equations Preliminaries: Matrix Norms. Denote by M n R) the set of n n matrix with real components, which can be identified the vector space R

More information

arxiv: v1 [math.pr] 22 May 2008

arxiv: v1 [math.pr] 22 May 2008 THE LEAST SINGULAR VALUE OF A RANDOM SQUARE MATRIX IS O(n 1/2 ) arxiv:0805.3407v1 [math.pr] 22 May 2008 MARK RUDELSON AND ROMAN VERSHYNIN Abstract. Let A be a matrix whose entries are real i.i.d. centered

More information

Lecture 1: August 28

Lecture 1: August 28 36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 1: August 28 Our broad goal for the first few lectures is to try to understand the behaviour of sums of independent random

More information

FE 5204 Stochastic Differential Equations

FE 5204 Stochastic Differential Equations Instructor: Jim Zhu e-mail:zhu@wmich.edu http://homepages.wmich.edu/ zhu/ January 20, 2009 Preliminaries for dealing with continuous random processes. Brownian motions. Our main reference for this lecture

More information

Part 1a: Inner product, Orthogonality, Vector/Matrix norm

Part 1a: Inner product, Orthogonality, Vector/Matrix norm Part 1a: Inner product, Orthogonality, Vector/Matrix norm September 19, 2018 Numerical Linear Algebra Part 1a September 19, 2018 1 / 16 1. Inner product on a linear space V over the number field F A map,

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

1 Invariant subspaces

1 Invariant subspaces MATH 2040 Linear Algebra II Lecture Notes by Martin Li Lecture 8 Eigenvalues, eigenvectors and invariant subspaces 1 In previous lectures we have studied linear maps T : V W from a vector space V to another

More information

ERRATA: Probabilistic Techniques in Analysis

ERRATA: Probabilistic Techniques in Analysis ERRATA: Probabilistic Techniques in Analysis ERRATA 1 Updated April 25, 26 Page 3, line 13. A 1,..., A n are independent if P(A i1 A ij ) = P(A 1 ) P(A ij ) for every subset {i 1,..., i j } of {1,...,

More information

Moreover this binary operation satisfies the following properties

Moreover this binary operation satisfies the following properties Contents 1 Algebraic structures 1 1.1 Group........................................... 1 1.1.1 Definitions and examples............................. 1 1.1.2 Subgroup.....................................

More information

01 Probability Theory and Statistics Review

01 Probability Theory and Statistics Review NAVARCH/EECS 568, ROB 530 - Winter 2018 01 Probability Theory and Statistics Review Maani Ghaffari January 08, 2018 Last Time: Bayes Filters Given: Stream of observations z 1:t and action data u 1:t Sensor/measurement

More information

Recall the convention that, for us, all vectors are column vectors.

Recall the convention that, for us, all vectors are column vectors. Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists

More information

MATH 167: APPLIED LINEAR ALGEBRA Chapter 3

MATH 167: APPLIED LINEAR ALGEBRA Chapter 3 MATH 167: APPLIED LINEAR ALGEBRA Chapter 3 Jesús De Loera, UC Davis February 18, 2012 Orthogonal Vectors and Subspaces (3.1). In real life vector spaces come with additional METRIC properties!! We have

More information

Math 61CM - Solutions to homework 2

Math 61CM - Solutions to homework 2 Math 61CM - Solutions to homework 2 Cédric De Groote October 5 th, 2018 Problem 1: Let V be the vector space of polynomials of degree at most 5, with coefficients in a field F Let U be the subspace of

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

Convex Sets. Prof. Dan A. Simovici UMB

Convex Sets. Prof. Dan A. Simovici UMB Convex Sets Prof. Dan A. Simovici UMB 1 / 57 Outline 1 Closures, Interiors, Borders of Sets in R n 2 Segments and Convex Sets 3 Properties of the Class of Convex Sets 4 Closure and Interior Points of Convex

More information

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM STEVEN P. LALLEY 1. GAUSSIAN PROCESSES: DEFINITIONS AND EXAMPLES Definition 1.1. A standard (one-dimensional) Wiener process (also called Brownian motion)

More information

We introduce methods that are useful in:

We introduce methods that are useful in: Instructor: Shengyu Zhang Content Derived Distributions Covariance and Correlation Conditional Expectation and Variance Revisited Transforms Sum of a Random Number of Independent Random Variables more

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

Grothendieck s Inequality

Grothendieck s Inequality Grothendieck s Inequality Leqi Zhu 1 Introduction Let A = (A ij ) R m n be an m n matrix. Then A defines a linear operator between normed spaces (R m, p ) and (R n, q ), for 1 p, q. The (p q)-norm of A

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

FFTs in Graphics and Vision. Homogenous Polynomials and Irreducible Representations

FFTs in Graphics and Vision. Homogenous Polynomials and Irreducible Representations FFTs in Graphics and Vision Homogenous Polynomials and Irreducible Representations 1 Outline The 2π Term in Assignment 1 Homogenous Polynomials Representations of Functions on the Unit-Circle Sub-Representations

More information

B 1 = {B(x, r) x = (x 1, x 2 ) H, 0 < r < x 2 }. (a) Show that B = B 1 B 2 is a basis for a topology on X.

B 1 = {B(x, r) x = (x 1, x 2 ) H, 0 < r < x 2 }. (a) Show that B = B 1 B 2 is a basis for a topology on X. Math 6342/7350: Topology and Geometry Sample Preliminary Exam Questions 1. For each of the following topological spaces X i, determine whether X i and X i X i are homeomorphic. (a) X 1 = [0, 1] (b) X 2

More information