Synthesis of Gaussian and non-Gaussian stationary time series using circulant matrix embedding

1 Synthesis of Gaussian and non-Gaussian stationary time series using circulant matrix embedding

Vladas Pipiras, University of North Carolina at Chapel Hill

UNC Graduate Seminar, November 10, 2010 (joint work with H. Helgason, P. Abry)

2 A few references

Gaussian stationary multivariate series: "Fast and exact synthesis of stationary multivariate Gaussian time series using circulant embedding" (with H. Helgason and P. Abry).

Non-Gaussian stationary multivariate series: "Synthesis of multivariate stationary series with prescribed marginal distributions and covariance using circulant matrix embedding" (with H. Helgason and P. Abry).

Both available on my website.

Not the first in this department! "On the reconstruction of the covariance of stationary Gaussian processes observed through zero-memory nonlinearities" (S. Cambanis and E. Masry), IEEE Transactions on Information Theory.

3 What you need to know to follow this talk

Focus on univariate time series $\{X_n\}_{n \in \mathbb{Z}}$ which are stationary.

Autocovariance $r$:
$$r(n) = \mathrm{Cov}(X_k, X_{k+n}) = \mathrm{Cov}(X_0, X_n) = E X_0 X_n - E X_0\, E X_n.$$

Spectral density $f$:
$$f(w) = \frac{1}{2\pi}\Big( r(0) + 2 \sum_{n=1}^{\infty} r(n) \cos(nw) \Big), \quad 0 \le w \le \pi.$$

Gaussian series: any vector $(X_{k_1}, \ldots, X_{k_n})$ is Gaussian.

Non-Gaussian series: the marginal has a prescribed non-Gaussian distribution, e.g. a $\chi^2_1$ or log-normal series.

4 Three Gaussian series. Which one(s) dependent?

5 Three non-Gaussian series. Which one(s) dependent?

6 Goals

Interested in the synthesis of $X := (X_0, X_1, \ldots, X_{N-1})'$ where $\{X_n\}_{n \in \mathbb{Z}}$ is either a Gaussian stationary series with given autocovariance, or a non-Gaussian stationary series with given marginal distribution and autocovariance.

Denote the covariance matrix of $X$ by
$$\Sigma = E XX' - EX\, EX' = \begin{pmatrix}
r(0) & r(1) & r(2) & \ldots & r(N-1) \\
r(1) & r(0) & r(1) & \ldots & r(N-2) \\
r(2) & r(1) & r(0) & \ldots & r(N-3) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
r(N-1) & r(N-2) & r(N-3) & \ldots & r(0)
\end{pmatrix}.$$

Gaussian case: suppose $EX = 0$. Non-Gaussian case: $Y$ will be used instead of $X$.
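
For concreteness, a minimal Python/NumPy sketch of building this Toeplitz matrix from an autocovariance sequence; the geometric autocovariance and the size `N` are illustrative placeholders, not from the talk:

```python
import numpy as np
from scipy.linalg import toeplitz

N = 8
r = 0.5 ** np.arange(N)   # hypothetical autocovariance r(n) = 0.5**n
Sigma = toeplitz(r)       # Sigma[i, j] = r(|i - j|)
```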

7 GAUSSIAN CASE

8 Elementary approach and its limitations

Since $\Sigma$ is non-negative definite, one elementary approach (the Cholesky method) is to factorize $\Sigma = \Sigma^{1/2} (\Sigma^{1/2})'$ and to set $X = \Sigma^{1/2} \epsilon$, where $\epsilon$ is an $\mathcal{N}(0, I_N)$ vector. Such $X$ has the correct covariance structure since
$$E XX' = \Sigma^{1/2}\, E\epsilon\epsilon'\, (\Sigma^{1/2})' = \Sigma^{1/2} I_N (\Sigma^{1/2})' = \Sigma.$$

Problem: the complexity of this method is $O(N^3)$, so the approach is practical only for moderate sizes $N$ ($N \lesssim 2000$). What about larger $N$?
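
A minimal sketch of this Cholesky approach (the autocovariance and `N` are again illustrative placeholders):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
N = 1000
r = 0.5 ** np.arange(N)            # hypothetical autocovariance
Sigma = toeplitz(r)
L = np.linalg.cholesky(Sigma)      # Sigma = L @ L.T, an O(N^3) factorization
X = L @ rng.standard_normal(N)     # Gaussian vector with covariance Sigma
```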

9 Circulant matrix embedding

A circulant matrix embedding of $\Sigma$ is the circulant matrix
$$\widetilde{\Sigma} = \begin{pmatrix}
r(0) & r(1) & \ldots & r(N-1) & r(N-2) & \ldots & r(1) \\
r(1) & r(0) & \ldots & r(N-2) & r(N-1) & \ldots & r(2) \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
r(N-1) & r(N-2) & \ldots & r(0) & r(1) & \ldots & r(N-2) \\
r(N-2) & r(N-1) & \ldots & r(1) & r(0) & \ldots & r(N-3) \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
r(1) & r(2) & \ldots & r(N-2) & r(N-3) & \ldots & r(0)
\end{pmatrix} = \mathrm{circ}\big(r(0), r(1), \ldots, r(N-1), r(N-2), \ldots, r(1)\big)$$
of dimension $2M \times 2M$ with embedding size $2M = 2N - 2$. Note that $\widetilde{\Sigma}$ contains the covariance matrix $\Sigma$.

10 Why we love circulant matrices

The discrete Fourier basis diagonalizes circulant matrices:
$$\widetilde{\Sigma} = F \Lambda F^*,$$
where $\Lambda = \mathrm{diag}(\lambda(0), \ldots, \lambda(2M-1))$ and $F$ is the $2M \times 2M$ Fourier matrix whose $j$th column is
$$e_j = \frac{1}{\sqrt{2M}} \Big( 1, e^{i \frac{2\pi j}{2M}}, \ldots, e^{i \frac{2\pi j (2M-1)}{2M}} \Big)'.$$

The eigenvalues $\lambda(m)$ satisfy
$$\lambda(m) = \sum_{j=0}^{2M-1} \widetilde{r}(j)\, e^{-i \frac{2\pi j m}{2M}} = r(0) + r(M)(-1)^m + 2 \sum_{j=1}^{M-1} r(j) \cos\Big( \frac{\pi j m}{M} \Big),$$
where $\widetilde{r}$ denotes the first row of $\widetilde{\Sigma}$, and can be computed rapidly using the FFT (supposing $2M = 2^K$; complexity $O(M \log M)$).
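
A minimal sketch of this eigenvalue computation: the first row of $\widetilde{\Sigma}$ is $(r(0), \ldots, r(N-1), r(N-2), \ldots, r(1))$, and its DFT gives the $\lambda(m)$. The function name is ours:

```python
import numpy as np

def circulant_eigenvalues(r):
    """r = (r(0), ..., r(N-1)); returns lambda(0), ..., lambda(2M-1), 2M = 2N - 2."""
    first_row = np.concatenate([r, r[-2:0:-1]])   # (r(0..N-1), r(N-2), ..., r(1))
    return np.fft.fft(first_row).real             # symmetric row => real DFT
```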

11 Assumption ND

$\widetilde{\Sigma}$ has real eigenvalues $\lambda(m)$ in general. Suppose for the moment:

Assumption ND: the eigenvalues $\lambda(m)$, $m = 0, \ldots, 2M-1$, are non-negative. Equivalently, the matrix $\widetilde{\Sigma}$ is non-negative definite.

Then there is a Gaussian vector $\widetilde{X}$ such that $E \widetilde{X} \widetilde{X}' = \widetilde{\Sigma}$. Since $\widetilde{\Sigma}$ contains $\Sigma$ in its upper-left corner, the first $N$ elements $X$ of $\widetilde{X}$ will then have the desired covariance structure $\Sigma$.

12 Constructing $\widetilde{X}$ and $X$

To construct $\widetilde{X}$, it is more convenient to work with complex-valued variables. Let
$$\widetilde{V} = F \Lambda^{1/2} Z,$$
where $\Lambda^{1/2}$ exists by Assumption ND and $Z = Z_0 + i Z_1$ consists of two independent $\mathcal{N}(0, I_{2M})$ random vectors $Z_0$ and $Z_1$. Note that, for $m = 0, \ldots, 2M-1$,
$$\widetilde{V}_m = \frac{1}{\sqrt{2M}} \sum_{j=0}^{2M-1} \lambda(j)^{1/2} Z_j\, e^{i \frac{2\pi j m}{2M}},$$
so that $\widetilde{V}$ can be rapidly computed by FFT.

13 Constructing $\widetilde{X}$ and $X$ (continued)

Fact: the Gaussian vectors $\Re(\widetilde{V})$ and $\Im(\widetilde{V})$ are independent, with covariance structure
$$E\, \Re(\widetilde{V}) \Re(\widetilde{V})' = E\, \Im(\widetilde{V}) \Im(\widetilde{V})' = \widetilde{\Sigma}.$$
Thus both $\widetilde{X} = \Re(\widetilde{V})$ and $\widetilde{X} = \Im(\widetilde{V})$ have the covariance matrix $\widetilde{\Sigma}$.

Finally, the desired vector $X$ with covariance matrix $\Sigma$ is made up of the first $N$ elements of $\widetilde{X}$:
$$X = \text{first } N \text{ entries of } \widetilde{X}.$$
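
Putting the last few slides together, a minimal sketch of the full sampler; the function name and the round-off guard are ours, and it returns the two independent draws $\Re(\widetilde{V})$ and $\Im(\widetilde{V})$ truncated to length $N$:

```python
import numpy as np

def circulant_embedding_sample(r, rng):
    """Sample X = (X_0, ..., X_{N-1}) with autocovariance r, assuming Assumption ND."""
    N = len(r)
    lam = np.fft.fft(np.concatenate([r, r[-2:0:-1]])).real   # circulant eigenvalues
    if lam.min() < -1e-10 * lam.max():
        raise ValueError("Assumption ND fails: negative eigenvalue in embedding")
    lam = np.maximum(lam, 0.0)                    # guard tiny negative round-off
    M2 = 2 * N - 2                                # embedding size 2M = 2N - 2
    Z = rng.standard_normal(M2) + 1j * rng.standard_normal(M2)
    V = np.fft.fft(np.sqrt(lam) * Z) / np.sqrt(M2)   # V = F Lambda^{1/2} Z via FFT
    return V.real[:N], V.imag[:N]                 # two independent N(0, Sigma) draws

rng = np.random.default_rng(1)
r = 0.5 ** np.arange(1024)                        # hypothetical autocovariance
X1, X2 = circulant_embedding_sample(r, rng)
```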

14 Assumption ND is expected to hold for large N

Fact: if $\{X_n\}_{n \in \mathbb{Z}}$ is a short-range dependent series with (strictly) positive spectral density $f(\lambda)$, then Assumption ND holds for large enough $N$.

Basic idea: for large $M$ (or $N$),
$$\lambda(m) = r(0) + r(M)(-1)^m + 2 \sum_{j=1}^{M-1} r(j) \cos\Big( \frac{\pi j m}{M} \Big) \approx r(0) + 2 \sum_{j=1}^{\infty} r(j) \cos\Big( j \frac{\pi m}{M} \Big) = 2\pi f\Big( \frac{\pi m}{M} \Big) > 0.$$

Open problem: prove an analogous result for long-range dependent series.

15 Known conditions for Assumption ND to hold for any N

Facts:
- Suppose that the sequence $r(0), \ldots, r(M)$ is convex, decreasing and non-negative. Then Assumption ND holds.
- Suppose $r(k) \le 0$, $k = 1, \ldots, M$. Then Assumption ND holds.

E.g. FARIMA(0, d, 0) series, fractional Gaussian noise and other models satisfy one of these conditions.

Main reference for the circulant embedding method: "Fast and exact simulation of stationary Gaussian processes through circulant embedding of the covariance matrix" (C. Dietrich and G. Newsam), SIAM J. Sci. Comput., 1997.

16 Our contribution

Chan and Wood (1999) extended the circulant embedding method to stationary Gaussian vector fields. A 4-page paper! It is unclear why their algorithm works or how to implement it in practice.

Our contribution clarifies their algorithm in the multivariate context:
- Each component series is constructed using univariate circulant embedding.
- Cross covariance is obtained by correlating the vectors $Z$.
- An embedding of odd size is used by Chan and Wood, whereas we use even size. The difference from the univariate case is related to time-reversibility.
- We formulate analogous sufficient conditions for Assumption ND to hold, and check them on several multivariate series models.
- Our algorithm is implemented and has been tested on a number of multivariate series models.

17 NON-GAUSSIAN CASE

18 Problem statement

We wish to numerically synthesize a univariate stationary series $\{Y_n\}_{n \in \mathbb{Z}}$ targeting an a priori given
- marginal distribution, $F(y) = P(Y_n \le y)$,
- autocovariance, $r_Y(n) = E Y_0 Y_n - E Y_0\, E Y_n$.

The procedure should also be practical and computationally fast.

Remarks: the problem is not necessarily well-posed, and $Y$ may not be unique, since higher-order quantities are not targeted.

19 Focus on

Constructions based on non-linear memoryless transforms of a Gaussian stationary series:
$$Y_n = f(X_n),$$
where
- $X_n$ is a stationary Gaussian series with autocovariance $r_X(n)$,
- $X_n \stackrel{d}{=} \mathcal{N}(0, 1)$,
- $f: \mathbb{R} \to \mathbb{R}$ is a deterministic function.

Matching the marginal: there are multiple ways to reach the desired marginal $F$. A standard transformation:
$$f(x) = F^{-1}(\Phi(x)), \quad \Phi \text{ the cdf of } \mathcal{N}(0, 1).$$
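
A minimal sketch of this standard transformation for a $\chi^2_1$ target marginal, using SciPy's `norm` and `chi2`; the clipping guard against `ppf(1) = inf` at extreme arguments is our addition:

```python
import numpy as np
from scipy.stats import chi2, norm

def f_standard(x):
    """Standard transformation f(x) = F^{-1}(Phi(x)) for a chi2_1 target marginal."""
    u = np.clip(norm.cdf(x), 1e-15, 1.0 - 1e-15)   # guard: ppf(0) and ppf(1) are infinite
    return chi2.ppf(u, df=1)

rng = np.random.default_rng(0)
Y = f_standard(rng.standard_normal(10_000))        # exact chi2_1 marginal (iid here)
```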

20 Relationship between the covariances $r_Y$ and $r_X$

Hermite polynomials: $H_0(x) = 1$, $H_1(x) = x$, $H_2(x) = x^2 - 1$, etc. For $f \in L^2(\mathbb{R}, e^{-x^2/2} dx)$ in $Y_n = f(X_n)$, expand $f$ as
$$f(x) = \sum_{m=0}^{\infty} c_m H_m(x).$$
Then, using the orthogonality property of the $H_m(x)$'s,
$$r_Y(n) = \sum_{m=1}^{\infty} c_m^2\, m!\, (r_X(n))^m,$$
or
$$\frac{r_Y(n)}{r_Y(0)} = g(r_X(n)) \quad \text{with} \quad g(z) = \sum_{m=1}^{\infty} \frac{c_m^2\, m!}{r_Y(0)}\, z^m =: \sum_{m=1}^{\infty} b_m z^m.$$
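
A minimal sketch of computing the $c_m$ (and from them the $b_m$) by Gauss-Hermite quadrature, using NumPy's probabilists' Hermite module; the function name and truncation level are our illustrative choices:

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def hermite_coeffs(f, m_max, deg=200):
    """c_m = E[f(X) H_m(X)] / m! for X ~ N(0,1), via Gauss-Hermite quadrature."""
    x, w = He.hermegauss(deg)                    # nodes/weights for weight exp(-x^2/2)
    fx = f(x)
    c = np.empty(m_max + 1)
    for m in range(m_max + 1):
        Hm = He.hermeval(x, [0.0] * m + [1.0])   # probabilists' Hermite H_m(x)
        c[m] = (w @ (fx * Hm)) / (np.sqrt(2.0 * np.pi) * factorial(m))
    return c

c = hermite_coeffs(lambda x: x**2, m_max=10)     # for f(x) = x^2: c_0 = c_2 = 1
b = np.array([c[m] ** 2 * factorial(m) for m in range(1, 11)])
b /= b.sum()   # b_m = c_m^2 m! / r_Y(0), since r_Y(0) = sum_{m>=1} c_m^2 m!
```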

21 Relationship between the covariances $r_Y$ and $r_X$ (continued)

Example: $\chi^2_1$ marginal with $f(x) = x^2$: $r_Y(n) = 2 (r_X(n))^2$, $g(z) = z^2$.

The standard transformation $f(x) = F^{-1}(\Phi(x))$ and the corresponding $g$, on the other hand, do not have an explicit form.

Invert $r_Y(n)/r_Y(0) = g(r_X(n))$ to obtain a candidate covariance $r_X$ for the targeted $r_Y$:
$$r_X(n) = g^{-1}\Big( \frac{r_Y(n)}{r_Y(0)} \Big).$$

Issues:
- Inversion when $g$ is not given explicitly.
- If, after inversion, $r_X$ defines a valid covariance structure, $X$ can be generated, for example, using the circulant embedding method. Otherwise, what approximating valid covariance $r_X$ should one choose?

22 Series reversion

Consider $g(z) = \sum_{m=1}^{\infty} b_m z^m$. If $b_1 \neq 0$, $g(z)$ has an inverse $g^{-1}$ in a neighborhood of $z = 0$ which can be expressed as
$$g^{-1}(w) = \sum_{m=1}^{\infty} d_m w^m,$$
where $\{d_m\}_{m \ge 1}$ is defined by the reversion of the sequence $\{b_m\}_{m \ge 1}$. Formally, equate the coefficients at the powers of $z$ in
$$z = g^{-1}(g(z)) = \sum_{m=1}^{\infty} d_m \Big( \sum_{k=1}^{\infty} b_k z^k \Big)^m.$$
This gives $d_1 = b_1^{-1}$, $d_2 = -b_1^{-3} b_2$, $d_3 = b_1^{-5}(2 b_2^2 - b_1 b_3)$, etc. A simple algorithm exists for computing the coefficients $d_m$ (e.g. the computational complex analysis book by Henrici (1974)); see the sketch below.
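
A minimal sketch of such a reversion algorithm, matching coefficients of $z^m$ in $z = \sum_k d_k\, g(z)^k$; this is a straightforward implementation of our own, not necessarily Henrici's method:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def revert_series(b, K):
    """Given b = [b_1, b_2, ...] with b_1 != 0, return [d_1, ..., d_K] such that
    g^{-1}(w) = sum d_m w^m reverts g(z) = sum b_m z^m (coefficients mod z^{K+1})."""
    g = np.zeros(K + 1)
    n = min(len(b), K)
    g[1:n + 1] = b[:n]                       # g[j] = coefficient of z^j
    powers = [None, g.copy()]                # powers[k] = coefficients of g(z)^k
    for k in range(2, K + 1):
        powers.append(P.polymul(powers[-1], g)[:K + 1])
    d = np.zeros(K + 1)
    d[1] = 1.0 / g[1]
    for m in range(2, K + 1):
        s = sum(d[k] * powers[k][m] for k in range(1, m))
        d[m] = -s / powers[m][m]             # powers[m][m] = b_1 ** m
    return d[1:]

# sanity check against the closed forms above: revert_series([2.0, 1.0, 0.5], 3)
# gives [1/2, -1/8, (2*1 - 2*0.5)/32] = [0.5, -0.125, 0.03125]
```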

23 Series reversion (continued)

A few issues: given the coefficients $b_m$ in $g(z)$, it is difficult to determine on what domain $g^{-1}(w) = \sum_{m=1}^{\infty} d_m w^m$ defines the inverse of $g$. In practice, plot $(z, g(z))$ for $z \in [-1, 1]$ and $(g^{-1}(w), w)$ on the same graph.

Example: $\chi^2_1$ marginal through the standard transformation $f(x) = F^{-1}(\Phi(x))$. The function $g$ does not have an explicit form.

[Figure: $g(z)$ plotted against $z$, together with the reverted series; axes labeled $z$ (or $g^{-1}(w)$) and $g(z)$ (or $w$).]

24 Approximating covariance through circulant embedding

Invert (explicitly or numerically) to obtain
$$r_X(n) = g^{-1}\Big( \frac{r_Y(n)}{r_Y(0)} \Big).$$
Again, $r_X$ does not necessarily define a valid covariance structure. Proceed with the circulant embedding method anyway. Set the negative eigenvalues to 0 by
$$\widetilde{\lambda}(m) = \begin{cases} \lambda(m), & \text{if } \lambda(m) \ge 0, \\ 0, & \text{if } \lambda(m) < 0, \end{cases}$$
and consider the embedding $\widetilde{\Sigma} = F \widetilde{\Lambda} F^*$. Proceed with the rest of the circulant embedding method to generate a Gaussian vector $\widehat{X}$ with approximating covariance $\widehat{r}_X$. Generate the approximating series $\widehat{Y}_n$ as $f(\widehat{X}_n)$, and let $\widehat{r}_Y$ be the resulting approximation to $r_Y$.
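
The clipping step in code, reusing the embedding row from the Gaussian part; a one-line sketch whose function name is ours:

```python
import numpy as np

def clipped_eigenvalues(r_X):
    """Eigenvalues of the circulant embedding of r_X, with negatives set to 0."""
    lam = np.fft.fft(np.concatenate([r_X, r_X[-2:0:-1]])).real
    return np.maximum(lam, 0.0)
```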

25 Optimality of approximation

Are the approximations $\widehat{r}_Y$, $\widehat{Y}$ optimal in any sense? Yes: a semi-formal proof shows that
$$\sum_n \| \widehat{r}_Y(n) - r_Y(n) \|^2_{W_n} \le \min \sum_n \| r_Y(n) - r_{\mathrm{app}}(n) \|^2_{W_n},$$
where $\|x\|^2_W = \mathrm{vec}(x)'\, W\, \mathrm{vec}(x)$ with a positive definite matrix $W$, the $W_n$ are suitable positive definite weight matrices, and the minimum is over all covariance structures $r_{\mathrm{app}}$ arising from the transformation of a Gaussian series.

The optimality is akin to the spectral truncation method used in this area ("Generation of a random sequence having a jointly specified marginal distribution and autocovariance", B. Liu and D. C. Munson, IEEE Transactions on Acoustics, Speech, and Signal Processing, 1982).

26 Numerical example

Want $\{Y_n\}$ with $\chi^2_1$ marginal and autocovariance $r_Y(n) = 2 \phi^{|n|}$. Negative correlations: $-1 < \phi < 0$. Consider the standard transformation $f$. Take $\phi = -0.35$ so that $r_Y(n)/r_Y(0) > -0.44 \approx g(-1)$. Consider $N = \ldots$

[Figure: target autocovariance $r_Y(n)$ and its approximation $\widehat{r}_Y(n)$, plotted against $n$.]
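
A minimal end-to-end sketch of this example, assuming the helpers from the earlier sketches (`hermite_coeffs`, `revert_series`, `f_standard`, `clipped_eigenvalues`) are in scope; the truncation level `K`, the sample size `N`, and the seed are our illustrative choices, and the truncated reversion is only approximate near the boundary $w = \pm 1$:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(2)
phi, K, N = -0.35, 30, 4096                  # K and N are illustrative choices
rY = 2.0 * phi ** np.arange(N)               # target: r_Y(n) = 2 * phi^|n|

# map the correlation ratio through the truncated g^{-1}
c = hermite_coeffs(f_standard, m_max=K)
b = np.array([c[m] ** 2 * factorial(m) for m in range(1, K + 1)])
b /= b.sum()                                 # b_m = c_m^2 m! / r_Y(0)
d = revert_series(b, K)
ratio = rY / rY[0]
rX = sum(d[m - 1] * ratio ** m for m in range(1, K + 1))   # r_X(0) ~ 1 up to truncation

# synthesize with clipped circulant eigenvalues, then transform the marginal
lam = clipped_eigenvalues(rX)
M2 = lam.size
Z = rng.standard_normal(M2) + 1j * rng.standard_normal(M2)
V = np.fft.fft(np.sqrt(lam) * Z) / np.sqrt(M2)
Y_hat = f_standard(V.real[:N])               # approximating chi2_1 series
```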

27 Our contribution

Key points of our contribution:
- the multivariate context,
- the use of series reversion,
- the use of circulant embedding for approximation,
- its optimality,
- various ways to match marginals,
- practical implementations.

Open problem: it seems that the standard transformation $f(x) = F^{-1}(\Phi(x))$ leads to the largest class of attainable autocovariances. Can this be proved?

28 THANK YOU!
