Prof. Dr.-Ing. Armin Dekorsy, Department of Communications Engineering
Stochastic Processes and Linear Algebra Recap Slides


Stochastic processes and variables

$X(t)$: random process
$X(t_0)$: random variable
$x_i(t)$: realization of the random process
$x_i(t_0) = x$: realization of the random variable

(Figure: realizations $x_1(t), x_2(t), \ldots, x_n(t)$ plotted over $t$; sampling all realizations at $t_0$ yields the random variable $X(t_0)$.)

Classification by state ($X$) and time ($t$):
- continuous state, continuous time: continuous-state continuous-time process
- continuous state, discrete time: continuous-state discrete-time process (= sequence)
- discrete state, continuous time: discrete-state continuous-time process
- discrete state, discrete time: discrete-state discrete-time process

Continuous-state discrete-time process

Process $X(k)$; current realization of $X(k)$: $x(k)$.
- A stochastic process is said to be strict-sense stationary (SSS) if its statistics are invariant to any translation of the time axis.
- A stochastic process is said to be wide-sense stationary (WSS) if its mean is constant and its autocorrelation depends only on a time difference $\tau$.
- Here we simply refer to WSS processes as stationary.
- If expected values (averages over multiple realizations) can be calculated by time averaging of a single realization, the process is said to be ergodic. Ergodic processes are always strict-sense stationary, but not all strict-sense stationary processes are ergodic.
- We presume $X(k)$ to be ergodic, so moments can be calculated via averaging in time (see the sketch below).
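A minimal NumPy sketch of ergodic moment estimation (not from the slides; the filter taps and signal length are hypothetical). Time averages over one long realization replace ensemble averages:

```python
import numpy as np

rng = np.random.default_rng(0)

# One long realization of an ergodic stationary process:
# white Gaussian noise passed through a short FIR filter (hypothetical example).
h = np.array([1.0, 0.5, 0.25])
x = np.convolve(rng.standard_normal(100_000), h)[:100_000]

# Ergodicity: ensemble averages are replaced by time averages.
mean_est = x.mean()                               # estimate of E{X(k)}
var_est = np.mean(np.abs(x - mean_est) ** 2)      # estimate of sigma_X^2

# Time-averaged autocorrelation r_XX(kappa) = E{X*(k) X(k+kappa)}
def acs(x, kappa):
    return np.mean(np.conj(x[: len(x) - kappa]) * x[kappa:])

print(mean_est, var_est, [acs(x, k).round(3) for k in range(4)])
```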

Continuous-state discrete-time process

Probability density function:
$p_X(x) = \lim_{\Delta x \to 0} \frac{1}{\Delta x} \Pr\{x < X \le x + \Delta x\}$

Joint probability density function:
$p_{X,Y}(x, y) = \lim_{\Delta x \to 0,\, \Delta y \to 0} \frac{1}{\Delta x\, \Delta y} \Pr\{x < X \le x + \Delta x,\; y < Y \le y + \Delta y\}$

Normal distribution:
$p_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X}\, e^{-\frac{(x - \mu_X)^2}{2\sigma_X^2}}$

Moments $E\{\cdot\}$:
1st order: $E\{X\} = \int_{-\infty}^{+\infty} x\, p_X(x)\, dx = \mu_X$
2nd order: $E\{X^2\} = \int_{-\infty}^{+\infty} x^2\, p_X(x)\, dx$
Variance: $E\{(X - \mu_X)^2\} = E\{X^2\} - E\{X\}^2 = \sigma_X^2 = \int_{-\infty}^{+\infty} (x - \mu_X)^2\, p_X(x)\, dx$

Correlation series of discrete-time processes

Autocorrelation series (ACS) of a complex-valued process $X(k)$ (not necessarily stationary):
$r_{XX}(\kappa_1, \kappa_2) = E\{X^*(\kappa_1)\, X(\kappa_2)\} = E\{(X_R(\kappa_1) - jX_I(\kappa_1))\,(X_R(\kappa_2) + jX_I(\kappa_2))\}$

Stationary processes: $\kappa_1 \to k$, $\kappa_2 \to k + \kappa$;
$r_{XX}(\kappa) = E\{X^*(k)\, X(k + \kappa)\}$

Autocovariance series:
$c_{XX}(\kappa) = E\{(X(k) - \mu_X)^*\,(X(k + \kappa) - \mu_X)\} = r_{XX}(\kappa) - |\mu_X|^2$
Zero-mean process: $c_{XX}(\kappa) = r_{XX}(\kappa)$

Cross-correlation series (CCS) of two processes $X(k)$, $Y(k)$:
$r_{XY}(\kappa_1, \kappa_2) = E\{X^*(\kappa_1)\, Y(\kappa_2)\}$; stationary: $r_{XY}(\kappa) = E\{X^*(k)\, Y(k + \kappa)\}$

Correlation series of discrete-time processes

Properties of the ACS:
- $r_{XX}(\kappa) = r_{XX}^*(-\kappa)$ (conjugate even)
- real-valued processes: $r_{XX}(\kappa) = r_{XX}(-\kappa)$, i.e. an even ACF
- $\max_\kappa |r_{XX}(\kappa)| = r_{XX}(0)$
- $r_{XX}(0) = E\{X^*(k)\, X(k)\} = E\{|X(k)|^2\}$; zero mean: $r_{XX}(0) = \sigma_X^2$

Properties of the CCS:
- $r_{XY}(\kappa) = r_{YX}^*(-\kappa)$
- real-valued processes: $r_{XY}(\kappa) = r_{YX}(-\kappa)$
- cross-covariance sequence: $c_{XY}(\kappa) = r_{XY}(\kappa) - \mu_X^*\, \mu_Y$
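A small numerical check of the symmetry properties above (a sketch; the two complex sequences are hypothetical, constructed only so that $X$ and $Y$ are correlated):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical complex-valued stationary sequences for a numerical check.
n = 200_000
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
y = np.roll(x, 3) + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def corr(a, b, kappa):
    """Time-average estimate of r_AB(kappa) = E{A*(k) B(k+kappa)}."""
    if kappa >= 0:
        return np.mean(np.conj(a[: n - kappa]) * b[kappa:])
    return np.mean(np.conj(a[-kappa:]) * b[: n + kappa])

for k in range(-2, 3):
    # ACS: r_XX(kappa) = r_XX*(-kappa); CCS: r_XY(kappa) = r_YX*(-kappa)
    assert np.isclose(corr(x, x, k), np.conj(corr(x, x, -k)))
    assert np.isclose(corr(x, y, k), np.conj(corr(y, x, -k)))
print("symmetry properties verified")
```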

Random variables (RVs): covariance / uncorrelatedness / orthogonality

The covariance $C$ of two RVs $X$ and $Y$ is
$C = E\{(X - \mu_X)^*\,(Y - \mu_Y)\} = E\{X^* Y\} - E\{X\}^*\, E\{Y\}$

Uncorrelatedness: two RVs are called uncorrelated if their covariance equals zero:
$C = 0 \iff E\{X^* Y\} = E\{X\}^*\, E\{Y\}$

Orthogonality: two RVs are called orthogonal if $E\{X^* Y\} = 0$

Processes: correlatedness, orthogonality, white noise

Two WSS processes $X(k)$ and $Y(k)$ are called uncorrelated if
$c_{XY}(\kappa) = 0 \;\; \forall \kappa \iff r_{XY}(\kappa) = \mu_X^*\, \mu_Y$
Zero-mean processes: $r_{XY}(\kappa) = 0 \;\; \forall \kappa$

Two WSS processes $X(k)$ and $Y(k)$ are called (mutually) orthogonal if
$r_{XY}(\kappa) = 0 \;\; \forall \kappa$

White noise: white noise is a stationary process with $E\{X(k)\} = 0$ and $r_{XX}(\kappa) = \sigma_X^2\, \delta(\kappa)$

Power spectral density

Definition (Wiener-Khintchine theorem):
$S_{XX}(e^{j\Omega}) = \mathrm{DTFT}\{r_{XX}(\kappa)\} = \sum_{\kappa=-\infty}^{\infty} r_{XX}(\kappa)\, e^{-j\Omega\kappa}$

Because the ACF is conjugate even, the power spectral density is always real valued.

Total power of the process (zero mean):
$\mathrm{Var}\{X(k)\} = \sigma_X^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} S_{XX}(e^{j\Omega})\, d\Omega = r_{XX}(0)$

White noise: the PSD is constant (the total power remains limited because any physical system is band-limited):
$S_{XX}(e^{j\Omega}) = \sigma_X^2 \;\; \text{for} \; -\pi < \Omega \le \pi$
$r_{XX}(\kappa) = \mathrm{IDTFT}\{\sigma_X^2\} = \sigma_X^2\, \delta(\kappa)$
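A sketch of the Wiener-Khintchine relation in code (the variance and signal length are hypothetical): the DTFT sum of a time-averaged ACF estimate of white noise comes out approximately flat at $\sigma_X^2$.

```python
import numpy as np

rng = np.random.default_rng(2)

# White noise with variance sigma^2 = 2 (hypothetical parameters).
sigma2, n = 2.0, 100_000
x = np.sqrt(sigma2) * rng.standard_normal(n)

# Estimate r_XX(kappa) for small lags by time averaging ...
lags = np.arange(-20, 21)
r = np.array([np.mean(x[: n - abs(k)] * x[abs(k):]) for k in lags])

# ... and evaluate S_XX(e^{jOmega}) = sum_kappa r_XX(kappa) e^{-jOmega kappa}.
omega = np.linspace(-np.pi, np.pi, 5)
S = np.array([np.sum(r * np.exp(-1j * w * lags)) for w in omega])

print(S.real.round(2))  # approximately flat at sigma^2 = 2 for white noise
```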

ACF for bandlimited noise

(This slide consists of a figure only: the autocorrelation function of bandlimited noise.)

Influence of a linear system

Random process at the input of the system: $X(k)$; system impulse response: $h(k)$; random process at the output of the system: $Y(k)$.

System (energy) autocorrelation sequence:
$r_h^E(\kappa) = \sum_{k=-\infty}^{\infty} h^*(k)\, h(k + \kappa) = h^*(-\kappa) * h(\kappa)$

ACS at the output: $r_{YY}(\kappa) = r_{XX}(\kappa) * r_h^E(\kappa) = r_{XX}(\kappa) * h^*(-\kappa) * h(\kappa)$
CCS at the output: $r_{XY}(\kappa) = r_{XX}(\kappa) * h(\kappa)$

Power spectral density at the output: $S_{YY}(e^{j\Omega}) = S_{XX}(e^{j\Omega})\, |H(e^{j\Omega})|^2$ (phase blind)
Cross power spectral density (input-output): $S_{XY}(e^{j\Omega}) = S_{XX}(e^{j\Omega})\, H(e^{j\Omega})$

White noise at the input of the system:
$r_{YY}(\kappa) = \sigma_X^2\, \delta(\kappa) * r_h^E(\kappa) = \sigma_X^2\, r_h^E(\kappa)$, so $S_{YY}(e^{j\Omega}) = \sigma_X^2\, |H(e^{j\Omega})|^2$
$r_{XY}(\kappa) = \sigma_X^2\, \delta(\kappa) * h(\kappa) = \sigma_X^2\, h(\kappa)$, so $S_{XY}(e^{j\Omega}) = \sigma_X^2\, H(e^{j\Omega})$
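The white-noise input-output relation $r_{XY}(\kappa) = \sigma_X^2\, h(\kappa)$ is the basis of correlation-based system identification. A minimal sketch (filter taps and variance are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# White noise through a hypothetical FIR filter h; verify r_XY(kappa) = sigma^2 h(kappa).
sigma2, n = 1.5, 400_000
h = np.array([0.8, -0.4, 0.2])
x = np.sqrt(sigma2) * rng.standard_normal(n)
y = np.convolve(x, h)[:n]                   # output process Y(k)

r_xy = np.array([np.mean(x[: n - k] * y[k:]) for k in range(len(h))])
print(r_xy.round(3))                        # estimated cross-correlation r_XY(0..2)
print((sigma2 * h).round(3))                # theory: sigma^2 * h(kappa)
```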

Complex Gaussian noise

PDF of a single real-valued Gaussian random variable:
$p_n(n) = \frac{1}{\sigma_N \sqrt{2\pi}}\, e^{-\frac{n^2}{2\sigma_N^2}}$

The PDF of a complex-valued random variable $n = n' + jn''$ is given by the joint PDF of the two real-valued random variables (real and imaginary part):
$p_n(n' + jn'') \equiv p_{n',n''}(n', n'')$

If we assume that real and imaginary part are statistically independent, then
$p_n(n' + jn'') = p_{n'}(n')\, p_{n''}(n'')$

PDF of a single complex Gaussian random variable (real and imaginary part each carrying variance $\sigma_N^2/2$, so the total variance is $\sigma_N^2$):
$p_n(n) = \frac{1}{\pi\sigma_N^2}\, e^{-\frac{n'^2 + n''^2}{\sigma_N^2}} = \frac{1}{\pi\sigma_N^2}\, e^{-\frac{|n|^2}{\sigma_N^2}}$

(Figure: plots of $p_{n'}(n')$ and $p_n(n' + jn'')$ over the range $[-3, 3]$.)
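A sketch of how such noise is commonly generated in simulation (the variance value is hypothetical): splitting the total variance $\sigma_N^2$ equally between real and imaginary parts.

```python
import numpy as np

rng = np.random.default_rng(4)

# Circularly symmetric complex Gaussian noise with total variance sigma_N^2.
sigma2_N, n = 4.0, 1_000_000
noise = np.sqrt(sigma2_N / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

print(np.mean(np.abs(noise) ** 2))              # ~ sigma_N^2 = 4.0 (total variance)
print(np.var(noise.real), np.var(noise.imag))   # ~ sigma_N^2 / 2 each
```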

New nomenclature

In the following we use small letters for both a random variable and a particular realization. Random variable: $X \to x$.
- Scalar random variable: $x$
- Vector-valued random variable: $\mathbf{x}$ (column vector)
- Matrix-valued random variable: $\mathbf{X}$

Autocorrelation matrix

Vector-valued random variable: $\mathbf{x} = [x(0), x(1), \ldots, x(N-1)]^T \in \mathbb{C}^{N \times 1}$ (column vector)

Expectation: $E\{\mathbf{x}\} = [E\{x(0)\}, E\{x(1)\}, \ldots, E\{x(N-1)\}]^T$

$E\{\|\mathbf{x}\|^2\} = E\{\mathbf{x}^H \mathbf{x}\} = \sum_{i=0}^{N-1} E\{|x_i|^2\} = E\{|x(0)|^2 + |x(1)|^2 + \cdots + |x(N-1)|^2\}$

Autocorrelation matrix: $\mathbf{R}_{xx} = E\{\mathbf{x}\mathbf{x}^H\}$, an $N \times N$ matrix with entries $[\mathbf{R}_{xx}]_{i,\ell} = E\{x(i)\, x^*(\ell)\}$.

Note: $\mathbf{R}_{xx}$ is Hermitian ($\mathbf{R}_{xx} = \mathbf{R}_{xx}^H$) and positive semidefinite.
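A sketch of estimating $\mathbf{R}_{xx}$ by sample averaging over many realizations (the underlying process, a short FIR filter driven by complex white noise, is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Sample estimate of R_xx = E{x x^H} from many realizations of a length-N random vector.
N, trials = 4, 50_000
h = np.array([1.0, 0.6])
R = np.zeros((N, N), dtype=complex)
for _ in range(trials):
    w = (rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)) / np.sqrt(2)
    x = np.convolve(w, h, mode="valid")      # length-N snapshot of the process
    R += np.outer(x, np.conj(x))             # accumulate x x^H
R /= trials

print(np.allclose(R, R.conj().T, atol=1e-2))  # Hermitian
print(np.round(R.real, 2))                    # Toeplitz structure for a stationary process
```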

Convolution as inner product

Digital signals and a linear time-invariant system.
Assume: $h(k)$ is a causal FIR filter of order $m$, i.e. an impulse response of length $m + 1$; the input sequence $x(k)$ is infinite in time, $-\infty < k < \infty$.

Define:
$\mathbf{h} = [h(0), h(1), \ldots, h(m)]^T \in \mathbb{C}^{m+1}$
$\mathbf{x}(k) = [x(k), x(k-1), \ldots, x(k-m)]^T \in \mathbb{C}^{m+1}$ (past values of $x(k)$; non-causal input vector)

$y(k) = x(k) * h(k) = \sum_{\upsilon=0}^{m} h(\upsilon)\, x(k - \upsilon)$

Convolution as inner product

Output signal $y(k)$ of the filter as an inner product:
$y(k) = \mathbf{h}^T \mathbf{x}(k) = \mathbf{x}^T(k)\, \mathbf{h}$

Assume: $X(k)$ is a stationary discrete-time process, $\mathbf{x}(k)$ is a vector of random variables, and $y(k)$ is a scalar random variable.

Power of the output signal:
$E\{|y(k)|^2\} = E\{y(k)\, y^*(k)\} = E\{\mathbf{h}^T \mathbf{x}(k)\, \mathbf{x}^H(k)\, \mathbf{h}^*\} = \mathbf{h}^T\, E\{\mathbf{x}(k)\, \mathbf{x}^H(k)\}\, \mathbf{h}^* = \mathbf{h}^T \mathbf{R}_{xx}\, \mathbf{h}^*$
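A numerical sanity check of the quadratic-form power expression (hypothetical filter taps; the input is white with unit variance, so $\mathbf{R}_{xx} = \mathbf{I}$ and the formula reduces to $\|\mathbf{h}\|^2$):

```python
import numpy as np

rng = np.random.default_rng(6)

# Check E{|y(k)|^2} = h^T R_xx h* for a hypothetical stationary input.
m, n = 2, 500_000
h = np.array([0.9, -0.3, 0.1 + 0.2j])
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)  # white, unit variance

y = np.convolve(x, h)[:n]
print(np.mean(np.abs(y) ** 2))            # simulated output power

# For white unit-variance input, R_xx = I, so h^T R_xx h* = ||h||^2.
R = np.eye(m + 1)
print((h @ R @ h.conj()).real)            # theoretical output power
```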

Convolution as matrix multiplication

Causal input: $\mathbf{x} = [x(0), x(1), \ldots, x(L-1)]^T$
Finite impulse response: $\mathbf{h} = [h(0), h(1), \ldots, h(m)]^T$

Full equation system: $\mathbf{y} = \mathbf{H}\mathbf{x}$, where $\mathbf{H}$ is the $(L+m) \times L$ convolution matrix built from $\mathbf{h}$ (see the example on the next slide).

Convolution as matrix multiplication

Example of convolution as matrix multiplication with $m = 2$, $L = 4$:

$\mathbf{y} = \mathbf{H}\mathbf{x} = \begin{bmatrix} h(0) & 0 & 0 & 0 \\ h(1) & h(0) & 0 & 0 \\ h(2) & h(1) & h(0) & 0 \\ 0 & h(2) & h(1) & h(0) \\ 0 & 0 & h(2) & h(1) \\ 0 & 0 & 0 & h(2) \end{bmatrix} \begin{bmatrix} x(0) \\ x(1) \\ x(2) \\ x(3) \end{bmatrix}$

- transient phase (first $m$ rows): the filter fills up with input samples
- steady state (middle rows): the complete impulse response appears in each row
- decay phase (last $m$ rows)
- the matrix $\mathbf{H}$ has Toeplitz structure
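A short sketch building this convolution matrix and checking it against NumPy's full convolution (the concrete values of h and x are hypothetical):

```python
import numpy as np

# Build the (L+m) x L convolution matrix for hypothetical h and compare
# against np.convolve (full convolution).
h = np.array([1.0, 0.5, 0.25])          # m = 2
x = np.array([1.0, 2.0, -1.0, 3.0])     # L = 4
m, L = len(h) - 1, len(x)

H = np.zeros((L + m, L))
for col in range(L):
    H[col : col + m + 1, col] = h        # each column is h, shifted down (Toeplitz)

print(H @ x)
print(np.convolve(x, h))                 # identical results
```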

Convolution as matrix multiplication

The convolution matrix in general: $\mathbf{H} \in \mathbb{C}^{(L+m) \times L}$ with Toeplitz structure,
- first $m$ rows: transient phase
- middle $L - m$ rows: steady state
- last $m$ rows: decay phase

Correlation as convolution

Define the correlation of two signals of length $L$ (at least one of them deterministic) as
$r_{xy}(\kappa) = \frac{1}{L} \sum_{k} x^*(k)\, y(k + \kappa)$

Comparing with the convolution sum, this is a convolution with one signal time-reversed and conjugated:
$r_{xy}(\kappa) = \frac{1}{L}\, x^*(-\kappa) * y(\kappa)$
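A small check of the correlation-as-convolution identity in NumPy (the two short complex sequences are hypothetical; the $1/L$ normalization is omitted since it appears on both sides):

```python
import numpy as np

# Correlation computed as a convolution with the time-reversed, conjugated
# first signal (illustrating the identity; normalization omitted).
x = np.array([1.0 + 1j, 2.0, -1.0j])
y = np.array([0.5, -1.0, 2.0 + 0.5j])

corr = np.correlate(y, x, mode="full")      # sum_k x*(k) y(k + kappa)
conv = np.convolve(np.conj(x[::-1]), y)     # x*(-kappa) * y(kappa)

print(np.allclose(corr, conv))              # True
```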

Correlation as scalar product

Define the correlation of two signals (at least one of them deterministic) as a scalar product of signal vectors:
$r_{xy}(\kappa) = \frac{1}{L}\, \mathbf{x}^H \mathbf{y}(\kappa)$ with $\mathbf{x} = [x(0), \ldots, x(L-1)]^T$ and $\mathbf{y}(\kappa) = [y(\kappa), \ldots, y(\kappa + L - 1)]^T$

Note: here the input vector is defined causally, in contrast to the anti-causal (time-reversed) definition of $\mathbf{x}(k)$ used for the convolution formulation.

Singular value decomposition (SVD)

Every $m \times n$ matrix $\mathbf{A}$ of rank $r$ can be written as
$\mathbf{A} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^H$

- Singular values $\sigma_i$ of $\mathbf{A}$ = square roots of the nonzero eigenvalues of $\mathbf{A}^H\mathbf{A}$ or $\mathbf{A}\mathbf{A}^H$
- The unitary $m \times m$ matrix $\mathbf{U}$ contains the left singular vectors of $\mathbf{A}$ = eigenvectors of $\mathbf{A}\mathbf{A}^H$
- The unitary $n \times n$ matrix $\mathbf{V}$ contains the right singular vectors of $\mathbf{A}$ = eigenvectors of $\mathbf{A}^H\mathbf{A}$

Verification with the eigenvalue decomposition:
$\mathbf{A}^H\mathbf{A} = \mathbf{V}\, \boldsymbol{\Sigma}^H \boldsymbol{\Sigma}\, \mathbf{V}^H, \qquad \mathbf{A}\mathbf{A}^H = \mathbf{U}\, \boldsymbol{\Sigma} \boldsymbol{\Sigma}^H\, \mathbf{U}^H$
with $\boldsymbol{\Sigma}$ the $m \times n$ matrix of singular values.

Four fundamental subspaces:
- $\mathbf{u}_1, \ldots, \mathbf{u}_r$ span the column space of $\mathbf{A}$
- $\mathbf{u}_{r+1}, \ldots, \mathbf{u}_m$ span the left nullspace of $\mathbf{A}$
- $\mathbf{v}_1, \ldots, \mathbf{v}_r$ span the row space of $\mathbf{A}$
- $\mathbf{v}_{r+1}, \ldots, \mathbf{v}_n$ span the (right) nullspace of $\mathbf{A}$
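A sketch verifying the subspace statements numerically (the rank-deficient example matrix is hypothetical):

```python
import numpy as np

# SVD of a hypothetical rank-deficient matrix; check the fundamental subspaces.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])            # 4 x 3, rank r = 2

U, s, Vh = np.linalg.svd(A)                # A = U @ Sigma @ Vh
r = int(np.sum(s > 1e-10))
print(r)                                   # rank = 2

# v_{r+1},...,v_n span the nullspace: A v ~ 0
print(np.allclose(A @ Vh[r:].T, 0))
# u_{r+1},...,u_m span the left nullspace: A^H u ~ 0
print(np.allclose(A.T @ U[:, r:], 0))
```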

Singular value decomposition (SVD) (2)

Illustration of the fundamental subspaces: consider the linear mapping $\mathbf{x} \mapsto \mathbf{A}\mathbf{x}$ with the orthogonal decomposition $\mathbf{x} = \mathbf{x}_r + \mathbf{x}_n$, where $\mathbf{x}_r$ lies in the row space and $\mathbf{x}_n$ in the nullspace of $\mathbf{A}$. Then $\mathbf{A}\mathbf{x}_n = \mathbf{0}$, so $\mathbf{A}\mathbf{x} = \mathbf{A}\mathbf{x}_r$.

Moore-Penrose pseudoinverse

The inverse $\mathbf{A}^{-1}$ exists only for square matrices with full rank. Assume an arbitrary $m \times n$ matrix $\mathbf{A}$.

Definition: the (Moore-Penrose) pseudoinverse $\mathbf{A}^+$ follows from the SVD $\mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^H$ as
$\mathbf{A}^+ = \mathbf{V} \boldsymbol{\Sigma}^+ \mathbf{U}^H$
where $\boldsymbol{\Sigma}^+$ is obtained by inverting the nonzero singular values and transposing.

Special cases for full-rank matrices:
- full column rank ($r = n$): $\mathbf{A}^+ = (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H$ (left inverse, $\mathbf{A}^+\mathbf{A} = \mathbf{I}$)
- full row rank ($r = m$): $\mathbf{A}^+ = \mathbf{A}^H(\mathbf{A}\mathbf{A}^H)^{-1}$ (right inverse, $\mathbf{A}\mathbf{A}^+ = \mathbf{I}$)

It can be verified that $\mathbf{A}^+ = \mathbf{A}^{-1}$ if and only if $\mathbf{A}$ is square and has full rank.
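A quick numerical sketch of the full-column-rank special case (the random test matrix is hypothetical; a random tall Gaussian matrix has full column rank with probability one):

```python
import numpy as np

rng = np.random.default_rng(7)

# Pseudoinverse of a full-column-rank matrix: pinv agrees with (A^H A)^{-1} A^H.
A = rng.standard_normal((5, 3))
A_pinv = np.linalg.pinv(A)
A_left = np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(A_pinv, A_left))          # True
print(np.allclose(A_pinv @ A, np.eye(3)))   # left inverse property A+ A = I
```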

QR decomposition

Every $m \times n$ matrix $\mathbf{A}$ (with $m \ge n$) can be written as
$\mathbf{A} = \mathbf{Q}\mathbf{R}$
where $\mathbf{Q}$ is an $m \times n$ matrix with orthonormal columns and $\mathbf{R}$ is an upper triangular $n \times n$ matrix.

The columns of $\mathbf{A}$ are represented in the orthonormal basis defined by $\mathbf{Q}$. Illustration for the $m \times 2$ case:
$\mathbf{a}_1 = r_{1,1}\, \mathbf{q}_1, \qquad \mathbf{a}_2 = r_{1,2}\, \mathbf{q}_1 + r_{2,2}\, \mathbf{q}_2$
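A sketch of the reduced ("economy") QR factorization matching this convention (the tall random matrix is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(8)

# Reduced QR of a tall matrix: Q has orthonormal columns (m x n), R is n x n upper triangular.
A = rng.standard_normal((6, 3))
Q, R = np.linalg.qr(A, mode="reduced")

print(Q.shape, R.shape)                    # (6, 3) (3, 3)
print(np.allclose(Q.T @ Q, np.eye(3)))     # orthonormal columns
print(np.allclose(A, Q @ R))               # A = QR
print(np.allclose(R, np.triu(R)))          # R upper triangular
```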

Matrix inversion lemma

Matrix inversion lemma ($\mathbf{A} \in \mathbb{R}^{m \times m}$, $\mathbf{B} \in \mathbb{R}^{m \times n}$, $\mathbf{C} \in \mathbb{R}^{n \times n}$, $\mathbf{D} \in \mathbb{R}^{n \times m}$):
$(\mathbf{A} + \mathbf{B}\mathbf{C}\mathbf{D})^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{B}\,(\mathbf{C}^{-1} + \mathbf{D}\mathbf{A}^{-1}\mathbf{B})^{-1}\,\mathbf{D}\mathbf{A}^{-1}$

Inverse of a block matrix $\mathbf{E} = \begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix}$ with $\mathbf{A} \in \mathbb{R}^{m \times m}$, $\mathbf{B} \in \mathbb{R}^{m \times n}$, $\mathbf{C} \in \mathbb{R}^{n \times m}$, $\mathbf{D} \in \mathbb{R}^{n \times n}$:
$\mathbf{E}^{-1} = \begin{bmatrix} (\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1} & -(\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1}\mathbf{B}\mathbf{D}^{-1} \\ -\mathbf{D}^{-1}\mathbf{C}\,(\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1} & \mathbf{D}^{-1} + \mathbf{D}^{-1}\mathbf{C}\,(\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1}\mathbf{B}\mathbf{D}^{-1} \end{bmatrix}$

- Schur complement of $\mathbf{A}$ w.r.t. $\mathbf{E}$: $\mathbf{D} - \mathbf{C}\mathbf{A}^{-1}\mathbf{B}$
- Schur complement of $\mathbf{D}$ w.r.t. $\mathbf{E}$: $\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C}$
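A numerical sketch of the lemma (the randomly drawn matrices are hypothetical; diagonal offsets keep $\mathbf{A}$ and $\mathbf{C}$ well conditioned so all inverses exist):

```python
import numpy as np

rng = np.random.default_rng(9)

# Numerical check of the matrix inversion lemma (Woodbury identity).
m, n = 5, 2
A = rng.standard_normal((m, m)) + m * np.eye(m)   # well-conditioned
B = rng.standard_normal((m, n))
C = rng.standard_normal((n, n)) + n * np.eye(n)
D = rng.standard_normal((n, m))

lhs = np.linalg.inv(A + B @ C @ D)
Ai = np.linalg.inv(A)
rhs = Ai - Ai @ B @ np.linalg.inv(np.linalg.inv(C) + D @ Ai @ B) @ D @ Ai

print(np.allclose(lhs, rhs))   # True
```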

Wirtinger calculus

For a complex variable $z = x + jy$, the Wirtinger derivatives are
$\frac{\partial}{\partial z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - j\frac{\partial}{\partial y}\right), \qquad \frac{\partial}{\partial z^*} = \frac{1}{2}\left(\frac{\partial}{\partial x} + j\frac{\partial}{\partial y}\right)$
Since $\frac{\partial z}{\partial z^*} = 0$ and $\frac{\partial z^*}{\partial z} = 0$, $z$ and $z^*$ are treated as independent variables.

Derivative w.r.t. a vector:
- the derivative w.r.t. a column vector is a row vector
- the derivative w.r.t. a row vector is a column vector
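A minimal worked example under these conventions: $f(z) = |z|^2$ is not complex-differentiable in the ordinary sense, but its Wirtinger derivatives follow directly by treating $z$ and $z^*$ as independent variables.

```latex
% Wirtinger derivatives of f(z) = |z|^2 = z z^*,
% treating z and z^* as independent variables:
\begin{align}
  f(z) &= z\, z^* \\
  \frac{\partial f}{\partial z}   &= z^* , &
  \frac{\partial f}{\partial z^*} &= z .
\end{align}
```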