Advanced Digital Signal Processing - Introduction


Advanced Digital Signal Processing - Introduction, Lecture 2

AP9211 - ADVANCED DIGITAL SIGNAL PROCESSING

UNIT I: DISCRETE RANDOM SIGNAL PROCESSING
Discrete Random Processes: Ensemble Averages, Stationary Processes, Bias and Estimation, Autocovariance, Autocorrelation, Parseval's Theorem, Wiener-Khintchine Relation, White Noise, Power Spectral Density, Spectral Factorization, Filtering Random Processes, Special Types of Random Processes (ARMA, AR, MA), Yule-Walker Equations.

UNIT II: SPECTRUM ESTIMATION
Estimation of Spectra from Finite-Duration Signals. Nonparametric Methods: Periodogram, Modified Periodogram, Bartlett, Welch and Blackman-Tukey Methods. Parametric Methods: ARMA, AR and MA Model-Based Spectral Estimation, Solution Using the Levinson-Durbin Algorithm.

UNIT III: LINEAR ESTIMATION AND PREDICTION
Linear Prediction: Forward and Backward Prediction, Solution of Prony's Normal Equations, Least Mean-Squared Error Criterion, Wiener Filter for Filtering and Prediction, FIR and IIR Wiener Filters, Discrete Kalman Filter.

UNIT IV: ADAPTIVE FILTERS
FIR Adaptive Filters: Adaptive Filter Based on the Steepest Descent Method, Widrow-Hoff LMS Algorithm, Normalized LMS Algorithm, Adaptive Channel Equalization, Adaptive Echo Cancellation, Adaptive Noise Cancellation, RLS Adaptive Algorithm.

UNIT V: MULTIRATE DIGITAL SIGNAL PROCESSING
Mathematical Description of Change of Sampling Rate: Interpolation and Decimation, Decimation by an Integer Factor, Interpolation by an Integer Factor, Sampling Rate Conversion by a Rational Factor, Polyphase Filter Structures, Multistage Implementation of Multirate Systems, Application to Sub-Band Coding, Wavelet Transform.

REFERENCES:
1. Monson H. Hayes, Statistical Digital Signal Processing and Modeling, John Wiley and Sons, Inc., Singapore, 2002.
2. John G. Proakis, Dimitris G. Manolakis, Digital Signal Processing, Pearson Education, 2002.
3. Rafael C. Gonzalez, Richard E. Woods, Digital Image Processing, Second Edition, Pearson Education Inc., 2004.

The DFT and FFT
The discrete-time Fourier transform (DTFT) is a very useful tool, but its frequency variable is continuous and hence not well suited for computation. For finite length sequences there is another representation, the Discrete Fourier Transform (DFT), which is a function of a discrete frequency variable k.
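For reference, the standard N-point DFT analysis/synthesis pair consistent with this discussion is:

```latex
X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi n k / N}, \qquad k = 0, 1, \ldots, N-1,
\qquad
x(n) = \frac{1}{N} \sum_{k=0}^{N-1} X(k)\, e^{j 2\pi n k / N}.
```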

The DFT and FFT
If two finite length sequences of lengths N1 and N2 are to be linearly convolved using the DFT, then the DFTs must be of length at least N1+N2-1. The length of a DFT is normally chosen as a power of 2, so that a fast implementation of the DFT, the so-called fast Fourier transform (FFT), can be used. While direct computation of the DFT has a time complexity of N^2, the FFT has a time complexity of N log2(N).

Linear Algebra
Much of discrete-time signal processing is done with finite length sequences. Such a sequence is naturally represented by a vector containing its samples. This allows discrete-time signal processing to draw on the power of linear algebra to solve real world problems.
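As a numerical illustration of the FFT-based linear convolution described above, here is a minimal NumPy sketch (the sequence values are arbitrary):

```python
import numpy as np

# Linear convolution of two finite-length sequences via the FFT.
# To avoid circular wrap-around, the transform length must be >= N1 + N2 - 1.
x = np.array([1.0, 2.0, 3.0])            # length N1 = 3
h = np.array([1.0, -1.0])                # length N2 = 2
N = len(x) + len(h) - 1                  # minimum DFT length: N1 + N2 - 1 = 4

X = np.fft.fft(x, N)                     # zero-padded FFTs
H = np.fft.fft(h, N)
y = np.fft.ifft(X * H).real              # pointwise product <-> linear convolution

print(y)                                 # [ 1.  1.  1. -3.]
print(np.convolve(x, h))                 # direct convolution, same result
```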

Vectors
A vector is an array of real-valued or complex-valued numbers or functions. Normally we assume that vectors are column vectors; hence the transpose of a vector is a row vector. The Hermitian transpose is the complex conjugate of the transpose.

A finite length sequence x(n) that is equal to zero outside the interval [0, N-1] may be represented in vector form as x = [x(0), x(1), ..., x(N-1)]^T. It is also convenient to define the signal vector at time n as x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T.

Vectors
Often the goal will be to minimize the magnitude of an error vector, so the magnitude needs to be defined. Usually this is the L2 norm, ||x||_2 = (Σ |x_i|^2)^(1/2). Other useful norms are the L1 norm, ||x||_1 = Σ |x_i|, and the L-infinity norm, ||x||_∞ = max_i |x_i|. In this module the L2 norm will be used almost exclusively, and hence ||x|| implies the L2 norm.
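These norms map directly onto np.linalg.norm; a quick sketch (vector values are arbitrary):

```python
import numpy as np

x = np.array([3.0, -4.0])
print(np.linalg.norm(x))          # L2 norm: 5.0
print(np.linalg.norm(x, 1))       # L1 norm: 7.0
print(np.linalg.norm(x, np.inf))  # L-infinity norm: 4.0
```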

Vectors
A vector may be normalized to have unit magnitude by dividing by its norm: v_x = x/||x|| is a unit norm vector pointing in the direction of x. For a signal vector the norm squared represents its energy. The norm can also be used to measure the distance between two vectors, d(x, y) = ||x - y|| (proportional to the RMS value of the error signal).

Given two complex vectors a and b, the inner product is defined as <a, b> = a^H b; for real vectors it becomes <a, b> = a^T b. The inner product defines the geometrical relationship between two vectors through <a, b> = ||a|| ||b|| cos θ, where θ is the angle between them. Two nonzero vectors are said to be orthogonal if their inner product is zero.

Vectors - Example: Inner Product
Consider the two unit norm vectors v1 = [1, 0]^T and v2 = (1/√2)[1, 1]^T. The inner product is <v1, v2> = 1/√2, hence θ = π/4. If however v2 = [0, 1]^T, the inner product is 0 and the vectors are orthogonal to each other.

Since the cosine is bounded by ±1, it follows that |<a, b>| ≤ ||a|| ||b||, where the equal sign holds if and only if the vectors are colinear (a = αb). This is known as the Cauchy-Schwarz inequality. Another useful inequality is 2|<a, b>| ≤ ||a||^2 + ||b||^2, which follows from expanding ||a ± b||^2 ≥ 0.
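A numerical check of this example and of the Cauchy-Schwarz bound (a sketch; the second pair of vectors is arbitrary):

```python
import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0]) / np.sqrt(2)  # unit norm

ip = v1 @ v2
theta = np.arccos(ip)                   # both vectors have unit norm
print(ip, theta, np.pi / 4)             # 0.7071..., 0.7853..., 0.7853...

# Cauchy-Schwarz: |<a, b>| <= ||a|| ||b|| for any vectors
a = np.array([3.0, -1.0, 2.0])
b = np.array([0.5, 4.0, 1.0])
print(abs(a @ b) <= np.linalg.norm(a) * np.linalg.norm(b))  # True
```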

Vectors
The inner product can be used to formulate the output of an LSI filter in a concise way. Let h = [h(0), h(1), ..., h(N-1)]^T be the filter coefficient vector and x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T the signal vector. It follows that y(n) = h^T x(n), which is exactly the convolution sum.

Linear Independence, Vector Spaces, & Basis Vectors
A set of n vectors v1, v2, ..., vn is said to be linearly independent if α1 v1 + α2 v2 + ... + αn vn = 0 implies that all the αi are zero. If there is a set of αi, not all zero, for which this sum is zero, the set is called linearly dependent. For a set of linearly dependent vectors there therefore exists a nonzero set of βi such that one vector can be written in terms of the others, e.g. v1 = β2 v2 + ... + βn vn. For vectors of dimension N, no more than N vectors can be linearly independent.

Linear Independence - Example
Consider the pair of vectors v1 = [1, 2]^T and v2 = [0, 1]^T. If we assume that there are α1, α2 such that α1 v1 + α2 v2 = 0, then it follows that [α1, 2α1 + α2]^T = [0, 0]^T, which means that α1 = 0 and α2 = 0. Hence the vectors are linearly independent.

However, adding the additional vector v3 = [1, 3]^T results in a linearly dependent set, since v1 + v2 - v3 = 0.
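The same conclusion can be reached numerically by comparing the matrix rank with the number of vectors (a sketch using the vectors above):

```python
import numpy as np

v1 = np.array([1.0, 2.0])
v2 = np.array([0.0, 1.0])
v3 = v1 + v2                            # deliberately dependent

V = np.column_stack([v1, v2])
print(np.linalg.matrix_rank(V))         # 2 -> linearly independent

W = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(W))         # 2 < 3 -> linearly dependent
```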

Linear Independence, Vector Spaces, and Basis Vectors
Given a set of N vectors, consider the set of all vectors that may be formed from linear combinations of these vectors. This set forms a vector space, and the vectors are said to span the space. If the vectors are linearly independent, they are said to form a basis for the space. The number of vectors in the basis is referred to as the dimension of the space. For example, the set of all real vectors [x1, x2, ..., xN]^T forms the N-dimensional vector space R^N, and the N unit vectors e1 = [1, 0, ..., 0]^T through eN = [0, ..., 0, 1]^T form a basis for it.

Matrices
An n × m matrix is an array of numbers (real or complex) or functions having n rows and m columns. Sometimes it is convenient to let a matrix have an infinite number of rows or columns; the prime example is the convolution sum written as a matrix multiplication. Let h(n) be the unit sample response of an LTI FIR filter; then the output y(n) can be written as the inner product y(n) = h^T x(n).

Matrices
If x(n) = 0 for n < 0, then we can express y(n) for n ≥ 0 as y = X0 h, where X0 is a convolution matrix whose rows contain the shifted input samples, y = [y(0), y(1), y(2), ...]^T, and h = [h(0), h(1), ..., h(N-1)]^T.

Sometimes it is convenient to write a matrix as a set of column or row vectors. A matrix can also be partitioned into sub-matrices.
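A small NumPy sketch of this convolution-matrix formulation (sequence values are arbitrary):

```python
import numpy as np

# Convolution as a matrix-vector product y = X0 h, with x(n) = 0 for n < 0.
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0, 0.5])

n_out = len(x) + len(h) - 1
X0 = np.zeros((n_out, len(h)))
for j in range(len(h)):
    X0[j:j + len(x), j] = x             # column j holds x delayed by j samples

print(X0 @ h)
print(np.convolve(x, h))                # identical result
```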

Matrices - Partitioning Example
A 3 × 3 matrix A may be partitioned as A = [A11 A12; A21 A22], where, for example, A11 is the 1 × 1 upper-left entry, A12 is a 1 × 2 row, A21 is a 2 × 1 column, and A22 is the 2 × 2 lower-right sub-matrix.

If A is an n × m matrix, then its transpose A^T is the m × n matrix formed by interchanging the rows and the columns of A. A square matrix that is equal to its transpose is called a symmetric matrix. For complex matrices, the Hermitian transpose A^H is the complex conjugate of the transpose.

Matrices
A complex square matrix that is equal to its Hermitian transpose is called a Hermitian matrix. Properties of the Hermitian transpose are: (A^H)^H = A, (A + B)^H = A^H + B^H, and (AB)^H = B^H A^H. Equivalent properties for the transpose may be obtained by replacing the Hermitian transpose with the regular transpose.

Matrix Inverse
Let A be an n × m matrix partitioned in terms of its m column vectors, A = [c1, c2, ..., cm]. The rank of A, ρ(A), is defined to be the number of linearly independent columns in A, i.e., the number of independent vectors in this set. One of the properties of the rank is that the rank of a matrix is equal to the rank of its Hermitian transpose: ρ(A) = ρ(A^H).

Matrix Inverse
Therefore, if A is partitioned in terms of its n row vectors, the rank is equivalently equal to the number of linearly independent row vectors. A useful property of the rank is ρ(A A^H) = ρ(A). Since the rank is equal to both the number of independent rows and the number of independent columns, it follows that for an m × n matrix ρ(A) ≤ min(m, n).

If A is a square matrix of full rank, then there exists a unique matrix A^-1, called the inverse of A, such that A^-1 A = A A^-1 = I, where I is the identity matrix. In this case A is called invertible or nonsingular. If A is not of full rank, then it is said to be noninvertible or singular, and A does not have an inverse.

Matrix Inverse
Some properties of the matrix inverse are as follows. First, if A and B are invertible, then the product AB is invertible with (AB)^-1 = B^-1 A^-1. Second, (A^H)^-1 = (A^-1)^H.

Finally, a formula that is useful for inverting matrices that arise in adaptive filtering (RLS) algorithms is the matrix inversion lemma:
(A + BCD)^-1 = A^-1 - A^-1 B (C^-1 + D A^-1 B)^-1 D A^-1
where A and C are invertible square matrices and B and D have compatible dimensions.
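A quick numerical verification of the lemma (a sketch with randomly generated, well-conditioned matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)             # symmetric positive definite, invertible
B = rng.standard_normal((n, k))
C = np.eye(k)
D = rng.standard_normal((k, n))

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + B @ C @ D)
rhs = Ainv - Ainv @ B @ np.linalg.inv(np.linalg.inv(C) + D @ Ainv @ B) @ D @ Ainv
print(np.allclose(lhs, rhs))            # True
```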

Determinant and Trace
If A = a11 is a 1 × 1 matrix, then its determinant det(A) is defined as det(A) = a11. The determinant of an n × n matrix is defined recursively in terms of the determinants of (n-1) × (n-1) matrices as follows. For any j,
det(A) = Σ_i (-1)^(i+j) a_ij det(A_ij)
where A_ij is the (n-1) × (n-1) matrix formed by deleting the i-th row and the j-th column of A. The determinant also has an important geometric interpretation: the area of a parallelogram, and more generally the volume of a higher-dimensional parallelepiped.

Determinant Example
For a 2 × 2 matrix,
det [a11 a12; a21 a22] = a11 a22 - a12 a21.
And for a 3 × 3 matrix,
det(A) = a11 (a22 a33 - a23 a32) - a12 (a21 a33 - a23 a31) + a13 (a21 a32 - a22 a31).

Determinant and Trace
The determinant may be used to determine whether or not a matrix is invertible: A is invertible if and only if det(A) ≠ 0. Additional properties of the determinant, where A and B are n × n matrices and α is a scalar, include: det(AB) = det(A) det(B), det(A^T) = det(A), det(αA) = α^n det(A), and det(A^-1) = 1/det(A) when A is invertible.

Another useful function of a matrix is the trace; often the trace of a covariance matrix needs to be minimized. Given an n × n matrix A, the trace is the sum of the terms along the diagonal: tr(A) = Σ_i a_ii.
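A short NumPy check of these definitions and of the product rule for determinants (matrix values are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 1.0]])

print(np.linalg.det(A))                 # -2.0 -> nonzero, so A is invertible
print(np.trace(A))                      # 5.0
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))  # True
```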

Linear Equations
Signal modeling, Wiener filtering and spectrum estimation require finding the solution to a set of linear equations. This can be written compactly as Ax = b, and interpreted as expressing b as a linear combination of the column vectors of A.

Linear Equations - Square Matrix
For a square matrix, n = m. If A is nonsingular, then there exists a unique solution x = A^-1 b. If A is singular, there exists either no solution (inconsistent equations) or infinitely many solutions (at least one degree of freedom remains). If A is singular, then the columns of A are linearly dependent, and there exists a nonzero solution to the homogeneous equations Az = 0. In fact there will be k = n - ρ(A) linearly independent solutions to the homogeneous equations.
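A minimal NumPy sketch of the nonsingular and singular cases (the matrices are illustrative):

```python
import numpy as np

# Nonsingular A: unique solution
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = np.linalg.solve(A, b)
print(x, np.allclose(A @ x, b))         # [1. 3.] True

# Singular A: dependent columns, no unique solution
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(S))                 # 0.0
```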

Linear Equations - Square Matrix
If there is at least one vector x0 that solves Ax = b, then any vector of the form x = x0 + α1 z1 + α2 z2 + ... + αk zk will also be a solution, where z1, ..., zk are linearly independent solutions of the homogeneous equations Az = 0.
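This structure can be checked numerically with SciPy's null-space routine (a sketch; the matrix and particular solution are illustrative):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])              # rank 1, so k = n - rank(A) = 1
b = np.array([3.0, 6.0])                # consistent right-hand side

x0 = np.array([3.0, 0.0])               # one particular solution: A @ x0 = b
Z = null_space(A)                       # orthonormal basis for {z : A z = 0}
x1 = x0 + 5.0 * Z[:, 0]                 # shift by a null-space vector
print(np.allclose(A @ x1, b))           # True: still a solution
```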

Special Matrix Forms
A diagonal matrix is a square matrix whose only nonzero entries lie on the diagonal, which can be written compactly as A = diag(a11, a22, ..., ann). The identity matrix I = diag(1, 1, ..., 1) is the special case with all diagonal entries equal to one.

If the entries on the diagonal become matrices, then A is said to be a block diagonal matrix. The exchange matrix J reverses the order of entries: it has ones along the anti-diagonal and zeros elsewhere.
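These forms map directly onto NumPy helpers; a quick sketch:

```python
import numpy as np

D = np.diag([1.0, 2.0, 3.0])            # diagonal matrix from its diagonal entries
I = np.eye(3)                           # identity matrix
J = np.fliplr(np.eye(3))                # exchange matrix: ones on the anti-diagonal

x = np.array([1.0, 2.0, 3.0])
print(J @ x)                            # [3. 2. 1.] -- entry order reversed
```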

Special Matrix Forms
An upper triangular matrix is a square matrix in which all of the terms below the diagonal are equal to zero. A lower triangular matrix is a square matrix in which all of the terms above the diagonal are equal to zero. Transposing a lower triangular matrix results in an upper triangular matrix, and vice versa.
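A one-line check of the transposition property using NumPy's triangular helpers (matrix values are arbitrary):

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)
L = np.tril(A)                          # lower triangular part of A
print(np.allclose(L.T, np.triu(A.T)))   # True: L^T is upper triangular
```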

Special Matrix Forms
A very important matrix is the Toeplitz matrix, in which each diagonal is constant; it is completely specified once the first row and the first column are known. Note that a convolution matrix is a Toeplitz matrix. If a Toeplitz matrix is symmetric, or Hermitian in the case of a complex matrix, then all the elements are completely determined by either the first row or the first column. Note that an autocorrelation matrix is a Hermitian Toeplitz matrix.
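SciPy provides a direct constructor; the sketch below builds a symmetric Toeplitz matrix from a hypothetical autocorrelation sequence (the values are illustrative):

```python
import numpy as np
from scipy.linalg import toeplitz

r = np.array([1.0, 0.5, 0.25, 0.125])  # r(0), r(1), r(2), r(3)
R = toeplitz(r)                         # symmetric Toeplitz: first row = first column
print(R)
print(np.allclose(R, R.T))              # True
```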

Special Matrix Forms
A real n × n matrix is said to be orthogonal if its columns (and rows) are orthonormal. Hence for an orthogonal matrix A it holds that A^T A = A A^T = I, i.e. A^-1 = A^T. In the complex case, matrices satisfying A^H A = I are called unitary matrices.

Quadratic and Hermitian Forms
The quadratic form of a real symmetric n × n matrix A is the scalar Q_A(x) = x^T A x, where x is a real vector. For example, for a 2 × 2 symmetric matrix, x^T A x = a11 x1^2 + 2 a12 x1 x2 + a22 x2^2. For a Hermitian matrix, the corresponding Hermitian form is Q_A(x) = x^H A x. If the quadratic form of a matrix A is positive for all nonzero vectors x, then the matrix is called positive definite, written A > 0.

Quadratic and Hermitian Forms
For example, the matrix A = [2 1; 1 2] is positive definite, since its quadratic form x^T A x = x1^2 + x2^2 + (x1 + x2)^2 is positive for any nonzero vector x.

If the quadratic form is non-negative for all nonzero vectors x, then A is said to be positive semidefinite. For example, A = [1 0; 0 0] is positive semidefinite but not positive definite, since its quadratic form x^T A x = x1^2 is zero for any x = [0, x2]^T.
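Definiteness can be checked via the eigenvalues (all positive for positive definite, non-negative for semidefinite); a sketch using the two matrices above:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(np.linalg.eigvalsh(A))            # [1. 3.] -> all positive: positive definite

B = np.array([[1.0, 0.0],
              [0.0, 0.0]])
print(np.linalg.eigvalsh(B))            # [0. 1.] -> positive semidefinite
x = np.array([0.0, 7.0])                # any x = [0, x2]^T
print(x @ B @ x)                        # 0.0: the quadratic form vanishes
```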