Lecture 7: Linear Prediction
Overview

Dealing with three notions: PREDICTION, PREDICTOR, PREDICTION ERROR;
FORWARD versus BACKWARD: predicting the future versus (improper terminology) "predicting" the past;
Fast computation of AR parameters: the Levinson-Durbin algorithm;
A new AR parametrization: reflection coefficients;
Lattice filters.

Reference: Chapter 3 of S. Haykin, Adaptive Filter Theory, Prentice Hall, 2002.
Notations, Definitions and Terminology

Time series: u(1), u(2), u(3), ..., u(n-1), u(n), u(n+1), ...

Linear prediction of order M (FORWARD PREDICTION):
    û(n) = w_1 u(n-1) + w_2 u(n-2) + ... + w_M u(n-M) = Σ_{k=1}^M w_k u(n-k) = w^T u(n-1)

Regressor vector:
    u(n-1) = [u(n-1) u(n-2) ... u(n-M)]^T

Predictor vector of order M (FORWARD PREDICTOR):
    w = [w_1 w_2 ... w_M]^T
    a_M = [1 -w_1 -w_2 ... -w_M]^T = [a_{M,0} a_{M,1} a_{M,2} ... a_{M,M}]^T
and thus a_{M,0} = 1, a_{M,1} = -w_1, a_{M,2} = -w_2, ..., a_{M,M} = -w_M.

Prediction error of order M (FORWARD PREDICTION ERROR):
    f_M(n) = u(n) - û(n) = u(n) - w^T u(n-1) = a_M^T [u(n); u(n-1)] = a_M^T u(n)
where u(n) on the right-hand side denotes the (M+1)-dimensional vector [u(n) u(n-1) ... u(n-M)]^T.
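As a minimal numeric sketch of these definitions (the signal and the weights below are hypothetical, not from the lecture), the order-M forward prediction error can be computed directly from f_M(n) = u(n) - w^T u(n-1):

```python
import numpy as np

def forward_prediction_error(u, w):
    """Forward prediction error f_M(n) = u(n) - w^T u(n-1), for n >= M.

    u : 1-D signal array; w : predictor weights [w_1, ..., w_M].
    """
    M = len(w)
    f = np.empty(len(u) - M)
    for i, n in enumerate(range(M, len(u))):
        reg = u[n-1::-1][:M]      # regressor vector u(n-1) = [u(n-1), ..., u(n-M)]
        f[i] = u[n] - w @ reg
    return f

u = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([2.0, -1.0])          # hypothetical order-2 predictor
print(forward_prediction_error(u, w))
```

For this linear ramp the predictor w = [2, -1] reproduces u(n) = 2u(n-1) - u(n-2) exactly, so every prediction error is zero.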
Notations:

Autocorrelation function: r(k) = E[u(n)u(n+k)]

Autocorrelation matrix:
    R = E[u(n-1) u^T(n-1)]
      = [ E u(n-1)u(n-1)  E u(n-1)u(n-2)  ...  E u(n-1)u(n-M) ]
        [ E u(n-2)u(n-1)  E u(n-2)u(n-2)  ...  E u(n-2)u(n-M) ]
        [      ...              ...       ...        ...      ]
        [ E u(n-M)u(n-1)  E u(n-M)u(n-2)  ...  E u(n-M)u(n-M) ]
      = [ r(0)    r(1)    ...  r(M-1) ]
        [ r(1)    r(0)    ...  r(M-2) ]
        [  ...     ...    ...   ...   ]
        [ r(M-1)  r(M-2)  ...  r(0)   ]

Autocorrelation vectors:
    r   = E[u(n-1) u(n)]     = [r(1) r(2) r(3) ... r(M)]^T
    r^B = E[u(n-1) u(n-M-1)] = [r(M) r(M-1) r(M-2) ... r(1)]^T

Superscript B is the vector-reversing (Backward) operator, i.e. for any vector x we have
    x^B = [x(1) x(2) x(3) ... x(M)]^B = [x(M) x(M-1) x(M-2) ... x(1)]^T
Optimal forward linear prediction

Optimality criterion:
    J(w) = E[f_M(n)]^2 = E[u(n) - w^T u(n-1)]^2 ,   J(a_M) = E[a_M^T u(n)]^2

Optimal solution:
    Optimal Forward Predictor          w_o = R^{-1} r
    Forward Prediction Error Power     P_M = r(0) - r^T w_o

Two derivations of the optimal solution

1. Transforming the criterion into a quadratic form:
    J(w) = E[u(n) - w^T u(n-1)]^2
         = E{[u(n) - w^T u(n-1)][u(n) - u^T(n-1) w]}
         = E[u(n)]^2 - 2 E[u(n) u^T(n-1)] w + w^T E[u(n-1) u^T(n-1)] w
         = r(0) - 2 r^T w + w^T R w
         = r(0) - r^T R^{-1} r + (w - R^{-1} r)^T R (w - R^{-1} r)        (1)
The matrix R is positive semi-definite, because for all x
    x^T R x = x^T E[u(n-1) u^T(n-1)] x = E[u^T(n-1) x]^2 ≥ 0 ,
and therefore the quadratic form on the right-hand side of (1),
    (w - R^{-1} r)^T R (w - R^{-1} r) ,
attains its minimum when w_o - R^{-1} r = 0, i.e. w_o = R^{-1} r.
For the predictor w_o, the optimal criterion in (1) equals
    P_M = r(0) - r^T R^{-1} r = r(0) - r^T w_o

2. Derivation based on optimal Wiener filter design:
The optimal predictor evaluation can be rephrased as the following Wiener filter design problem: find the FIR filtering process y(n) = w^T u(n) as close as possible to the desired signal d(n) = u(n+1), i.e. minimizing the criterion
    E[d(n) - y(n)]^2 = E[u(n+1) - w^T u(n)]^2
Then the optimal solution is given by w_o = R^{-1} p, where R = E[u(n) u^T(n)] and
    p = E[d(n) u(n)] = E[u(n+1) u(n)] = E[u(n) u(n-1)] = r ,
i.e. w_o = R^{-1} r.
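A minimal numeric sketch of the optimal solution (not from the lecture; the AR(2) coefficients below are hypothetical): simulate a process, estimate r(k), and solve w_o = R^{-1} r.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 100_000, 2
w_true = np.array([0.75, -0.5])           # u(n) = 0.75 u(n-1) - 0.5 u(n-2) + e(n)
e = rng.standard_normal(N)
u = np.zeros(N)
for n in range(2, N):
    u[n] = w_true[0] * u[n-1] + w_true[1] * u[n-2] + e[n]

# biased autocorrelation estimates r(0), ..., r(M)
r_est = np.array([u[k:] @ u[:N-k] / N for k in range(M + 1)])

R = np.array([[r_est[abs(i-j)] for j in range(M)] for i in range(M)])  # Toeplitz R
w_o = np.linalg.solve(R, r_est[1:])       # optimal forward predictor w_o = R^{-1} r
P_M = r_est[0] - r_est[1:] @ w_o          # forward prediction error power
print(w_o, P_M)                           # close to [0.75, -0.5] and to Var(e) = 1
```

For an AR(2) source, the order-2 optimal predictor recovers the AR coefficients and P_M approaches the driving-noise variance, which is the whitening property discussed later in the lecture.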
Augmented Wiener-Hopf equations

The optimal predictor filter solution w_o and the optimal prediction error power satisfy
    r(0) - r^T w_o = P_M
    R w_o - r = 0
which can be written in block matrix form
    [ r(0)  r^T ] [  1   ]   [ P_M ]
    [  r     R  ] [ -w_o ] = [  0  ]
But
    [  1   ]             [ r(0)  r^T ]
    [ -w_o ] = a_M  and  [  r     R  ] = R_{M+1} ,
the autocorrelation matrix of dimensions (M+1) x (M+1).
Finally, the augmented Wiener-Hopf equations for the optimal forward prediction error filter are
    R_{M+1} a_M = [ P_M ]
                  [  0  ]
or
    [ r(0)   r(1)   ...  r(M)   ] [ a_{M,0} ]   [ P_M ]
    [ r(1)   r(0)   ...  r(M-1) ] [ a_{M,1} ]   [  0  ]
    [  ...    ...   ...   ...   ] [ a_{M,2} ] = [ ... ]
    [ r(M)  r(M-1)  ...  r(0)   ] [ a_{M,M} ]   [  0  ]
Whenever R_{M+1} is nonsingular and a_{M,0} is set to 1, the solutions a_M and P_M are unique.
Optimal backward linear prediction

Linear backward prediction of order M (BACKWARD PREDICTION):
    û_b(n-M) = g_1 u(n) + g_2 u(n-1) + ... + g_M u(n-M+1) = Σ_{k=1}^M g_k u(n-k+1) = g^T u(n)
where the BACKWARD PREDICTOR is
    g = [g_1 g_2 ... g_M]^T

Backward prediction error of order M (BACKWARD PREDICTION ERROR):
    b_M(n) = u(n-M) - û_b(n-M) = u(n-M) - g^T u(n)

Optimality criterion:
    J_b(g) = E[b_M(n)]^2 = E[u(n-M) - g^T u(n)]^2
Optimal solution:
    Optimal Backward Predictor          g_o = R^{-1} r^B = w_o^B
    Backward Prediction Error Power     P_M = r(0) - (r^B)^T g_o = r(0) - r^T w_o
(the same as the forward prediction error power).

Derivation based on optimal Wiener filter design:
The optimal backward predictor evaluation can be rephrased as the following Wiener filter design problem: find the FIR filtering process y(n) = g^T u(n) as close as possible to the desired signal d(n) = u(n-M), i.e. minimizing the criterion
    E[d(n) - y(n)]^2 = E[u(n-M) - g^T u(n)]^2
Then the optimal solution is given by g_o = R^{-1} p, where R = E[u(n) u^T(n)] and
    p = E[d(n) u(n)] = E[u(n-M) u(n)] = E[u(n-M-1) u(n-1)] = r^B ,
i.e. g_o = R^{-1} r^B, and the optimal criterion value is
    J_b(g_o) = E[b_M(n)]^2 = E[d(n)]^2 - g_o^T R g_o = E[d(n)]^2 - g_o^T r^B = r(0) - g_o^T r^B
Relations between Backward and Forward predictors: g_o = w_o^B

Useful mathematical result: if the matrix R is (symmetric) Toeplitz, then for all vectors x
    (Rx)^B = R x^B ,   i.e.   (Rx)^B_i = (R x^B)_i ,   (Rx)_{M-i+1} = (R x^B)_i
Proof:
    (R x^B)_i = Σ_{j=1}^M R_{i,j} x_{M-j+1} = Σ_{j=1}^M r(i-j) x_{M-j+1}
              = (substituting k = M-j+1)
              = Σ_{k=1}^M r(i-M+k-1) x_k = Σ_{k=1}^M R_{M-i+1,k} x_k = (Rx)_{M-i+1} = (Rx)^B_i

The Forward and Backward optimal predictors are solutions of the systems
    R w_o = r ,   R g_o = r^B
and therefore
    R g_o = r^B = (R w_o)^B = R w_o^B
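Both facts are easy to check numerically. A minimal sketch (the autocorrelation values r(0) = 2, r(1) = 1, r(2) = 0.4, r(3) = 0.1 are hypothetical, chosen to make R positive definite):

```python
import numpy as np

# check the Toeplitz reversal property (Rx)^B = R x^B and g_o = w_o^B
r_full = np.array([2.0, 1.0, 0.4, 0.1])   # hypothetical r(0), r(1), r(2), r(3)
M = 3
R = np.array([[r_full[abs(i-j)] for j in range(M)] for i in range(M)])  # Toeplitz

x = np.array([1.0, -2.0, 3.0])
assert np.allclose((R @ x)[::-1], R @ x[::-1])    # (Rx)^B = R x^B

r = r_full[1:]                                    # r = [r(1), r(2), r(3)]
w_o = np.linalg.solve(R, r)                       # forward predictor
g_o = np.linalg.solve(R, r[::-1])                 # backward predictor, R^{-1} r^B
assert np.allclose(g_o, w_o[::-1])                # g_o = w_o^B
print("Toeplitz reversal and g_o = w_o^B verified")
```

The reversal identity holds for any symmetric Toeplitz R, which is exactly why the backward predictor comes for free once the forward one is known.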
and since R is supposed nonsingular, we have g_o = w_o^B.

Augmented Wiener-Hopf equations for the Backward prediction error filter

The optimal Backward predictor filter solution g_o and the optimal Backward prediction error power satisfy
    R g_o - r^B = 0
    r(0) - (r^B)^T g_o = P_M
which can be written in block matrix form
    [    R      r^B ] [ -g_o ]   [  0  ]
    [ (r^B)^T  r(0) ] [   1  ] = [ P_M ]
But
    [ -g_o ]             [    R      r^B ]
    [   1  ] = c_M  and  [ (r^B)^T  r(0) ] = R_{M+1} ,
the autocorrelation matrix of dimensions (M+1) x (M+1).
Finally, the augmented Wiener-Hopf equations for the optimal backward prediction error filter are
    R_{M+1} c_M = [  0  ]
                  [ P_M ]
or
    [ r(0)   r(1)   ...  r(M)   ] [ c_{M,0} ]   [ r(0)   r(1)   ...  r(M)   ] [ a_{M,M}   ]   [  0  ]
    [ r(1)   r(0)   ...  r(M-1) ] [ c_{M,1} ]   [ r(1)   r(0)   ...  r(M-1) ] [ a_{M,M-1} ]   [  0  ]
    [  ...    ...   ...   ...   ] [   ...   ] = [  ...    ...   ...   ...   ] [ a_{M,M-2} ] = [ ... ]
    [ r(M)  r(M-1)  ...  r(0)   ] [ c_{M,M} ]   [ r(M)  r(M-1)  ...  r(0)   ] [ a_{M,0}   ]   [ P_M ]
i.e. c_M = a_M^B.

Levinson-Durbin algorithm

Stressing the order of the filter: all variables will receive a subscript expressing the order of the predictor: R_m, r_m, a_m, w_{o,m}, P_m.
Some order-recursive equations can be written:
    r_{m+1} = [r(1) r(2) ... r(m) r(m+1)]^T
    R_{m+1} = [   R_m      r_m^B ]   [ r(0)  r_m^T ]
              [ (r_m^B)^T  r(0)  ] = [ r_m    R_m  ]
Main recursions

Consider the candidate vector
    ψ = [ a_{m-1} ] + κ_m [     0     ]                                   (2)
        [    0    ]       [ a_{m-1}^B ]
where
    Δ_{m-1} = r_m^T a_{m-1}^B = a_{m-1}^T r_m^B = r(m) + Σ_{k=1}^{m-1} a_{m-1,k} r(m-k)
Multiplying the right-hand side of Equation (2) by R_{m+1}, and using both partitions of R_{m+1},
    R_{m+1} [ a_{m-1} ]   [    R_m a_{m-1}    ]   [ P_{m-1} ]
            [    0    ] = [ (r_m^B)^T a_{m-1} ] = [ 0_{m-1} ]
                                                  [ Δ_{m-1} ]
    R_{m+1} [     0     ]   [ r_m^T a_{m-1}^B ]   [ Δ_{m-1} ]
            [ a_{m-1}^B ] = [  R_m a_{m-1}^B  ] = [ 0_{m-1} ]
                                                  [ P_{m-1} ]
we obtain
    R_{m+1} ψ = [ P_{m-1} + κ_m Δ_{m-1} ]
                [        0_{m-1}        ]
                [ Δ_{m-1} + κ_m P_{m-1} ]
But since ψ(1) = a_{m-1,0} = 1 and we suppose R_{m+1} nonsingular, the choice κ_m = -Δ_{m-1}/P_{m-1} makes the last entry vanish, so ψ is the unique solution of the augmented Wiener-Hopf equations of order m. Hence recursion (2) provides the optimal predictor a_m = ψ, and the optimal prediction error power is
    P_m = P_{m-1} + κ_m Δ_{m-1} = P_{m-1} - Δ_{m-1}^2 / P_{m-1} = P_{m-1} (1 - Γ_m^2)
with the notation
    Γ_m = -Δ_{m-1} / P_{m-1}

Levinson-Durbin recursions

Vector form of L-D recursions:
    a_m = [ a_{m-1} ] + Γ_m [     0     ]
          [    0    ]       [ a_{m-1}^B ]

Scalar form of L-D recursions:
    a_{m,k} = a_{m-1,k} + Γ_m a_{m-1,m-k} ,   k = 0, 1, ..., m
(with the conventions a_{m-1,0} = 1 and a_{m-1,m} = 0), together with
    Δ_{m-1} = a_{m-1}^T r_m^B = r(m) + Σ_{k=1}^{m-1} a_{m-1,k} r(m-k)
    Γ_m = -Δ_{m-1} / P_{m-1}
    P_m = P_{m-1} (1 - Γ_m^2)
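As a quick worked example of these recursions (with hypothetical values r(0) = 2, r(1) = 1, r(2) = 0.4):

    m = 1:  Δ_0 = r(1) = 1 ,  Γ_1 = -1/2 ,  a_1 = [1, -0.5]^T ,  P_1 = 2(1 - 0.25) = 1.5
    m = 2:  Δ_1 = r(2) + a_{1,1} r(1) = 0.4 - 0.5 = -0.1 ,  Γ_2 = 0.1/1.5 = 1/15 ,
            a_2 = [1, -0.5(1 + 1/15), 1/15]^T ≈ [1, -0.5333, 0.0667]^T ,
            P_2 = 1.5 (1 - 1/225) ≈ 1.4933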
Interpretation of Δ_m and Γ_m

1. Δ_{m-1} = E[f_{m-1}(n) b_{m-1}(n-1)]
Proof (solution of Problem 9, page 238 in [Haykin91]):
    E[f_{m-1}(n) b_{m-1}(n-1)] = E[ (a_{m-1}^T u(n)) (u^T(n-1) a_{m-1}^B) ]
                               = a_{m-1}^T E[u(n) u^T(n-1)] a_{m-1}^B
                               = [ 1  -w_{m-1}^T ] [ r_m^T a_{m-1}^B ]
                                                   [     0_{m-1}     ]
                               = r_m^T a_{m-1}^B = Δ_{m-1}
where the zero block follows from the orthogonality of b_{m-1}(n-1) to the inputs u(n-1), ..., u(n-m+1) involved in its computation.

2. Δ_0 = E[f_0(n) b_0(n-1)] = E[u(n) u(n-1)] = r(1)

3. Iterating P_m = P_{m-1}(1 - Γ_m^2) we obtain
    P_m = P_0 Π_{k=1}^m (1 - Γ_k^2)

4. Since the power of the prediction error must be positive for all orders, the reflection coefficients are less than unity in absolute value:
    |Γ_m| ≤ 1 ,   m = 1, ..., M

5. The reflection coefficient equals the last autoregressive coefficient, for each order m:
    Γ_m = a_{m,m} ,   m = M, M-1, ..., 1
Algorithm (L-D)

Given r(0), r(1), r(2), ..., r(M) (for example, estimated from the data u(1), u(2), u(3), ..., u(T) using
    r(k) = (1/T) Σ_{n=k+1}^T u(n) u(n-k)  ):
1. Initialize Δ_0 = r(1), P_0 = r(0)
2. For m = 1, ..., M
   2.1  Γ_m = -Δ_{m-1} / P_{m-1}
   2.2  a_{m,0} = 1
   2.3  a_{m,k} = a_{m-1,k} + Γ_m a_{m-1,m-k} ,   k = 1, ..., m
   2.4  Δ_m = r(m+1) + Σ_{k=1}^m a_{m,k} r(m+1-k)
   2.5  P_m = P_{m-1} (1 - Γ_m^2)
Computational complexity: the m-th iteration of Step 2 takes 2m+2 multiplications, 2m+2 additions, and 1 division. The overall computational complexity is O(M^2) operations.
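The algorithm above can be sketched in Python as follows (a minimal illustration, not from the lecture; the autocorrelation values r(0) = 2, r(1) = 1, r(2) = 0.4 are hypothetical):

```python
import numpy as np

def levinson_durbin(r):
    """Levinson-Durbin recursion for r = [r(0), r(1), ..., r(M)].

    Returns (a, P, gammas): prediction-error filter a = [1, a_{M,1}, ..., a_{M,M}],
    error power P_M, and reflection coefficients [Gamma_1, ..., Gamma_M].
    """
    M = len(r) - 1
    a = np.array([1.0])
    P, delta = r[0], r[1]                 # step 1: P_0 = r(0), Delta_0 = r(1)
    gammas = []
    for m in range(1, M + 1):
        gamma = -delta / P                # step 2.1
        # steps 2.2-2.3 in vector form: a_m = [a_{m-1}; 0] + Gamma_m [0; a_{m-1}^B]
        a = np.concatenate([a, [0.0]]) + gamma * np.concatenate([a, [0.0]])[::-1]
        P *= 1.0 - gamma**2               # step 2.5
        gammas.append(gamma)
        if m < M:                         # step 2.4
            delta = r[m + 1] + a[1:m+1] @ r[m:0:-1]
    return a, P, np.array(gammas)

a, P, gammas = levinson_durbin(np.array([2.0, 1.0, 0.4]))
print(a, P, gammas)
```

Note that appending a zero and reversing the array implements [0; a_{m-1}^B] directly, so the vector and scalar forms of step 2.3 coincide; the result agrees with solving R w_o = r by direct matrix inversion, but in O(M^2) instead of O(M^3) operations.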
Algorithm (L-D), second form

Given r(0) and Γ_1, Γ_2, ..., Γ_M:
1. Initialize P_0 = r(0)
2. For m = 1, ..., M
   2.1  a_{m,0} = 1
   2.2  a_{m,k} = a_{m-1,k} + Γ_m a_{m-1,m-k} ,   k = 1, ..., m
   2.3  P_m = P_{m-1} (1 - Γ_m^2)

Inverse Levinson-Durbin algorithm

From the pair of recursions
    a_{m,k}   = a_{m-1,k}   + Γ_m a_{m-1,m-k}
    a_{m,m-k} = a_{m-1,m-k} + Γ_m a_{m-1,k}
i.e.
    [ a_{m,k}   ]   [  1    Γ_m ] [ a_{m-1,k}   ]
    [ a_{m,m-k} ] = [ Γ_m    1  ] [ a_{m-1,m-k} ]
and using the identity Γ_m = a_{m,m}, we obtain
    a_{m-1,k} = (a_{m,k} - a_{m,m} a_{m,m-k}) / (1 - (a_{m,m})^2) ,   k = 1, ..., m-1
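The inverse (step-down) recursion can be sketched and checked against the second form of the L-D algorithm in a round trip (a minimal illustration with hypothetical reflection coefficients; it assumes |Γ_m| < 1 so the divisions are well defined):

```python
import numpy as np

def inverse_levinson_durbin(a):
    """Step-down recursion: recover [Gamma_1, ..., Gamma_M] from
    a = [1, a_{M,1}, ..., a_{M,M}], using Gamma_m = a_{m,m}.
    Assumes |a_{m,m}| < 1 at every order."""
    a = np.asarray(a, dtype=float)
    gammas = []
    while len(a) > 1:
        g = a[-1]                          # Gamma_m = a_{m,m}
        gammas.append(g)
        # a_{m-1,k} = (a_{m,k} - a_{m,m} a_{m,m-k}) / (1 - a_{m,m}^2)
        body = a[1:-1]
        a = np.concatenate([[1.0], (body - g * body[::-1]) / (1.0 - g * g)])
    return list(reversed(gammas))

# round trip against the step-up recursion (second form of L-D)
gam_true = [-0.5, 0.2, 0.1]                # hypothetical reflection coefficients
a = np.array([1.0])
for g in gam_true:                         # a_m = [a_{m-1}; 0] + Gamma_m [0; a_{m-1}^B]
    a = np.concatenate([a, [0.0]]) + g * np.concatenate([a, [0.0]])[::-1]
print(inverse_levinson_durbin(a))          # recovers gam_true
```

The round trip recovers the reflection coefficients exactly (up to floating-point error), illustrating the one-to-one mapping between {a_{M,k}} and {Γ_m}.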
The second-order properties of the AR process are perfectly described by the set of reflection coefficients. This immediately follows from the following property: the sets {P_0, Γ_1, Γ_2, ..., Γ_M} and {r(0), r(1), ..., r(M)} are in one-to-one correspondence.
Proof:
(a) {r(0), r(1), ..., r(M)} => {P_0, Γ_1, Γ_2, ..., Γ_M}  (Algorithm L-D)
(b) From
    Γ_{m+1} = -Δ_m / P_m = -[ r(m+1) + Σ_{k=1}^m a_{m,k} r(m+1-k) ] / P_m
we can obtain immediately
    r(m+1) = -Γ_{m+1} P_m - Σ_{k=1}^m a_{m,k} r(m+1-k)
which can be iterated, together with Algorithm L-D second form, to obtain all of r(1), ..., r(M).

Whitening property of prediction error filters

In theory, a prediction error filter is capable of whitening a stationary discrete-time stochastic process applied to its input, if the order of the filter is high enough. Then all information in the original stochastic process u(n) is represented by the parameters {P_M, a_{M,1}, a_{M,2}, ..., a_{M,M}} (or, equivalently, by {P_0, Γ_1, Γ_2, ..., Γ_M}). A signal equivalent (in its second-order properties) can be generated starting from {P_M, a_{M,1}, a_{M,2}, ..., a_{M,M}} using the autoregressive difference equation model.
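Direction (b) of the proof can be sketched as code: reconstruct the autocorrelation sequence from {P_0, Γ_1, ..., Γ_M} by iterating the formula for r(m+1) together with the second form of the L-D algorithm (a minimal illustration; the input values are hypothetical and chosen to match a simple example):

```python
import numpy as np

def refl_to_autocorr(P0, gammas):
    """Reconstruct [r(0), ..., r(M)] from {P_0, Gamma_1, ..., Gamma_M},
    iterating r(m+1) = -Gamma_{m+1} P_m - sum_{k=1}^m a_{m,k} r(m+1-k)
    together with the step-up (second form of L-D) recursion."""
    r = [P0]                               # r(0) = P_0
    a, P = np.array([1.0]), P0
    for g in gammas:
        m = len(a) - 1                     # current order before the update
        # r(m+1-k), k = 1..m, is r(m), ..., r(1); empty at m = 0
        r.append(-g * P - a[1:] @ np.array(r)[m:0:-1])
        a = np.concatenate([a, [0.0]]) + g * np.concatenate([a, [0.0]])[::-1]
        P *= 1.0 - g * g
    return np.array(r)

print(refl_to_autocorr(2.0, [-0.5, 1/15]))
```

With these inputs the reconstruction returns r = [2, 1, 0.4], inverting the forward L-D mapping and confirming the one-to-one correspondence numerically.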
These analysis and synthesis paradigms combine to provide the basic principle of vocoders.

Gram-Schmidt orthogonalization algorithm

Using the notations
    b_0(n) = u(n)
    b_1(n) = a_{1,1} u(n) + a_{1,0} u(n-1)
    b_2(n) = a_{2,2} u(n) + a_{2,1} u(n-1) + a_{2,0} u(n-2)
    ...
    b_M(n) = a_{M,M} u(n) + a_{M,M-1} u(n-1) + ... + a_{M,0} u(n-M)
    u(n) = [u(n), u(n-1), ..., u(n-M)]^T
    b(n) = [b_0(n), b_1(n), ..., b_M(n)]^T
    L = [ 1                          ]
        [ a_{1,1}   1                ]
        [   ...        ...           ]
        [ a_{M,M}  a_{M,M-1}  ...  1 ]
we may write the G-S orthogonalization procedure as
    b(n) = L u(n)
Orthogonality of Backward prediction errors

The following orthogonality property holds:
    E[b_i(n) b_j(n)] = P_i  if i = j ,   0  if i ≠ j
Proof: suppose j > i and write
    b_i(n) = Σ_{k=0}^i a_{i,i-k} u(n-k)
Then
    E[b_j(n) b_i(n)] = E[ b_j(n) Σ_{k=0}^i a_{i,i-k} u(n-k) ] = Σ_{k=0}^i a_{i,i-k} E[b_j(n) u(n-k)] = 0
due to the orthogonality of the optimal Wiener filter error to the inputs involved in the computation of that error: E[b_j(n) u(n-k)] = 0 for k < j. For i = j the same expansion leaves E[b_i^2(n)] = P_i.
In matrix form:
    E[b(n) b^T(n)] = diag(P_0, P_1, ..., P_M) = D
Substituting b(n) = L u(n):
    E[b(n) b^T(n)] = L E[u(n) u^T(n)] L^T = L R L^T = D
which can be used to factorize the matrices R and R^{-1} as
    R = L^{-1} D L^{-T}
    R^{-1} = (L^{-1} D L^{-T})^{-1} = L^T D^{-1} L = L^T D^{-1/2} D^{-1/2} L = (D^{-1/2} L)^T (D^{-1/2} L)    (3)
Equation (3) provides the Cholesky factorization of R^{-1}.
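The diagonalization L R L^T = D can be checked numerically by building the rows of L from the order-0, ..., M prediction-error filters delivered by the Levinson-Durbin recursion (a minimal self-contained sketch; the autocorrelation values r(0) = 2, r(1) = 1, r(2) = 0.4 are hypothetical):

```python
import numpy as np

r = np.array([2.0, 1.0, 0.4])              # hypothetical r(0), r(1), r(2)
M = len(r) - 1
R = np.array([[r[abs(i-j)] for j in range(M+1)] for i in range(M+1)])

# rows of L: row m is [a_{m,m}, a_{m,m-1}, ..., a_{m,0}, 0, ..., 0]
L = np.zeros((M+1, M+1))
P = np.zeros(M+1)
a, P[0], delta = np.array([1.0]), r[0], r[1]
L[0, 0] = 1.0
for m in range(1, M+1):                    # Levinson-Durbin recursion
    g = -delta / P[m-1]
    a = np.concatenate([a, [0.0]]) + g * np.concatenate([a, [0.0]])[::-1]
    P[m] = P[m-1] * (1 - g*g)
    L[m, :m+1] = a[::-1]                   # reversed prediction-error filter
    if m < M:
        delta = r[m+1] + a[1:m+1] @ r[m:0:-1]

D = L @ R @ L.T                            # diagonal: diag(P_0, ..., P_M)
print(np.round(D, 10))
```

Off-diagonal entries vanish to machine precision, and D^{-1/2} L then gives the Cholesky factor of R^{-1} exactly as in Equation (3).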
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 3: Positive-Definite Systems; Cholesky Factorization Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 11 Symmetric
More informationParametric Signal Modeling and Linear Prediction Theory 1. Discrete-time Stochastic Processes (cont d)
Parametric Signal Modeling and Linear Prediction Theory 1. Discrete-time Stochastic Processes (cont d) Electrical & Computer Engineering North Carolina State University Acknowledgment: ECE792-41 slides
More informationMachine Learning and Adaptive Systems. Lectures 3 & 4
ECE656- Lectures 3 & 4, Professor Department of Electrical and Computer Engineering Colorado State University Fall 2015 What is Learning? General Definition of Learning: Any change in the behavior or performance
More informationCh5: Least Mean-Square Adaptive Filtering
Ch5: Least Mean-Square Adaptive Filtering Introduction - approximating steepest-descent algorithm Least-mean-square algorithm Stability and performance of the LMS algorithm Robustness of the LMS algorithm
More informationADSP ADSP ADSP ADSP. Advanced Digital Signal Processing (18-792) Spring Fall Semester, Department of Electrical and Computer Engineering
Advanced Digital Signal rocessing (18-792) Spring Fall Semester, 201 2012 Department of Electrical and Computer Engineering ROBLEM SET 8 Issued: 10/26/18 Due: 11/2/18 Note: This problem set is due Friday,
More informationIntroduction to Mathematical Programming
Introduction to Mathematical Programming Ming Zhong Lecture 6 September 12, 2018 Ming Zhong (JHU) AMS Fall 2018 1 / 20 Table of Contents 1 Ming Zhong (JHU) AMS Fall 2018 2 / 20 Solving Linear Systems A
More informationAdaptive Systems. Winter Term 2017/18. Instructor: Pejman Mowlaee Beikzadehmahaleh. Assistants: Christian Stetco
Adaptive Systems Winter Term 2017/18 Instructor: Pejman Mowlaee Beikzadehmahaleh Assistants: Christian Stetco Signal Processing and Speech Communication Laboratory, Inffeldgasse 16c/EG written by Bernhard
More informationNotes on Linear Minimum Mean Square Error Estimators
Notes on Linear Minimum Mean Square Error Estimators Ça gatay Candan January, 0 Abstract Some connections between linear minimum mean square error estimators, maximum output SNR filters and the least square
More informationEE 225D LECTURE ON DIGITAL FILTERS. University of California Berkeley
University of California Berkeley College of Engineering Department of Electrical Engineering and Computer Sciences Professors : N.Morgan / B.Gold EE225D Digital Filters Spring,1999 Lecture 7 N.MORGAN
More informationLecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky
Lecture 2 INF-MAT 4350 2009: 7.1-7.6, LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Tom Lyche and Michael Floater Centre of Mathematics for Applications, Department of Informatics,
More informationGaussian processes. Basic Properties VAG002-
Gaussian processes The class of Gaussian processes is one of the most widely used families of stochastic processes for modeling dependent data observed over time, or space, or time and space. The popularity
More informationSumming Symbols in Mutual Recurrences
Summing Symbols in Mutual Recurrences Berkeley R Churchill 1 and Edmund A Lamagna 2 1 University of California, Santa Barbara, CA, USA Department of Mathematics 2 University of Rhode Island, Kingston,
More informationLinear Prediction 1 / 41
Linear Prediction 1 / 41 A map of speech signal processing Natural signals Models Artificial signals Inference Speech synthesis Hidden Markov Inference Homomorphic processing Dereverberation, Deconvolution
More informationHirschman Optimal Transform Block LMS Adaptive
Provisional chapter Chapter 1 Hirschman Optimal Transform Block LMS Adaptive Hirschman Optimal Transform Block LMS Adaptive Filter Filter Osama Alkhouli, Osama Alkhouli, Victor DeBrunner and Victor DeBrunner
More informationLayered Polynomial Filter Structures
INFORMATICA, 2002, Vol. 13, No. 1, 23 36 23 2002 Institute of Mathematics and Informatics, Vilnius Layered Polynomial Filter Structures Kazys KAZLAUSKAS, Jaunius KAZLAUSKAS Institute of Mathematics and
More informationLecture 3: QR-Factorization
Lecture 3: QR-Factorization This lecture introduces the Gram Schmidt orthonormalization process and the associated QR-factorization of matrices It also outlines some applications of this factorization
More informationChapter 2 Fundamentals of Adaptive Filter Theory
Chapter 2 Fundamentals of Adaptive Filter Theory In this chapter we will treat some fundamentals of the adaptive filtering theory highlighting the system identification problem We will introduce a signal
More informationGaussian Elimination and Back Substitution
Jim Lambers MAT 610 Summer Session 2009-10 Lecture 4 Notes These notes correspond to Sections 31 and 32 in the text Gaussian Elimination and Back Substitution The basic idea behind methods for solving
More informationLecture 3: Review of Linear Algebra
ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters, transforms,
More informationLinear Algebra in Actuarial Science: Slides to the lecture
Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations
More information