Least Square Estimation, Filtering, and Prediction: ECE 5/639 Statistical Signal Processing II: Linear Estimation


1 Least Square Estimation, Filtering, and Prediction: Statistical Signal Processing II: Linear Estimation Eric Wan, Ph.D. Fall

2 Motivations If the second-order statistics are known, the optimum estimator is given by the normal equations, or equivalently the solution to the Wiener-Hopf equations. For most applications, the actual statistics are unknown. The alternative approach is to estimate the coefficients from observed data. Two possible approaches: estimate the required moments from the available data and build an approximate MMSE estimator, or build an estimator that minimizes some error functional calculated from the available data. 2

3 MMSE versus Least Squares Recall that MMSE estimators are optimal in expectation across the ensemble of all stochastic processes with the same second-order statistics. Least squares estimators minimize the error on a given block of data. In signal processing applications, the block of data is a finite-length period of time. Note the book defines E as a sum instead of an average. No guarantees about optimality on other data sets or other stochastic processes. When can we infer something about the ensemble performance based on a single observation sequence of an experiment? 3

4 MMSE versus Least Squares No guarantees about optimality on other data sets or other stochastic processes. When can we infer something about the ensemble performance based on a single observation sequence of an experiment? If the process is ergodic and stationary, the LSE estimator approaches the MMSE estimator as the size of the data set grows. We will only discuss the sum of squares as the performance criterion. Recall our earlier discussion about alternatives. Rationale: it is mathematically tractable - picking the sum of squares will permit us to obtain a closed-form optimal solution - and the solution only depends on second-order moments, which are easily estimated. 4

5 Block Processing 5

6 Least Squares Least squares is a method for finding the best fit to a linear system of N equations and M unknowns: $a_{11}x_1 + a_{12}x_2 = y_1$, $a_{21}x_1 + a_{22}x_2 = y_2$ ($N = M$). In matrix form $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$, i.e. $Ax = y$ with solution $x = A^{-1}y$. 6

7 Least Squares Least squares is a method for finding the best fit to a linear system of N equations and M unknowns: $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}$ ($N > M$). The system $Ax = y$ is now overdetermined and $x = A^{-1}y$ no longer applies, so define the error $e = y - Ax$, where $A$ is called the Data Matrix, and solve $\min_x (e^T e) = \min_x \sum_{n=1}^{N} e(n)^2$ (note the index n starts at 1 versus 0 in the book). 7

8 Linear Least Squares Estimation and Filtering Back to the book's notation 8

9 Linear Least Squares Estimation and Filtering Definitions 9

10 Matrix Formulation - Multiple Sensors (previous notation: $e = y - Ax$) 10

11 Matrix Formulation - Multiple Sensors 11

12 Matrix Formulation - Filtering The principle is the same, but we need to consider edge effects. We will come back to the computational and windowing aspects of filtering later. 12

13 Matrix Formulation Back to multiple signals 13

14 Squared Error 14

15 Squared Error Components (note that the book does not normalize by 1/N) 15

16 Squared Error Components 16

17 Relating LSE to MMSE Estimation Plugging in our definitions for R and d gives a form which should look familiar from before. Many of the concepts, solutions, etc. will be similar. 17

18 Least Squares Estimate Three ways to solve for the least squares estimate: 1. Take the gradient and set it to zero 2. Complete the square 3. Orthogonality 18

19 Least Squares Estimate 1. Take the gradient and set it to zero: $e^T e = (y - Xc)^T (y - Xc) = y^T y - 2c^T X^T y + c^T X^T X c$, so $\frac{\partial (e^T e)}{\partial c} = -2X^T y + 2X^T X c = 0$. This yields the normal equations $X^T X c = X^T y$, i.e. $\hat{R} c = \hat{d}$. For N > M the problem is almost always overdetermined and hence the columns of the data matrix X are independent. This implies $X^T X$ is full rank, so $c_{LS} = (X^T X)^{-1} X^T y = \hat{R}^{-1}\hat{d}$. 19
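As a minimal sketch of solving the normal equations numerically (NumPy; the data matrix and observations below are synthetic stand-ins, not from the notes):

```python
import numpy as np

# Minimal sketch: solve the normal equations X^T X c = X^T y for an
# overdetermined system (N > M). Synthetic data, for illustration only.
rng = np.random.default_rng(0)
N, M = 100, 4
X = rng.standard_normal((N, M))                  # data matrix
c_true = np.array([1.0, -0.5, 2.0, 0.3])
y = X @ c_true + 0.1 * rng.standard_normal(N)    # noisy observations

# c_LS = (X^T X)^{-1} X^T y; solve() is preferred over forming the inverse explicitly
c_ls = np.linalg.solve(X.T @ X, X.T @ y)
print(c_ls)   # close to c_true
```

Forming $X^T X$ squares the condition number of the problem, which is one reason the QR and SVD routes discussed later in these notes are preferred numerically.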

20 Least Squares Estimate 2. Complete the square 20

21 Least Squares Estimate 2. Complete the square Both the LSE and MSE criteria are quadratic functions of the coefficient vector. Same form as the FIR Wiener solution. When are they equivalent? For an ergodic process in the limit of large N. 21

22 A toy example Line fitting: fit $y = ax + b$ to data pairs $(x_n, y_n)$. In matrix form $e = y - Xc$, with $e = [e_1, e_2, \ldots, e_N]^T$, $y = [y_1, y_2, \ldots, y_N]^T$, $X = \begin{bmatrix} x_1 & 1 \\ \vdots & \vdots \\ x_N & 1 \end{bmatrix}$, $c = \begin{bmatrix} a \\ b \end{bmatrix}$. Note the coefficients could be for a higher-order polynomial. The system must be linear in the unknown parameters, not the equations themselves. 22
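A short sketch of the line-fitting example in NumPy (the data values are made up for illustration); `np.linalg.lstsq` plays the role of the LS solver:

```python
import numpy as np

# Line fit y = a*x + b: each row of the data matrix is [x_n, 1].
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.standard_normal(x.size)    # noisy line with a=2, b=1

X = np.column_stack([x, np.ones_like(x)])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
print(a_hat, b_hat)

# Higher-order polynomial: still linear in the parameters, just more columns
X_quad = np.column_stack([x**2, x, np.ones_like(x)])
```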

23 More Applications Applications we've seen earlier for Linear Estimation / Wiener Filtering: noise reduction, equalization, prediction, system identification. These can be solved using LS given a block of data. Applies to FIR (not general IIR) filters. Numerical and computational aspects need further investigation. 23

24 Computational Issues 24

25 Example: Time Series Prediction (from McNames notes) Goal: predict the S&P 500. Clearly not a stationary signal. What might we do? Common trick: difference the time series. 25

26 Example: Difference Time Series 26

27 Example: Percent Change Time Series 27

28 Example: Prediction Results 28

29 Orthogonality and Geometric Interpretation (2-D illustration) Consider the simple example: $X = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$, $y = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$, $e = y - Xc$, $\hat{y} = Xc$ with $\hat{y}(0) = 2c$, $\hat{y}(1) = 1c$. If we want to make $Xc_{LS}$ as close as possible to $y$, then the error vector $e_o$ should be orthogonal to the line (column space) $Xc$: $(Xc)^T e_o = 0$ for all $c$, i.e. $c^T X^T (Xc_{LS} - y) = c^T [X^T X c_{LS} - X^T y] = 0$ for all $c$. Since this must hold for all $c$, we must have $X^T X c_{LS} = X^T y$ (the normal equations). 29

30 Orthogonality and Geometric Interpretation (2-D illustration) Consider the simple example: $X = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$, $y = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$, $e = y - Xc$, $\hat{y} = Xc$ with $\hat{y}(0) = 2c$, $\hat{y}(1) = 1c$. Substituting directly for the LS solution $c_{LS} = (X^T X)^{-1} X^T y$ gives $\hat{y} = X(X^T X)^{-1} X^T y$. The matrix $P = X(X^T X)^{-1} X^T$ is a projection operator which projects $y$ onto the space spanned by $X$. 30
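A quick numerical check of the 2-D example (NumPy), verifying that P projects y onto the column space of X and that the residual is orthogonal to it:

```python
import numpy as np

X = np.array([[2.0], [1.0]])            # column vector [2; 1]
y = np.array([2.0, 2.0])

P = X @ np.linalg.inv(X.T @ X) @ X.T    # projection onto span{X}
y_hat = P @ y                           # = X c_LS = [2.4, 1.2]
e_o = y - y_hat

print(X.T @ e_o)                # ~0: error orthogonal to the column space
print(np.allclose(P @ P, P))    # True: P is idempotent, as a projection must be
```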

31 Orthogonality and Geometric Interpretation (book) 31

32 Orthogonality and Geometric Interpretation 32

33 Orthogonality and Geometric Interpretation 33

34 Uniqueness 34

35 The Pseudoinverse We can write the least squares solution as $c_{LS} = (X^T X)^{-1} X^T y = X^+ y$, where we have defined the pseudoinverse of a matrix $X$ with linearly independent columns: $X^+ = (X^T X)^{-1} X^T$. The pseudoinverse has the following properties: i) $X X^+ X = X$; ii) $(X X^+)^T = X X^+$. It can be shown using orthogonality that any matrix $X^+$ satisfying the above two conditions yields the least squares solution $X^+ y$ to the equation $y - Xc = e$. 35

36 Minimum Norm Solution Suppose that the columns of X are not linearly independent, or simply N < M. Then $X^T X$ cannot be inverted and there are an infinite number of solutions which solve $y = Xc$ exactly. Which to choose? Example: $X = \begin{bmatrix} 1 & 2 \end{bmatrix}$, $y = \begin{bmatrix} 4 \end{bmatrix}$, so $1c_1 + 2c_2 = 4$ defines a line of solutions $y - Xc = 0$ in the $(c_1, c_2)$ plane. Choose $c_{min}$: it is orthogonal to the subspace $y - Xc = 0$, hence orthogonal to the null space of X, hence in the range space of $X^T$: $c_{min} = X^T \lambda$. 36

37 Minimum Norm Solution Solving: $c_{min} = X^T \lambda$ and $y = Xc_{min} = X X^T \lambda$ give $\lambda = (X X^T)^{-1} y$, so $c_{min} = X^T (X X^T)^{-1} y = X^+ y$. The pseudoinverse is now $X^+ = X^T (X X^T)^{-1}$, with the additional properties iii) $X^+ X X^+ = X^+$; iv) $(X^+ X)^T = X^+ X$. This is the Moore-Penrose pseudoinverse: for any matrix X there is only one matrix $X^+$ that satisfies all four conditions. 37
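A small check of the underdetermined example in NumPy; `np.linalg.pinv` computes the same Moore-Penrose pseudoinverse described above:

```python
import numpy as np

# Underdetermined system: 1*c1 + 2*c2 = 4 has infinitely many solutions.
X = np.array([[1.0, 2.0]])
y = np.array([4.0])

# Minimum-norm solution c_min = X^T (X X^T)^{-1} y
c_min = X.T @ np.linalg.solve(X @ X.T, y)
print(c_min)                     # [0.8, 1.6], the shortest exact solution
print(np.linalg.pinv(X) @ y)     # same result via the Moore-Penrose pseudoinverse
```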

38 Weighted Least Squares 38

39 Weighted Least Squares 39

40 Weighted Least Squares 40
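The weighted least squares details on these slides are not reproduced here; as a hedged sketch, assuming the usual criterion $\min_c (y - Xc)^T W (y - Xc)$ with a diagonal weight matrix W (an assumption, since the slides' exact formulation is not shown), the solution generalizes the normal equations:

```python
import numpy as np

# Hedged sketch of weighted LS: min_c (y - Xc)^T W (y - Xc), W = diag(w).
# Solution: (X^T W X) c = X^T W y. Data and weights are hypothetical.
rng = np.random.default_rng(2)
N, M = 50, 3
X = rng.standard_normal((N, M))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.standard_normal(N)
w = rng.uniform(0.5, 2.0, N)               # per-sample weights

XtW = X.T * w                              # X^T W without forming diag(w) explicitly
c_wls = np.linalg.solve(XtW @ X, XtW @ y)
print(c_wls)
```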

41 Properties of the LS Estimate Assume a statistical model of how the data was generated. Some properties won't hold when the model is not accurate. 41

42 Deterministic versus Stochastic Data Matrix 42

43 Estimator Properties (Deterministic case) 43

44 Estimator Properties (Deterministic case) Error Variance: define the error variance estimate as in the book; this is an unbiased estimate of the true error variance. See book for proof. 44

45 Estimator Properties (Deterministic case) Other properties - see book for proofs 45

46 Estimator Properties (Stochastic case) 46

47 Another Perspective System identification: the input x(n) drives the unknown system H(z), additive noise v(n) corrupts the output to give y(n), and the LS filter $c_{LS}$ produces $\hat{y}(n)$ with error $e(n) = y(n) - \hat{y}(n)$. Additive noise did not affect the Wiener solution. The LS solution is still unbiased (if the model matches), but the noise adds variance to the LS solution, so you need more data for a good fit. 47

48 Example: Power Spectral Estimation Power spectral estimation (material from Chapter 7). Two approaches: 1. Use a non-parametric approach (e.g., Welch method) from ECE. 2. Fit a model: e(n) drives H(z) to produce x(n). If e(n) is white with variance $\sigma^2$, then $R_x(e^{j\omega}) = \sigma^2 |H(e^{j\omega})|^2$. Use an autoregressive model driven by noise e(n): $x(n) = \sum_{k=1}^{M-1} a_k x(n-k) + e(n)$. 48

49 Example: Power Spectral Estimation Least Squares Solution: $a_{LS} = c_{LS} = (X^T X)^{-1} X^T y$. Note autoregressive models have a duality with prediction: with $y(n) = x(n+1)$, the predictor $x(n) \rightarrow h_{lp}(n) \rightarrow \hat{y}(n)$ has error $e(n+1)$. (We will discuss windowing aspects later.) 49

50 Example: Power Spectral Estimation Generate some data: white noise e(n) is filtered by $H(z) = \dfrac{1}{1 + 0.3z^{-1} + 0.6z^{-2}}$ to produce x(n). (Plot of the generated sequence $x_n$.) 50

51 Example: Power Spectral Estimation True Power Spectrum: plot of $R_{xx}(e^{j\omega})$, Power (dB) versus Normalized Frequency. 51

52 Example: Power Spectral Estimation Periodogram: $R_{xx}(e^{j\omega})$ versus $|\mathrm{DFT}|^2$, Power (dB) versus Normalized Frequency. 52

53 Example: Power Spectral Estimation Welch Method: $R_{xx}(e^{j\omega})$ versus $|\mathrm{DFT}|^2$ (Welch, 256), Power (dB) versus Normalized Frequency. 53

54 Example: Power Spectral Estimation LS Fit (M=5): $R_{xx}(e^{j\omega})$ versus the AR fit, Power (dB) versus Normalized Frequency. 54
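A sketch reproducing the flavor of this example in Python (NumPy/SciPy): generate x(n) with the given H(z), fit an AR model of order M = 5 by least squares, and form the model spectrum. The data length and normalization are assumptions, not values from the slides.

```python
import numpy as np
from scipy import signal

# Generate x(n) by filtering white noise through H(z) = 1/(1 + 0.3 z^-1 + 0.6 z^-2)
rng = np.random.default_rng(3)
N, M = 2048, 5
e = rng.standard_normal(N)
x = signal.lfilter([1.0], [1.0, 0.3, 0.6], e)

# One-step prediction by LS: row n of X holds [x(n-1), ..., x(n-M)], target x(n)
X = np.column_stack([x[M - k - 1:N - k - 1] for k in range(M)])
y = x[M:]
a_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# AR spectrum estimate: sigma^2 / |1 - sum_k a_k e^{-j w k}|^2
sigma2 = np.mean((y - X @ a_ls) ** 2)
w, H = signal.freqz([1.0], np.concatenate(([1.0], -a_ls)), worN=512)
psd_ar = sigma2 * np.abs(H) ** 2
```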

55 Power Spectral Estimation Mixing parametric and non-parametric: use the prediction filter to whiten the error, use a non-parametric method on the residual, then color the PSD estimate using the AR model (undo the pre-whitening). 55

56 Least Squares Filtering Additional aspects 56

57 Edge conditions and windowing 57

58 Computing the correlation matrix 58

59 Derivation of Correlation Matrix Recursion 59

60 Window options See text for minor modifications to the correlation recursions 60

61 More on computationally efficient methods Methods based on forward-backward prediction and order-recursive algorithms: many of the details are in Chapter 7; we will touch on some aspects relating to prediction, which also allow an alternative windowing approach. General linear algebra approaches (we come back to this): Cholesky decomposition, SVD, etc.

62 Linear Prediction Recall the Wiener solution: with $y(n) = x(n+1)$ and $x(n) = \sum_{k=1}^{M} a_k x(n-k) + e(n)$, the predictor $x(n) \rightarrow a \rightarrow \hat{y}(n)$ has error $e(n)$ and satisfies $R a_o = d$. What is $d$? $d = E[x(n)\, y(n)] = E[x(n)\, x(n+1)] = [r_x(1), r_x(2), \ldots, r_x(M)]^T = r$. So $R a_o = r$, i.e. $r(n) = \sum_{k=1}^{M} a_k r(n-k)$. These are known as the Yule-Walker equations and lead to efficient order-recursive computations (Levinson-Durbin algorithms). 62
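A hedged sketch of the Yule-Walker route (NumPy): estimate the autocorrelations from data and solve $Ra = r$ directly; the Levinson-Durbin recursion would solve the same Toeplitz system in $O(M^2)$ instead of $O(M^3)$. The helper name is illustrative.

```python
import numpy as np

def yule_walker(x, M):
    """Solve R a = r using biased autocorrelation estimates (sketch)."""
    N = len(x)
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(M + 1)])   # r(0)..r(M)
    R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])  # Toeplitz R
    return np.linalg.solve(R, r[1:])
```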

63 !" Backward Linear Predic?on Think of?me running in the reverse y(n) = x(n +1) a f x(n +1) x(n) a f ŷ(n)!" e f (n) y(n) = x(n M 1) x(n M 1) a b e b (n) ŷ(n) a b x(n) M x(n) = a f x(n k) + e f (n) x(n M 1) = a b x(n k) + e b (n) k k k=1 M k=1 Easy to show using the Yule-Walker equa?ons that So how do we make use of this? Note, book s nota?on slightly different (b = a b ) a f = flip(a b ) 63

64 Forward-Backward Linear Prediction Minimize the forward and backward error together: stack the forward and backward equations, doubling the size of the data matrix, and solve for a single coefficient vector $a^{FB}$. This lowers the variance of the LS estimate. Correlation or modified covariance window methods. See book for additional details and more careful notation. See MATLAB's ar(x,M) (uses forward-backward and short/no windows by default). 64
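A hedged sketch of the forward-backward (modified covariance) idea in NumPy: the same coefficient vector predicts forward and, reversed, backward, so the two sets of equations are stacked into one LS problem with no windowing. The function name and ordering conventions are assumptions for illustration.

```python
import numpy as np

def fb_predictor(x, M):
    """Forward-backward LS predictor coefficients (modified covariance sketch)."""
    N = len(x)
    # Forward: x(n) ~ sum_k a_k x(n-k), for n = M..N-1
    Xf = np.column_stack([x[M - k - 1:N - k - 1] for k in range(M)])
    yf = x[M:]
    # Backward: x(n-M) ~ sum_k a_k x(n-M+k); same a because a_f = flip(a_b)
    Xb = np.column_stack([x[k + 1:N - M + k + 1] for k in range(M)])
    yb = x[:N - M]
    A = np.vstack([Xf, Xb])        # doubled data matrix
    b = np.concatenate([yf, yb])
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a
```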

65 Application Example: Narrowband interference canceling 65

66 Application Example: Narrowband interference canceling 66

67 Application Example: Narrowband interference canceling 67

68 Application Example: Narrowband interference canceling Just a D-step ahead predictor. Sometimes called a line enhancer. 68
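A minimal sketch of a D-step-ahead predictor used as a line enhancer (NumPy): the narrowband interference is still predictable D samples ahead, so the prediction captures it and the residual is the enhanced broadband signal. The function name and parameters are illustrative, not from the notes.

```python
import numpy as np

def line_enhancer(x, M, D):
    """D-step-ahead LS predictor: returns (coefficients, prediction, residual)."""
    N = len(x)
    start = D + M - 1                        # first sample with a full regressor
    X = np.column_stack([x[start - D - k:N - D - k] for k in range(M)])
    y = x[start:]
    c, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ c                            # narrowband (predictable) component
    return c, y_hat, y - y_hat               # residual = enhanced broadband signal
```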

69 Example: Microelectrode Narrowband Interference 69

70 Example: Microelectrode Narrowband Interference Signal PSD: plot of the input PSD (scaled) versus Frequency (Hz). 70

71 Example: Microelectrode Narrowband Interference Residual PSD: plot of the output PSD (scaled) versus Frequency (Hz). 71

72 Example: Microelectrode Narrowband Interference Input and Predicted Signal: observed versus estimated signal over Time (s), NMSE = 93.4%, D = 44 (5 seconds), M = 500. 72

73 Example: Microelectrode Narrowband Interference Prediction filter frequency response: magnitude response $|H(e^{j\omega})|$ versus Frequency (Hz), log scale. 73

74 Example: Microelectrode Narrowband Interference Prediction error filter frequency response: magnitude response $|H(e^{j\omega})|$ versus Frequency (Hz), log scale. 74

75 Example: OGI Seminar Original recording waveform, with a noise segment indicated.

76 Example: OGI Seminar Original recording, D = 1, M = , and the enhanced speech waveform (compare with Audition noise reduction). 76

77 Example Application: IIR filtering / System ID We previously used a predictor / AR model (all-pole) for spectral estimation. Now consider a general zero/pole IIR model. Given input x(n) and desired output y(n) (corrupted by additive noise v(n)): $H_{LS}(z) = \dfrac{B(z)}{A(z)}$, $\hat{y}(n) = \sum_{k=1}^{M-1} a(k)\,\hat{y}(n-k) + \sum_{k=0}^{M-1} b(k)\,x(n-k)$, with error $e(n) = y(n) - \hat{y}(n)$. 77

78 IIR filtering / System ID Data matrices for $\hat{y}(n) = \sum_{k=1}^{M-1} a(k)\,\hat{y}(n-k) + \sum_{k=0}^{M-1} b(k)\,x(n-k)$, giving $e = y - Xc$. What's wrong with this? How do we get $\hat{y}(n)$ to fill the data matrix X? 78

79 IIR filtering / System ID Data matrices for $\hat{y}(n) = \sum_{k=1}^{M-1} a(k)\,\hat{y}(n-k) + \sum_{k=0}^{M-1} b(k)\,x(n-k)$: substitute the measured output $y(n-k)$ for $\hat{y}(n-k)$ in the data matrix. This is called the Equation Error method. It is easy to show the solution is unbiased if there is no noise, $v(n) = 0$. 79
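A hedged sketch of the equation-error fit in NumPy: the regressors use the measured y(n-k) in place of ŷ(n-k), so the coefficients (a, b) come from a single linear LS solve. The helper name and argument conventions are assumptions.

```python
import numpy as np

def equation_error_fit(x, y, M):
    """LS fit of y(n) ~ sum_{k=1}^{M-1} a(k) y(n-k) + sum_{k=0}^{M-1} b(k) x(n-k)."""
    N = len(y)
    n0 = M - 1                                                     # first usable n
    Ya = np.column_stack([y[n0 - k:N - k] for k in range(1, M)])   # y(n-1)..y(n-M+1)
    Xb = np.column_stack([x[n0 - k:N - k] for k in range(M)])      # x(n)..x(n-M+1)
    c, *_ = np.linalg.lstsq(np.hstack([Ya, Xb]), y[n0:], rcond=None)
    return c[:M - 1], c[M - 1:]                                    # (a, b)
```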

80 Back to Numerical Methods QR decomposition (just the basics). Any matrix with linearly independent columns can be factored as $X = QR$, where R is upper triangular and invertible (not to be confused with the autocorrelation matrix) and the columns of Q are orthonormal, $Q^T Q = I$. The factorization is achieved using Gram-Schmidt or Householder algorithms. Substituting into the LS equations: $c_{LS} = (X^T X)^{-1} X^T y = (R^T Q^T Q R)^{-1} R^T Q^T y = (R^T R)^{-1} R^T Q^T y = R^{-1} Q^T y$, i.e. $R c_{LS} = Q^T y$, which is easily solved using back substitution since R is upper triangular. MATLAB's backslash command: c_LS = X \ y. 80
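A short sketch of the QR route in NumPy/SciPy, mirroring what the backslash operator does for a full-rank overdetermined system (the data here is synthetic):

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 4))
y = rng.standard_normal(100)

Q, R = np.linalg.qr(X, mode='reduced')     # economy-size QR, Q^T Q = I
c_qr = solve_triangular(R, Q.T @ y)        # back substitution, R upper triangular

print(np.allclose(c_qr, np.linalg.lstsq(X, y, rcond=None)[0]))   # True
```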

81 Back to Numerical Methods Singular Value Decomposition: any matrix of rank r can be factored. 81

82 Back to Numerical Methods Singular Value Decomposition: any matrix of rank r can be factored. It is easy to show that the pseudoinverse is given directly by the SVD factors, and thus the LS solution is just $c_{LS} = X^+ y$. 82
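A companion sketch using the SVD (NumPy): with $X = U\Sigma V^T$, the pseudoinverse is $V\Sigma^{-1}U^T$ (dropping negligible singular values when X is rank deficient), and $c_{LS} = X^+ y$. Synthetic data again:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 4))
y = rng.standard_normal(100)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_pinv = Vt.T @ np.diag(1.0 / s) @ U.T     # full-rank case; truncate small s otherwise
c_svd = X_pinv @ y

print(np.allclose(c_svd, np.linalg.pinv(X) @ y))   # True
```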

83 Other Topics Not Covered Additional details on numerical methods. Details on signal modeling and parametric spectral estimation. Minimum variance spectral estimation. Harmonic models and super-resolution algorithms. 83
