ECE 636: Systems identification


ECE 636: Systems identification
Lectures 7-8: Nonparametric identification (continued)

Important distributions: chi-square, t distribution, F distribution

Sampling distributions:
Sample mean (if the variance of $x$ is known): $\bar{x} = \hat{\mu} = \frac{1}{N}\sum_{i=1}^{N} x_i$
Sample variance (if the variance of $x$ is unknown): $\hat{\sigma}^2 = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2$, with $\frac{(N-1)\hat{\sigma}^2}{\sigma^2} \sim \chi^2_{N-1}$
Ratio of two sample variances: follows an F distribution

We can use sampling distributions to construct confidence intervals and perform statistical hypothesis testing.

Power spectral density estimation

Direct method: periodogram / modified periodogram: $\hat{\Phi}_{xx}(\omega) = \frac{1}{N}|X_N(\omega)|^2$, $\omega = 2\pi k/N$, $k = 0, 1, \dots, N-1$. Asymptotically unbiased but not consistent: $\mathrm{Var}\{\hat{\Phi}_{xx}(\omega)\} \approx \Phi_{xx}^2(\omega)$. The Bartlett and Welch methods improve the variance at the cost of reduced resolution.

Indirect method: estimate the autocorrelation and take the DFT of the estimate: $\hat{\varphi}_{xx}(\tau) = \frac{1}{N}\sum_{n=0}^{N-1-\tau} x(n)x(n+\tau)$.

Parametric methods: model the signal as an ARMA process: $y(t) = -a_1 y(t-1) - \dots - a_M y(t-M) + w(t) + b_1 w(t-1) + \dots + b_Q w(t-Q)$.
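The slides' examples use MATLAB; as a rough Python/NumPy analogue (signal length, seed, and segment size below are illustrative assumptions), the variance reduction of Welch averaging relative to the raw periodogram can be checked numerically on white noise, whose true PSD is flat:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
N = 4096
x = rng.standard_normal(N)  # white noise: the true PSD is flat

# Direct method: raw periodogram (asymptotically unbiased, but not consistent)
f_per, P_per = signal.periodogram(x)

# Welch's method: average periodograms of overlapping windowed segments
# (lower variance, at the cost of reduced frequency resolution)
f_w, P_w = signal.welch(x, nperseg=256)

print(np.var(P_per), np.var(P_w))  # Welch variance is much smaller
```

The Bartlett method corresponds to `signal.welch` with `noverlap=0` and a rectangular window.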

Nonparametric identification

Time domain (impulse response estimation): impulse response analysis, step response analysis, correlation analysis, least squares estimation.
Frequency domain (frequency response estimation): sinusoidal analysis, frequency response analysis, coherence analysis.

Notation: for an LTI system with input $u(t)$, impulse response $g_0(\tau)$, and additive noise $\upsilon(t)$:
$y(t) = \sum_{\tau=0}^{\infty} g_0(\tau)u(t-\tau) + \upsilon(t) = G_0(q)u(t) + \upsilon(t), \quad G_0(q) = \sum_{\tau=0}^{\infty} g_0(\tau)q^{-\tau}$

As we mentioned, in nonparametric identification we estimate $g_0(\tau)$. In parametric identification, we parametrize the system model, i.e. $\hat{G}(q, \theta)$, and estimate the parameter vector $\theta$. For example, if the model is an ARX model:
$y(t) + a_1 y(t-1) = b_1 u(t-1) + b_2 u(t-2) + \varepsilon(t)$
$\hat{G}(q, \theta) = \frac{b_1 q^{-1} + b_2 q^{-2}}{1 + a_1 q^{-1}}, \quad \theta = [a_1\ b_1\ b_2]^T$

Impulse response analysis

For the system $y(t) = \sum_{\tau=0}^{\infty} g_0(\tau)u(t-\tau) + \upsilon(t) = G_0(q)u(t) + \upsilon(t)$, let the input be an impulse of the form:
$u(t) = \alpha$ for $t = 0$, and $u(t) = 0$ for $t \neq 0$

Output: $y(t) = \alpha g_0(t) + \upsilon(t)$, so $\hat{g}(t) = \frac{y(t)}{\alpha}$

The error of the estimate is $\hat{g}(t) - g_0(t) = \frac{\upsilon(t)}{\alpha}$. Therefore, the value of $\alpha$ should be very large compared to the noise values in order to achieve small errors. It is practically difficult to apply such inputs in many cases.

Step response analysis

Let the input be the step signal $u(t) = \alpha$ for $t \geq 0$, and $u(t) = 0$ for $t < 0$.

Output: $y(t) = \alpha\sum_{\tau=0}^{t} g_0(\tau) + \upsilon(t)$, so $\hat{g}(t) = \frac{y(t) - y(t-1)}{\alpha}$

Error: $\hat{g}(t) - g_0(t) = \frac{\upsilon(t) - \upsilon(t-1)}{\alpha}$, which is practically large. Step response analysis is mostly useful for estimating basic characteristics of the system such as pure time delay, static gain, and time constants (e.g., in automatic control applications).

Correlation analysis

$y(t) = \sum_{\tau=0}^{\infty} g_0(\tau)u(t-\tau) + \upsilon(t)$

Let the input be stationary with autocorrelation function $\varphi_{uu}(\tau) = E\{u(t)u(t-\tau)\}$. Also, let the noise be uncorrelated with the input: $E\{u(t)\upsilon(t-\tau)\} = 0$.

Multiply the equation above with $u(t-k)$ and take the expected value:
$E\{y(t)u(t-k)\} = E\{\sum_{\tau=0}^{\infty} g_0(\tau)u(t-\tau)u(t-k) + \upsilon(t)u(t-k)\}$
$\varphi_{uy}(k) = \sum_{\tau=0}^{\infty} g_0(\tau)\varphi_{uu}(\tau-k)$

If we have a white-noise input, $\varphi_{uu}(\tau) = \sigma_u^2\delta(\tau)$, and:
$g_0(\tau) = \frac{\varphi_{uy}(\tau)}{\sigma_u^2}$

Correlation analysis

In practice, we estimate the autocorrelation and cross-correlation functions that are needed from a data record of length $N$. Note: there is more than one sum type (different limits) to estimate these quantities. For example, we can use the following estimates:

$\hat{\varphi}_{uy}(\tau) = \frac{1}{N}\sum_{n=\tau+1}^{N} y(n)u(n-\tau)$
$\hat{\varphi}_{uu}(\tau) = \frac{1}{N}\sum_{n=\tau+1}^{N} u(n)u(n-\tau), \quad \hat{\varphi}_{uu}(-\tau) = \hat{\varphi}_{uu}(\tau), \quad \tau = 0, 1, 2, \dots$
$\hat{\sigma}_u^2 = \hat{\varphi}_{uu}(0) = \frac{1}{N}\sum_{n=1}^{N} u^2(n)$

to get the following estimate for the impulse response of the system:

$\hat{g}(\tau) = \frac{\sum_{n=\tau+1}^{N} y(n)u(n-\tau)}{\sum_{n=1}^{N} u^2(n)}$
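A minimal Python sketch of correlation analysis with a white-noise input; the true impulse response `g0`, the noise level, and the record length are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20000
u = rng.standard_normal(N)            # white-noise input
g0 = 0.8 ** np.arange(10)             # hypothetical true impulse response
y = np.convolve(u, g0)[:N] + 0.1 * rng.standard_normal(N)

# ĝ(τ) = Σ_n y(n) u(n-τ) / Σ_n u(n)²  (cross-correlation over input power)
M = 10
g_hat = np.array([np.dot(y[tau:], u[:N - tau]) for tau in range(M)]) / np.sum(u**2)

print(np.round(g_hat, 2))             # should approach g0 as N grows
```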

Correlation analysis

What happens if we don't have a white-noise input? Then
$\varphi_{uy}(k) = \sum_{\tau=0}^{\infty} g_0(\tau)\varphi_{uu}(\tau-k)$, i.e. $\varphi_{uy}(\tau) = g_0(\tau) * \varphi_{uu}(\tau)$

In practice we truncate the above sum at $M$ (where $M$ is the memory of the system) and use estimates of the auto/cross-correlation functions as shown before: $\hat{\varphi}_{uy}(\tau) = \hat{g}(\tau) * \hat{\varphi}_{uu}(\tau)$. In matrix form, we can write:

$\begin{bmatrix} \hat{\varphi}_{uy}(0) \\ \hat{\varphi}_{uy}(1) \\ \vdots \\ \hat{\varphi}_{uy}(M-1) \end{bmatrix} = \begin{bmatrix} \hat{\varphi}_{uu}(0) & \hat{\varphi}_{uu}(-1) & \dots & \hat{\varphi}_{uu}(-(M-1)) \\ \hat{\varphi}_{uu}(1) & \hat{\varphi}_{uu}(0) & \dots & \hat{\varphi}_{uu}(-(M-2)) \\ \vdots & & \ddots & \vdots \\ \hat{\varphi}_{uu}(M-1) & \hat{\varphi}_{uu}(M-2) & \dots & \hat{\varphi}_{uu}(0) \end{bmatrix} \begin{bmatrix} \hat{g}(0) \\ \hat{g}(1) \\ \vdots \\ \hat{g}(M-1) \end{bmatrix}$

We can solve the above by inverting the $(M \times M)$ matrix: $\hat{g} = \hat{\Phi}_{uu}^{-1}\hat{\Phi}_{uy}$
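The Toeplitz system above can be sketched in Python with `scipy.linalg.toeplitz`; the colored (non-white) input, the true impulse response, and the record length below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import toeplitz, solve

rng = np.random.default_rng(2)
N = 200000
# Non-white input: white noise colored by a moving average (hypothetical choice)
w = rng.standard_normal(N + 1)
u = w[1:] + 0.5 * w[:-1]
g0 = 0.8 ** np.arange(8)              # hypothetical true impulse response
y = np.convolve(u, g0)[:N]

M = 8
# Biased correlation estimates for lags 0..M-1
phi_uu = np.array([np.dot(u[tau:], u[:N - tau]) for tau in range(M)]) / N
phi_uy = np.array([np.dot(y[tau:], u[:N - tau]) for tau in range(M)]) / N

# Solve the M x M Toeplitz system (phi_uu is symmetric in the lag)
g_hat = solve(toeplitz(phi_uu), phi_uy)
print(np.round(g_hat, 2))             # should approach g0 as N grows
```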

Correlation analysis

We return to: $\hat{g}(\tau) = \frac{\sum_{n=\tau+1}^{N} y(n)u(n-\tau)}{\sum_{n=1}^{N} u^2(n)}$

It can be shown that $\lim_{N \to \infty} E\{\hat{g}(\tau)\} = g_0(\tau)$, i.e. $\hat{g} \to g_0$, and that the covariance matrix $E\{(\hat{g} - g)(\hat{g} - g)^T\}$ depends on $1/N$; therefore we get better estimates as $N$ increases.

The convolution sum as a linear regression problem

Note: by truncating the convolution sum at $M$ as before, we can write the discrete convolution relation as $y = U\hat{g}$:

$\begin{bmatrix} y(1) \\ y(2) \\ \vdots \\ y(N) \end{bmatrix} = \begin{bmatrix} u(1) & 0 & \dots & 0 \\ u(2) & u(1) & \dots & 0 \\ \vdots & & \ddots & \vdots \\ u(N) & u(N-1) & \dots & u(N-M+1) \end{bmatrix} \begin{bmatrix} \hat{g}(0) \\ \hat{g}(1) \\ \vdots \\ \hat{g}(M-1) \end{bmatrix}$

$U$ is an $N \times M$ matrix. The least squares solution of this is:
$\hat{g} = (U^T U)^{-1}U^T y$

Not very efficient (many unknowns). More to follow on linear regression.
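A Python sketch of the least-squares formulation, assuming a hypothetical FIR truth and noise level; `numpy.linalg.lstsq` plays the role of $(U^T U)^{-1}U^T y$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 2000, 8
u = rng.standard_normal(N)
g0 = 0.8 ** np.arange(M)              # hypothetical true impulse response
y = np.convolve(u, g0)[:N] + 0.2 * rng.standard_normal(N)

# Build the N x M lower-triangular Toeplitz regressor matrix: U[i, j] = u[i - j]
U = np.zeros((N, M))
for j in range(M):
    U[j:, j] = u[:N - j]

# Least-squares solution of y = U g
g_hat, *_ = np.linalg.lstsq(U, y, rcond=None)
print(np.round(g_hat, 2))
```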

Analysis of sinusoidal response

The sinusoidal response of an LTI system is also sinusoidal: the amplitude of the output sinusoid is the input amplitude multiplied by the magnitude of the system frequency response at the input frequency, $|G_0(\omega_0)|$, and a phase equal to the phase of the system frequency response at $\omega_0$ is added to the input phase, i.e.:

$u(t) = \alpha\cos(\omega_0 t)$
$y(t) = \alpha|G_0(\omega_0)|\cos(\omega_0 t + \varphi) + \upsilon(t), \quad \varphi = \arg G_0(\omega_0)$

Therefore we can vary the input frequency $\omega_0$ and measure the amplitude and the phase of the steady-state response in the output. Consequently, we can estimate the frequency response of the system, e.g. graphically in the form of Bode plots. This experimental protocol is often not feasible, and it is affected by noise, as we have to estimate two values (magnitude and phase) in the presence of noise.

Analysis of sinusoidal response

In order to improve these estimates we may define:
$I_C(N) = \frac{1}{N}\sum_{t=1}^{N} y(t)\cos(\omega_0 t), \quad I_S(N) = \frac{1}{N}\sum_{t=1}^{N} y(t)\sin(\omega_0 t)$

Substituting $y(t) = \alpha|G_0(\omega_0)|\cos(\omega_0 t + \varphi) + \upsilon(t)$ and ignoring the transient response, it can be shown using trigonometric identities that:
$\lim_{N \to \infty} I_C(N) = \frac{\alpha}{2}|G_0(\omega_0)|\cos\varphi$
$\lim_{N \to \infty} I_S(N) = -\frac{\alpha}{2}|G_0(\omega_0)|\sin\varphi$

We can then estimate the magnitude and the phase as:
$|\hat{G}(\omega_0)| = \frac{\sqrt{I_C^2(N) + I_S^2(N)}}{\alpha/2}, \quad \hat{\varphi} = -\arctan\left(\frac{I_S(N)}{I_C(N)}\right)$
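A numerical sketch of the $I_C$/$I_S$ correlation method; the first-order test system, $\omega_0$, and record length below are illustrative assumptions, not the slides' example:

```python
import numpy as np
from scipy import signal

# Hypothetical system: y(t) = 0.8 y(t-1) + u(t-1)
b, a = [0.0, 1.0], [1.0, -0.8]
w0, alpha, N = 1.0, 1.0, 20000

t = np.arange(N)
u = alpha * np.cos(w0 * t)
y = signal.lfilter(b, a, u)
y, t = y[N // 2:], t[N // 2:]         # discard the transient

# Correlate the steady-state output with cos/sin at the input frequency
IC = np.mean(y * np.cos(w0 * t))      # -> (alpha/2)|G(w0)| cos(phi)
IS = np.mean(y * np.sin(w0 * t))      # -> -(alpha/2)|G(w0)| sin(phi)
mag_hat = np.sqrt(IC**2 + IS**2) / (alpha / 2)
phase_hat = np.arctan2(-IS, IC)

# Compare with the true frequency response at w0
_, G = signal.freqz(b, a, worN=[w0])
print(mag_hat, abs(G[0]), phase_hat, np.angle(G[0]))
```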

The empirical transfer function estimate

The simplest estimate of the frequency response of the system results from the relation between the DFTs of the input and the output, i.e.:

$\hat{G}_N(\omega) = \frac{Y_N(\omega)}{U_N(\omega)} \quad (1)$

where $Y_N(\omega)$ and $U_N(\omega)$ are the DFTs of the output and input, which may be computed from the sample records $\{u(1), \dots, u(N)\}$ and $\{y(1), \dots, y(N)\}$:

$U_N(\omega) = \sum_{n=0}^{N-1} u(n)e^{-j\frac{2\pi}{N}kn}, \quad Y_N(\omega) = \sum_{n=0}^{N-1} y(n)e^{-j\frac{2\pi}{N}kn}, \quad \omega = 2\pi k/N, \quad k = 0, 1, \dots, N-1$

By taking the inverse DFT of (1) we can also obtain estimates of the impulse response. This is a simple method, but it has undesirable properties. It can be shown that as $N \to \infty$:
the estimate is asymptotically unbiased
the estimate is not consistent; its variance depends on the noise-to-signal ratio at each frequency (recall periodogram estimates)
the estimates at different frequencies are asymptotically uncorrelated

Solution: as in PSD estimation, we can utilize smoothing.
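A minimal ETFE sketch in Python, assuming the same hypothetical first-order system as above (seed and noise level are arbitrary); the raw FFT ratio tracks the true response only noisily, which is the inconsistency discussed above:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
N = 1024
b, a = [0.0, 1.0], [1.0, -0.8]        # hypothetical true system
u = rng.standard_normal(N)
y = signal.lfilter(b, a, u) + 0.1 * rng.standard_normal(N)

# Empirical transfer function estimate: G_hat(w_k) = Y_N(w_k) / U_N(w_k)
U = np.fft.rfft(u)
Y = np.fft.rfft(y)
G_etfe = Y / U

# Compare with the true frequency response on the same grid w_k = 2*pi*k/N
w = np.linspace(0, np.pi, len(G_etfe))
_, G_true = signal.freqz(b, a, worN=w)
print(np.mean(np.abs(G_etfe - G_true)))   # erratic: the raw ETFE is noisy
```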

Smoothing the empirical transfer function

We can use windowing (as we did in PSD estimation), but this time in the frequency domain, to reduce the variance of our estimate. The basic idea is that the true frequency response is a smooth function of $\omega$; in other words, for neighboring values of $\omega$ the corresponding $G_0(\omega)$ values are also close. However, the simple DFT method yields uncorrelated estimates, and if the frequency resolution $2\pi/N$ is small compared to the changes of $G_0(\omega)$, the estimated values $\hat{G}_N(2\pi k/N)$ for $k$ in a neighborhood of $\omega_0$ correspond to uncorrelated, unbiased estimates of the same quantity $G_0(\omega_0)$!

Smoothing the empirical transfer function

Let $G_0(\omega)$ be constant in the interval
$\frac{2\pi k_1}{N} = \omega_0 - \Delta\omega < \omega < \omega_0 + \Delta\omega = \frac{2\pi k_2}{N}$

We can estimate this value from $\hat{G}_N(\frac{2\pi k}{N})$, $k \in (k_1, k_2)$, by simply taking the average of these values. Moreover, we can weigh these values by the inverse variance of each of these estimated values*, i.e.:

$\hat{G}(\omega_0) = \frac{\sum_{k=k_1}^{k_2} \alpha_k \hat{G}_N(\frac{2\pi k}{N})}{\sum_{k=k_1}^{k_2} \alpha_k}$

where $\alpha_k$ corresponds to the inverse variance of each estimate, i.e.:
$\alpha_k = \frac{|U_N(\frac{2\pi k}{N})|^2}{\Phi_{\upsilon\upsilon}(\frac{2\pi k}{N})}$

For large $N$ the sums become integrals, therefore:
$\hat{G}(\omega_0) = \frac{\int_{\omega_0 - \Delta\omega}^{\omega_0 + \Delta\omega} \alpha(\xi)\hat{G}_N(\xi)\,d\xi}{\int_{\omega_0 - \Delta\omega}^{\omega_0 + \Delta\omega} \alpha(\xi)\,d\xi}, \quad \alpha(\xi) = \frac{|U_N(\xi)|^2}{\Phi_{\upsilon\upsilon}(\xi)}$

* If we have estimates of the same quantity with different variances, the best estimate (in terms of minimum variance) of this quantity is given by a weighted sum where the weights are the inverses of the variances of the separate estimates.

Smoothing the empirical transfer function

If the frequency response is not completely constant between $(\omega_0 - \Delta\omega, \omega_0 + \Delta\omega)$, we can multiply the previous relation by a window function $W_\gamma(\xi)$ which weighs the estimates around $\omega_0$ more, i.e.:

$\hat{G}(\omega_0) = \frac{\int_{\omega_0 - \Delta\omega}^{\omega_0 + \Delta\omega} W_\gamma(\xi - \omega_0)\alpha(\xi)\hat{G}_N(\xi)\,d\xi}{\int_{\omega_0 - \Delta\omega}^{\omega_0 + \Delta\omega} W_\gamma(\xi - \omega_0)\alpha(\xi)\,d\xi} \quad (1)$

If the noise spectrum $\Phi_{\upsilon\upsilon}(\xi)$ is known, we can calculate the quantity above. Typically it is not! We can, however, assume that this spectrum varies slowly compared to the window width (broadband noise), so that it stays approximately constant and equal to $\Phi_{\upsilon\upsilon}(\omega_0)$. Therefore:

$\alpha(\xi) = \frac{|U_N(\xi)|^2}{\Phi_{\upsilon\upsilon}(\omega_0)}$

and we can simplify (1) to:

$\hat{G}(\omega_0) = \frac{\int_{\omega_0 - \Delta\omega}^{\omega_0 + \Delta\omega} W_\gamma(\xi - \omega_0)|U_N(\xi)|^2\hat{G}_N(\xi)\,d\xi}{\int_{\omega_0 - \Delta\omega}^{\omega_0 + \Delta\omega} W_\gamma(\xi - \omega_0)|U_N(\xi)|^2\,d\xi}$

Smoothing the empirical transfer function

We can choose any of the symmetric windows that we have seen before (e.g. rectangular, Bartlett, Hamming); note that here we are working in the frequency domain. The width of the window determines the bias/variance tradeoff: when we use narrow windows in the time domain (small M), which correspond to broader windows in the frequency domain, we obtain estimates with smaller variance but larger bias, as we are averaging the estimates over a wider range of frequencies within which the true frequency response may vary considerably.

Smoothing the empirical transfer function

We can also smooth in a manner analogous to the Welch method that we examined in PSD estimation. We split the data record of length $N$ into $M$ segments of length $K$ and estimate

$\hat{G}_R^k(\omega) = \frac{Y_R^k(\omega)}{U_R^k(\omega)}, \quad k = 1, 2, \dots, M$

Take the average of these estimates:
$\hat{G}(\omega) = \frac{1}{M}\sum_{k=1}^{M} \hat{G}_R^k(\omega)$

or use weights that are inversely related to the variance:
$\hat{G}(\omega) = \frac{\sum_{k=1}^{M} \beta_k(\omega)\hat{G}_R^k(\omega)}{\sum_{k=1}^{M} \beta_k(\omega)}, \quad \beta_k(\omega) = |U_R^k(\omega)|^2$ (the periodogram of each segment)

Estimating the frequency response

We can also use the relation
$\hat{G}(\omega) = \frac{\hat{\Phi}_{yu}(\omega)}{\hat{\Phi}_{uu}(\omega)}$
to estimate the frequency response, or we can estimate the coherence:
$\gamma_{yu}^2(\omega) = \frac{|\Phi_{yu}(\omega)|^2}{\Phi_{uu}(\omega)\Phi_{yy}(\omega)}$

We have seen that the auto/cross spectra may be estimated directly (periodogram) or indirectly by first estimating the corresponding auto/cross-correlation functions, i.e.:
$\hat{\varphi}_{uu}(\tau) = \frac{1}{N}\sum_{n=\tau+1}^{N} u(n)u(n-\tau), \quad \hat{\varphi}_{yu}(\tau) = \frac{1}{N}\sum_{n=\tau+1}^{N} y(n)u(n-\tau)$

In both cases we can use windowing when estimating the spectrum, i.e.:
$\hat{\Phi}_{uu}(\omega) = \sum_{\tau} w(\tau)\hat{\varphi}_{uu}(\tau)e^{-i\tau\omega}, \quad \hat{\Phi}_{yu}(\omega) = \sum_{\tau} w(\tau)\hat{\varphi}_{yu}(\tau)e^{-i\tau\omega}$

This weighs the auto/cross-correlation estimates at large values of $\tau$ less, which is desirable: those estimates are more inaccurate because they are computed from fewer I/O samples (recall we have finite samples!).
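The cross-spectral estimate and the coherence can be sketched with SciPy's Welch-based routines, which implement the windowed segment averaging described above; the test system and noise level below are illustrative assumptions:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(5)
N = 8192
b, a = [0.0, 1.0], [1.0, -0.8]        # hypothetical true system
u = rng.standard_normal(N)
y = signal.lfilter(b, a, u) + 0.5 * rng.standard_normal(N)

# G_hat(w) = Phi_yu(w) / Phi_uu(w), with Welch-averaged spectra
f, Puu = signal.welch(u, nperseg=256)
_, Pyu = signal.csd(u, y, nperseg=256)   # cross-spectral density estimate
G_hat = Pyu / Puu

# Coherence: close to 1 where the linear, noise-free relation dominates
_, Cyu = signal.coherence(u, y, nperseg=256)
print(np.mean(np.abs(G_hat)), np.mean(Cyu))
```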

Estimating the frequency response

Selecting a proper window length is important. In practice, we can select the window parameter such that the window width is small compared to the data record length, so that we don't smooth excessively.

Coherence: while it does not give an explicit estimate of the system frequency response, coherence is also widely used in systems identification/modeling applications:
$\gamma_{uy}^2(\omega) = \frac{|\Phi_{uy}(\omega)|^2}{\Phi_{uu}(\omega)\Phi_{yy}(\omega)}$
Since it involves the estimation of the cross spectrum between input and output as well as the input/output spectra, similar remarks hold for the estimation of these quantities.

Noise spectrum estimation

If we have noise measurements, we can estimate their spectrum as before (typically we don't). In practice, after we obtain an estimate $\hat{g}(t)$, we can estimate the noise values as:
$\hat{\upsilon}(t) = y(t) - \sum_{\tau=0}^{M} \hat{g}(\tau)u(t-\tau)$
and then estimate their spectrum. Alternatively, with $z(t)$ the noise-free output of $g_0(\tau)$, we can estimate the spectrum (assuming noise uncorrelated with the input) by:
$\Phi_{yy}(\omega) = \Phi_{zz}(\omega) + \Phi_{\upsilon\upsilon}(\omega)$
$\Phi_{zz}(\omega) = |G_0(\omega)|^2\Phi_{uu}(\omega) = \frac{|\Phi_{yu}(\omega)|^2}{\Phi_{uu}(\omega)}$
$\hat{\Phi}_{\upsilon\upsilon}(\omega) = \hat{\Phi}_{yy}(\omega) - \frac{|\hat{\Phi}_{yu}(\omega)|^2}{\hat{\Phi}_{uu}(\omega)}$
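A sketch of the indirect noise-spectrum estimate, under the stated assumption that the noise is uncorrelated with the input; the system, noise variance, and segment length are illustrative:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(6)
N = 16384
b, a = [0.0, 1.0], [1.0, -0.8]        # hypothetical true system
u = rng.standard_normal(N)
v = 0.5 * rng.standard_normal(N)      # additive output noise, var(v) = 0.25
y = signal.lfilter(b, a, u) + v

# Phi_vv(w) = Phi_yy(w) - |Phi_yu(w)|^2 / Phi_uu(w)
f, Pyy = signal.welch(y, nperseg=256)
_, Puu = signal.welch(u, nperseg=256)
_, Pyu = signal.csd(u, y, nperseg=256)
Pvv_hat = Pyy - np.abs(Pyu) ** 2 / Puu

# True one-sided density of the white noise (fs = 1): 2 * var(v) = 0.5
print(np.mean(Pvv_hat))
```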

Example

Dynamic response of a biomedical system (human ankle): second-order linear ODE. Input: torque; output: position. Ideal input: Gaussian white noise (GWN). Broadband case: low-pass filtering at 200 Hz. Low-pass-filtered case: filtering at 50 Hz. Noise: 0 dB.

Example: input/output spectra

Example

Nonparametric time-domain identification, least squares method:
(A) true; (B) GWN, no noise; (C) broadband input, no noise; (D) broadband input, 0 dB noise; (E) LP-filtered input, no noise; (F) LP-filtered input, 0 dB noise

Example: nonparametric frequency-domain identification

Example

Let the true system be an ARX system, with $e$ and $u$ uncorrelated (block diagram: $u(t) \to h \to \Sigma \leftarrow e(t)$).

Simulation of the system: impulse response and frequency response.

B=[0 1]; A=[1 -0.8];
htrue = impz(B,A);
[H,W] = freqz(B,A);

Simulation ($e$, $u$ uncorrelated):

N=1024;
B=[0 1]; A=[1 -0.8];
u=randn(N,1);
z=filter(B,A,u);
e=randn(N,1);
y=z+e;

Simulation: correlation between input and output.

[phi_zu,lags]=xcorr(z,u,'biased');
[phi_yu,lags]=xcorr(y,u,'biased');

Correlation analysis: the input is approximately white noise, therefore:

N=1024; M=45;
hest_zu=phi_zu(floor(length(phi_zu)/2)+1:floor(length(phi_zu)/2)+M)/var(u);
hest_yu=phi_yu(floor(length(phi_yu)/2)+1:floor(length(phi_yu)/2)+M)/var(u);

Correlation analysis: how does the value of N affect the results? (Results shown for N=50 and N=100.)

Correlation analysis: a long data record is required for good estimates! (Results shown for N=10000 and N=1000000.)

Frequency response estimation in the frequency domain. Cross-spectral density in MATLAB: cpsd.

[Tzu,w] = tfestimate(u,z);
[Tyu,w] = tfestimate(u,y);

Frequency response estimation (System Identification Toolbox): spa (spectra estimated with a Hamming window).

datyu=iddata(y,u,1);
datzu=iddata(z,u,1);
spayu=spa(datyu);
spazu=spa(datzu);
plot(spayu)
plot(spazu)

Frequency response estimation: empirical transfer function estimate (System Identification Toolbox): etfe.

datyu=iddata(y,u,1);
hetfe_yu=etfe(datyu);
hetfe_yu5=etfe(datyu,5);

Frequency response estimation: empirical transfer function estimate with more smoothing.

datyu=iddata(y,u,1);
hetfe_yu10=etfe(datyu,10);
hetfe_yu50=etfe(datyu,50);

Impulse response analysis: let the input be an impulse of magnitude $\alpha$. For $\alpha = 3$: with no noise (or very weak noise) we get a good estimate; for noisy outputs the estimates are bad.

Step response analysis, $\alpha = 3$: $\hat{h}(t) = \frac{y(t) - y(t-1)}{\alpha}$

Alpha=3;
u_step=Alpha*ones(N,1);  % step input of magnitude alpha
z_step=filter(B,A,u_step);
h_step=diff(z_step)/Alpha;

Step response analysis: the presence of noise worsens the estimates considerably!

Least squares IR estimation: $y = U\hat{g}$, where $U$ is the $N \times M$ lower-triangular Toeplitz matrix of the input and $\hat{g} = (U^T U)^{-1}U^T y$. No noise vs. with noise, $e \sim N(0,1)$:

N=1024; M=45;
U_mat=zeros(N,M);
for i=1:N,
  for j=1:M,
    if (i-j)>=0
      U_mat(i,j)=u(i-j+1);
    end;
  end;
end;
hls_nonoise=pinv(U_mat)*z;
hls_noise=pinv(U_mat)*y;