Errors-in-variables identification through covariance matching: Analysis of a colored measurement noise case


2008 American Control Conference, Westin Seattle Hotel, Seattle, Washington, USA, June 11-13, 2008. WeB8.4

Errors-in-variables identification through covariance matching: Analysis of a colored measurement noise case

Magnus Mossberg

M. Mossberg is with the Department of Electrical Engineering, Karlstad University, Sweden. E-mail: Magnus.Mossberg@kau.se

Abstract— The method of covariance and cross-covariance matching is considered for errors-in-variables identification where the measurement noises are colored. The covariance and cross-covariance functions are estimated from the noise-corrupted data, and the corresponding functions, parameterized by the unknown parameters, are matched to the estimated functions. The main contribution of the paper is a step-by-step algorithm for the computation of the covariance matrix of the estimated system parameters.

I. INTRODUCTION

Consider the errors-in-variables (EIV) [1], [2], [3], [4] system

    A(q) y_0(t) = B(q) u_0(t),                                        (1)
    A(q) = 1 + a_1 q^{-1} + ... + a_n q^{-n},
    B(q) = b_1 q^{-1} + ... + b_n q^{-n},

where q^{-1} is the backward shift operator, and where u_0(t) and y_0(t) denote the noise-free input and output signals, respectively. It is assumed that u_0(t) is a stationary stochastic process with rational spectrum, and it is therefore represented as

    C(q) u_0(t) = D(q) e(t),                                          (2)
    C(q) = 1 + c_1 q^{-1} + ... + c_m q^{-m},
    D(q) = 1 + d_1 q^{-1} + ... + d_m q^{-m},

where the white noise source e(t) is of zero mean and variance λ_e. The measurements

    u(t) = u_0(t) + ũ(t),    y(t) = y_0(t) + ỹ(t),

are available for t = 1, ..., N, where ũ(t) and ỹ(t) are independent colored noise sequences given by

    G(q) ũ(t) = v(t),    G(q) = 1 + g_1 q^{-1} + ... + g_α q^{-α},    (3)
    H(q) ỹ(t) = w(t),    H(q) = 1 + h_1 q^{-1} + ... + h_β q^{-β},    (4)

where v(t) and w(t) are independent white noise sources of zero mean and variances λ_v and λ_w, respectively, both independent of e(t). The case with white measurement noises was considered in [5]. The problem is to estimate the unknown system parameters

    θ_0 = [ a_1 ... a_n  b_1 ... b_n ]^T

from the data {u(t), y(t)}, t = 1, ..., N. The parameters

    ψ_0 = [ c_1 ... c_m  d_1 ... d_m ]^T

are also unknown, but are not of primary interest.
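As a concrete illustration of the data-generating model (1)-(4), the sketch below simulates one realization. All numerical values here (a_1, b_1, c_1, d_1, g_1, h_1, and the noise variances) are illustrative choices for a first-order case, not values taken from the paper.

```python
# Sketch: simulate data from the EIV model (1)-(4) for n = m = 1 and
# first-order AR measurement noises. All parameter values below are
# illustrative, not the paper's example values.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
N = 10000

# Noise-free input: C(q) u0 = D(q) e, C = 1 + c1 q^-1, D = 1 + d1 q^-1
c1, d1, lam_e = 0.5, 0.3, 1.0
e = rng.normal(0.0, np.sqrt(lam_e), N)
u0 = lfilter([1.0, d1], [1.0, c1], e)

# System: A(q) y0 = B(q) u0, A = 1 + a1 q^-1, B = b1 q^-1
a1, b1 = -0.7, 1.0
y0 = lfilter([0.0, b1], [1.0, a1], u0)

# Colored measurement noises: G(q) u~ = v, H(q) y~ = w (AR(1))
g1, h1, lam_v, lam_w = 0.8, 0.8, 1.0, 1.0
u_tilde = lfilter([1.0], [1.0, g1], rng.normal(0.0, np.sqrt(lam_v), N))
y_tilde = lfilter([1.0], [1.0, h1], rng.normal(0.0, np.sqrt(lam_w), N))

u = u0 + u_tilde   # measured input
y = y0 + y_tilde   # measured output
```

The difference equations can be verified directly on the simulated signals, e.g. y_0(t) + a_1 y_0(t-1) = b_1 u_0(t-1).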
The solution considered in this paper is to estimate covariance and cross-covariance functions from the noise-corrupted data, and to match the corresponding functions, parameterized by the unknown parameters, to the estimated functions. The estimate of the cross-covariance function based on the colored noise-corrupted measurements u(t) and y(t) is consistent, provided that the measurement noises are independent. This makes the method of cross-covariance matching an interesting choice for EIV identification. The main contribution of the paper is the derivation of the covariance matrix of the estimate of θ_0 given by the cross-covariance matching method. The expression is approximative and valid for a large number of data N. A detailed description of how to compute the elements involved is given in the paper.

The outline of the paper is as follows. In the next section, some preliminaries regarding the expressions for the covariance and cross-covariance functions are given, together with definitions of some of their estimates and the asymptotic properties of these estimates. The estimators based on covariance and cross-covariance matching are described in Section III, including discussions on consistency of these estimators. Section IV is devoted to the covariance matrix of the estimate of θ_0, and a step-by-step algorithm for its computation is given in Algorithm 2 at the end of the section. An example, in which the variances are compared with variances from a Monte Carlo simulation, is presented in Section V, and conclusions are drawn in Section VI.

II. PRELIMINARIES

Some material on covariance and cross-covariance functions, important for the coming sections of the paper, is presented in this section. First, the covariance and cross-covariance functions for the signals u_0(t) and y_0(t) are given. Represent the system generating u_0(t) and y_0(t) in state-space form as

    x(t+1) = A(θ_0, ψ_0) x(t) + B(ψ_0) e(t),
    z_0(t) = C x(t) + D e(t),

where z_0(t) = [ u_0(t)  y_0(t) ]^T. The following result can now be stated.

Result 1. The covariance function R_{z_0}(τ, θ_0, ψ_0) of z_0(t) is given by

    R_{z_0}(τ, θ_0, ψ_0) = [ r_{u_0}(τ, ψ_0)           r_{u_0 y_0}(τ, θ_0, ψ_0) ]
                           [ r_{y_0 u_0}(τ, θ_0, ψ_0)  r_{y_0}(τ, θ_0, ψ_0)     ]
                         = E{ z_0(t+τ) z_0^T(t) }
                         = C P(θ_0, ψ_0) C^T + λ_e D D^T,  τ = 0,                              (5)
                         = C A^{τ-1}(θ_0, ψ_0) [ A(θ_0, ψ_0) P(θ_0, ψ_0) C^T
                           + λ_e B(ψ_0) D^T ],  τ > 0,

where P(θ_0, ψ_0) is the unique non-negative definite solution to the Lyapunov equation

    P(θ_0, ψ_0) = A(θ_0, ψ_0) P(θ_0, ψ_0) A^T(θ_0, ψ_0) + λ_e B(ψ_0) B^T(ψ_0).                 (6)

Proof: See [6].

An estimate of R_{z_0}(τ, θ_0, ψ_0) is suggested in the following proposition.

Proposition 1. If the mean values of u(t) and y(t) are zero, a possible estimator R̂_z(τ) of R_{z_0}(τ, θ_0, ψ_0) is

    R̂_z(τ) = [ r̂_u(τ)   r̂_uy(τ) ]
              [ r̂_yu(τ)  r̂_y(τ)  ]
            = 1/(N-τ) Σ_{t=1}^{N-τ} z(t+τ) z^T(t),  τ ≥ 0,

where z(t) = [ u(t)  y(t) ]^T.

The data are stationary and ergodic under the given assumptions, so

    E{ y_0(t_1) ũ(t_2) } = 0,  ∀ t_1, t_2,
    E{ ỹ(t_1) u_0(t_2) } = 0,  ∀ t_1, t_2,
    E{ ỹ(t_1) ũ(t_2) } = 0,  ∀ t_1, t_2,

and

    lim_{N→∞} r̂_yu(τ) = r_{y_0 u_0}(τ, θ_0, ψ_0),  τ ≥ 0,                                      (7)
    lim_{N→∞} r̂_uy(τ) = r_{u_0 y_0}(τ, θ_0, ψ_0),  τ ≥ 0.

For the diagonal elements of R̂_z(τ) it holds that

    lim_{N→∞} r̂_u(τ) = r_{u_0}(τ, ψ_0) + r_ũ(τ, γ_0),  τ ≥ 0,                                  (8)
    lim_{N→∞} r̂_y(τ) = r_{y_0}(τ, θ_0, ψ_0) + r_ỹ(τ, κ_0),  τ ≥ 0,

where r_ũ(τ, γ_0) and r_ỹ(τ, κ_0) are, respectively, the covariance functions of ũ(t) and ỹ(t). Here,

    γ_0 = [ g_1 ... g_α ]^T,    κ_0 = [ h_1 ... h_β ]^T,

contain the filter parameters of (3) and (4), respectively. This information about the asymptotic properties of the estimators of the covariance and cross-covariance functions is needed in the discussion on consistency of the proposed estimators of ψ_0 and θ_0 in Section III, as well as in the expression for the covariance matrix of the estimate of θ_0 in Section IV. The covariance functions r_ũ(τ, γ_0) and r_ỹ(τ, κ_0) are found next. To find r_ũ(τ, γ_0), represent ũ(t) in state-space form as

    x_g(t+1) = A_g(γ_0) x_g(t) + B_g v(t),
    ũ(t) = C_g x_g(t).

The covariance function is given by the following result.

Result 2. The covariance function r_ũ(τ, γ_0) of ũ(t) is given by

    r_ũ(τ, γ_0) = C_g A_g^τ(γ_0) P_g(γ_0) C_g^T,  τ ≥ 0,

where P_g(γ_0) is the unique non-negative definite solution to the Lyapunov equation

    P_g(γ_0) = A_g(γ_0) P_g(γ_0) A_g^T(γ_0) + λ_v B_g B_g^T.
Proof: The result follows from Result 1.

Alternatively, consider the Yule-Walker equation

    r_ũ(τ, γ_0) + g_1 r_ũ(τ-1, γ_0) + ... + g_α r_ũ(τ-α, γ_0) = { 0,    τ > 0,
                                                                 { λ_v,  τ = 0,

for τ = 0, ..., α, provided that λ_v is known, in order to determine r_ũ(0, γ_0), ..., r_ũ(α, γ_0). The Yule-Walker equation can then be iterated for τ > α to find r_ũ(τ, γ_0), τ > α. Analogously, to find r_ỹ(τ, κ_0), first represent ỹ(t) in state-space form as

    x_h(t+1) = A_h(κ_0) x_h(t) + B_h w(t),
    ỹ(t) = C_h x_h(t).

The covariance function is given by the following result.

Result 3. The covariance function r_ỹ(τ, κ_0) of ỹ(t) is found by solving the Lyapunov equation

    P_h(κ_0) = A_h(κ_0) P_h(κ_0) A_h^T(κ_0) + λ_w B_h B_h^T

and computing

    r_ỹ(τ, κ_0) = C_h A_h^τ(κ_0) P_h(κ_0) C_h^T,  τ ≥ 0.

Proof: The result follows from Result 1.
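Results 2 and 3 and the Yule-Walker alternative can be checked numerically. The sketch below computes the covariance function of a first-order AR noise ũ(t) in three ways: via the discrete Lyapunov equation, via the Yule-Walker equations, and via the sample estimator of Proposition 1 on simulated data; g_1 = 0.8 and λ_v = 1 are illustrative values, not taken from the paper's example.

```python
# Sketch: covariance function of the AR(1) noise (1 + g1 q^-1) u~ = v,
# via Result 2 (Lyapunov), via Yule-Walker, and via the sample
# estimator of Proposition 1. g1 = 0.8, lambda_v = 1 are illustrative.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov
from scipy.signal import lfilter

g1, lam_v = 0.8, 1.0
A_g = np.array([[-g1]])           # x_g(t+1) = A_g x_g(t) + B_g v(t)
B_g = np.array([[1.0]])
C_g = np.array([[1.0]])           # u~(t) = C_g x_g(t)

# Result 2: P_g = A_g P_g A_g^T + lam_v B_g B_g^T
P_g = solve_discrete_lyapunov(A_g, lam_v * B_g @ B_g.T)

def r_u_tilde(tau):
    # r_u~(tau) = C_g A_g^tau P_g C_g^T, tau >= 0
    return float(C_g @ np.linalg.matrix_power(A_g, tau) @ P_g @ C_g.T)

# Yule-Walker route (alpha = 1):
#   tau = 0: r(0) + g1 r(1) = lam_v   (using r(-1) = r(1))
#   tau = 1: g1 r(0) + r(1) = 0
r0, r1 = np.linalg.solve(np.array([[1.0, g1], [g1, 1.0]]),
                         np.array([lam_v, 0.0]))
r_yw = [r0, r1]
for _ in range(2, 6):
    r_yw.append(-g1 * r_yw[-1])   # iterate the recursion for tau > alpha
r_ref = [r_u_tilde(tau) for tau in range(6)]

# Sample estimator of Proposition 1 on a simulated realization
rng = np.random.default_rng(1)
N = 200000
u_t = lfilter([1.0], [1.0, g1], rng.normal(0.0, np.sqrt(lam_v), N))

def r_hat(z, tau):
    return float(z[tau:] @ z[:len(z) - tau]) / (len(z) - tau)
```

For this AR(1) case the closed form is r_ũ(τ) = (-g_1)^τ λ_v / (1 - g_1²), which both routes reproduce.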

III. ESTIMATION

In this section, estimators of ψ_0 and θ_0 based on covariance and cross-covariance matching are suggested. Two different estimators for ψ_0 and one estimator for θ_0 are given, and consistency of these estimators is discussed. It is assumed that R̂_z(τ) from Proposition 1 is available for τ = 0, ..., l.

Proposition 2. Define the loss function

    V(ψ) = Σ_{τ=j_1}^{j_2} ( r̂_u(τ) - r_{u_0}(τ, ψ) )²,

where 0 ≤ j_1 ≤ j_2 ≤ l, from which an estimate ψ̂ is obtained as

    ψ̂ = arg min_ψ V(ψ).                                                                        (9)

The estimate ψ̂ from (9) is not consistent, due to the colored measurement noise. Consider

    lim_{N→∞} V(ψ) = Σ_{τ=j_1}^{j_2} { lim_{N→∞} r̂_u²(τ) + r_{u_0}²(τ, ψ)
                     - 2 [ r_{u_0}(τ, ψ_0) + r_ũ(τ, γ_0) ] r_{u_0}(τ, ψ) },                     (10)

where (8) is used. Note that nothing is said about lim_{N→∞} r̂_u²(τ), since this information is not needed. It is seen that (10) is minimized by

    r_{u_0}(τ, ψ_min) = r_{u_0}(τ, ψ_0) + r_ũ(τ, γ_0),

i.e., not by ψ = ψ_0. This means that

    r_{u_0}(τ, ψ_min) - r_{u_0}(τ, ψ_0) = r_ũ(τ, γ_0)

in the limiting case. The following proposition is an alternative if α in (3) is known.

Proposition 3. Define the loss function

    S(ψ, γ) = Σ_{τ=j_1}^{j_2} ( r̂_u(τ) - r_{u_0}(τ, ψ) - r_ũ(τ, γ) )²,

where 0 ≤ j_1 ≤ j_2 ≤ l, from which estimates ψ̂ and γ̂ are obtained as

    {ψ̂, γ̂} = arg min_{ψ,γ} S(ψ, γ).                                                           (11)

The estimates ψ̂ and γ̂ from (11) are consistent, since

    lim_{N→∞} S(ψ, γ) = Σ_{τ=j_1}^{j_2} { lim_{N→∞} r̂_u²(τ) + r_{u_0}²(τ, ψ) + r_ũ²(τ, γ)
                        - 2 [ r_{u_0}(τ, ψ_0) + r_ũ(τ, γ_0) ] r_{u_0}(τ, ψ)
                        - 2 [ r_{u_0}(τ, ψ_0) + r_ũ(τ, γ_0) ] r_ũ(τ, γ)
                        + 2 r_{u_0}(τ, ψ) r_ũ(τ, γ) }                                           (12)

is minimized by

    r_{u_0}(τ, ψ_min) = r_{u_0}(τ, ψ_0),    r_ũ(τ, γ_min) = r_ũ(τ, γ_0),

i.e., by ψ = ψ_0 and γ = γ_0. In (12), just as in (10), (8) is used, and information about lim_{N→∞} r̂_u²(τ) is not needed.

After an estimate ψ̂ is obtained, for example as described in Proposition 2 or in Proposition 3, an estimate of θ_0 can be found as described next.

Proposition 4. Consider the loss function

    W(θ) = Σ_{τ=k_1}^{k_2} ( r̂_yu(τ) - r_{y_0 u_0}(τ, θ, ψ̂) )²,                               (13)

where 0 ≤ k_1 ≤ k_2 ≤ l, from which an estimate θ̂ is given as

    θ̂ = arg min_θ W(θ).                                                                        (14)

It holds that the estimate θ̂ from (14) is consistent, provided that ψ̂ is a consistent estimate of ψ_0. Consider

    lim_{N→∞} W(θ) = Σ_{τ=k_1}^{k_2} { lim_{N→∞} r̂_yu²(τ) + r_{y_0 u_0}²(τ, θ, ψ̂)
                     - 2 r_{y_0 u_0}(τ, θ_0, ψ_0) r_{y_0 u_0}(τ, θ, ψ̂) },                      (15)

where (7) is used. Here, information about lim_{N→∞} r̂_yu²(τ) is not needed. It holds that (15) is uniquely minimized by

    r_{y_0 u_0}(τ, θ_min, ψ̂) = r_{y_0 u_0}(τ, θ_0, ψ_0),

i.e.,
by θ = θ_0, provided that ψ̂ is a consistent estimate of ψ_0. The estimation method is now summarized in Algorithm 1.

Algorithm 1. Summary of the estimation method.
1) Estimate the covariance function R_{z_0}(τ, θ_0, ψ_0) as described in Proposition 1.
2) Compute the estimate ψ̂ as suggested in Proposition 2 or in Proposition 3.
3) Compute the estimate θ̂ as described in Proposition 4.

IV. COVARIANCE MATRIX

An approximative expression, valid for large N, for the covariance matrix of the estimate of θ_0 described in Proposition 4 is given in this section. The computation of the elements involved is the main contribution of the paper, and a step-by-step algorithm is given at the end of the section. Since θ̂ minimizes W(θ), it holds that Ẇ(θ̂) = 0, where Ẇ(θ) denotes the first-order derivative of W(θ). By the mean value theorem, Ẇ(θ̂) = 0 can be written as

    0 = Ẇ(θ_0) + Ẅ(θ_ξ)(θ̂ - θ_0),

where Ẅ(θ) denotes the second-order derivative of W(θ), and where θ_ξ is between θ_0 and θ̂. Due to consistency of θ̂, lim_{N→∞} θ_ξ = θ_0. Let W̄(θ) = lim_{N→∞} Ẅ(θ) and use the triangle inequality to get

    || Ẅ(θ_ξ) - W̄(θ_0) || ≤ || Ẅ(θ_ξ) - Ẅ(θ_0) || + || Ẅ(θ_0) - W̄(θ_0) ||.

Since the right-hand side tends to zero as N → ∞,

    lim_{N→∞} Ẅ(θ_ξ) = W̄(θ_0) = H.

This gives the approximation

    θ̂ - θ_0 ≈ -H^{-1} Ẇ(θ_0)

for large N, provided that H^{-1} exists. The following result can now be given.

Result 4. For large N, the covariance matrix of θ̂ is approximately given by

    K = E{ (θ̂ - θ_0)(θ̂ - θ_0)^T } ≈ H^{-1} Q H^{-1},                                          (16)

provided that H^{-1} exists, where

    Q = E{ Ẇ(θ_0) Ẇ^T(θ_0) },    H = lim_{N→∞} Ẅ(θ_0).

Next, the matrices Q and H are to be found. Section IV-A is devoted to the computation of Q, whereas Section IV-B describes the computation of H.

A. Computation of Q

To find Q, it is first noted that

    Ẇ(θ) = -2 Σ_{τ=k_1}^{k_2} ( r̂_yu(τ) - r_{y_0 u_0}(τ, θ, ψ̂) ) ṙ_{y_0 u_0}(τ, θ, ψ̂),

where ṙ_{y_0 u_0}(τ, θ, ψ̂) denotes the derivative of r_{y_0 u_0}(τ, θ, ψ̂) with respect to θ. The elements of

    ṙ_{y_0 u_0}(τ, θ, ψ̂) = [ ∂r_{y_0 u_0}(τ, θ, ψ̂)/∂θ_1  ...  ∂r_{y_0 u_0}(τ, θ, ψ̂)/∂θ_{2n} ]^T,   (17)

where θ_i is the ith element of the vector θ, are found by differentiating (5). More exactly, the ith element of (17) is found as an element of the matrix

    ∂R_{z_0}(τ, θ, ψ̂)/∂θ_i
      = C [∂P(θ, ψ̂)/∂θ_i] C^T,  τ = 0,                                                        (18)
      = C [∂A^τ(θ, ψ̂)/∂θ_i] P(θ, ψ̂) C^T + C A^τ(θ, ψ̂) [∂P(θ, ψ̂)/∂θ_i] C^T
        + λ_e C [∂A^{τ-1}(θ, ψ̂)/∂θ_i] B(ψ̂) D^T,  τ > 0,

where ∂P(θ, ψ̂)/∂θ_i is given as the unique solution to the Lyapunov equation

    ∂P(θ, ψ̂)/∂θ_i = A(θ, ψ̂) [∂P(θ, ψ̂)/∂θ_i] A^T(θ, ψ̂)
                     + [∂A(θ, ψ̂)/∂θ_i] P(θ, ψ̂) A^T(θ, ψ̂)
                     + A(θ, ψ̂) P(θ, ψ̂) [∂A^T(θ, ψ̂)/∂θ_i].                                    (19)

Here, P(θ, ψ̂) is given from (6), and the partial derivatives of A(θ, ψ̂) with respect to θ_i are straightforward to find from the chosen state-space form. Assume that

    r̂_yu(τ) = r_{y_0 u_0}(τ, θ_0, ψ̂) + ε(τ).

This means that

    Q = 4 Σ_{τ=k_1}^{k_2} Σ_{s=k_1}^{k_2} E{ε(τ)ε(s)} ṙ_{y_0 u_0}(τ, θ_0, ψ̂) ṙ^T_{y_0 u_0}(s, θ_0, ψ̂).   (20)

Compute the elements

    E{ε(τ)ε(s)} ≈ E{ r̂_yu(τ) r̂_yu(s) } - r_{y_0 u_0}(τ, θ_0, ψ_0) r_{y_0 u_0}(s, θ_0, ψ_0),   (21)

where the approximation is motivated by (7), the fact that N is large, and the assumption that lim_{N→∞} ψ̂ = ψ_0. Here,

    E{ r̂_yu(τ) r̂_yu(s) } = 1/((N-τ)(N-s)) Σ_{t_1=1}^{N-τ} Σ_{t_2=1}^{N-s} E{ y(t_1+τ) u(t_1) y(t_2+s) u(t_2) }.   (22)
For the four jointly Gaussian variables ζ_1, ζ_2, ζ_3, ζ_4 of zero mean, it holds that

    E{ζ_1 ζ_2 ζ_3 ζ_4} = E{ζ_1 ζ_2}E{ζ_3 ζ_4} + E{ζ_1 ζ_3}E{ζ_2 ζ_4} + E{ζ_1 ζ_4}E{ζ_2 ζ_3}.   (23)

Hence,

    E{ y(t_1+τ) u(t_1) y(t_2+s) u(t_2) }
      = r_{y_0 u_0}(τ, θ_0, ψ_0) r_{y_0 u_0}(s, θ_0, ψ_0)
      + [ r_{y_0}(t_1 + τ - t_2 - s, θ_0, ψ_0) + r_ỹ(t_1 + τ - t_2 - s, κ_0) ]
        × [ r_{u_0}(t_1 - t_2, ψ_0) + r_ũ(t_1 - t_2, γ_0) ]
      + f_1 f_2,

where

    f_1 = { r_{y_0 u_0}(t_1 + τ - t_2, θ_0, ψ_0),  t_1 + τ - t_2 ≥ 0,                           (24)
          { r_{u_0 y_0}(t_2 - t_1 - τ, θ_0, ψ_0),  t_1 + τ - t_2 < 0,

    f_2 = { r_{y_0 u_0}(t_2 + s - t_1, θ_0, ψ_0),  t_2 + s - t_1 ≥ 0,                           (25)
          { r_{u_0 y_0}(t_1 - t_2 - s, θ_0, ψ_0),  t_2 + s - t_1 < 0.
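The fourth-moment identity (23) is the standard product formula for zero-mean jointly Gaussian variables (Isserlis' theorem), and it can be verified numerically. In the sketch below, the covariance matrix is an arbitrary positive-definite example, not taken from the paper.

```python
# Sketch: Monte Carlo check of the Gaussian fourth-moment identity (23).
# The covariance matrix Sig below is an arbitrary positive-definite
# example chosen for illustration.
import numpy as np

Sig = np.array([[2.0, 0.5, 0.3, 0.1],
                [0.5, 1.5, 0.4, 0.2],
                [0.3, 0.4, 1.2, 0.3],
                [0.1, 0.2, 0.3, 1.0]])

rng = np.random.default_rng(0)
Z = rng.multivariate_normal(np.zeros(4), Sig, size=500000)

# Sample estimate of E{z1 z2 z3 z4}
sample = np.mean(Z[:, 0] * Z[:, 1] * Z[:, 2] * Z[:, 3])

# Identity (23): sum of the three pairwise-covariance products
theory = (Sig[0, 1] * Sig[2, 3]
          + Sig[0, 2] * Sig[1, 3]
          + Sig[0, 3] * Sig[1, 2])
```

With this covariance matrix the identity gives 0.5·0.3 + 0.3·0.2 + 0.1·0.4 = 0.25, which the sample average approaches as the number of draws grows.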

B. Computation of H

To find the Hessian H, compute

    Ẅ(θ) = 2 Σ_{τ=k_1}^{k_2} [ ṙ_{y_0 u_0}(τ, θ, ψ̂) ṙ^T_{y_0 u_0}(τ, θ, ψ̂)
            - ( r̂_yu(τ) - r_{y_0 u_0}(τ, θ, ψ̂) ) r̈_{y_0 u_0}(τ, θ, ψ̂) ]

to get

    H ≈ 2 Σ_{τ=k_1}^{k_2} ṙ_{y_0 u_0}(τ, θ_0, ψ_0) ṙ^T_{y_0 u_0}(τ, θ_0, ψ_0),                (26)

where the approximation is motivated by (7), the fact that N is large, and the assumption that lim_{N→∞} ψ̂ = ψ_0. The computation of the covariance matrix is summarized in Algorithm 2.

Algorithm 2. The computation of the covariance matrix K.
1) Compute R_{z_0}(τ, θ_0, ψ_0) as described in Result 1.
2) Compute r_ũ(τ, γ_0) and r_ỹ(τ, κ_0) as described in Results 2 and 3, respectively.
3) Compute E{ r̂_yu(τ) r̂_yu(s) } in (22) using (23)-(25) and the results from Steps 1 and 2.
4) Compute E{ε(τ)ε(s)} in (21) using the results from Steps 1 and 3.
5) Compute ṙ_{y_0 u_0}(τ, θ_0, ψ̂) in (17) through (18) and (19).
6) Compute Q in (20) using the results from Steps 4 and 5.
7) Compute H in (26) using the results from Step 5.
8) Compute K in (16) using the results from Steps 6 and 7.

V. EXAMPLE

The system (1) with n = 2, together with input dynamics (2) of first order (m = 1) with c_1 = 0.5, d_1 = 0.3, and λ_e = 1, written in the state-space form of Section II with matrices A(θ_0, ψ_0), B(ψ_0), C, and D, is considered in an example. The disturbance dynamics are first-order AR processes (α = β = 1) described by

    A_g(γ_0) = -g_1 = -0.8,  B_g = 1,  C_g = 1,  λ_v = 1,
    A_h(κ_0) = -h_1 = -0.8,  B_h = 1,  C_h = 1,  λ_w = 1.

The aim is to compare the variances from K in (16) with variances from a Monte Carlo simulation with 100 realizations of N = 10000 data points. Here, ψ̂ is taken as ψ_0 in the loss function W(θ) in (13) in the estimation of θ_0 in each realization, in order to make a fair comparison between the Monte Carlo variances and the variances from K. The variances for the estimates â_1, â_2, b̂_1, and b̂_2, as functions of the maximum lag k_2 considered in the loss function W(θ), are shown in Fig. 1. It is seen that the Monte Carlo variances are well described by the variances from K, and the validity of the expressions is thereby confirmed for this example.

VI. CONCLUSIONS

The EIV identification problem with colored measurement noises was studied. The solution considered in this paper was to estimate covariance and cross-covariance functions from the noise-corrupted data, and to match the corresponding functions, parameterized by the unknown parameters, to the estimated functions.
A step-by-step algorithm for the computation of the elements of an expression for the covariance matrix of the estimated system parameters, valid for a large number of data points, was given. The variances were verified by variances from a Monte Carlo simulation in an example.

REFERENCES
[1] B. D. O. Anderson and M. Deistler, "Identification of dynamic errors-in-variables models," Journal of Time Series Analysis, vol. 5, pp. 1-13, 1984.
[2] V. Solo, "Identifiability of time series models with errors in variables," Journal of Applied Probability, vol. 23A, pp. 63-71, 1986.
[3] R. Guidorzi, R. Diversi, and U. Soverini, "Optimal errors-in-variables filtering," Automatica, vol. 39, pp. 281-289, 2003.
[4] T. Söderström, "Errors-in-variables methods in system identification," Automatica, vol. 43, pp. 939-958, 2007.
[5] M. Mossberg, "Analysis of a covariance matching method for discrete-time errors-in-variables identification," in Proc. 14th IEEE Statistical Signal Processing Workshop, Madison, WI, August 26-29, 2007, pp. 759-763.
[6] T. Söderström, Discrete-Time Stochastic Systems, 2nd ed. London, U.K.: Springer-Verlag, 2002.

Fig. 1. The variances for the estimates â_1 (upper left), â_2 (upper right), b̂_1 (lower left), and b̂_2 (lower right) as functions of the maximum lag.
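As a closing illustration of the matching idea in Proposition 3 (a sketch, not the paper's implementation): taking D(q) = 1, m = α = 1, and using the exact limiting covariances of (8) in place of the estimates, the joint least-squares fit recovers the true parameters, up to the ordering ambiguity between the two first-order components when λ_e = λ_v.

```python
# Sketch of Proposition 3 with exact limiting covariances.
# Simplifying assumptions (not from the paper): D(q) = 1, m = alpha = 1,
# known lambda_e = lambda_v = 1, and the true values c1 = 0.5, g1 = 0.8
# are illustrative.
import numpy as np
from scipy.optimize import minimize

lam_e = lam_v = 1.0
c1_true, g1_true = 0.5, 0.8
taus = np.arange(11)

def r_ar1(coef, lam, tau):
    """Covariance of an AR(1) process (1 + coef q^-1) x = noise."""
    return (-coef) ** tau * lam / (1.0 - coef ** 2)

# Limiting "estimated" covariance r^_u(tau) = r_u0(tau) + r_u~(tau), cf. (8)
r_u_hat = r_ar1(c1_true, lam_e, taus) + r_ar1(g1_true, lam_v, taus)

def S_loss(p):
    # Loss of Proposition 3 over tau = 0, ..., 10
    c1, g1 = p
    res = r_u_hat - r_ar1(c1, lam_e, taus) - r_ar1(g1, lam_v, taus)
    return float(res @ res)

fit = minimize(S_loss, x0=[0.4, 0.7], method="Nelder-Mead")
```

Because the two components have the same first-order form and equal noise variances here, the criterion is symmetric in (c_1, g_1); sorting the fitted pair resolves the ambiguity.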