Growing Window Recursive Quadratic Optimization with Variable Regularization


49th IEEE Conference on Decision and Control, December 15-17, 2010, Hilton Atlanta Hotel, Atlanta, GA, USA

Growing Window Recursive Quadratic Optimization with Variable Regularization

Asad A. Ali 1, Jesse B. Hoagg 2, Magnus Mossberg 3, and Dennis S. Bernstein 4

1 Graduate student, Department of Aerospace Engineering, The University of Michigan, Ann Arbor, MI, asadali@umich.edu.
2 Assistant Professor, Department of Mechanical Engineering, The University of Kentucky, Lexington, KY, jhoagg@engr.uky.edu.
3 Associate Professor, Department of Physics and Electrical Engineering, Karlstad University, Sweden, Magnus.Mossberg@kau.se.
4 Professor, Department of Aerospace Engineering, The University of Michigan, Ann Arbor, MI, dsbaero@umich.edu.

Abstract— We present a growing-window variable-regularization recursive least squares (GW-VR-RLS) algorithm. Standard recursive least squares (RLS) uses a time-invariant regularization. More specifically, the inverse of the initial covariance matrix in classical RLS can be viewed as a regularization term, which weights the difference between the next state estimate and the initial state estimate. The present paper allows for time-variation in the weighting as well as in what is being weighted. This extension can be used to modulate the speed of convergence of the estimates versus the magnitude of transient estimation errors. Furthermore, the regularization term can weight the difference between the next state estimate and a time-varying vector of parameters rather than the initial state estimate, as is required in standard RLS.

I. INTRODUCTION

Recursive least squares (RLS) is widely used in signal processing, identification, estimation, and control [1], [2], [3], [4], [5], [6], [7], [8]. Under ideal conditions, that is, non-noisy measurements and persistency of the data, RLS is guaranteed to converge to the minimizer of a quadratic function [5], [6]. In practice, the accuracy of the estimates and the rate of convergence depend on the level of noise and the persistency of the data.

The goal of the present paper is to extend standard RLS in two ways. First, in standard RLS, the positive-definite initialization of the covariance matrix serves as the weighting of a regularization term within the context of a quadratic optimization. Until at least $n$ measurements are available, this regularization term compensates for the lack of persistency in order to obtain a unique minimizer. Traditionally, the regularization weighting is fixed for all steps of the recursion. In the present work, we derive a growing-window variable-regularization RLS (GW-VR-RLS) algorithm, where the weighting of the regularization term changes at each step. As a special case, the regularization can be decreased in magnitude or rank as the rank of the covariance matrix increases, and can be removed entirely when no longer needed. This ability is not available in standard RLS, where the regularization term is weighted by the inverse of the initial covariance at every step.

A second extension presented in this paper also involves the regularization term. Specifically, the regularization term in standard RLS weights the difference between the next state estimate and the initial state. In the present paper, the regularization term weights the difference between the next state estimate and an arbitrarily chosen time-varying vector of parameters. As a special case, the time-varying vector can be the current state estimate, and thus the regularization term weights the difference between the next state estimate and the current state estimate. This formulation allows us to modulate the rate at which the current estimate changes from step to step.

For these extensions, we derive GW-VR-RLS update equations. While standard RLS entails the update of the state estimate and the covariance matrix, GW-VR-RLS entails the update of an additional symmetric matrix of dimension $n \times n$ to allow for the variable regularization. Thus, GW-VR-RLS entails some additional computational burden relative to classical RLS.
II. PROBLEM FORMULATION AND GW-VR-RLS ALGORITHM

For all $i \geq 0$, let $A_i \in \mathbb{R}^{n \times n}$, $b_i, \alpha_i \in \mathbb{R}^n$, and $R_i \in \mathbb{R}^{n \times n}$, where $A_i$ and $R_i$ are positive semidefinite, define $A_0 \triangleq 0$ and $b_0 \triangleq 0$, and assume that, for all $k \geq 0$, $\sum_{i=0}^{k} A_i + R_k$ and $\sum_{i=0}^{k} A_i + R_{k+1}$ are positive definite. Hence $R_0$ and $R_1$ are positive definite. For $k \geq 0$, define the regularized quadratic cost

$$J_k(x) \triangleq \sum_{i=0}^{k} \left( x^T A_i x + b_i^T x \right) + (x - \alpha_k)^T R_k (x - \alpha_k), \qquad (1)$$

where $x \in \mathbb{R}^n$. The minimizer $x_k$ of (1) is given by

$$x_k = -\tfrac{1}{2} \left( \sum_{i=0}^{k} A_i + R_k \right)^{-1} \left( \sum_{i=0}^{k} b_i - 2 R_k \alpha_k \right). \qquad (2)$$

Note that $x_0 = \alpha_0$ is the minimizer of $J_0(x)$. To rewrite (2) recursively, define

$$P_k \triangleq \left( \sum_{i=0}^{k} A_i + R_k \right)^{-1}, \qquad (3)$$

where the inverse exists by assumption, so that

$$x_k = -\tfrac{1}{2} P_k \left( \sum_{i=0}^{k} b_i - 2 R_k \alpha_k \right). \qquad (4)$$
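As a point of reference for the recursions derived below, the batch formula (2) can be evaluated directly. The following NumPy sketch is illustrative only (the function name batch_minimizer and its argument names are not from the paper); it assumes the positive-definiteness conditions above so that the indicated inverse exists.

```python
import numpy as np

def batch_minimizer(A_list, b_list, R_k, alpha_k):
    """Evaluate the batch minimizer (2):
    x_k = -1/2 (sum_i A_i + R_k)^{-1} (sum_i b_i - 2 R_k alpha_k)."""
    A_sum = np.sum(A_list, axis=0)   # sum_{i=0}^{k} A_i, each n x n positive semidefinite
    b_sum = np.sum(b_list, axis=0)   # sum_{i=0}^{k} b_i, each an n-vector
    return -0.5 * np.linalg.solve(A_sum + R_k, b_sum - 2.0 * R_k @ alpha_k)
```

A recursive implementation avoids re-solving this $n \times n$ system from scratch at every step, which is what the updates (16)-(18) derived below provide.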

Using (4), it follows that

$$x_{k+1} = -\tfrac{1}{2} P_{k+1} \Big( \sum_{i=0}^{k+1} b_i - 2 R_{k+1} \alpha_{k+1} \Big) = -\tfrac{1}{2} P_{k+1} \Big( \sum_{i=0}^{k} b_i + b_{k+1} - 2 R_{k+1} \alpha_{k+1} \Big) = -\tfrac{1}{2} P_{k+1} \Big( -2 P_k^{-1} x_k + 2 R_k \alpha_k + b_{k+1} - 2 R_{k+1} \alpha_{k+1} \Big) = P_{k+1} \left[ P_{k+1}^{-1} x_k - A_{k+1} x_k - (R_{k+1} - R_k) x_k - R_k \alpha_k + R_{k+1} \alpha_{k+1} - \tfrac{1}{2} b_{k+1} \right],$$

and hence

$$x_{k+1} = x_k - P_{k+1} \left[ A_{k+1} x_k + (R_{k+1} - R_k) x_k + R_k \alpha_k - R_{k+1} \alpha_{k+1} + \tfrac{1}{2} b_{k+1} \right]. \qquad (5)$$

Next, it follows from (3) that

$$P_{k+1}^{-1} = \sum_{i=0}^{k} A_i + A_{k+1} + R_{k+1} = P_k^{-1} + R_{k+1} - R_k + A_{k+1}.$$

Consider the decomposition

$$A_{k+1} = \psi_{k+1} \psi_{k+1}^T, \qquad (6)$$

where $\psi_{k+1} \in \mathbb{R}^{n \times n_{k+1}}$ and $n_{k+1} \triangleq \mathrm{rank}\, A_{k+1}$. Consequently,

$$P_{k+1} = \left( P_k^{-1} + R_{k+1} - R_k + \psi_{k+1} \psi_{k+1}^T \right)^{-1} = \left( Q_{k+1}^{-1} + \psi_{k+1} \psi_{k+1}^T \right)^{-1}, \qquad (7)$$

where

$$Q_{k+1} \triangleq \left( \sum_{i=0}^{k} A_i + R_{k+1} \right)^{-1} = \left( P_k^{-1} + R_{k+1} - R_k \right)^{-1}, \qquad (8)$$

where the inverse exists by assumption. Next, using the matrix inversion lemma

$$(X + UCV)^{-1} = X^{-1} - X^{-1} U \left( C^{-1} + V X^{-1} U \right)^{-1} V X^{-1} \qquad (10)$$

with $X = Q_{k+1}^{-1}$, $U = \psi_{k+1}$, $C = I$, and $V = \psi_{k+1}^T$, it follows from (7) that

$$P_{k+1} = Q_{k+1} \left[ I_n - \psi_{k+1} \left( I_{n_{k+1}} + \psi_{k+1}^T Q_{k+1} \psi_{k+1} \right)^{-1} \psi_{k+1}^T Q_{k+1} \right]. \qquad (11)$$

Next, consider the decomposition

$$R_{k+1} - R_k = \phi_{k+1} S_{k+1} \phi_{k+1}^T, \qquad (12)$$

where $\phi_{k+1} \in \mathbb{R}^{n \times m_{k+1}}$, $m_{k+1} \triangleq \mathrm{rank}(R_{k+1} - R_k)$, and $S_{k+1} \in \mathbb{R}^{m_{k+1} \times m_{k+1}}$ is a diagonal matrix of the form

$$S_{k+1} = \begin{bmatrix} \pm 1 & & \\ & \ddots & \\ & & \pm 1 \end{bmatrix}. \qquad (13)$$

Therefore, (8) can be expressed as

$$Q_{k+1} = \left( P_k^{-1} + \phi_{k+1} S_{k+1} \phi_{k+1}^T \right)^{-1}. \qquad (14)$$

Letting $X = P_k^{-1}$, $U = \phi_{k+1}$, $C = S_{k+1}$, and $V = \phi_{k+1}^T$, it follows from (10) and (14) that

$$Q_{k+1} = P_k - P_k \phi_{k+1} \left( S_{k+1} + \phi_{k+1}^T P_k \phi_{k+1} \right)^{-1} \phi_{k+1}^T P_k = P_k \left[ I_n - \phi_{k+1} \left( S_{k+1} + \phi_{k+1}^T P_k \phi_{k+1} \right)^{-1} \phi_{k+1}^T P_k \right]. \qquad (15)$$

Therefore, for $k \geq 0$, the recursive regularized quadratic cost minimizer of (1) is given by (5), (11), and (15), that is,

$$Q_{k+1} = P_k \left[ I_n - \phi_{k+1} \left( S_{k+1} + \phi_{k+1}^T P_k \phi_{k+1} \right)^{-1} \phi_{k+1}^T P_k \right], \qquad (16)$$

$$P_{k+1} = Q_{k+1} \left[ I_n - \psi_{k+1} \left( I_{n_{k+1}} + \psi_{k+1}^T Q_{k+1} \psi_{k+1} \right)^{-1} \psi_{k+1}^T Q_{k+1} \right], \qquad (17)$$

$$x_{k+1} = x_k - P_{k+1} \left[ A_{k+1} x_k + (R_{k+1} - R_k) x_k + R_k \alpha_k - R_{k+1} \alpha_{k+1} + \tfrac{1}{2} b_{k+1} \right], \qquad (18)$$

where $x_0 = \alpha_0$, $P_0 = R_0^{-1}$, $\psi_{k+1}$ is given by (6), and $\phi_{k+1}$ is given by (12).

III. SPECIALIZATIONS

A. Standard RLS

Consider the special case $R_k \equiv R_0$ and $\alpha_k \equiv x_0$. Then the quadratic cost

$$J_k(x) = \sum_{i=0}^{k} \left( x^T A_i x + b_i^T x \right) + (x - x_0)^T R_0 (x - x_0) \qquad (19)$$

is minimized by

$$x_{k+1} = x_k - P_{k+1} \left( A_{k+1} x_k + \tfrac{1}{2} b_{k+1} \right), \qquad (20)$$

$$P_{k+1} = P_k \left[ I_n - \psi_{k+1} \left( I_{n_{k+1}} + \psi_{k+1}^T P_k \psi_{k+1} \right)^{-1} \psi_{k+1}^T P_k \right], \qquad (21)$$

where $P_0 = R_0^{-1}$. Since the recursive update for $Q_{k+1}$ given by (16) simplifies to $Q_{k+1} = P_k$, standard RLS does not require the update of $Q_{k+1}$.

B. Standard RLS with $\alpha_k = x_{k-1}$ and $R_k \equiv R_0$

Consider the special case $R_k \equiv R_0$ and $\alpha_k = x_{k-1}$. Then the quadratic cost

$$J_k(x) = \sum_{i=0}^{k} \left( x^T A_i x + b_i^T x \right) + (x - x_{k-1})^T R_0 (x - x_{k-1})$$

is minimized by

$$P_{k+1} = P_k \left[ I_n - \psi_{k+1} \left( I_{n_{k+1}} + \psi_{k+1}^T P_k \psi_{k+1} \right)^{-1} \psi_{k+1}^T P_k \right],$$

$$x_{k+1} = x_k - P_{k+1} \left[ A_{k+1} x_k + P_0^{-1} (x_{k-1} - x_k) + \tfrac{1}{2} b_{k+1} \right],$$

where $P_0 = R_0^{-1}$. Note that the update for $P_{k+1}$ does not require $Q_{k+1}$.
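For concreteness, the following minimal NumPy sketch implements one step of (16)-(18), assuming the factorizations (6) and (12) are obtained numerically from eigendecompositions; the function names and the rank tolerance are illustrative choices, not a prescription from the paper.

```python
import numpy as np

def _factor_psd(A, tol=1e-10):
    """Return psi with A = psi @ psi.T and psi of full column rank, as in (6)."""
    w, V = np.linalg.eigh(A)
    keep = w > tol
    return V[:, keep] * np.sqrt(w[keep])

def _factor_signed(D, tol=1e-10):
    """Return (phi, S) with D = phi @ S @ phi.T and S = diag(+/-1), as in (12)-(13)."""
    w, V = np.linalg.eigh(D)
    keep = np.abs(w) > tol
    phi = V[:, keep] * np.sqrt(np.abs(w[keep]))
    S = np.diag(np.sign(w[keep]))
    return phi, S

def gwvr_rls_step(x, P, A_next, b_next, R_k, R_next, alpha_k, alpha_next):
    """One GW-VR-RLS update, eqs. (16)-(18); initialize with x_0 = alpha_0, P_0 = inv(R_0).
    Returns (x_{k+1}, P_{k+1})."""
    n = x.size
    phi, S = _factor_signed(R_next - R_k)
    if phi.shape[1] > 0:
        M = S + phi.T @ P @ phi
        Q = P @ (np.eye(n) - phi @ np.linalg.solve(M, phi.T @ P))        # eq. (16)
    else:
        Q = P                                                            # R_{k+1} = R_k
    psi = _factor_psd(A_next)
    if psi.shape[1] > 0:
        M = np.eye(psi.shape[1]) + psi.T @ Q @ psi
        P_next = Q @ (np.eye(n) - psi @ np.linalg.solve(M, psi.T @ Q))   # eq. (17)
    else:
        P_next = Q
    grad = (A_next @ x + (R_next - R_k) @ x
            + R_k @ alpha_k - R_next @ alpha_next + 0.5 * b_next)
    x_next = x - P_next @ grad                                           # eq. (18)
    return x_next, P_next
```

In the standard RLS special case, $R_{k+1} = R_k$ makes $\phi_{k+1}$ empty, so the sketch reduces to $Q_{k+1} = P_k$, matching the observation in Section III-A.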

C. Standard RLS with forgetting factor

Let $0 < \lambda \leq 1$, and consider the modified cost

$$\bar{J}_k(x) \triangleq \sum_{i=0}^{k} \lambda^{k-i} \left( x^T \bar{A}_i x + \bar{b}_i^T x \right) + (x - x_0)^T \lambda^k R_0 (x - x_0),$$

where, for $i \geq 0$, $\bar{A}_i \triangleq \bar{\psi}_i \bar{\psi}_i^T$. Next, it follows that

$$\bar{J}_k(x) = \lambda^k \left[ \sum_{i=0}^{k} \lambda^{-i} \left( x^T \bar{A}_i x + \bar{b}_i^T x \right) + (x - x_0)^T R_0 (x - x_0) \right] = \lambda^k \left[ \sum_{i=0}^{k} \left( x^T A_i x + b_i^T x \right) + (x - x_0)^T R_0 (x - x_0) \right],$$

where $A_i \triangleq \lambda^{-i} \bar{A}_i$, $b_i \triangleq \lambda^{-i} \bar{b}_i$, and $R_k \equiv R_0$. Therefore, $\bar{J}_k(x) = \lambda^k J_k(x)$, where $J_k(x)$ is given by the traditional RLS quadratic cost (19). Minimizing $\bar{J}_k(x)$ is thus equivalent to minimizing $J_k(x)$. In this case, the minimizer of $\bar{J}_k$ is given by (20) and (21); however, the minimizer $x_{k+1}$ is expressed in terms of $A_{k+1}$ and $b_{k+1}$ rather than $\bar{A}_{k+1}$ and $\bar{b}_{k+1}$. Substituting $A_{k+1} = \lambda^{-(k+1)} \bar{A}_{k+1}$, $b_{k+1} = \lambda^{-(k+1)} \bar{b}_{k+1}$, and $\psi_{k+1} = \lambda^{-(k+1)/2} \bar{\psi}_{k+1}$ into (20) and (21) yields

$$P_{k+1} = P_k \left[ I_n - \lambda^{-(k+1)} \bar{\psi}_{k+1} \left( I_{n_{k+1}} + \lambda^{-(k+1)} \bar{\psi}_{k+1}^T P_k \bar{\psi}_{k+1} \right)^{-1} \bar{\psi}_{k+1}^T P_k \right],$$

$$x_{k+1} = x_k - P_{k+1} \left( \lambda^{-(k+1)} \bar{A}_{k+1} x_k + \tfrac{1}{2} \lambda^{-(k+1)} \bar{b}_{k+1} \right).$$

Next, for $i \geq 0$, define $\bar{P}_i \triangleq \lambda^{-i} P_i$, and it follows that the minimizer of $\bar{J}_k$ is given by

$$\bar{P}_{k+1} = \lambda^{-1} \bar{P}_k \left[ I_n - \bar{\psi}_{k+1} \left( \lambda I_{n_{k+1}} + \bar{\psi}_{k+1}^T \bar{P}_k \bar{\psi}_{k+1} \right)^{-1} \bar{\psi}_{k+1}^T \bar{P}_k \right],$$

$$x_{k+1} = x_k - \bar{P}_{k+1} \left( \bar{A}_{k+1} x_k + \tfrac{1}{2} \bar{b}_{k+1} \right),$$

where $\bar{P}_0 = R_0^{-1}$.

D. Standard RLS with $\alpha_k = x_{k-1}$ and forgetting factor

Let $0 < \lambda \leq 1$, and consider the modified cost

$$\bar{J}_k(x) \triangleq \sum_{i=0}^{k} \lambda^{k-i} \left( x^T \bar{A}_i x + \bar{b}_i^T x \right) + (x - x_{k-1})^T \lambda^k R_0 (x - x_{k-1}),$$

where, for $i \geq 0$, $\bar{A}_i \triangleq \bar{\psi}_i \bar{\psi}_i^T$. Next, it follows that

$$\bar{J}_k(x) = \lambda^k \left[ \sum_{i=0}^{k} \lambda^{-i} \left( x^T \bar{A}_i x + \bar{b}_i^T x \right) + (x - x_{k-1})^T R_0 (x - x_{k-1}) \right] = \lambda^k \left[ \sum_{i=0}^{k} \left( x^T A_i x + b_i^T x \right) + (x - x_{k-1})^T R_0 (x - x_{k-1}) \right], \qquad (22)$$

where $A_i \triangleq \lambda^{-i} \bar{A}_i$, $b_i \triangleq \lambda^{-i} \bar{b}_i$, and $R_k \equiv R_0$. Combining the steps in Section III-B and Section III-C, it follows that the minimizer of $\bar{J}_k$ is given by

$$\bar{P}_{k+1} = \lambda^{-1} \bar{P}_k \left[ I_n - \bar{\psi}_{k+1} \left( \lambda I_{n_{k+1}} + \bar{\psi}_{k+1}^T \bar{P}_k \bar{\psi}_{k+1} \right)^{-1} \bar{\psi}_{k+1}^T \bar{P}_k \right],$$

$$x_{k+1} = x_k - \bar{P}_{k+1} \left[ \bar{A}_{k+1} x_k + \lambda^{k+1} \bar{P}_0^{-1} (x_{k-1} - x_k) + \tfrac{1}{2} \bar{b}_{k+1} \right],$$

where $\bar{P}_0 = R_0^{-1}$.

IV. SETUP FOR NUMERICAL SIMULATIONS

For all $k \geq 0$, let $x_{k,\mathrm{opt}} \in \mathbb{R}^n$ and $\psi_k \in \mathbb{R}^n$, where the $i$th entry $\psi_{k,i}$ of $\psi_k$ is generated from a zero-mean, unit-variance Gaussian distribution. The entries of $\psi_k$ are independent. Define $\beta_k \triangleq \psi_k^T x_{k,\mathrm{opt}}$. Let $l$ be the number of data points, and define the sample standard deviations

$$\sigma_{\psi,i} \triangleq \sqrt{ \frac{1}{l-1} \sum_{k=1}^{l} \psi_{k,i}^2 }, \qquad \sigma_{\beta} \triangleq \sqrt{ \frac{1}{l-1} \sum_{k=1}^{l} \beta_k^2 }.$$

Next, for $i = 1, \ldots, n$, let $N_{k,i} \in \mathbb{R}$ and $M_k \in \mathbb{R}$ be generated from zero-mean Gaussian distributions with variances $\sigma_{N,i}^2$ and $\sigma_M^2$, respectively, where $\sigma_{N,i}$ and $\sigma_M$ are determined from the signal-to-noise ratios (SNR). More specifically, for $i = 1, \ldots, n$,

$$\mathrm{SNR}_{\psi,i} \triangleq \frac{\sigma_{\psi,i}}{\sigma_{N,i}}, \qquad \mathrm{SNR}_{\beta} \triangleq \frac{\sigma_{\beta}}{\sigma_M},$$

where, for $i = 1, \ldots, n$,

$$\sigma_{N,i} \triangleq \sqrt{ \frac{1}{K-1} \sum_{k=1}^{K} N_{k,i}^2 }, \qquad \sigma_M \triangleq \sqrt{ \frac{1}{K-1} \sum_{k=1}^{K} M_k^2 }.$$

Finally, for $k \geq 0$, define

$$A_k \triangleq (\psi_k + N_k)(\psi_k + N_k)^T, \qquad b_k \triangleq -2 (\beta_k + M_k)(\psi_k + N_k),$$

where $N_k$ is the noise in $\psi_k$ and $M_k$ is the noise in $\beta_k$. Define fixed vectors $z_1, z_2 \in \mathbb{R}^7$. Unless otherwise specified, for all $k \geq 0$, $x_{k,\mathrm{opt}} = z_1$, $\alpha_k = x_0$, and $x_0 = 0_{7 \times 1}$. Define the performance

$$\varepsilon_k \triangleq \frac{ \| x_{k,\mathrm{opt}} - x_k \| }{ \| x_{k,\mathrm{opt}} \| }.$$

V. NUMERICAL SIMULATIONS WITH NOISELESS DATA

We now investigate the effect of $R_k$, $\alpha_k$, and $\lambda$ on GW-VR-RLS. Furthermore, in this section, $A_k$ and $b_k$ contain no noise; specifically, for all $k$, $N_k = 0_{7 \times 1}$ and $M_k = 0$.

A. Effect of $R_k$

We begin by testing the effect of $R_k$ on the convergence of $\varepsilon_k$ when $R_k$ is constant. In the following example, we test GW-VR-RLS for three values of $R_k$; specifically, for all $k$, $R_k = I_{7 \times 7}$, $R_k = 0.1 I_{7 \times 7}$, or $R_k = 0.01 I_{7 \times 7}$. In all three cases, for all $k$, $A_k$ and $b_k$ are the same. For this example, Figure 1 shows that smaller values of $R_k$ yield faster convergence of $\varepsilon_k$ to zero. This effect occurs because decreasing $R_k$ reduces the magnitude of the regularization term in the cost function (1).
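As one way to reproduce the flavor of this experiment, the sketch below generates noiseless data as in Section IV and runs the recursion with a constant $R_k$ and $\alpha_k = x_0$, reusing the gwvr_rls_step sketch given earlier; the dimension, horizon, seed, and the random stand-in for $z_1$ are assumptions, so the resulting $\varepsilon_k$ curve is only qualitatively comparable to Figure 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, l = 7, 200                           # dimension and number of steps (illustrative choices)
x_opt = rng.standard_normal(n)          # stands in for z_1; the paper's exact z_1 is not given here
R = 0.1 * np.eye(n)                     # constant regularization weighting R_k

x0 = np.zeros(n)                        # x_0 = 0_{7x1}
x, P = x0.copy(), np.linalg.inv(R)      # x_0 = alpha_0, P_0 = R_0^{-1}
eps = []
for k in range(l):
    psi = rng.standard_normal(n)        # noiseless regressor psi_{k+1}
    beta = psi @ x_opt                  # beta_{k+1} = psi_{k+1}^T x_{k+1,opt}
    A_next = np.outer(psi, psi)         # A_{k+1} = psi psi^T
    b_next = -2.0 * beta * psi          # b_{k+1} = -2 beta psi
    x, P = gwvr_rls_step(x, P, A_next, b_next, R, R, x0, x0)  # alpha_k = alpha_{k+1} = x_0
    eps.append(np.linalg.norm(x_opt - x) / np.linalg.norm(x_opt))
```

Rerunning this sketch with a smaller constant R should reproduce the qualitative trend reported above: faster convergence of $\varepsilon_k$ to zero when the data are noiseless.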

Fig. 1. Effect of $R_k$ on the convergence of $x_k$ to $x_{k,\mathrm{opt}}$. For this example, smaller values of $R_k$ yield faster convergence of $\varepsilon_k$ to zero.

Next, we let $R_k$ be constant and positive definite until $\sum_{i=0}^{k-1} A_i$ has full rank, and then we let $R_k = 0$. More specifically,

$$R_k = \begin{cases} 0.1 I_{7 \times 7}, & \text{if } \mathrm{rank} \sum_{i=0}^{k-1} A_i < n, \\ 0, & \text{if } \mathrm{rank} \sum_{i=0}^{k-1} A_i = n. \end{cases} \qquad (23)$$

For $R_k$ given by (23), if there is no noise in the data, then $x_k$ may converge to $x_{k,\mathrm{opt}}$ in finite time. In particular, if there exists a positive integer $N$ such that $\sum_{i=0}^{N-1} A_i$ has full rank, then, for all $k \geq N$, $x_k = x_{k,\mathrm{opt}}$. Figure 2 shows that $\varepsilon_k$ converges to zero in finite time when $R_k$ is given by (23). In this case, for all $k \geq 7$, $\sum_{i=0}^{k} A_i$ has full rank. Thus, for all $k \geq 8$, $x_k = x_{k,\mathrm{opt}}$.

Fig. 2. Effect of $R_k$ on the convergence of $x_k$ to $x_{k,\mathrm{opt}}$. In this example, $\sum_{i=0}^{7} A_i$ has full rank. Therefore, for $k \geq 8$, $R_k = 0$ and $x_k = x_{k,\mathrm{opt}}$.

Next, we choose the smallest $R_k$ such that $\sum_{i=0}^{k-1} A_i + R_k$ is positive definite. More specifically, we compute the singular value decomposition $U_k S_k U_k^T = \sum_{i=0}^{k-1} A_i$, where $U_k \in \mathbb{R}^{n \times n}$ and $S_k \in \mathbb{R}^{n \times n}$ has the form

$$S_k = \begin{bmatrix} \Gamma_k & 0_{m \times (n-m)} \\ 0_{(n-m) \times m} & 0_{(n-m) \times (n-m)} \end{bmatrix},$$

where $\Gamma_k \in \mathbb{R}^{m \times m}$ contains the $m$ nonzero singular values of $\sum_{i=0}^{k-1} A_i$. Note that the singular value decomposition has the form $U_k S_k U_k^T$ because $\sum_{i=0}^{k-1} A_i$ is symmetric [9, Corollary 5.4.5]. Next, define

$$\hat{S}_k \triangleq \begin{bmatrix} 0_{m \times m} & 0_{m \times (n-m)} \\ 0_{(n-m) \times m} & \epsilon I_{(n-m) \times (n-m)} \end{bmatrix},$$

where $\epsilon > 0$. Finally,

$$R_k = \begin{cases} U_k \hat{S}_k U_k^T, & \text{if } \mathrm{rank} \sum_{i=0}^{k-1} A_i < n, \\ 0, & \text{if } \mathrm{rank} \sum_{i=0}^{k-1} A_i = n. \end{cases} \qquad (24)$$

In the following example, we compare GW-VR-RLS with $R_k = I_{7 \times 7}$ and with $R_k$ given by (24) with $\epsilon = 1$. In both cases, for all $k$, $A_k$ and $b_k$ are the same. For this example, Figure 3 shows that setting $R_k$ as in (24) with $\epsilon = 1$ yields faster convergence of $\varepsilon_k$ to zero than setting $R_k = I_{7 \times 7}$.

Fig. 3. Effect of $R_k$ on the convergence of $x_k$ to $x_{k,\mathrm{opt}}$. The solid line denotes $\varepsilon_k$ with $R_k$ given by (24), and the dashed line denotes $\varepsilon_k$ with $R_k = I_{7 \times 7}$. For this example, setting $R_k$ as in (24) with $\epsilon = 1$ yields faster convergence of $\varepsilon_k$ to zero than setting $R_k = I_{7 \times 7}$.

B. Effect of $\alpha_k$

Figure 4 compares GW-VR-RLS with $\alpha_k = x_{k-1}$ and with $\alpha_k = x_0$, where, for all $k$, $R_k = I_{7 \times 7}$. For this example, setting $\alpha_k = x_{k-1}$ yields faster convergence of $\varepsilon_k$ to zero than setting $\alpha_k = x_0$.

Fig. 4. Effect of one-step regularization on the convergence of $x_k$ to $x_{k,\mathrm{opt}}$. For this example, setting $\alpha_k = x_{k-1}$ yields faster convergence of $\varepsilon_k$ to zero than setting $\alpha_k = x_0$.

C. Effect of Forgetting Factor

In this section, we examine standard RLS with forgetting factor, as described in Section III-C. In the following example, we consider three values of $\lambda$, specifically $\lambda = 1$, $\lambda = 0.995$, or $\lambda = 0.9$. For all $k$, $R_k = 0.1 I_{7 \times 7}$ and

$$x_{k,\mathrm{opt}} = \begin{cases} z_1, & k \leq \bar{k}, \\ z_2, & k > \bar{k}, \end{cases}$$

where $\bar{k}$ denotes the switching time. For this example, Figure 5 shows that, before the switch, the forgetting factor has negligible impact on the behavior of $\varepsilon_k$. After the switch, smaller values of $\lambda$ yield faster convergence of $\varepsilon_k$ to zero.
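The two rank-based schedules (23) and (24) can be written compactly as functions of the running sum $\sum_{i=0}^{k-1} A_i$. The sketch below is one possible reading of those definitions (the helper names, the rank tolerance, and the default $\epsilon$ are assumptions), using the fact that the SVD of the symmetric positive-semidefinite sum has the form $U S U^T$.

```python
import numpy as np

def regularizer_rank_switch(A_sum, n, tol=1e-10):
    """Eq. (23): keep R_k = 0.1*I until sum_{i<k} A_i has full rank, then set R_k = 0."""
    full_rank = np.linalg.matrix_rank(A_sum, tol=tol) == n
    return np.zeros((n, n)) if full_rank else 0.1 * np.eye(n)

def regularizer_svd(A_sum, n, eps=1.0, tol=1e-10):
    """Eq. (24): weight only the directions not yet excited by sum_{i<k} A_i with eps."""
    U, s, _ = np.linalg.svd(A_sum)                # symmetric PSD, so A_sum = U diag(s) U^T
    if np.all(s > tol):
        return np.zeros((n, n))                   # full rank: no regularization needed
    S_hat = np.diag(np.where(s > tol, 0.0, eps))  # zero on excited directions, eps elsewhere
    return U @ S_hat @ U.T
```

Either schedule can be supplied as R_next at each step of the step routine sketched earlier, which then forms the difference $R_{k+1} - R_k$ internally.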

Fig. 5. Effect of the forgetting factor on the convergence of $x_k$ to $x_{k,\mathrm{opt}}$. Before the switch in $x_{k,\mathrm{opt}}$, the forgetting factor has negligible impact on the behavior of $\varepsilon_k$. After the switch, a smaller value of $\lambda$ yields faster convergence of $x_k$ to $x_{k,\mathrm{opt}}$.

VI. NUMERICAL SIMULATIONS WITH NOISY DATA

We now investigate the effect of $R_k$, $\alpha_k$, and $\lambda$ on GW-VR-RLS when the data are noisy. More specifically, for all $k$, $M_k$ and $N_{k,i}$ are generated from zero-mean Gaussian distributions with variances determined by $\mathrm{SNR}_{\beta}$ and $\mathrm{SNR}_{\psi,i}$, respectively. Figure 6 shows the effect of noise on standard RLS for different SNR values. In this example, a smaller value of SNR yields a larger asymptotic value of $\varepsilon_k$.

In the next example, we examine the convergence of $\varepsilon_k$ for standard RLS when $\psi_k$ and $\beta_k$ have constant bias. We consider three cases of constant bias; specifically, for all $k$, $N_k = 0.2 \cdot 1_{7 \times 1}$ and $M_k = 0.2$, $N_k = 0.2 \cdot 1_{7 \times 1}$ and $M_k = 0$, or $N_k = 0_{7 \times 1}$ and $M_k = 0.2$. For this example, Figure 7 shows that bias increases the asymptotic value of $\varepsilon_k$. Furthermore, bias in $\beta_k$ yields a higher asymptotic value of $\varepsilon_k$ than an equal percentage of bias in $\psi_k$.

Fig. 6. Effect of noise on standard RLS. In this example, smaller values of SNR yield larger asymptotic values of $\varepsilon_k$.

Fig. 7. Effect of bias on standard RLS. For this example, bias increases the asymptotic value of $\varepsilon_k$. Furthermore, bias in $\beta_k$ yields a higher asymptotic value of $\varepsilon_k$ than an equal percentage of bias in $\psi_k$.

A. Effect of $R_k$

In this section, we examine the effect of $R_k$, where $R_k$ is constant. In the following example, we test GW-VR-RLS for three different values of $R_k$; specifically, for all $k$, $R_k = I_{7 \times 7}$, $R_k = 0.1 I_{7 \times 7}$, or $R_k = 0.01 I_{7 \times 7}$. Furthermore, $\mathrm{SNR}_{\psi,i} = \mathrm{SNR}_{\beta} = 5$, and, for all $k$, $A_k$ and $b_k$ are the same. For this example, Figure 8 shows that smaller values of $R_k$ can result in larger peak values of $\varepsilon_k$. Recall that Figure 1 showed that smaller values of $R_k$ can yield faster convergence of $\varepsilon_k$ to zero. However, if the data are noisy, then Figure 8 shows that the transient response of $\varepsilon_k$ can be worse for smaller values of $R_k$. As the SNR increases, Figure 8 converges to Figure 1.

Fig. 8. Effect of $R_k$ on the convergence of $x_k$ to $x_{k,\mathrm{opt}}$. For this example, smaller values of $R_k$ can result in larger peak values of $\varepsilon_k$.

B. Effect of $\alpha_k$

Figure 9 compares GW-VR-RLS with $\alpha_k = x_{k-1}$ and with $\alpha_k = x_0$, where $\mathrm{SNR}_{\psi,i} = \mathrm{SNR}_{\beta} = 5$ and, for all $k$, $R_k = 0.1 I_{7 \times 7}$. For this example, Figure 9 shows that the transient response of $\varepsilon_k$ can be worse for $\alpha_k = x_{k-1}$ than it is for $\alpha_k = x_0$. Figure 4 showed that setting $\alpha_k = x_{k-1}$ can yield faster convergence of $\varepsilon_k$ to zero than setting $\alpha_k = x_0$. However, if the data are noisy, then Figure 9 shows that the transient response of $\varepsilon_k$ can be worse with $\alpha_k = x_{k-1}$ than it is with $\alpha_k = x_0$. As the SNR increases, Figure 9 converges to Figure 4.

Next, we compare GW-VR-RLS for different choices of $\alpha_k$. More specifically, we let $\alpha_k = L_{k,\nu}$, where

$$L_{k,\nu} \triangleq \begin{cases} x_{k-1}, & k < \nu, \\ x_{k-\nu}, & k \geq \nu, \end{cases}$$

and $\nu$ is a positive integer. In the following example, we test GW-VR-RLS for three values of $\nu$, specifically $\nu = 1$, $\nu = 5$, and $\nu = 10$. In all cases, for all $k$, $A_k$ and $b_k$ are the same, $R_k = I_{7 \times 7}$, and $\mathrm{SNR}_{\beta} = \mathrm{SNR}_{\psi,i} = 5$. For this example, Figure 10 shows that larger values of $\nu$ can yield better transient performance of $\varepsilon_k$.

Next, we let $\alpha_k = W_{k,\rho}$, where

$$W_{k,\rho} \triangleq \begin{cases} x_0, & k \leq 1, \\ \dfrac{1}{k} \sum_{i=1}^{k} x_i, & 1 < k \leq \rho, \\ \dfrac{1}{\rho} \sum_{i=k-\rho+1}^{k} x_i, & k > \rho, \end{cases}$$

and $\rho$ is a positive integer.
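A short sketch of these two regularization centers, a delayed estimate $L_{k,\nu}$ and a moving average $W_{k,\rho}$ of past estimates, is given below; the handling of the first few steps follows the case definitions above, which are themselves a reconstruction, so treat the boundary conventions as assumptions.

```python
import numpy as np

def alpha_L(x_hist, k, nu):
    """L_{k,nu}: the previous estimate while k < nu, then the estimate from nu steps back."""
    return x_hist[max(k - 1, 0)] if k < nu else x_hist[k - nu]

def alpha_W(x_hist, k, rho):
    """W_{k,rho}: x_0 at the start, then the average of all past estimates,
    then the average of the last rho estimates."""
    if k <= 1:
        return x_hist[0]
    window = x_hist[1:k + 1] if k <= rho else x_hist[k - rho + 1:k + 1]
    return np.mean(window, axis=0)
```

Here x_hist is assumed to be the list of estimates $x_0, \ldots, x_k$ produced so far; the returned vector is used as $\alpha_k$ in the update (18).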

Fig. 9. Effect of $\alpha_k$ on the convergence of $x_k$ to $x_{k,\mathrm{opt}}$. For this example, the transient response of $\varepsilon_k$ can be worse for $\alpha_k = x_{k-1}$ than it is for $\alpha_k = x_0$.

Fig. 10. Convergence of $x_k$ to $x_{k,\mathrm{opt}}$. For this example, larger values of $\nu$ yield better transient performance of $\varepsilon_k$.

In the following example, we test GW-VR-RLS for three values of $\rho$, specifically $\rho = 1$, $\rho = 5$, and $\rho = 10$. In all cases, for all $k$, $A_k$ and $b_k$ are the same, $R_k = I_{7 \times 7}$, and $\mathrm{SNR}_{\beta} = \mathrm{SNR}_{\psi,i} = 5$. For this example, Figure 11 shows that larger values of $\rho$ can yield better transient performance of $\varepsilon_k$ than smaller values of $\rho$.

Fig. 11. Convergence of $x_k$ to $x_{k,\mathrm{opt}}$. In this example, larger values of $\rho$ yield better transient performance of $\varepsilon_k$ than smaller values of $\rho$.

C. Effect of Forgetting Factor

In this section, we examine standard RLS with forgetting factor. In the following example, we test RLS for three values of $\lambda$, specifically $\lambda = 1$, $\lambda = 0.95$, or $\lambda = 0.9$. Let $\mathrm{SNR}_{\psi,i} = \mathrm{SNR}_{\beta} = 5$, and, for all $k$, $R_k = 0.1 I_{7 \times 7}$. For this example, Figure 12 shows that smaller values of $\lambda$ yield larger asymptotic values of $\varepsilon_k$. Next, we let

$$x_{k,\mathrm{opt}} = \begin{cases} z_1, & k \leq \bar{k}, \\ z_2, & k > \bar{k}. \end{cases}$$

For this example, Figure 13 shows that, before the switch, smaller values of $\lambda$ yield larger asymptotic values of $\varepsilon_k$. After the switch, $x_{k,\mathrm{opt}}$ changes from $z_1$ to $z_2$, and a smaller value of $\lambda$ yields faster convergence of $\varepsilon_k$ to its asymptotic value.

Fig. 12. Effect of the forgetting factor on the convergence of $x_k$ to $x_{k,\mathrm{opt}}$. For this example, smaller values of $\lambda$ yield larger asymptotic values of $\varepsilon_k$.

Fig. 13. Effect of the forgetting factor on the convergence of $x_k$ to $x_{k,\mathrm{opt}}$. Before the switch, smaller values of $\lambda$ yield larger asymptotic values of $\varepsilon_k$. After the switch, a smaller value of $\lambda$ yields faster convergence of $\varepsilon_k$ to its asymptotic value.

VII. CONCLUSIONS

In this paper, we presented a growing-window variable-regularization recursive least squares (GW-VR-RLS) algorithm. This algorithm allows for a time-varying regularization term in the RLS cost function. More specifically, GW-VR-RLS allows us to vary both the weighting in the regularization as well as what is being weighted; that is, the regularization term can weight the difference between the next state estimate and a time-varying vector of parameters rather than the initial state estimate. Future work will include an investigation of the convergence properties of the algorithm.

REFERENCES

[1] A. H. Sayed, Adaptive Filters. Hoboken, NJ: John Wiley and Sons, Inc., 2008.
[2] J. N. Juang, Applied System Identification. Prentice-Hall, 1994.
[3] L. Ljung and T. Söderström, Theory and Practice of Recursive Identification. Cambridge, MA: The MIT Press, 1983.
[4] L. Ljung, System Identification: Theory for the User, 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 1999.
[5] K. J. Åström and B. Wittenmark, Adaptive Control, 2nd ed. Addison-Wesley, 1995.
[6] G. C. Goodwin and K. S. Sin, Adaptive Filtering, Prediction, and Control. Prentice Hall, 1984.
[7] G. Tao, Adaptive Control Design and Analysis. Wiley, 2003.
[8] P. Ioannou and J. Sun, Robust Adaptive Control. Prentice Hall, 1996.
[9] D. S. Bernstein, Matrix Mathematics, 2nd ed. Princeton University Press, 2009.
