Sliding Window Recursive Quadratic Optimization with Variable Regularization

2011 American Control Conference, O'Farrell Street, San Francisco, CA, USA, June 29 - July 1, 2011

Sliding Window Recursive Quadratic Optimization with Variable Regularization

Jesse B. Hoagg, Asad A. Ali, Magnus Mossberg, and Dennis S. Bernstein

J. B. Hoagg is with the Department of Mechanical Engineering, The University of Kentucky, Lexington, KY. jhoagg@engr.uky.edu. A. A. Ali is with the Department of Aerospace Engineering, The University of Michigan, Ann Arbor, MI. asadali@umich.edu. M. Mossberg is with the Department of Physics and Electrical Engineering, Karlstad University, Karlstad, Sweden. Magnus.Mossberg@kau.se. D. S. Bernstein is with the Department of Aerospace Engineering, The University of Michigan, Ann Arbor, MI. dsbaero@umich.edu.

Abstract— In this paper, we present a sliding-window variable-regularization recursive least squares algorithm. In contrast to standard recursive least squares, the algorithm presented in this paper operates on a finite window of data, where old data are discarded as new data become available. This property can be beneficial for estimating time-varying parameters. Furthermore, standard recursive least squares uses time-invariant regularization. More specifically, the inverse of the initial covariance matrix in standard recursive least squares can be viewed as a regularization term, which weights the difference between the next estimate and the initial estimate. This regularization is fixed for all steps of the recursion. The algorithm derived in this paper allows for time-varying regularization. In particular, the present paper allows for time-varying regularization in the weighting as well as in what is being weighted. Specifically, the regularization term can weight the difference between the next estimate and a time-varying vector of parameters rather than the initial estimate.

I. INTRODUCTION

Within signal processing, identification, estimation, and control, recursive least squares (RLS) and gradient-based optimization techniques are among the most fundamental and widely used algorithms [1]-[8]. The standard RLS algorithm operates on a growing window of data, that is, new data are added to the RLS cost function as they become available, and old data are not directly discarded but rather progressively discounted through the use of a forgetting factor. In contrast, a sliding-window RLS algorithm operates on a finite window of data with fixed length; new data replace old data in the sliding-window RLS cost function. Sliding-window least-squares techniques are available in both batch and recursive formulations [9]-[13].

A sliding-window RLS algorithm with time-varying regularization is developed in the present paper. A growing-window RLS algorithm with time-varying regularization is presented in [14]. In standard RLS, the positive-definite initialization of the covariance matrix can be interpreted as the weighting on a regularization term within the context of a quadratic optimization. Until at least n measurements are available, this regularization term compensates for the lack of persistency in order to obtain a unique solution from the RLS algorithm. Traditionally, the regularization term is fixed for all steps of the recursion. In the present work, we derive a sliding-window variable-regularization RLS (SW-VR-RLS) algorithm, where the weighting in the regularization term may change at each step. As a special case, the regularization can be decreased in magnitude or rank as the rank of the covariance matrix increases, and can be removed entirely when no longer needed.
This ability is not available in standard RLS, where the regularization term is weighted by the inverse of the initial covariance. A second extension presented in this paper also involves the regularization term. Specifically, the regularization term in traditional RLS weights the difference between the next estimate and the initial estimate. In the present paper, the regularization term weights the difference between the next estimate and an arbitrarily chosen time-varying vector. As a special case, the time-varying vector can be the current estimate, and thus the regularization term weights the difference between the next estimate and the current estimate. This formulation allows us to modulate the rate at which the current estimate changes from step to step. For these extensions, we derive SW-VR-RLS update equations.

In the next section, we derive the update equations for SW-VR-RLS. In the remaining sections of the paper, we investigate the performance of SW-VR-RLS under various conditions of noise and persistency.

II. SW-VR-RLS ALGORITHM

For all $i \geq 1$, let $A_i \in \mathbb{R}^{n \times n}$, $b_i, \alpha_i \in \mathbb{R}^{n}$, and $R_i \in \mathbb{R}^{n \times n}$, where $A_i$ and $R_i$ are positive semidefinite, define $A_i \triangleq 0$ and $b_i \triangleq 0$ for all $i \leq 0$, let $r$ be a nonnegative integer, and assume that, for all $k \geq r$, $\sum_{i=k-r}^{k} A_i + R_k$ and $\sum_{i=k-r+1}^{k} A_i + R_{k+1}$ are positive definite. Define the sliding-window regularized quadratic cost

$$J_k(x) \triangleq \sum_{i=k-r}^{k} \left( x^T A_i x + b_i^T x \right) + (x - \alpha_k)^T R_k (x - \alpha_k), \qquad (1)$$

where $x \in \mathbb{R}^{n}$, and let $x_k$ denote the minimizer of $J_k(x)$. The minimizer of (1) is given by

$$x_k = -\tfrac{1}{2}\left( \sum_{i=k-r}^{k} A_i + R_k \right)^{-1}\left( \sum_{i=k-r}^{k} b_i - 2 R_k \alpha_k \right). \qquad (2)$$
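To see (2), set the gradient of (1) to zero:

$$\nabla_x J_k(x) = 2\left( \sum_{i=k-r}^{k} A_i + R_k \right) x + \sum_{i=k-r}^{k} b_i - 2 R_k \alpha_k = 0,$$

and solve for $x$; the assumed positive definiteness of $\sum_{i=k-r}^{k} A_i + R_k$ guarantees that this stationary point is the unique global minimizer.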

We now derive the update equations for the SW-VR-RLS algorithm. To rewrite (2) recursively, define

$$P_k \triangleq \left( \sum_{i=k-r}^{k} A_i + R_k \right)^{-1},$$

which means that

$$P_{k+1}^{-1} = \sum_{i=k+1-r}^{k+1} A_i + R_{k+1} = \sum_{i=k-r}^{k} A_i + A_{k+1} - A_{k-r} + R_{k+1} - R_k + R_k,$$

and thus

$$P_{k+1}^{-1} = P_k^{-1} + A_{k+1} - A_{k-r} + R_{k+1} - R_k. \qquad (3)$$

Now,

$$x_{k+1} = -\tfrac{1}{2} P_{k+1}\left( \sum_{i=k+1-r}^{k+1} b_i - 2 R_{k+1}\alpha_{k+1} \right) = -\tfrac{1}{2} P_{k+1}\left( \sum_{i=k-r}^{k} b_i + b_{k+1} - b_{k-r} - 2 R_{k+1}\alpha_{k+1} \right),$$

and it follows from (2) and (3) that

$$x_{k+1} = -\tfrac{1}{2} P_{k+1}\left( -2 P_k^{-1} x_k + 2 R_k \alpha_k + b_{k+1} - b_{k-r} - 2 R_{k+1}\alpha_{k+1} \right)$$
$$= -\tfrac{1}{2} P_{k+1}\left[ -2\left( P_{k+1}^{-1} - A_{k+1} + A_{k-r} - R_{k+1} + R_k \right) x_k + 2 R_k \alpha_k + b_{k+1} - b_{k-r} - 2 R_{k+1}\alpha_{k+1} \right]$$
$$= x_k - P_{k+1}\left[ \left( A_{k+1} - A_{k-r} \right) x_k + \left( R_{k+1} - R_k \right) x_k + R_k \alpha_k + \tfrac{1}{2}\left( b_{k+1} - b_{k-r} \right) - R_{k+1}\alpha_{k+1} \right].$$

To rewrite $P_{k+1}$ recursively, consider the decomposition

$$A_{k+1} = \psi_{k+1}\psi_{k+1}^T, \qquad (4)$$

where $\psi_{k+1} \in \mathbb{R}^{n \times n_{k+1}}$ and $n_{k+1} \triangleq \operatorname{rank} A_{k+1}$. Consequently,

$$P_{k+1} = \left( P_k^{-1} + A_{k+1} - A_{k-r} + R_{k+1} - R_k \right)^{-1}, \qquad (5)$$

where the inverse exists since $\sum_{i=k+1-r}^{k+1} A_i + R_{k+1}$ is positive definite. Next, define

$$M_{k+1} \triangleq \left( P_k^{-1} + R_{k+1} - R_k - A_{k-r} \right)^{-1}, \qquad (6)$$

where the inverse exists since $P_k^{-1} + R_{k+1} - R_k - A_{k-r} = \sum_{i=k-r+1}^{k} A_i + R_{k+1}$, which is assumed to be positive definite. It follows from (4)-(6) that

$$P_{k+1} = \left( M_{k+1}^{-1} + \psi_{k+1}\psi_{k+1}^T \right)^{-1}.$$

Using the matrix inversion lemma

$$\left( X + U C V \right)^{-1} = X^{-1} - X^{-1} U \left( C^{-1} + V X^{-1} U \right)^{-1} V X^{-1}, \qquad (7)$$

with $X = M_{k+1}^{-1}$, $U = \psi_{k+1}$, $C = I$, and $V = \psi_{k+1}^T$, it follows that

$$P_{k+1} = M_{k+1}\left[ I_n - \psi_{k+1}\left( I_{n_{k+1}} + \psi_{k+1}^T M_{k+1}\psi_{k+1} \right)^{-1}\psi_{k+1}^T M_{k+1} \right]. \qquad (8)$$

Next, define

$$Q_{k+1} \triangleq \left( P_k^{-1} + R_{k+1} - R_k \right)^{-1}, \qquad (9)$$

where this inverse exists since $P_k^{-1} + R_{k+1} - R_k \geq M_{k+1}^{-1} > 0$, and thus $Q_{k+1} \leq M_{k+1}$. It follows from (4), (6), and (9) that

$$M_{k+1} = \left( Q_{k+1}^{-1} - A_{k-r} \right)^{-1} = \left( Q_{k+1}^{-1} - \psi_{k-r}\psi_{k-r}^T \right)^{-1}.$$

Using (7) with $X = Q_{k+1}^{-1}$, $U = \psi_{k-r}$, $C = -I$, and $V = \psi_{k-r}^T$, it follows that

$$M_{k+1} = Q_{k+1}\left[ I_n - \psi_{k-r}\left( -I_{n_{k-r}} + \psi_{k-r}^T Q_{k+1}\psi_{k-r} \right)^{-1}\psi_{k-r}^T Q_{k+1} \right].$$

Next, consider the decomposition

$$R_{k+1} - R_k = \phi_{k+1} S_{k+1}\phi_{k+1}^T, \qquad (10)$$

where $\phi_{k+1} \in \mathbb{R}^{n \times m_{k+1}}$, $m_{k+1} \triangleq \operatorname{rank}\left( R_{k+1} - R_k \right)$, and $S_{k+1} \in \mathbb{R}^{m_{k+1} \times m_{k+1}}$ is a diagonal matrix of the form $S_{k+1} = \operatorname{diag}(\pm 1, \ldots, \pm 1)$. It follows from (9) and (10) that

$$Q_{k+1} = \left( P_k^{-1} + \phi_{k+1} S_{k+1}\phi_{k+1}^T \right)^{-1}.$$

Using (7) with $X = P_k^{-1}$, $U = \phi_{k+1}$, $C = S_{k+1}$, and $V = \phi_{k+1}^T$, it follows that

$$Q_{k+1} = P_k\left[ I_n - \phi_{k+1}\left( S_{k+1} + \phi_{k+1}^T P_k\phi_{k+1} \right)^{-1}\phi_{k+1}^T P_k \right].$$

In summary, for all $k$, the recursive minimizer of (1) is given by

$$Q_{k+1} = P_k\left[ I_n - \phi_{k+1}\left( S_{k+1} + \phi_{k+1}^T P_k\phi_{k+1} \right)^{-1}\phi_{k+1}^T P_k \right], \qquad (11)$$
$$M_{k+1} = Q_{k+1}\left[ I_n - \psi_{k-r}\left( -I_{n_{k-r}} + \psi_{k-r}^T Q_{k+1}\psi_{k-r} \right)^{-1}\psi_{k-r}^T Q_{k+1} \right], \qquad (12)$$
$$P_{k+1} = M_{k+1}\left[ I_n - \psi_{k+1}\left( I_{n_{k+1}} + \psi_{k+1}^T M_{k+1}\psi_{k+1} \right)^{-1}\psi_{k+1}^T M_{k+1} \right], \qquad (13)$$
$$x_{k+1} = x_k - P_{k+1}\left[ \left( A_{k+1} - A_{k-r} \right) x_k + \left( R_{k+1} - R_k \right) x_k + R_k\alpha_k + \tfrac{1}{2}\left( b_{k+1} - b_{k-r} \right) - R_{k+1}\alpha_{k+1} \right], \qquad (14)$$

where $x_0 \triangleq \alpha_0$, $P_0 \triangleq R_0^{-1}$, $\psi_{k+1}$ is given by (4), and $\phi_{k+1}$ is given by (10). In the case where the regularization weighting is constant, that is, for all $k$, $R_k = R > 0$, (11) simplifies to $Q_{k+1} = P_k$, and thus propagation of $Q_k$ is not required.
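To make the recursion concrete, the following is a minimal numerical sketch of a single SW-VR-RLS step (11)-(14). It is an illustrative implementation of the equations above rather than code from the original work; the helper psd_factor (an eigendecomposition-based rank factorization used here to form the factors in (4) and (10)) is a convenience introduced for this sketch.

import numpy as np

def psd_factor(A, tol=1e-12):
    # Factor a symmetric matrix as A = phi @ S @ phi.T with S = diag(+/-1),
    # keeping only eigenvalues of significant magnitude (rank decomposition).
    w, V = np.linalg.eigh((A + A.T) / 2)
    keep = np.abs(w) > tol
    phi = V[:, keep] * np.sqrt(np.abs(w[keep]))
    S = np.diag(np.sign(w[keep]))
    return phi, S

def sw_vr_rls_step(x, P, A_new, A_old, b_new, b_old, R, R_next, alpha, alpha_next):
    # One step of (11)-(14): x, P play the roles of x_k, P_k; *_new refers to
    # step k+1 quantities and *_old to step k-r quantities leaving the window.
    n = x.size
    phi, S = psd_factor(R_next - R)        # (10): R_{k+1} - R_k = phi S phi^T
    psi_new, _ = psd_factor(A_new)         # (4) applied at step k+1
    psi_old, _ = psd_factor(A_old)         # (4) applied at step k-r
    if phi.shape[1] > 0:                   # (11)
        G = np.linalg.solve(S + phi.T @ P @ phi, phi.T @ P)
        Q = P @ (np.eye(n) - phi @ G)
    else:
        Q = P.copy()
    if psi_old.shape[1] > 0:               # (12)
        G = np.linalg.solve(-np.eye(psi_old.shape[1]) + psi_old.T @ Q @ psi_old, psi_old.T @ Q)
        M = Q @ (np.eye(n) - psi_old @ G)
    else:
        M = Q
    if psi_new.shape[1] > 0:               # (13)
        G = np.linalg.solve(np.eye(psi_new.shape[1]) + psi_new.T @ M @ psi_new, psi_new.T @ M)
        P_next = M @ (np.eye(n) - psi_new @ G)
    else:
        P_next = M
    # (14)
    x_next = x - P_next @ ((A_new - A_old) @ x + (R_next - R) @ x
                           + R @ alpha + 0.5 * (b_new - b_old) - R_next @ alpha_next)
    return x_next, P_next

Consistent with the initialization above, x would be started at alpha_0 and P at the inverse of R_0, and A_old, b_old are zero until the window is full.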

III. SETUP FOR NUMERICAL SIMULATIONS

For all $k$, let $x_{k,\mathrm{opt}} \in \mathbb{R}^{n}$ and $\psi_k \in \mathbb{R}^{n}$, where the $i$th entry $\psi_{k,i}$ of $\psi_k$ is generated from a zero-mean, unit-variance Gaussian distribution. The entries of $\psi_k$ are independent. Define $\beta_k \triangleq \psi_k^T x_{k,\mathrm{opt}}$. Let $l$ be the number of data points, and define

$$\sigma_{\psi,i} \triangleq \sqrt{\tfrac{1}{l}\sum_{k=1}^{l}\psi_{k,i}^2}, \qquad \sigma_{\beta} \triangleq \sqrt{\tfrac{1}{l}\sum_{k=1}^{l}\beta_k^2}.$$

Next, for $i = 1, \ldots, n$, let $N_{k,i} \in \mathbb{R}$ and $M_k \in \mathbb{R}$ be generated from zero-mean Gaussian distributions with variances $\sigma_{N,i}^2$ and $\sigma_M^2$, respectively, where $\sigma_{N,i}$ and $\sigma_M$ are determined from the signal-to-noise ratios (SNR). More specifically, for $i = 1, \ldots, n$,

$$\mathrm{SNR}_{\psi,i} \triangleq \frac{\sigma_{\psi,i}}{\sigma_{N,i}}, \qquad \mathrm{SNR}_{\beta} \triangleq \frac{\sigma_{\beta}}{\sigma_{M}},$$

where, for $i = 1, \ldots, n$, $\sigma_{N,i} \triangleq \sqrt{\tfrac{1}{K}\sum_{k=1}^{K} N_{k,i}^2}$ and $\sigma_{M} \triangleq \sqrt{\tfrac{1}{K}\sum_{k=1}^{K} M_k^2}$. For all $k$, define

$$A_k \triangleq (\psi_k + N_k)(\psi_k + N_k)^T, \qquad b_k \triangleq -2(\beta_k + M_k)(\psi_k + N_k),$$

where $N_k$ is the noise in $\psi_k$ and $M_k$ is the noise in $\beta_k$. Define two fixed vectors $z_1, z_2 \in \mathbb{R}^{7}$. Unless otherwise specified, for all $k$, $x_{k,\mathrm{opt}} = z_1$, $\alpha_k = x_{k-1}$, and $x_0 = 0_{7 \times 1}$. Define the performance metric

$$\varepsilon_k \triangleq \frac{\| x_{k,\mathrm{opt}} - x_k \|}{\| x_{k,\mathrm{opt}} \|}.$$
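A minimal sketch of this data generation is given below. It is illustrative only: the helper name and the use of the population values $\sigma_{\psi,i} \approx 1$ and $\sigma_{\beta} \approx \|x_{k,\mathrm{opt}}\|$ in place of the sample statistics above are simplifications introduced for the sketch.

import numpy as np

def generate_data(x_opt, snr_psi, snr_beta, rng):
    # Draw one regressor/measurement pair and form A_k, b_k as in Section III.
    n = x_opt.size
    psi = rng.standard_normal(n)                 # zero-mean, unit-variance entries
    beta = psi @ x_opt                           # beta_k = psi_k^T x_{k,opt}
    N = rng.standard_normal(n) / snr_psi         # sigma_{N,i} ~ sigma_{psi,i}/SNR_{psi,i}
    M = rng.standard_normal() * np.linalg.norm(x_opt) / snr_beta   # sigma_M ~ sigma_beta/SNR_beta
    A = np.outer(psi + N, psi + N)
    b = -2.0 * (beta + M) * (psi + N)
    return A, b

For example, with rng = np.random.default_rng(0), iterating this data generation together with the step sketched after (14) reproduces the structure of the experiments below.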

IV. NUMERICAL SIMULATIONS OF SW-VR-RLS WITH NOISELESS DATA

In this section, we investigate the effect of the window size $r$, the weighting $R_k$, and the vector $\alpha_k$ on SW-VR-RLS. The data contain no noise; specifically, for all $k$, $N_k = 0_{7 \times 1}$ and $M_k = 0$.

A. Effect of Window Size

In the following example, we test SW-VR-RLS for three different values of $r$. In all cases, $\alpha_k = x_{k-1}$, $R_k = I_{7 \times 7}$ for all $k$, $A_k$ and $b_k$ are the same for all cases, and

$$x_{k,\mathrm{opt}} = \begin{cases} z_1, & k \leq 115, \\ z_2, & k > 115. \end{cases}$$

For this example, Figure 1 shows that, for $k \leq 115$, larger values of $r$ yield faster convergence of $\varepsilon_k$ to zero. For $k > 115$, $x_{k,\mathrm{opt}} \neq x_{115,\mathrm{opt}}$, and larger values of $r$ yield faster convergence of $\varepsilon_k$ to zero; however, larger values of $r$ can yield worse transient performance because the larger window retains the data relating to $z_1$ for more time steps.

Fig. 1: Effect of $r$ on convergence of $x_k$ to $x_{k,\mathrm{opt}}$. In this example, for $k \leq 115$, larger values of $r$ yield faster convergence of $\varepsilon_k$ to zero. For $k > 115$, $x_{k,\mathrm{opt}} \neq x_{115,\mathrm{opt}}$, and larger values of $r$ yield faster convergence of $\varepsilon_k$ to zero; however, larger values of $r$ yield worse transient performance because the larger window retains the data relating to $z_1$ for more time steps.

B. Effect of R

In this section, we examine the effect of $R_k$, where $R_k$ is constant for all $k$. In the following example, we test SW-VR-RLS for three different constant values of $R_k$. In all cases, $A_k$ and $b_k$ are the same for all $k$, $\alpha_k = x_{k-1}$, $r = 15$, and

$$x_{k,\mathrm{opt}} = \begin{cases} z_1, & k \leq 115, \\ z_2, & k > 115. \end{cases}$$

For this example, Figure 2 shows that, for $k \leq 115$, smaller values of $R_k$ yield faster convergence of $\varepsilon_k$ to zero. Similarly, for $k > 115$, $x_{k,\mathrm{opt}} \neq x_{115,\mathrm{opt}}$, and smaller values of $R_k$ yield faster convergence of $\varepsilon_k$ to zero.

Fig. 2: Effect of $R_k$ on convergence of $x_k$ to $x_{k,\mathrm{opt}}$. For this example, for $k \leq 115$, smaller values of $R_k$ yield faster convergence of $\varepsilon_k$ to zero. Similarly, for $k > 115$, $x_{k,\mathrm{opt}} \neq x_{115,\mathrm{opt}}$, and smaller values of $R_k$ yield faster convergence of $\varepsilon_k$ to zero.

C. Loss of Persistency

In this section, we study the effect of loss of persistency on SW-VR-RLS. More specifically, for all $k \geq 5$, $A_k = A_5$ and $b_k = b_5$. Moreover, for all $k$, $R_k = 0.1 I_{7 \times 7}$, $r = 15$, and $\alpha_k = x_{k-1}$. For this example, Figure 3 shows that $\varepsilon_k$ approaches zero; however, Figure 4 shows that $P_k$ increases after the data lose persistency, although $P_k$ remains bounded.

Fig. 3: Effect of loss of persistency on convergence of $x_k$ to $x_{k,\mathrm{opt}}$. The data lose persistency at the 5th step. In this example, $\varepsilon_k$ approaches zero.

Fig. 4: Effect of loss of persistency on $P_k$. In this example, $P_k$ increases after the data lose persistency, but $P_k$ remains bounded.

V. NUMERICAL SIMULATIONS WITH NOISY DATA

In this section, we investigate the effect of the window size $r$, $R_k$, and $\alpha_k$ on SW-VR-RLS when the data are noisy. More specifically, for all $k$, $M_k$ and $N_{k,i}$ are generated from zero-mean Gaussian distributions with variances depending on $\mathrm{SNR}_{\psi,i}$ and $\mathrm{SNR}_{\beta}$.

A. Effect of α

In this section, we first compare SW-VR-RLS for different choices of $\alpha_k$. More specifically, we let $\alpha_k = L_\nu(k)$, where

$$L_\nu(k) \triangleq \begin{cases} x_{k-1}, & k \leq \nu, \\ x_\nu, & k > \nu, \end{cases}$$

and $\nu$ is a positive integer. In the following example, we test SW-VR-RLS for three different values of $\nu$, specifically, $\nu = 1$, $\nu = 5$, or $\nu = 15$. In all cases, $A_k$ and $b_k$ are the same for all $k$, $R_k = I_{7 \times 7}$, $\mathrm{SNR}_{\beta} = \mathrm{SNR}_{\psi,i} = 5$, and $r = 1$. For this example, Figure 5 shows that larger values of $\nu$ yield smaller asymptotic values of $\varepsilon_k$.

Fig. 5: Convergence of $x_k$ to $x_{k,\mathrm{opt}}$. For this example, larger values of $\nu$ yield smaller asymptotic values of $\varepsilon_k$.

Next, we let $\alpha_k = W_\rho(k)$, where

$$W_\rho(k) \triangleq \begin{cases} x_{k-1}, & k \leq 1, \\ \tfrac{1}{k}\sum_{i=0}^{k-1} x_i, & 1 < k \leq \rho, \\ \tfrac{1}{\rho}\sum_{i=k-\rho}^{k-1} x_i, & k > \rho, \end{cases}$$

and $\rho$ is a positive integer, so that $\alpha_k$ is a running average of the most recent (at most $\rho$) estimates. In the following example, we test SW-VR-RLS for three different values of $\rho$, specifically, $\rho = 1$, $\rho = 5$, or $\rho = 15$. In all cases, $A_k$ and $b_k$ are the same for all $k$, $R_k = I_{7 \times 7}$, $\mathrm{SNR}_{\beta} = \mathrm{SNR}_{\psi,i} = 5$, and $r = 1$. For this example, Figure 6 shows that larger values of $\rho$ yield smaller asymptotic values of $\varepsilon_k$.

Fig. 6: Convergence of $x_k$ to $x_{k,\mathrm{opt}}$. For this example, larger values of $\rho$ yield smaller asymptotic values of $\varepsilon_k$.
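The two choices of $\alpha_k$ above can be sketched as follows; these helpers are illustrative conveniences, not code from the original work, and assume that the list of past estimates starts with $x_0 = \alpha_0$.

import numpy as np

def alpha_L(history, nu):
    # L_nu: track the most recent estimate until step nu, then freeze at x_nu.
    # history is the nonempty list [x_0, x_1, ..., x_{k-1}] of estimates so far.
    k = len(history)
    return history[-1] if k <= nu else history[nu]

def alpha_W(history, rho):
    # W_rho: running average of the estimates over a window of at most rho steps.
    k = len(history)
    window = history[-min(k, rho):]
    return np.mean(window, axis=0)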

B. Effect of Window Size

In the following example, we test SW-VR-RLS for three different values of $r$, specifically, $r = 5$, $r = 7$, or $r = 1$. In all cases, $\alpha_k = x_{k-1}$, $\mathrm{SNR}_{\beta} = \mathrm{SNR}_{\psi,i} = 5$, $R_k = I_{7 \times 7}$ for all $k$, $A_k$ and $b_k$ are the same for all cases, and

$$x_{k,\mathrm{opt}} = \begin{cases} z_1, & k \leq 115, \\ z_2, & k > 115. \end{cases}$$

For this example, Figure 7 shows that $r = 5$ yields a smaller asymptotic value of $\varepsilon_k$ than $r = 1$ and $r = 7$. However, this trend is not monotonic, since $r = 7$ yields a larger asymptotic value of $\varepsilon_k$ than $r = 1$. Numerical tests suggest that the asymptotic value of $\varepsilon_k$ increases as the window size is increased from $r = 1$ until it peaks at a certain value of $r$; after this, the asymptotic value of $\varepsilon_k$ decreases as $r$ increases.

Fig. 7: Effect of $r$ on convergence of $x_k$ to $x_{k,\mathrm{opt}}$. This figure shows that $r = 5$ yields a smaller asymptotic value of $\varepsilon_k$ than $r = 1$ and $r = 7$. However, this trend is not monotonic, since $r = 7$ yields a larger asymptotic value of $\varepsilon_k$ than $r = 1$.

C. Effect of R

First, we examine the effect of $R_k$, where $R_k$ is constant for all $k$. In the following example, we test SW-VR-RLS for three different constant values of $R_k$. In all cases, $\alpha_k = x_{k-1}$, $\mathrm{SNR}_{\beta} = \mathrm{SNR}_{\psi,i} = 5$, $r = 1$, $A_k$ and $b_k$ are the same for all cases, and

$$x_{k,\mathrm{opt}} = \begin{cases} z_1, & k \leq 115, \\ z_2, & k > 115. \end{cases}$$

For this example, Figure 8 shows that a smaller value of $R_k$ results in faster convergence of $\varepsilon_k$ to its asymptotic value, but yields a larger asymptotic value. Once $x_k$ becomes close to $x_{k,\mathrm{opt}}$, $x_k$ oscillates about $x_{k,\mathrm{opt}}$. The amplitude of this oscillation depends on $R_k$; specifically, a larger value of $R_k$ allows less change in $x_k$ from step to step, which makes the amplitude of the oscillation smaller. Therefore, choosing a larger $R_k$ yields a smaller asymptotic value of $\varepsilon_k$.

Fig. 8: Effect of $R_k$ on convergence of $x_k$ to $x_{k,\mathrm{opt}}$. For this example, a smaller value of $R_k$ results in faster convergence of $\varepsilon_k$ to its asymptotic value, but yields a larger asymptotic value of $\varepsilon_k$.

Next, we let $R_k$ start small and then grow to a specified value as $k$ increases. More specifically,

$$R_k = X - (X - I_{7 \times 7})\, e^{-\gamma k}, \qquad (15)$$

where $X \in \mathbb{R}^{7 \times 7}$ and $\gamma$ is a positive scalar. In this example, we compare SW-VR-RLS with $R_k = I_{7 \times 7}$ and with $R_k$ given by (15) with $\gamma = 0.2$. For this example, Figure 9 shows that $R_k$ given by (15) yields a smaller asymptotic value of $\varepsilon_k$ than $R_k = I_{7 \times 7}$.

Fig. 9: Effect of $R_k$ on convergence of $x_k$ to $x_{k,\mathrm{opt}}$. For this example, $R_k$ given by (15) yields a smaller asymptotic value of $\varepsilon_k$ than $R_k = I_{7 \times 7}$.
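A minimal sketch of such a schedule is given below; the values of X and gamma here are placeholders for illustration, not the values used in the experiment.

import numpy as np

def R_schedule(k, X, gamma=0.2, n=7):
    # Regularization weighting that starts at the identity and grows toward X, as in (15).
    return X - (X - np.eye(n)) * np.exp(-gamma * k)

If $X - I_{7 \times 7}$ is positive semidefinite, then $R_{k+1} - R_k$ is positive semidefinite, and the signature matrix $S_{k+1}$ in (10) reduces to an identity of the corresponding dimension.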

D. Loss of Persistency

In this section, we study the effect of loss of persistency on SW-VR-RLS. More specifically, for all $k \geq 5$, $A_k = A_5$ and $b_k = b_5$. Moreover, for all $k$, $R_k = 0.1 I_{7 \times 7}$, $r = 15$, $\mathrm{SNR}_{\beta} = \mathrm{SNR}_{\psi,i} = 5$, and $\alpha_k = x_{k-1}$. For this example, Figure 10 shows that $\varepsilon_k$ increases after the data lose persistency, but $\varepsilon_k$ remains bounded. Figure 11 shows that $P_k$ increases after the data lose persistency, but $P_k$ remains bounded.

Fig. 10: Effect of loss of persistency on convergence of $x_k$ to $x_{k,\mathrm{opt}}$. The data lose persistency at the 5th step. In this example, $\varepsilon_k$ increases after the data lose persistency, but $\varepsilon_k$ remains bounded.

Fig. 11: Effect of loss of persistency on $P_k$. In this example, $P_k$ increases after the data lose persistency, but $P_k$ remains bounded.

VI. CONCLUSIONS

In this paper, we presented a sliding-window variable-regularization recursive least squares (SW-VR-RLS) algorithm. This algorithm operates on a finite window of data, where old data are discarded as new data become available. Furthermore, this algorithm allows for a time-varying regularization term in the sliding-window RLS cost function. More specifically, SW-VR-RLS allows us to vary both the weighting in the regularization as well as what is being weighted; that is, the regularization term can weight the difference between the next state estimate and a time-varying vector of parameters rather than the initial state estimate.

REFERENCES

[1] A. H. Sayed, Adaptive Filters. Hoboken, NJ: John Wiley and Sons, Inc., 2008.
[2] J. N. Juang, Applied System Identification. Upper Saddle River, NJ: Prentice-Hall, 1994.
[3] L. Ljung and T. Söderström, Theory and Practice of Recursive Identification. Cambridge, MA: The MIT Press, 1983.
[4] L. Ljung, System Identification: Theory for the User, 2nd ed. Upper Saddle River, NJ: Prentice-Hall Information and System Sciences, 1999.
[5] K. J. Åström and B. Wittenmark, Adaptive Control, 2nd ed. Addison-Wesley, 1995.
[6] G. C. Goodwin and K. S. Sin, Adaptive Filtering, Prediction, and Control. Prentice Hall, 1984.
[7] G. Tao, Adaptive Control Design and Analysis. Wiley, 2003.
[8] P. Ioannou and J. Sun, Robust Adaptive Control. Prentice Hall, 1996.
[9] F. Gustafsson, Adaptive Filtering and Change Detection. Wiley, 2000.
[10] J. Jiang and Y. Zhang, "A novel variable-length sliding window blockwise least-squares algorithm for on-line estimation of time-varying parameters," Int. J. Adaptive Contr. Sig. Proc., vol. 18, 2004.
[11] M. Belge and E. L. Miller, "A sliding window RLS-like adaptive algorithm for filtering alpha-stable noise," IEEE Sig. Proc. Let., vol. 7, no. 4, 2000.
[12] B. Y. Choi and Z. Bien, "Sliding-windowed weighted recursive least-squares method for parameter estimation," Electronics Let., vol. 25, 1989.
[13] H. Liu and Z. He, "A sliding-exponential window RLS adaptive algorithm: properties and applications," Sig. Proc., vol. 45, no. 3, 1995.
[14] A. A. Ali, J. B. Hoagg, M. Mossberg, and D. S. Bernstein, "Growing window recursive quadratic optimization with variable regularization," in Proc. Conf. Dec. Contr., Atlanta, GA, December 2010.
