Rational bases for system identification

Adhemar Bultheel, Patrick Van gucht
Department of Computer Science, Numerical Approximation and Linear Algebra Group (NALAG), K.U.Leuven, Belgium
adhemar.bultheel@cs.kuleuven.ac.be
http://www.cs.kuleuven.ac.be/~ade/

March 2001
Systems

We consider discrete time systems: $u \to G \to y$.

$u$ is the input: $U(z) = \sum_k u_k z^{-k}$
$y$ is the output: $Y(z) = \sum_k y_k z^{-k}$
$G$ is the transfer function: $Y(z) = G(z)\,U(z)$
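The relation $Y(z) = G(z)U(z)$ is multiplication of $z$-transforms, i.e. convolution in the time domain. A minimal sketch with hypothetical sequences:

```python
import numpy as np

# Time-domain view of Y(z) = G(z) U(z): multiplying the z-transforms
# is equivalent to convolving the impulse response g with the input u.
# g and u below are illustrative placeholders, not data from the talk.
g = np.array([1.0, 0.5, 0.25])   # hypothetical impulse response of G
u = np.array([1.0, -1.0, 2.0])   # hypothetical input sequence

y = np.convolve(g, u)            # output samples y_k = sum_m g_m u_{k-m}
print(y)                         # [ 1.   -0.5   1.75  0.75  0.5 ]
```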
Frequency Domain Identification

Estimate $G$ as $\hat G$ such that we minimize (in $L_2(\mathbb{T})$-norm)

  $\|Y - \hat Y\| = \|GU - \hat G U\| = \|(G - \hat G)U\| = \|G - \hat G\|_w$

  $\min \frac{1}{2\pi} \int_{-\pi}^{+\pi} |G(z) - \hat G(z)|^2\, w(z)\, d\omega, \qquad z = e^{i\omega}, \quad w(z) = |U(z)|^2$

Measurements $\{G(z_j)\}_{j=1}^N$, $z_j \in \mathbb{T}$, with variances $\{\sigma_j^2\}_{j=1}^N$:

  $\min \|G - \hat G\|_w = \min \Big( \sum_{j=1}^N |G(z_j) - \hat G(z_j)|^2\, \sigma_j^{-2} \Big)^{1/2}, \qquad w_j = \sigma_j^{-2}$
Frequency Domain Identification (2)

or

  $\|Y - \hat Y\| = \|(G - \hat G)U\| = \|G - \hat G\|_w = \Big( \sum_{j=1}^N |G(z_j) - \hat G(z_j)|^2\, w_j \Big)^{1/2}, \qquad w_j = |U(z_j)|^2/\sigma_j^2$

or

  $\|G - \hat G\| = \Big\| \frac{Y}{U} - \frac{B}{A} \Big\| = \Big\| \frac{YA - BU}{AU} \Big\| = \|YA - BU\|_w, \qquad w = 1/|AU|^2$
Linear/Nonlinear problem

Since the approximant $\hat G$ is a rational function, we have a mixed linear/nonlinear approximation problem:

  $\hat G(z) = \frac{B(z)}{A(z)}, \qquad A, B \in \Pi_n$

nonlinear: $A(z) = \sum_{k=0}^n a_k z^{-k} = \prod_{k=1}^n (1 - \alpha_k/z)$
linear: $B(z) = \sum_{k=0}^n b_k z^{-k}$

Given an estimate for $A$, finding $B$ is a weighted but linear least squares problem.
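The last point can be sketched directly: once the poles $\alpha_k$ (hence $A$) are fixed, the numerator coefficients $b_k$ follow from an ordinary weighted linear least squares fit. All data below are synthetic placeholders chosen for illustration.

```python
import numpy as np

# Given fixed poles alpha (so A(z) = prod(1 - alpha_k/z) is known),
# the numerator coefficients b_k are a weighted linear least squares fit.
N, n = 50, 3
z = np.exp(2j * np.pi * np.arange(N) / N)      # measurement points on T
alpha = np.array([0.5, -0.3, 0.2])             # assumed pole estimates, |alpha_k| < 1
w = np.ones(N)                                  # weights w_j

A = np.prod(1 - alpha[None, :] / z[:, None], axis=1)   # A(z_j)
V = z[:, None] ** -np.arange(n + 1)                    # columns z_j^{-k}
Phi = V / A[:, None]                                   # basis z^{-k} / A(z)

b_true = np.array([1.0, 0.4, -0.2, 0.1])
G = Phi @ b_true                                       # exact data -> exact recovery
b, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * Phi, np.sqrt(w) * G, rcond=None)
print(np.allclose(b, b_true))                          # True
```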
Nonlinear problem

Given a stable and efficient method to solve for the linear parameters $\lambda$ as a function of the nonlinear parameters $\nu$, we can consider the problem of minimizing a cost function $C(\nu) = K(\nu, \lambda(\nu))$.

We need system stability: if $\nu = \alpha$ (the system poles), then we need $|\alpha_k| < 1$. So we have to solve a constrained nonlinear weighted least squares problem in $\mathbb{C}$.

Solve the nonlinear problem with a standard routine. How to solve the linear problem?
Orthogonal Rational Functions

  $\hat G(z) = \sum_{k=0}^n \lambda_k \varphi_k(z)$,

with, for given $\alpha_k$,

  $\varphi_k(z) \in \mathcal{L}_k = \Big\{ \frac{p_k(z)}{\prod_{j=1}^k (1 - \alpha_j/z)} : p_k \in \Pi_k \Big\}$

orthogonal basis functions with respect to an appropriate inner product. The inner product can be discrete or continuous, but is in general with respect to a weight (measure) on $\mathbb{T}$:

  $\langle f, g \rangle_\mu = \int_{\mathbb{T}} f(z) \overline{g(z)}\, d\mu(z)$  or  $\langle f, g \rangle_w = \sum_{j=1}^N f(z_j) \overline{g(z_j)}\, w_j$.
ORF recurrence

Forward recursion:

  $\begin{bmatrix} \varphi_n(z) \\ \varphi_n^*(z) \end{bmatrix} = e_n\, \frac{z - \alpha_{n-1}}{z - \alpha_n} \begin{bmatrix} 1 & L_n \\ \overline{L_n} & 1 \end{bmatrix} \begin{bmatrix} \zeta_{n-1}(z) & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \varphi_{n-1}(z) \\ \varphi_{n-1}^*(z) \end{bmatrix}$

  $L_n = \frac{\big\langle \varphi_{n-1},\ \frac{1 - \overline{\alpha_{n-1}}\,z}{z - \alpha_n}\, \varphi_{n-1}^* \big\rangle}{\big\langle \varphi_{n-1},\ \frac{z - \alpha_{n-1}}{z - \alpha_n}\, \varphi_{n-1} \big\rangle}, \qquad e_n = \Big( \frac{1 - |\alpha_n|^2}{1 - |\alpha_{n-1}|^2}\, \frac{1}{1 - |L_n|^2} \Big)^{1/2}$

  $\zeta_k(z) = \frac{1 - \overline{\alpha_k}\,z}{z - \alpha_k}, \qquad B_n = \zeta_1 \cdots \zeta_n, \qquad \varphi_n^*(z) = B_n(z)\, \overline{\varphi_n(1/\overline{z})}$.

Backward recursion = Nevanlinna–Pick algorithm.
Inner product evaluation

If data are available in $z = \{z_j\}_{j=1}^N \subset \mathbb{T}$ with corresponding weights $w = \{w_j\}_{j=1}^N$, then choose the discrete inner product.

For a continuous inner product, choose points on the circle, e.g. equidistant $z_j = \exp(2\pi i j/N)$, $j = 1, \ldots, N$, evaluate the weight $w_j = w(z_j)$, and use the discrete inner product.

For example, for $w(z) = |U(z)|^2$ and $z_j$ equidistant on $\mathbb{T}$, given time domain data $u_k$, this can be computed very efficiently by FFT. We can evaluate the moments $c_l$ in $|U(z)|^2 = \sum_{l \in \mathbb{Z}} c_l z^l$ by convolution and use a Nevanlinna–Pick type algorithm on the (approximately) positive real function $\Omega(z) = c_0/2 + \sum_{l=1}^\infty c_l z^l$.
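The FFT step can be sketched as follows: the moments $c_l$ of $|U(z)|^2$ are the autocorrelation of the input samples (sign convention of the lag aside), and with enough zero-padding the circular FFT autocorrelation coincides with the linear one. The sequence `u` is an illustrative placeholder.

```python
import numpy as np

# Moments c_l of |U(z)|^2 are the autocorrelation of the input samples;
# with zero-padding, the circular FFT autocorrelation equals the linear one.
u = np.array([1.0, 2.0, 0.5, -1.0])        # hypothetical input data
K = len(u)
M = 2 * K                                   # enough zero-padding: M >= 2K - 1
U = np.fft.fft(u, M)
c = np.fft.ifft(np.abs(U) ** 2)             # c[l] ~ sum_k u_{k+l} conj(u_k)

c_direct = np.correlate(u, u, mode='full')  # lags -(K-1) .. K-1
print(np.allclose(c[:K].real, c_direct[K - 1:]))  # non-negative lags match -> True
```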
The linear least squares problem

So we can compute

  $\Phi(\alpha) = \Phi(z; \alpha) = [\varphi_0(z)\ \varphi_1(z)\ \cdots\ \varphi_n(z)] \in \mathbb{C}^{N \times (n+1)}$

  $W^{1/2} = \operatorname{diag}(w)^{1/2}, \qquad \lambda = \{\lambda_k\}_{k=0}^n, \qquad G = G(z)$

Setting $\hat G_n = \sum_{k=0}^n \lambda_k \varphi_k$, solve in least squares sense

  $W^{1/2} \Phi(\alpha)\, \lambda(\alpha) = W^{1/2} G$.

By orthogonality, however, simply $\lambda(\alpha) = \Phi(\alpha)^H W G$.
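A minimal sketch of why the orthogonal basis pays off: when the columns of $\Phi$ are orthonormal in the discrete $w$-inner product, the least squares solution is the projection $\lambda = \Phi^H W G$, with no normal equations to solve. The matrix `raw` below is a random stand-in for a basis before orthogonalization.

```python
import numpy as np

# If the basis columns are orthonormal in <f,g>_w = sum_j f(z_j) conj(g(z_j)) w_j,
# the least squares solution is just lambda = Phi^H W G.
rng = np.random.default_rng(1)
N, n = 40, 4
raw = rng.standard_normal((N, n + 1)) + 1j * rng.standard_normal((N, n + 1))
w = rng.uniform(0.5, 2.0, N)                     # positive weights
W12 = np.sqrt(w)[:, None]

Q, _ = np.linalg.qr(W12 * raw)                   # orthonormalize in the w-inner product
Phi = Q / W12                                    # columns now satisfy Phi^H W Phi = I

G = rng.standard_normal(N) + 1j * rng.standard_normal(N)
lam_proj = Phi.conj().T @ (w * G)                # lambda = Phi^H W G
lam_lsq, *_ = np.linalg.lstsq(W12 * Phi, np.sqrt(w) * G, rcond=None)
print(np.allclose(lam_proj, lam_lsq))            # True
```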
Related work

Work by Ninness, Van den Hof, Heuberger, Bokor, and others: use ORF with respect to the Lebesgue measure. These have explicit expressions

  $\varphi_n(z) = \frac{\sqrt{1 - |\alpha_n|^2}}{z - \alpha_n}\, z\, B_{n-1}(z)$

but that does not help for the condition number of $\Phi^H W \Phi$ in the linear least squares problem. And/or use a finite number of $\alpha_k$ that are cyclically repeated.
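These Lebesgue-orthonormal functions are the classical Takenaka–Malmquist basis. A sketch, written in the $H^2(\mathbb{D})$ convention with $\alpha_0 = 0$ rather than the $1/z$ convention of the slides (the helper name `tm_basis` and the poles are illustrative); the discrete Gram matrix over equidistant points on $\mathbb{T}$ is (numerically) the identity:

```python
import numpy as np

# Takenaka--Malmquist basis, orthonormal w.r.t. Lebesgue measure on the circle.
def tm_basis(z, alphas):
    """Evaluate phi_0..phi_n at points z; alphas = [alpha_1..alpha_n], |alpha_k| < 1."""
    a = np.concatenate(([0.0], np.asarray(alphas, dtype=complex)))  # alpha_0 = 0
    phis, blaschke = [], np.ones_like(z, dtype=complex)
    for n in range(len(a)):
        phis.append(np.sqrt(1 - abs(a[n]) ** 2) / (1 - np.conj(a[n]) * z) * blaschke)
        blaschke = blaschke * (z - a[n]) / (1 - np.conj(a[n]) * z)
    return np.column_stack(phis)

N = 2048
z = np.exp(2j * np.pi * np.arange(N) / N)
Phi = tm_basis(z, [0.5, -0.3 + 0.4j, 0.7])     # hypothetical poles
gram = Phi.conj().T @ Phi / N                  # discrete Lebesgue inner product
print(np.allclose(gram, np.eye(4)))            # True: orthonormal
```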
Nonlinear problem

  $K(\alpha) = \sum_j \Big| G(z_j) - \sum_{k=0}^n \lambda_k(\alpha)\, \varphi_k(z_j; \alpha) \Big|^2 w_j$

  $\min_{\alpha \in \mathbb{D}} K(\alpha) = \|(I - \Phi(\alpha)\Phi(\alpha)^H W)\, G\|_w^2$

Constraint: set $\alpha_j = r_j e^{i\omega_j}$ with $-1 \le r_j \le 1$, $j = 1, \ldots, n$. The $\alpha_j$ can be forced to be real or to appear in complex conjugate pairs.
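This is the variable projection idea: eliminate the linear coefficients inside $K(\alpha)$ so that a generic optimizer only searches over the poles. A sketch with synthetic data, real poles, a non-orthogonalized basis, and a standard routine (all names and values below are illustrative, not the routine used in the talk):

```python
import numpy as np
from scipy.optimize import minimize

# Variable projection sketch: K(alpha) solves the inner linear problem
# itself, so the outer optimizer only sees the pole parameters.
N = 60
z = np.exp(1j * np.pi * np.arange(N) / N)          # samples on the upper half circle
w = np.ones(N)

def basis(alpha):
    # simple rational basis 1/(1 - alpha_k/z); not yet orthogonalized
    return 1.0 / (1.0 - np.asarray(alpha)[None, :] / z[:, None])

alpha_true = np.array([0.6, -0.4])
G = basis(alpha_true) @ np.array([1.0, 2.0])        # synthetic measurements

def K(alpha):
    Phi = basis(alpha)
    lam, *_ = np.linalg.lstsq(Phi, G, rcond=None)   # inner linear solve
    r = G - Phi @ lam
    return np.sum(w * np.abs(r) ** 2)

res = minimize(K, x0=[0.3, -0.2], method='Nelder-Mead',
               options={'xatol': 1e-12, 'fatol': 1e-14, 'maxiter': 20000, 'maxfev': 20000})
print(res.fun < 1e-10, np.allclose(np.sort(res.x), np.sort(alpha_true), atol=1e-4))
```

The stability constraint $|\alpha_k| < 1$ would be added in practice, e.g. via a bounded or reparametrized search.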
Robot arm

100 data points, 200 Hz, condensed in the beginning, order [6/6].

[Figure: frequency response function, variance, and relative error versus frequency; pole/zero plot.]

poles: 0.0665 ± 0.0690i, 0.8770 ± 0.4765i, 0.9733 ± 0.2266i
zeros: 1.9350, 0.8649, 1.0164 ± 0.1056i, 0.6905 ± 0.1238i
Band pass filter

50 data points, 20 kHz, in the lower half, order [6/6].

[Figure: frequency response function, variance, and relative error versus frequency; pole/zero plot.]

poles: 0.0487 ± 0.4006i, 0.6070 ± 0.4944i, 0.7356 ± 0.3739i
zeros: 9.6766, 1.4742, 1.0955, 0.9032, −0.6507, 0.4337
Band pass filter (2)

50 data points, 20 kHz, in the lower half, order [22/22].

[Figure: frequency response function, variance, and relative error versus frequency; pole/zero plot.]
Electrical Machine

110 data points, 4 kHz, condensed in the beginning, order [5/5].

[Figure: frequency response function, variance, and relative error versus frequency; pole/zero plot. Table of real poles and zeros: 0.1875, 0.5461, 0.3886, 0.9239, 0.9609, 0.9656, 0.9966, 0.9969, 0.9970, 0.9990.]
Sensitivity, Electrical Machine

[Figure: relative error when the parameters are varied, perturbations of order 5·10⁻³; pole/zero plot.]
Polynomial method

  $\min \Big\| \frac{Y}{U} - \frac{B}{A} \Big\|^2 = \min \Big\| \frac{YA - UB}{AU} \Big\|^2$

The weight depends on the solution. Estimate $A$ so that we have an estimate for the weight $1/|AU|$. Set

  $w = \Big[ \frac{Y}{AU}\ \ \frac{-U}{AU} \Big], \qquad P = [A\ \ B]^T$

  $\min \sum_{j=1}^N P(z_j)^H W_j P(z_j), \qquad W_j = w(z_j)^H w(z_j)$

We need orthogonal block polynomials $\varphi_k$ and write $P = \sum_{k=0}^n \varphi_k \lambda_k$, $\varphi_k \in \Pi_k^{2 \times 2}$, $\lambda_k \in \mathbb{C}^{2 \times 1}$. Constraint: $P$ monic.
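One linearized step of this scheme can be sketched directly: with a previous estimate $A_0$ fixing the weight $1/|A_0 U|^2$, minimizing $\|YA - UB\|_w^2$ over monic $A$ and free $B$ is an ordinary linear least squares problem (the data below are synthetic, and the monic constraint moves the $a_0 = 1$ column to the right-hand side):

```python
import numpy as np

# One linearized step of the polynomial method with a fixed weight estimate.
N, n = 80, 2
z = np.exp(2j * np.pi * np.arange(N) / N)
U = 1.0 + 0.3 / z                             # hypothetical input spectrum
V = z[:, None] ** -np.arange(n + 1)           # columns z^{-k}
a_true = np.array([1.0, -0.9, 0.2])           # monic A (stable: zeros at 0.4, 0.5)
b_true = np.array([0.5, 0.1, -0.3])
Y = U * (V @ b_true) / (V @ a_true)           # exact frequency data

A0 = np.ones(N)                               # crude initial estimate for A
wgt = 1.0 / np.abs(A0 * U)                    # sqrt of the weight 1/|A0 U|^2

# unknowns: [a_1..a_n, b_0..b_n]; a_0 = 1 goes to the right-hand side
M = np.hstack([Y[:, None] * V[:, 1:], -U[:, None] * V])
rhs = -Y * V[:, 0]
x, *_ = np.linalg.lstsq(wgt[:, None] * M, wgt * rhs, rcond=None)
a_est = np.concatenate(([1.0], x[:n].real))
b_est = x[n:].real
print(np.allclose(a_est, a_true, atol=1e-8), np.allclose(b_est, b_true, atol=1e-8))
```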
Monic solution

  $\Phi = [\varphi_0(z)\ \ldots\ \varphi_n(z)] \in \mathbb{C}^{2N \times 2(n+1)}, \qquad W = \operatorname{diag}(W_j) \in \mathbb{C}^{2N \times 2N}$

  $\min \lambda^H \Phi^H W \Phi\, \lambda, \qquad \Phi\lambda$ monic.

For orthogonal polynomials, $\Phi^H W \Phi = I$ and $P = \varphi_n \lambda_n$.

Szegő-type recurrence:

  $\varphi_n \sigma_n = z \varphi_{n-1} + \varphi_{n-1}^* \gamma_n$
  $\varphi_n^* \hat\sigma_n = z \varphi_{n-1} \gamma_n^H + \varphi_{n-1}^* \sigma_n$

with $\gamma_n \in \mathbb{C}^{2 \times 2}$ (block Schur parameters).
Hessenberg matrix

The data $w_1, \ldots, w_N$ and $\operatorname{diag}(z_1, z_2, \ldots, z_N)$ are reduced by a unitary similarity transformation to a block upper Hessenberg matrix.
Efficient algorithm

Store the block upper Hessenberg matrix in factored form:

  $H = G_1 G_2 \cdots G_m, \qquad G_k = \begin{bmatrix} I_{2(k-1)} & & \\ & \begin{matrix} \gamma_k & \hat\sigma_k \\ \sigma_k & \hat\gamma_k \end{matrix} & \\ & & I \end{bmatrix}$

Chasing the elements to block upper Hessenberg form by similarity transformations percolating through the product requires operations on $3 \times 3$ or $5 \times 5$ blocks only: a fast algorithm.
Nonlinear problem

Once $P = [A\ B]^T$ has been found, use $A$ to modify the weight $w = 1/|AU|$ and reiterate. Or use a nonlinear method to improve the estimate $\sum_{k=0}^n \varphi_k \lambda_k$.

The weight depends on $A$, so there is a loss of orthogonality, but the condition number of the Jacobian increases only slightly.
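The reweighting loop can be sketched as follows (a Sanathanan–Koerner style iteration under the same synthetic setup as above; with exact, noise-free data every pass already recovers $A$, so the loop only illustrates that the weight update is stable — with noisy data the iterates would actually move):

```python
import numpy as np

# Reweighting iteration: each pass solves the linearized problem with the
# weight 1/|A_prev * U|^2 taken from the previous pass (synthetic data).
N, n = 80, 2
z = np.exp(2j * np.pi * np.arange(N) / N)
U = 1.0 + 0.3 / z
V = z[:, None] ** -np.arange(n + 1)
a_true, b_true = np.array([1.0, -0.9, 0.2]), np.array([0.5, 0.1, -0.3])
Y = U * (V @ b_true) / (V @ a_true)

a = np.array([1.0, 0.0, 0.0])                     # start from A = 1
for _ in range(10):
    wgt = 1.0 / np.abs((V @ a) * U)               # weight from the previous A
    M = np.hstack([Y[:, None] * V[:, 1:], -U[:, None] * V])
    x, *_ = np.linalg.lstsq(wgt[:, None] * M, wgt * (-Y * V[:, 0]), rcond=None)
    a = np.concatenate(([1.0], x[:n].real))       # keep A monic
print(np.allclose(a, a_true, atol=1e-8))          # True
```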
CD's radial servo system

Sampling frequency 9.7 kHz, order [5/5].

[Figure: magnitude of the FRF (dB) and the error versus relative frequency.]
Conclusion

Rational approximation in a weighted discrete least squares sense. Linear and nonlinear parameters: solve for the linear ones in terms of the nonlinear ones. Use an orthogonal basis to minimize the condition number: either a direct orthogonal rational basis, or a linearized vector polynomial as a combination of orthogonal block polynomials.
References ORF

[1] A. Bultheel, P. González-Vera, E. Hendriksen, and O. Njåstad. Orthogonal rational functions. Cambridge University Press, 1999.
[2] P. Van gucht and A. Bultheel. Using orthogonal rational functions for system identification. Report TW314, Dept. Computer Science, K.U.Leuven, September 2000.
[3] P. Van gucht and A. Bultheel. Matlab routines for system identification using orthogonal rational functions. http://www.cs.kuleuven.ac.be/~nalag/research/software/orf/orfidentification.html.
References OPV

[1] A. Bultheel, M. Van Barel, and Y. Rolain. Robust rational approximation for identification. 2001, submitted.
[2] M. Van Barel and A. Bultheel. Discrete linearized least squares approximation on the unit circle. J. Comput. Appl. Math. 50 (1994) 965-972.
[3] A. Bultheel and M. Van Barel. Vector orthogonal polynomials and least squares approximation. SIAM J. Matrix Anal. Appl. 16 (1995) 863-885.
[4] M. Van Barel and A. Bultheel. Orthogonal polynomial vectors and least squares approximation for a discrete inner product. Electron. Trans. Numer. Anal. 3 (1995) 1-23.