Selected Topics in Iterative Learning Control Research


1 Selected Topics in Iterative Learning Control Research. Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK

2 Contents Constrained ILC Point-to-Point ILC Robust ILC with Experimental Verification A Brief Visit to Nonlinear Model ILC

3 Introduction
Most currently known ILC algorithms assume that the system is unconstrained, e.g., that there are no limits on actuator demands. Such limits do, however, exist in some applications. In this section the ILC design problem with general convex input constraints is considered, and two algorithms with well defined convergence properties are developed to solve it, based on a successive projection method.

4 ILC Design with General Convex Input Constraints
Consider the discrete linear time-invariant system y = Gu + d, where G is a linear operator. The system is required to track a given signal r(t), defined on a finite duration [0, N], repeatedly under convex input constraints. In practice, the input constraint set Ω could, for example, be of the following forms:
Ω = {u ∈ H : |u(t)| ≤ M(t)}
Ω = {u ∈ H : λ(t) ≤ u(t) ≤ µ(t)}
Ω = {u ∈ H : 0 ≤ u(t)}
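Each of the example sets above is a box-type convex set, and its Euclidean projection acts sample-by-sample. A minimal numpy sketch (function names are illustrative, not from the slides):

```python
import numpy as np

# Euclidean projections onto the three example constraint sets; for these
# box-type sets the projection simply saturates each sample independently.
def project_amplitude(u, M):
    """Project onto {u : |u(t)| <= M(t)}."""
    return np.clip(u, -M, M)

def project_interval(u, lam, mu):
    """Project onto {u : lam(t) <= u(t) <= mu(t)}."""
    return np.clip(u, lam, mu)

def project_nonneg(u):
    """Project onto {u : 0 <= u(t)}."""
    return np.maximum(u, 0.0)

u = np.array([1.5, -0.4, -2.0])
print(project_amplitude(u, 1.0))   # first and last samples saturate
print(project_nonneg(u))           # negative samples map to zero
```

These cheap projections are what make projection-based constrained ILC practical for input constraints of this form.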

5 ILC Design with General Convex Input Constraints: Successive Projection
An iterative algorithm to find a point in the intersection of two closed, convex sets S_1, S_2.
[Figure: iterates k_0, k_1, k_2, k_3 projected alternately onto S_1 and S_2.]

6 ILC Design with General Convex Input Constraints: Successive Projection
An iterative algorithm to find a point in the intersection of two closed, convex sets S_1, S_2.
Iterative Learning Control: find a point (0, u_∞) in the intersection of
S_1 = {(e, u) : e = r − Gu}
S_2 = {(e, u) : e = 0}
[Figure: iterates k_0, k_1, k_2, k_3 projected alternately onto S_1 and S_2.]

7 Successive Projection
Theorem. Let K_1 ⊂ H and K_2 ⊂ H be two closed convex sets in a real Hilbert space H. Define K_j = K_1 (resp. K_2) when j is odd (resp. even). Then, given the initial guess k_0 ∈ H, the sequence {k_j}, j ≥ 0, defined by
k_j = arg min_{k ∈ K_j} ||k − k_{j−1}||, j ≥ 1
is uniquely defined for each k_0 ∈ H and gets arbitrarily close to K_1 ∩ K_2 when the intersection is nonempty; when K_1 ∩ K_2 is empty, the algorithm converges in the sense that
||k_{j+1} − k_j|| → d(K_1, K_2)
where d(K_1, K_2) is the minimum distance between the two sets K_1 and K_2.
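A small sketch of the theorem in R², under assumed sets of my own choosing (a line and the nonnegative orthant), showing the alternating projections approaching a point of the intersection:

```python
import numpy as np

# Successive projection between two closed convex sets in R^2: the line
# K1 = {x : a.x = b} and the nonnegative orthant K2 = {x : x >= 0}.
# Iterates projected alternately onto K1 and K2 approach K1 ∩ K2.
a, b = np.array([1.0, 2.0]), 2.0

def proj_K1(x):
    """Orthogonal projection onto the line a.x = b."""
    return x + ((b - a @ x) / (a @ a)) * a

def proj_K2(x):
    """Projection onto the nonnegative orthant."""
    return np.maximum(x, 0.0)

x = np.array([5.0, -3.0])          # initial guess k0
for _ in range(200):
    x = proj_K2(proj_K1(x))
print(x)                           # numerically in both K1 and K2
```

Here the intersection is nonempty, so the iterates converge to a point satisfying both constraints, as the theorem predicts.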

8 Interpretation of Constrained ILC Problem
The constrained ILC problem is to find the intersection of the following closed, convex sets in H = R^N × R^N:
S_1 = {(e, u) ∈ H : e = r − Gu}
S_2 = {(e, u) ∈ H : e = 0}
under the constraint
S_3 = {(e, u) ∈ H : u ∈ Ω}.
This problem is equivalent to finding the intersection of two closed, convex sets K_1 and K_2:
K_1 = S_1 ∩ S_3, K_2 = S_2

9 Interpretation of Constrained ILC Problem
The constrained ILC problem is to find the intersection of the following closed, convex sets in H = R^N × R^N:
S_1 = {(e, u) ∈ H : e = r − Gu}
S_2 = {(e, u) ∈ H : e = 0}
under the constraint
S_3 = {(e, u) ∈ H : u ∈ Ω}.
This problem is equivalent to finding the intersection of two closed, convex sets K_1 and K_2, with either
K_1 = S_1 ∩ S_3, K_2 = S_2 or K_1 = S_1, K_2 = S_2 ∩ S_3.
A computational algorithm can be obtained by successive projection.

10 Constrained ILC Algorithm 1
Taking K_1 = S_1 ∩ S_3 and K_2 = S_2 results in:
u_{k+1} = arg min_{u ∈ Ω} { ||e||²_Q + ||u − u_k||²_R }, e = r − Gu
[Figure: successive projection iterates r_0 (k_0), r_1 (k_1), r_2, r_3 converging to (0, u_∞) between the sets e = r − Gu and e = 0.]

11 Convergence of Constrained ILC Algorithm 1
Theorem. Constrained Algorithm 1 achieves monotonic convergence in tracking error norm, i.e., ||e_{k+1}|| ≤ ||e_k||, k = 0, 1, ..., to the point that is uniquely defined by
u_s = arg min_{u ∈ Ω} ||e||²
The algorithm has the desired convergence properties, but considerable computational effort is required to solve the constrained Quadratic Programming (QP) problem. Various solutions are available, e.g., iterative algorithms and receding horizon methods.
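One trial of Algorithm 1 requires the constrained QP above. A hedged sketch, with Q = I, R = ρI and a box constraint set assumed, solving the QP approximately by projected gradient descent (a stand-in for a proper QP solver; plant and reference values are illustrative):

```python
import numpy as np

def algorithm1_trial(G, r, u_prev, rho=0.1, lo=-1.0, hi=1.0, iters=2000):
    """One trial of Constrained Algorithm 1: approximately solve
       u_{k+1} = argmin_{u in Omega} ||r - G u||^2 + rho ||u - u_prev||^2
    by projected gradient descent over a box constraint set."""
    u = u_prev.copy()
    # step 1/L, L = Lipschitz constant of the gradient, keeps the cost monotone
    step = 1.0 / (2.0 * (np.linalg.norm(G, 2) ** 2 + rho))
    for _ in range(iters):
        grad = -2.0 * G.T @ (r - G @ u) + 2.0 * rho * (u - u_prev)
        u = np.clip(u - step * grad, lo, hi)     # gradient step, then project
    return u

# toy lifted plant and reference (illustrative values only)
N = 20
G = np.tril(np.ones((N, N))) * 0.2
r = np.ones(N)
u, errs = np.zeros(N), []
for k in range(10):
    errs.append(np.linalg.norm(r - G @ u))
    u = algorithm1_trial(G, r, u)
```

Because each trial starts the inner solver from u_k, the cost (and hence the error norm) cannot increase from trial to trial, mirroring the monotonicity in the theorem even when the QP is solved only approximately.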

12 Constrained ILC Algorithm 2
Taking K_1 = S_1 and K_2 = S_2 ∩ S_3 results in:
NOILC: ũ_{k+1} = arg min_u { ||e||²_Q + ||u − u_k||²_R }, e = r − Gu
Clipping: u_{k+1} = arg min_{u ∈ Ω} ||u − ũ_{k+1}||
[Figures: successive projection iterates converging to (0, u_∞); original input and clipping result u(t) against time.]

13 Convergence of Constrained ILC Algorithm 2
Theorem. Constrained Algorithm 2 achieves monotonic convergence with respect to the performance index
J_k = ||Ee_k||²_Q + ||Fe_k||²_R
to the point that is uniquely defined by
u_s = arg min_{u ∈ Ω} { ||Ee||²_Q + ||Fe||²_R }
where
E = I − G(G^T QG + R)^{−1} G^T Q, F = (G^T QG + R)^{−1} G^T Q
Compared with the best solution in terms of tracking error norm, u_s = arg min_{u ∈ Ω} ||e||², this algorithm provides an approximate solution using simple computations!
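Algorithm 2 is cheap because the NOILC step has a closed form and the clipping step is a simple projection. A sketch with Q = I, R = ρI and a box constraint set assumed (toy plant, illustrative values):

```python
import numpy as np

def algorithm2_trial(G, r, u_prev, rho=0.1, lo=-1.0, hi=1.0):
    """One trial of Constrained Algorithm 2: exact unconstrained NOILC step
    (closed form via the normal equations), followed by "clipping", i.e.
    the Euclidean projection of the NOILC input onto the box constraint set."""
    n = len(u_prev)
    A = G.T @ G + rho * np.eye(n)                 # (G^T Q G + R) with Q = I
    u_tilde = np.linalg.solve(A, G.T @ r + rho * u_prev)
    return np.clip(u_tilde, lo, hi)               # projection onto Omega

N = 20
G = np.tril(np.ones((N, N))) * 0.2                # toy lifted plant
r = np.ones(N)
u = np.zeros(N)
for k in range(15):
    u = algorithm2_trial(G, r, u)
print(np.max(np.abs(u)) <= 1.0)                   # prints True: always feasible
```

Each trial costs one linear solve plus a clip, in contrast with the full QP of Algorithm 1; the price is convergence in the index J_k rather than in the raw error norm.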

14 Numerical Example 1
Example. Consider the second-order plant model
G(s) = (s − 4)/(s² + 5s + 6)
which is sampled using a zero-order hold and a sampling time of 0.1 sec. The trial duration is 20 sec and the actuator constraint is |u(t)| ≤ 1 for all t.
[Figure: input signal u_ref(t) and reference signal r(t) against time.]
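A numpy-only sketch of the discretisation step: zero-order-hold sampling of the example plant at 0.1 s and assembly of the lifted (lower-triangular Toeplitz) matrix the algorithms operate on. The controllable-canonical realisation and the Taylor-series matrix exponential are my assumptions, not from the slides:

```python
import numpy as np

# ZOH discretisation of G(s) = (s - 4)/(s^2 + 5s + 6) at Ts = 0.1 s,
# using a controllable-canonical state-space realisation.
A = np.array([[-5.0, -6.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, -4.0]])

def expm_taylor(M, terms=40):
    """Small Taylor-series matrix exponential (adequate for ||M|| this size)."""
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

Ts = 0.1
aug = np.zeros((3, 3))                 # expm of [[A, B], [0, 0]]*Ts gives Ad, Bd
aug[:2, :2], aug[:2, 2:] = A, B
Phi = expm_taylor(aug * Ts)
Ad, Bd = Phi[:2, :2], Phi[:2, 2:]

N = 200                                # 20 s trial at 0.1 s sampling
h = np.empty(N)                        # Markov parameters h[j] = C Ad^j Bd
x = Bd
for j in range(N):
    h[j] = (C @ x).item()
    x = Ad @ x
G = np.zeros((N, N))                   # lifted map from u(0..N-1) to y(1..N)
for i in range(N):
    G[i, : i + 1] = h[i::-1]
```

The resulting G is lower triangular with constant diagonals, the standard lifted description used throughout these slides.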

15 [Figure: ||e_k|| (log scale) against trial number for NOILC, Algorithm 1 and Algorithm 2; comparison of convergence for Example 1.]

16 Numerical Example 2
Example. Consider the same plant model as Example 1,
G(s) = (s − 4)/(s² + 5s + 6)
but with actuator constraint |u(t)| ≤ 0.8 for all t.
[Figure: input signal u_ref(t) and reference signal r(t) against time.]

17 [Figure: ||e_k|| (log scale) against trial number for Algorithm 1, Algorithm 2 and the constrained optimal solution; comparison of convergence for Example 2.]

18 Conclusions
Two algorithms for constrained ILC have been developed that can be computed using successive projection and that have well defined convergence properties.
Both algorithms solve the constrained ILC problem, but their computational complexity and convergence properties differ.
The ideas and results apply more widely and, in particular, to continuous-time systems with no change in the abstract form of the algorithms or results.

19 References
D. H. Owens and R. P. Jones. Iterative solution of constrained differential/algebraic systems. International Journal of Control, 27(6).
B. Chu and D. H. Owens. Accelerated norm-optimal iterative learning control algorithms using successive projection. International Journal of Control, 82(8), 2009.

20 Point-to-Point ILC

21 Background
In this presentation we concentrate on point-to-point movement operations. This is motivated by:
1. Industrial robotic movement tasks
2. Recent successful use of ILC in stroke rehabilitation, where electrical stimulation must extend the patient's arm, but the exact path taken is not significant, only the final position reached.
The ILC framework allows accuracy to be gained by learning from experience over previous trials.
There are few ILC algorithms in which the repeated operation consists of a more general objective than the tracking of a static pre-defined reference. Those which exist are usually application-specific, such as gas arc welding.

22 ILC
... or they:
1. do not involve updating the reference, which is static throughout, and
2. involve the derivation of explicit algorithms rather than a technique which can be applied to an existing ILC scheme.
Point-to-point motion control usually involves generation of a reference, followed by design of a controller to track this signal.

23 Motivation: Stroke Rehabilitation
The patient's arm must move through predetermined points through application of electrical stimulation.
The points are prescribed by the physiotherapist; the inter-point trajectory is unimportant.
Complex movements can be built up through selection of points.

24 Motivation: Stroke Rehabilitation
In this context high value is placed on:
a control structure which can be used with both standard tracking trajectories and point-to-point tasks
use of existing ILC laws which have already been developed and tested with patients
adding robustness to these laws

25 3D Rehabilitation Robot

26 ILC
A SISO LTI system is considered, given on trial k by
x_k(t + 1) = Ax_k(t) + Bu_k(t)
y_k(t) = Cx_k(t)
The tracking task is only defined at times 0 and T, when the output must equal 0 and r_N respectively for all k, where T denotes the task length. To achieve the tracking task, we consider an ILC algorithm of the form
u_{k+1} = u_k + Ke_k (1)
where K is a linear operator which may be non-causal, and e_k = r − y_k where r is the reference. At the end of the k-th trial the reference is allowed to change, and is replaced with r_{k+1}, leading to
e_k = r_{k+1} − y_k (2)
to produce
u_{k+1} = u_k + Ke_k (3)
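For intuition about the role of the operator K: in the lifted description y = Gu with G lower triangular and CB ≠ 0, the inverse-model choice K = G^{-1} gives e_{k+1} = (I − GK)e_k = 0 after a single trial. A toy sketch (plant and reference are illustrative, and the reference is held static here):

```python
import numpy as np

# Toy lifted plant: lower-triangular, invertible (nonzero diagonal = "CB").
N = 30
G = np.tril(np.ones((N, N))) * 0.1
r = np.sin(np.linspace(0.0, np.pi, N))   # static reference for this sketch
u = np.zeros(N)                          # first-trial input
e0 = r - G @ u
u = u + np.linalg.solve(G, e0)           # u_{k+1} = u_k + K e_k, K = G^{-1}
e1 = r - G @ u
print(np.linalg.norm(e1))                # ~0: perfect tracking after one trial
```

Exact inversion is rarely usable in practice (model error, non-minimum phase behaviour), which is why the slides work with more general, possibly non-causal K and study convergence conditions instead.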

27 ILC
The time domain relationships which then arise are
y_{k+1} = y_k + GKe_k
r_{k+1} − y_{k+1} = r_{k+1} − y_k − GKe_k
e_{k+1} = (I − GK)(r_{k+1} − y_k)
where G is the plant operator and I is the identity operator. The monotonic convergence criterion is
||e_{k+1}||_2 = ||(I − GK)(r_{k+1} − y_k)||_2 < ||e_k||_2
A sufficient condition for monotonic convergence is therefore
||I − GK||_2 < 1 (4)
together with
||r_{k+1} − y_k||_2 ≤ ||e_k||_2 = ||r_k − y_k||_2, which equates to ||r_{k+1} − y_k||_2 / ||r_k − y_k||_2 ≤ 1 (5)

28 ILC
The first condition, (4), is the standard monotonic convergence criterion for a static reference.
The second condition is
||r_{k+1} − y_k||_2 = ||Δr_k + e_k||_2 ≤ ||e_k||_2 (6)
where Δr_k = r_{k+1} − r_k.
Δr_k must therefore be chosen from a suitable set of functions with end-points equal to zero for k ≥ 2. We consider the set of harmonic sinewaves, since this leads to simplification in how Δr_k is chosen.
This choice is equivalent to taking the DFT of Δr_k, with components ΔR_{k,i}, and requiring that they are all real, i.e. Im{ΔR_{k,i}} = 0 for i = 0, 1, ..., N − 1.

29 Approach formulation
Taking the norm of the updated error and applying Parseval's theorem,
||Δr_k + e_k||² = ||Δr_k||² + ||e_k||² + 2Re⟨Δr_k, e_k⟩
= (1/N) Σ_{i=0}^{N−1} |E_{k,i}|² + (1/N) Σ_{i=0}^{N−1} ΔR_{k,i}(ΔR_{k,i} + 2Re{E_{k,i}})
The derivative with respect to ΔR_{k,i} is
∂||Δr_k + e_k||² / ∂ΔR_{k,i} = (2/N) ΔR_{k,i} + (2/N) Re{E_{k,i}}
which is minimized using
ΔR_{k,i} = −αRe{E_{k,i}} for i = 0, 1, ..., N − 1 (7)
with α = 1. However, we will consider α ∈ (0, 1].

30 Approach formulation cont'd
The corresponding updated cost on trial k is given by
||Δr_k + e_k||² = (1/N) Σ_{i=0}^{N−1} |E_{k,i}|² − α(2 − α)(1/N) Σ_{i=0}^{N−1} (Re{E_{k,i}})²
Note that the reference is being updated using
r_{k+1} = r_k + Δr_k (8)
which we chose to speed up learning of the final trajectory. Since 0 < α(2 − α) ≤ 1, it is always possible to ensure ||Δr_k + e_k||² ≤ ||e_k||², because
||Δr_k + e_k||² − ||e_k||² = −α(2 − α)(1/N) Σ_{i=0}^{N−1} (Re{E_{k,i}})² < 0 iff ∃ i s.t. Re{E_{k,i}} ≠ 0 (9)
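The updates (7)-(8) and the inequality (9) are easy to verify numerically. A sketch using numpy's FFT (the error signal is a stand-in; the minus-sign convention follows (7) as reconstructed here):

```python
import numpy as np

# Reference increment of (7)-(8): Delta R_{k,i} = -alpha Re{E_{k,i}} in the
# DFT domain, transformed back to the time domain.
def reference_increment(e, alpha):
    E = np.fft.fft(e)
    dR = -alpha * E.real            # purely real DFT components, as required
    # Re{E} is conjugate-symmetric for real e, so the inverse transform is real
    return np.fft.ifft(dR).real

rng = np.random.default_rng(1)
e = rng.standard_normal(128)        # stand-in error signal
for alpha in (0.25, 0.5, 1.0):
    dr = reference_increment(e, alpha)
    # inequality (9): the update shrinks the distance between r_{k+1} and y_k
    assert np.linalg.norm(dr + e) < np.linalg.norm(e)
```

At α = 1 the increment cancels the real part of every error component exactly, leaving only the imaginary parts; smaller α trades convergence speed for robustness, as discussed on the following slides.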

31 Approach formulation cont'd
In the frequency domain E_{k+1,i} = (1 − G_iK_i)(ΔR_{k,i} + E_{k,i}), so the error norm satisfies
||e_{k+1}||² ≤ (1/N) Σ_{i=0}^{N−1} |ΔR_{k,i} + E_{k,i}|² |1 − G_iK_i|² (10)
The norm error ratio is then
||e_{k+1}||² / ||e_k||² ≤ [ (1/N) Σ_{i=0}^{N−1} ( |E_{k,i}|² − α(2 − α)(Re{E_{k,i}})² ) |1 − G_iK_i|² ] / ||e_k||²
= [ (1/N) Σ_{i=0}^{N−1} ( 1 − α(2 − α)cos²(∠E_{k,i}) ) |1 − G_iK_i|² |E_{k,i}|² ] / ||e_k||² (11)

32 Approach formulation cont'd
We have introduced the multiplier 1 − α(2 − α)cos²(∠E_{k,i}) ≤ 1 on the i-th frequency component.
This multiplier relaxes the monotonic convergence criterion given by (4). From (11) a sufficient condition to produce trial-to-trial error reduction is now
( 1 − α(2 − α)cos²(∠E_{k,i}) ) |1 − G_iK_i|² < 1 (12)
for each frequency i, so that
|1 − G_iK_i| < 1 / √( 1 − α(2 − α)cos²(∠E_{k,i}) ) (13)
Increasing α reduces the denominator and so provides additional robustness with respect to the constantly modified reference.

33 System Convergence Properties
Substitute ΔR_{k,i} = −αRe{E_{k,i}} into the frequency-transformed ILC algorithm (3) (with E_{k,i} = R_{k+1,i} − Y_{k,i} = R_{k+1,i} − G_iU_{k,i}). The resulting system is
[Ȳ_{k+1,i}; Ŷ_{k+1,i}; R̂_{k+1,i}] = Ξ(i) [Ȳ_{k,i}; Ŷ_{k,i}; R̂_{k,i}] + [Re{G_iK_i}R̄_{0,i}; −Im{G_iK_i}R̄_{0,i}; 0] (14)
where ˆ and ¯ denote Re{·} and Im{·} respectively. Ξ(i) is given by
Ξ(i) = [ 1 − Re{G_iK_i}    −(1 − α)Im{G_iK_i}    (1 − α)Im{G_iK_i}
         Im{G_iK_i}    1 − (1 − α)Re{G_iK_i}    (1 − α)Re{G_iK_i}
         0    α    1 − α ] (15)
and R̄_{k,i} = R̄_{0,i}.

34 System convergence and robustness
The eigenvalues of Ξ(i) are λ_1 = 1 and
λ_{2,3} = (1 − α/2)(1 − Re{G_iK_i}) ± √( (α/2)²(1 − Re{G_iK_i})² − (1 − α)(Im{G_iK_i})² ) (16)
[Figure: movement of the eigenvalues (16) due to the learning factor α.]

35 Proof of convergence
Theorem 1. The system (14) converges asymptotically to a constant vector as k → ∞.
Proof. Let D = diag{λ_1, λ_2, λ_3}, and let V contain the corresponding eigenvectors such that V D V^{−1} is the diagonalization of Ξ(i). It is straightforward to show that
V^{−1} [Re{G_iK_i}R̄_{0,i}; −Im{G_iK_i}R̄_{0,i}; 0] = [0; v_1; v_2] (17)
where v_1, v_2 may be complex (and if so occur as a conjugate pair). Then (14) can be rewritten in the transformed coordinates z_{k,i} = V^{−1}[Ȳ_{k,i}; Ŷ_{k,i}; R̂_{k,i}] as
z_{k+1,i} = D z_{k,i} + [0; v_1; v_2] (18)

36 Proof of convergence
As k → ∞, the transformed state V^{−1}[Ȳ_{k,i}; Ŷ_{k,i}; R̂_{k,i}] converges asymptotically to
[first component of the initial transformed state, v_1/(1 − λ_2), v_2/(1 − λ_3)]^T (19)
(the first component, associated with λ_1 = 1 and zero forcing, retains its initial value), and it follows that (14) converges asymptotically to a limit obtained by mapping (19) back through V: (20)
Here the converged values are given by
Ȳ_{∞,i} = R̄_{0,i} (21)
Ŷ_{∞,i} = ξ / ( α + (1 − α)Re{G_iK_i} sec²(∠G_iK_i) ) (22)
R̂_{∞,i} = Ŷ_{∞,i} (23)
ξ = α( Ŷ_{0,i} + (Ȳ_{0,i} − R̄_{0,i}) tan(∠G_iK_i) ) + (1 − α) R̂_{0,i} Re{G_iK_i} sec²(∠G_iK_i) (24)

37 Error System
Since the plant output and reference both converge asymptotically, so does the error. Let the error now be redefined with respect to the final converged reference value, so that Ê_{k,i} = R̂_{∞,i} − Ŷ_{k,i} and Ē_{k,i} = R̄_{∞,i} − Ȳ_{k,i}. This gives Ê_{∞,i} = R̂_{∞,i} − Ŷ_{∞,i} = 0 and Ē_{∞,i} = R̄_{∞,i} − Ȳ_{∞,i} = 0. The resulting dynamic error system is
[Ē_{k+1,i}; Ê_{k+1,i}; R̂_{k+1,i}] = Ξ(i) [Ē_{k,i}; Ê_{k,i}; R̂_{k,i}] (25)
with the eigenvalues λ_1 = 1 and λ_{2,3} given by (16) governing the asymptotic convergence of the error components to zero, and of the reference to a fixed value.

38 System Robustness Properties
The system robustness can be examined by determining the range of plant uncertainty which may exist such that (14) remains asymptotically stable. Assume the frequency-wise multiplicative plant uncertainty
G_i = G_{0,i}M_i, i = 0, 1, ..., N − 1 (26)
where G_{0,i} is the nominal plant. Insert this in the expression for the eigenvalues (16), and find the bound on the region of uncertainty which leads to their lying within the unit circle by setting
(1 − α/2)( 1 − Re{G_{0,i}K_i}M̂_i + Im{G_{0,i}K_i}M̄_i ) ± ς = ±1 (27)
with
ς = √[ (α/2)²( 1 − Re{G_{0,i}K_i}M̂_i + Im{G_{0,i}K_i}M̄_i )² − (1 − α)( Re{G_{0,i}K_i}M̄_i + Im{G_{0,i}K_i}M̂_i )² ] (28)
This prescribes a region of C in which the uncertainty must lie.

39 System Robustness Properties
Hence the solution is the intersection of three sub-regions,
( M̂_i − Re{G_{0,i}K_i}/|G_{0,i}K_i|² )² + ( M̄_i + Im{G_{0,i}K_i}/|G_{0,i}K_i|² )² = 1 / ( (1 − α)|G_{0,i}K_i|² ) (29)
( M̂_i + αRe{G_{0,i}K_i}/(2(1 − α)|G_{0,i}K_i|²) )² + ( M̄_i − αIm{G_{0,i}K_i}/(2(1 − α)|G_{0,i}K_i|²) )² = α² / ( 4(1 − α)²|G_{0,i}K_i|² ) (30)
( M̂_i + (3α − 4)Re{G_{0,i}K_i}/(2(1 − α)|G_{0,i}K_i|²) )² + ( M̄_i − (3α − 4)Im{G_{0,i}K_i}/(2(1 − α)|G_{0,i}K_i|²) )² = α² / ( 4(1 − α)²|G_{0,i}K_i|² ) (31)
and each represents a circle.

40 System Robustness Properties
The solution defines the region in which the plant uncertainty is bounded for any given 0 ≤ α ≤ 1. The transition between the cases α = 0 and α = 1 is shown for the values Re{G_{0,i}K_i} = 0.6, Im{G_{0,i}K_i} = 0.2, and the uncertainty region −30 ≤ M̂_i ≤ 30, −20 ≤ M̄_i ≤ 20.
[Figure: uncertainty region in which the eigenvalues λ_{2,3} given by (16) lie in the unit circle, with Re{G_iK_i} = 0.6, Im{G_iK_i} = 0.2, for a) α = 0.3, b) α = 0.9, c) α = 0.97 and d) α = 0.99.]

41 Multiple point-to-point movements
Let the reference be specified at a fixed number, M, of sample instants I_1, I_2, ..., I_M with I_M = N. Let the prescribed values of the output at these instants be J_1, J_2, ..., J_M, with J_M = r_N. It is proposed to choose the initial input, u, to satisfy the optimisation
minimise ||u||²_2 subject to ΦGu = Γ (32)
where the M × N matrix Φ is defined by
Φ_{i,p} = 1 if i = 1, 2, ..., M and p = I_i; Φ_{i,p} = 0 otherwise (33)
and
Γ = [J_1 J_2 ... J_M]^T (34)
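For a consistent underdetermined system, the minimum-norm solution of (32) is given by the Moore-Penrose pseudoinverse of ΦG. A sketch with toy sizes and target values of my own choosing (not those of the slides):

```python
import numpy as np

# Minimum-norm initial input for the point-to-point task (32):
# minimise ||u||^2 subject to Phi G u = Gamma.
N = 60
G = np.tril(np.ones((N, N))) * 0.1        # toy lifted plant model
samples = [14, 39, 59]                    # I_1, I_2, I_3 (0-based here)
targets = np.array([7.0, 5.0, 10.0])      # J_1, J_2, J_3
Phi = np.zeros((3, N))
for row, p in enumerate(samples):
    Phi[row, p] = 1.0                     # selector matrix, definition (33)
u = np.linalg.pinv(Phi @ G) @ targets     # least-norm solution of (32)
y = G @ u
print(y[samples])                         # output hits J_p at each sample I_p
```

Because ΦG has full row rank here, any target vector Γ is attainable, and `pinv` returns exactly the least-norm input among all feasible ones.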

42 Multiple point-to-point example
Assume T = 10 and I_1 = 300, I_2 = 650, and I_3 = T/T_s = 1000, with J_1 = 7, J_2 = 5 and J_3 = 10. Since I_3 > I_2 > I_1, Φ has a single unit entry per row, at columns 300, 650 and 1000 respectively (35), so that the term ΦG appearing in (32) is given by
ΦG = [ CA^{299}B ... CAB CB 0 ... 0
       CA^{649}B ... CAB CB 0 ... 0
       CA^{999}B ... CAB CB ] (36)
and Γ = [7 5 10]^T (37)
Having performed the optimisation, the corresponding initial reference r_1 = Gu is shown in the results section.

43 Multiple point-to-point tracking cont'd
Since the reference satisfies the desired point-to-point movement constraints, it can be updated by adding segments which are zero at samples I_1, I_2, ..., I_M. Therefore the single point-to-point update approach can be applied to each inter-point segment of the reference. The update for the p-th interval is accordingly given by
r_{p,k+1} = r_{p,k} + Δr_{p,k}, p = 1, 2, ..., M (38)
where Δr_{p,k} = [Δr_k(I_{p−1} + 1) ... Δr_k(I_p)]^T (39), with I_0 = 1. Likewise, the other signals are given by
y_{p,k} = [y_k(I_{p−1} + 1) ... y_k(I_p)]^T (40)
e_{p,k} = [e_k(I_{p−1} + 1) ... e_k(I_p)]^T (41)
r_{p,k} = [r_k(I_{p−1} + 1) ... r_k(I_p)]^T (42)
Since the p-th segment has I_p − I_{p−1} elements, the reference update (7) is replaced by
ΔR_{p,k,i} = −αÊ_{p,k,i}, i = 0, 1, ..., I_p − I_{p−1} − 1 (43)

44 Multiple point-to-point tracking cont'd
Theorem 2. Application of the ILC control law (3), in combination with (2), (8), and the update law (43), for an M-segment point-to-point task, yields a system whose eigenvalues encompass those of the M individual segments.
Proof. The i-th frequency component of the p-th segment, E_{p,k,i}, is associated with the system (25), and its eigenvalues govern the convergence of the error in that segment. To find the contribution of each of the M segments to the overall error e_k, the i-th frequency component of e_k is written as
E_{k,i} = Σ_{n=0}^{N−1} e_k(n) e^{−j2πni/N}
= Σ_{p=1}^{M} Σ_{n=I_{p−1}}^{I_p−1} e_{p,k}(n − I_{p−1}) e^{−j2πni/N}
= Σ_{p=1}^{M} e^{−j2πI_{p−1}i/N} Σ_{n=0}^{I_p−I_{p−1}−1} e_{p,k}(n) e^{−j2πni/N}
= Σ_{p=1}^{M} e^{−j2πI_{p−1}i/N} E_{p,k,i(I_p−I_{p−1})/N} (44)

45 Multiple point-to-point tracking continued
Here the DFT components of the p-th segment are given by
E_{p,k,i} = Σ_{n=0}^{I_p−I_{p−1}−1} e_{p,k}(n) e^{−j2πni/(I_p−I_{p−1})} (46)
so each frequency component of e_k is a sum of frequencies from the M segments. The i-th component of e_{k+1} is built from
[ e^{−j2πI_0 i/N} E_{1,k+1,i(I_1−I_0)/N}; e^{−j2πI_1 i/N} E_{2,k+1,i(I_2−I_1)/N}; ...; e^{−j2πI_{M−1} i/N} E_{M,k+1,i(I_M−I_{M−1})/N} ] (47)
Each segment's component is governed by (25):
[Ē_{p,k+1,l}; Ê_{p,k+1,l}; R̂_{p,k+1,l}] = Ξ(l) [Ē_{p,k,l}; Ê_{p,k,l}; R̂_{p,k,l}], E_{p,k+1,l} = Ê_{p,k+1,l} + jĒ_{p,k+1,l} (48)
with l = i(I_p − I_{p−1})/N.

46 Multiple point-to-point tracking cont'd
Therefore (47) becomes
E_{k+1,i} = [ e^{−j2πI_0 i/N}, e^{−j2πI_1 i/N}, ..., e^{−j2πI_{M−1} i/N} ] Φ X_{k+1}, X_{k+1} = Λ(i) X_k (49)
with X_k ∈ R^{3M} containing all the error and reference components, Φ block-diagonal with each block picking the error component E_p = Ê_p + jĒ_p out of the corresponding segment state, and
Λ(i) = diag{ Ξ(i(I_1 − I_0)/N), Ξ(i(I_2 − I_1)/N), ..., Ξ(i(I_M − I_{M−1})/N) } (50)
As Λ(i) is block diagonal, the eigenvalues of this system are
λ(Λ(i)) = { λ(Ξ(i(I_1 − I_0)/N)), λ(Ξ(i(I_2 − I_1)/N)), ..., λ(Ξ(i(I_M − I_{M−1})/N)) } (51)
where λ(W) represents the set of all eigenvalues of W.

47
Hence the multiple point-to-point system inherits the eigenvalues and associated asymptotic convergence and robustness properties of the single point-to-point systems which govern the behavior of each segment. For the case α = 1 the bound on the error e_k is
||Δr_k + e_k||² = Σ_{p=1}^{M} ||Δr_{p,k} + e_{p,k}||²
= Σ_{p=1}^{M} (1/(I_p − I_{p−1})) Σ_{i=0}^{I_p−I_{p−1}−1} (Im{E_{p,k,i}})²
≤ Σ_{p=1}^{M} (1/(I_p − I_{p−1})) Σ_{i=0}^{I_p−I_{p−1}−1} |E_{p,k,i}|²
= ||e_k||², with strict inequality iff ∃ i, p s.t. Ê_{p,k,i} ≠ 0 (52)

48 Experimental results
From the non-minimum phase test facility, with transfer-function
G(s) = (4 − s)/(s⁴ + ...s³ + ...s² + ...s) (53)
The adjoint ILC algorithm is selected as a well known member of the class considered, and is given in discrete form by
u_{k+1}(z) = u_k(z) + βG*(z)e_k(z) (54)
where G*(z) is the adjoint of the plant model used.

49 Experimental results
An attractive feature of the adjoint law is that, with a sufficiently small positive scalar multiplier β, it is guaranteed to satisfy the condition for monotonic convergence over all frequencies, and hence ensure a satisfactory transient response. The static-reference monotonic convergence criterion corresponding to (4) in this case is
||I − βGG*||_2 < 1 (55)
The adjoint ILC algorithm is one of the methods used in the stroke rehabilitation programme described in the introduction. This algorithm has many interesting extensions...!
In the following experimental tests, a sampling period of T_s = 0.01 sec has been used.
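Condition (55) is easy to check numerically in lifted form, where the adjoint becomes the transpose: any 0 < β < 2/σ_max(G)² satisfies it. A sketch on a toy lifted plant (illustrative, not the test-facility model):

```python
import numpy as np

# Check the monotonic-convergence condition (55) for the lifted adjoint law.
N = 40
G = np.tril(np.ones((N, N))) * 0.1        # toy lifted plant model
smax = np.linalg.norm(G, 2)               # largest singular value of G
beta = 1.0 / smax ** 2                    # comfortably inside 2 / smax^2
M = np.eye(N) - beta * G @ G.T            # lifted version of I - beta G G*
print(np.linalg.norm(M, 2) < 1.0)         # prints True: error norm decreases
```

Since GG^T is symmetric positive definite, the eigenvalues of I − βGG^T are 1 − βσ_i², all strictly inside (−1, 1) for this β, which is exactly what makes the transient response monotone.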

50 Experimental results: reference and output evolution
We start with a single point-to-point trajectory (M = 1, T = 5.12 s, r_N = 12 rad, I_1 = 512, J_1 = r_N).
Figure 1: Objective-driven ILC using the adjoint algorithm with β = 0.7, α = 1, showing a) reference evolution, b) plant output and c) error norm ||e_k||.

51 Experimental results: error norm and total point-to-point error
Figure 2: Point-to-point error for objective-driven ILC using the adjoint algorithm with β = 0.7, α = 1.

52 Experimental results: reference and output evolution
Moving on to the same multiple point-to-point trajectory as previously defined:
Figure 3: Multiple objective-driven ILC using the adjoint algorithm with β = 0.6 and α = 1, showing a) reference evolution, and b) plant output.

53 Experimental results: error norm and total point-to-point error
The total error at the three points comprising the point-to-point movement is given by Σ_{p=1}^{3} |J_p − y_k(I_p)|.
Figure 4: Multiple objective-driven ILC using the adjoint algorithm with β = 0.6 for a range of α, showing a) error norm ||e_k||, and b) total point-to-point error.

54 Experimental results: summary
Improvement in tracking of the points (I_p, J_p) is evident. Use of the proposed method results in faster convergence to the desired points, as well as increased accuracy in attaining them, compared with using a static reference. The results also indicate that the relaxation factor provides additional robustness, since less fluctuation is present in the error when higher values of α are used. There is also no significant reduction in convergence speed for α = 0.5 and α = 0.75 compared with α = 1.

55 Conclusions
A novel method of applying ILC to point-to-point movements has been developed that is applicable to a broad class of linear ILC algorithms. The approach is based upon updating the reference between successive trials, and its convergence and robustness properties have been examined both theoretically and experimentally.
The technique's ability to vary the reference from trial to trial distinguishes it from traditional point-to-point motion strategies such as input shaping. This also separates it from almost all previous applications of ILC to point-to-point motion control, which assume a static reference. Moreover, whilst providing the ability to learn from experience gained over previous trials of the task, the proposed scheme has the benefit of being suitable for application to a broad class of existing ILC laws.

56 Future work
The approach will be applied to current research in the area of stroke rehabilitation, to assist patients in the performance of reaching tasks with their impaired arms using electrical stimulation.
The technique will be extended to allow variation in the time-points at which the movement attains the prescribed output values.
An extension of the approach to nonlinear systems will then be formulated, with the addition of other constraints governing the manner in which the task is performed.

57 References
C. T. Freeman, Z. Cai, E. Rogers, P. L. Lewin. Iterative learning control for multiple point-to-point tracking application. IEEE Transactions on Control Systems Technology, 19(3).
C. T. Freeman. Constrained point-to-point iterative learning control with experimental verification. Control Engineering Practice, 20(5), 2012.

58 Robust ILC with Experimental Verification

59 Adjoint Robustness
SISO plant model in lifted form (CB ≠ 0)
y_k = G_e u_k (56)
where
G_e = [ CB            0      ...  0
        CAB           CB     ...  0
        ...
        CA^{N−1}B  CA^{N−2}B ...  CB ] (57)

60 Analysis
Standard steepest-descent algorithm cost function
J(u_{k+1}) = ||e_{k+1}||², e_{k+1} = r − G_e u_{k+1} (58)
Use
u_{k+1} = u_k + ε_{k+1} δ_{k+1} (59)
where ε_{k+1} is a scaling factor and δ_{k+1} is the vector that determines the direction of the update. The resulting tracking error satisfies
J(u_{k+1}) = J(u_k + ε_{k+1}δ_{k+1}) = ||e_{k+1}||² = ||e_k||² − 2ε_{k+1} δ_{k+1}^T G_e^T e_k + ε_{k+1}² δ_{k+1}^T G_e^T G_e δ_{k+1} (60)
or
||e_{k+1}||² − ||e_k||² = −2ε_{k+1} δ_{k+1}^T G_e^T e_k + ε_{k+1}² δ_{k+1}^T G_e^T G_e δ_{k+1} (61)

61 Analysis
For monotonic convergence, the right-hand side of (61) must be negative. One way to achieve this is to take δ_{k+1} = G_e^T e_k, resulting in the control law
u_{k+1} = u_k + ε_{k+1} G_e^T e_k (62)
and the difference ||e_{k+1}||² − ||e_k||² becomes
||e_{k+1}||² − ||e_k||² = −2ε_{k+1} ||G_e^T e_k||² + ε_{k+1}² ||G_e G_e^T e_k||² (63)
Because the negative term is of O(ε) and the positive term is of O(ε²), by using a sufficiently small positive ε_{k+1} the right-hand side of (63) can be made negative (note that G_e was assumed to be invertible), resulting in monotonic convergence. In order to automate the selection process for ε_{k+1}, it was suggested that ε_{k+1} be taken as the solution of the following optimization problem.

62 Analysis
The optimal ε_{k+1} is given by
ε_{k+1} = arg min_{ε ∈ R} J(u_k + εG_e^T e_k) (64)
hence
ε_{k+1} = ||G_e^T e_k||² / ||G_e G_e^T e_k||² (65)
and
||e_{k+1}||² − ||e_k||² = −||G_e^T e_k||⁴ / ||G_e G_e^T e_k||² (66)
where the right-hand side is negative, implying monotonic convergence to zero tracking error. If the original plant is positive-definite in the sense that G_e + G_e^T is a positive-definite matrix, the simple control law
u_{k+1}(t) = u_k(t) + γe_k(t + 1) (67)
will result in convergence if γ is selected to be sufficiently small. In order to automate the selection of γ a parameter-optimal ILC algorithm can be used.
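The steepest-descent law (62) with the optimal step (65) can be sketched in a few lines; by (66) the error norm decreases strictly on every trial while e_k ≠ 0 (toy plant and reference, illustrative values):

```python
import numpy as np

def sd_trial(Ge, r, u):
    """One trial of (62) with the optimal step size (65)."""
    e = r - Ge @ u
    d = Ge.T @ e                                  # steepest-descent direction
    eps = (d @ d) / np.linalg.norm(Ge @ d) ** 2   # eps* from (65)
    return u + eps * d

N = 25
Ge = np.tril(np.ones((N, N))) * 0.1               # toy lifted plant
r = np.linspace(0.0, 1.0, N)
u, errs = np.zeros(N), []
for k in range(30):
    errs.append(np.linalg.norm(r - Ge @ u))
    u = sd_trial(Ge, r, u)
print(errs[0], errs[-1])                          # strictly decreasing sequence
```

Note the step size is recomputed every trial from measured data only, which is what makes the selection automatic.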

63 Analysis
Assume that the true plant contains multiplicative uncertainty, modeled as G_e = G_o U, where G_o is the nominal model (i.e. an estimate of the true plant) and U reflects the multiplicative uncertainty (i.e. modeling errors). G_o is used instead of G_e in the update law, i.e.,
u_{k+1} = u_k + ε_{k+1} G_o^T e_k (68)
In the presence of uncertainty, ||e_{k+1}||² − ||e_k||² becomes
||e_{k+1}||² − ||e_k||² = −2ε_{k+1} e_k^T G_o U G_o^T e_k + ε_{k+1}² e_k^T G_o G_e^T G_e G_o^T e_k (69)

64 Analysis
Lemma. Suppose that U + U^T is a positive-definite matrix. If e_k ≠ 0 there exists an ε_{k+1} > 0 such that ||e_{k+1}||² − ||e_k||² < 0.
In the standard steepest-descent algorithm, ε_{k+1} is given by
ε_{k+1} = ||G_o^T e_k||² / ||G_o G_o^T e_k||² (70)
and hence there is no clear mechanism to modify ε_{k+1} so that it would be sufficiently small in the above result. Consider instead
u_{k+1} = u_k + ε_{k+1} G_e^T e_k (71)
where ε_{k+1} is selected as the solution of the optimisation problem
min_{ε_{k+1} ∈ R} J(ε_{k+1}), J(ε_{k+1}) := ||e_{k+1}||² + wε_{k+1}² (72)
where w ∈ R, w > 0.

65 Analysis
The cost function J(ε_{k+1}) in (72) reflects two design objectives. The first term reflects the objective that the tracking error should be small during each trial. The second term tries to keep the magnitude of ε_{k+1} small, possibly resulting in a more cautious and robust algorithm when compared to the standard steepest-descent algorithm. The optimal solution is
ε_{k+1} = ||G_e^T e_k||² / ( w + ||G_e G_e^T e_k||² ) (73)

66 Analysis
Lemma. If w ∈ R, w > 0, then ||e_{k+1}|| ≤ ||e_k||, where equality holds if and only if ε_{k+1} = 0. Furthermore,
lim_{k→∞} ||e_k|| = 0 and lim_{k→∞} ε_k = 0 (74)
demonstrating monotonic convergence to zero tracking error.
Lemma. Assume that U + U^T is positive-definite and w is chosen so that
w > ||G_o^T||² ||G_o G_e^T||² ||e_k||² / σ_min(G_o U G_o^T) (75)
where σ_min(G_o U G_o^T) is the smallest singular value of the positive-definite matrix G_o U G_o^T. In this case, the sequence of tracking errors satisfies ||e_{k+1}|| < ||e_k|| when e_k ≠ 0.
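A sketch of the robust law in the mismatched setting: the update direction uses the nominal model G_o while the measured error comes from the "true" plant G_e = G_o U. Here U = 1.4 I (so U + U^T > 0 holds trivially) and all values are illustrative:

```python
import numpy as np

# Robust weighted steepest-descent ILC under multiplicative model error.
N = 25
Go = np.tril(np.ones((N, N))) * 0.1       # nominal lifted model
Ge = Go @ (1.4 * np.eye(N))               # "true" plant, G_e = G_o U, U = 1.4 I
r = np.linspace(0.0, 1.0, N)
u, w, errs = np.zeros(N), 0.5, []
for k in range(40):
    e = r - Ge @ u                        # error measured on the true plant
    errs.append(np.linalg.norm(e))
    d = Go.T @ e                          # nominal-model update direction
    eps = (d @ d) / (w + np.linalg.norm(Go @ d) ** 2)   # weighted step, cf. (73)
    u = u + eps * d
print(errs[0], errs[-1])                  # error shrinks despite the mismatch
```

The weight w caps the step size when the error is small, which is the mechanism the lemma exploits; with w = 0 the step would revert to the standard (less robust) choice (70).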

67 Analysis
Note that the estimate for w can be very conservative, because the term e_k^T G_o U G_o^T e_k is estimated in terms of the smallest singular value of G_o U G_o^T. Furthermore, if w is selected to be excessively large in magnitude, this can have an undesirable effect on the convergence speed, because a large w will result in a small ε_{k+1}, implying that u_{k+1} ≈ u_k in such a case. Consequently this result should be taken as an existence result, and in practice w can be selected by resorting to a trial and improvement approach.
Note that the sufficient value of w decreases as ||e_k||² decreases. This opens up the potential to reduce w with each successive iteration k, and a reduced w will possibly result in an increase in the convergence speed. A natural choice to exploit this would be w = w_1 ||e_k||²; however, as ||e_k||² approaches zero dangerously high inputs could be applied to the plant. A simple remedy is to choose w = w_0 + w_1 ||e_k||².

68 Analysis
The next result shows that in addition to monotonic convergence, U^T + U > 0 also implies that lim_{k→∞} ||e_k|| = 0 if w is selected to be sufficiently large.
Lemma. Under the assumptions of the previous result, the algorithm converges monotonically to zero tracking error.
For monotonic convergence to zero tracking error it is required that the multiplicative uncertainty U is positive in the sense that U^T + U is a positive-definite matrix. However, in most applications the nominal matrix model G_o is obtained by truncating the lifted transfer function zG_o(z), where G_o(z) is the transfer function of the nominal plant model when the time-axis is infinite. Therefore it is important to understand this positivity result in terms of these transfer-function models.

69 Analysis
Using the transfer-function descriptions of the plant, the uncertainty model is
zG(z) = U(z) zG_o(z), or equivalently U(z) = G(z)/G_o(z) (76)
Furthermore, it can be shown that if U(z) is a positive-real system (or equivalently, its Nyquist diagram lies strictly in the right-half plane), the truncated system (matrix) U is positive, i.e., U^T + U is a positive-definite matrix. The Nyquist-diagram condition, in turn, is equivalent to the condition that the phase of the uncertainty model U(z) lies inside ±90 degrees, demonstrating a good degree of robustness.

70 Experiments
Using the gantry robot, specific features of the algorithm have been investigated, including:
learning performance compared to both a PID controller and a simple P-type ILC controller
robustness to plant modeling error
robustness to initial state error
The dimensions of the G_o matrices are determined by the sampling frequency and the time period of one iteration. For the robust optimal algorithm, the selected sampling frequency is very low at only 100 Hz. This choice is justified by a number of factors: in velocity control mode, the linear motor amplifiers operate with their own closed-loop control. The input to the amplifiers is therefore simply a setpoint adjustment and is not directly required to achieve a stable system.

71 Experiments
A low sample frequency minimizes the sizes of the G_o matrices, implying that less memory and computation time are required to generate the next input to the plant. The transfer-function models for the axes were reduced to second-order transfer-functions with similar low frequency gain, but none of the high order dynamics. The first trial was set at zero input for all sample intervals, implying that the algorithm must learn the correct input from nothing.
G_x(s) = 11/(s(s + 200)) (77)
G_y(s) = 17/(s(s + 300)) (78)
G_z(s) = 10/(s(s + 400)) (79)

72 Experiments
The log MSE results in the next figure show that the simplification of the model has little effect on the convergence rate and minimum error, and the algorithm also remains stable. The difference is most noticeable for the X-axis, where the performance of the low order model is noticeably worse than that of the high order model. The X-axis has the most significant high frequency dynamics and so is most affected by the simplification process.
Stability theory suggests that a model of the form 1/s is adequate to meet the stability conditions for any plant which has a phase between 0 and −180 degrees. One simple way of testing this would be to use a model of the form 1/s during the controller design process, then implement this controller in practice. The model 1/s satisfies the requirements for phase, but the gain is also likely to have a significant effect on stability. It is therefore more logical to consider a model β/s, where β is the model gain.

73

74 Experiments
G_0(s) = β/s (80)
This model has been implemented using a selection of values for β to investigate the effect of the model gain on stability and convergence. The values used were 1.00, 0.10, 0.09, 0.08, 0.07, 0.06, 0.05, 0.04, 0.03 and 0.02. Comparing the data for β equal to 1.00, 0.05 and 0.03 confirms that with a large value of β the rate of learning is slower. As β is reduced, the learning steadily becomes faster until an optimum β is reached. Reducing β any further then rapidly slows the learning rate and very quickly the system becomes unstable. The best values of β for each axis were found to be 0.05 for the X and Y axes and 0.03 for the Z-axis. Generating Bode plots for 0.05/s and 0.03/s and comparing these plots to the high order models for each axis reveals why these values of β are best: they generate a model gain which is equal to the plant gain at low frequency.
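The gain-matching observation can be checked directly from the reduced models (77)-(79): at low frequency K/(s(s + a)) behaves like (K/a)/s, so each plant looks like β/s with β = K/a.

```python
# Low-frequency gain of K/(s(s+a)) is (K/a)/s, so the effective beta of
# each axis model is K/a; compare with the experimentally best beta values.
axes = {              # (K, a) taken from the reduced models (77)-(79)
    "X": (11.0, 200.0),
    "Y": (17.0, 300.0),
    "Z": (10.0, 400.0),
}
best_beta = {"X": 0.05, "Y": 0.05, "Z": 0.03}
for name, (K, a) in axes.items():
    print(f"{name}-axis: plant low-frequency gain {K / a:.3f}, "
          f"best experimental beta {best_beta[name]}")
# X: 0.055, Y: 0.057, Z: 0.025 -- each close to the beta found by tuning.
```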

75 Experiments
1. Set β to a large value (if any form of plant model is available, ensure that β generates a gain which is greater than the gain of the model at low frequency).
2. Operate the system and ensure that stability is achieved.
3. Reduce β.
4. Continue steps 2 and 3 until optimal convergence is achieved or until the system begins to destabilize.
5. If the system begins to destabilize, increase β slightly. β is now set.
The results and conclusions of further experimental investigations on the performance of this algorithm are given in the cited reference.
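The tuning procedure above can be sketched as a simple search loop. Here `run_trials` is a hypothetical stand-in for operating the real system with a given β and reporting the converged error norm and whether the trials stayed stable; the shrink/grow factors are illustrative choices, not values from the experiments.

```python
def tune_beta(run_trials, beta0=1.0, shrink=0.8, grow=1.1, tol=1e-3):
    """Follow the slide's procedure: start large, reduce beta while the
    converged error keeps improving, back off slightly on instability."""
    beta, prev_err = beta0, float("inf")
    while True:
        err, stable = run_trials(beta)           # steps 2 and 4
        if not stable:
            return beta * grow                   # step 5: increase slightly
        if prev_err - err < tol:
            return beta                          # no further improvement
        prev_err, beta = err, beta * shrink      # step 3: reduce beta

# Hypothetical harness: error is smallest near beta = 0.05 and the
# system is taken to destabilize below beta = 0.02.
def run_trials(beta):
    return abs(beta - 0.05) + 0.1, beta > 0.02

print(round(tune_beta(run_trials), 3))   # -> 0.044, near the optimum
```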

76 Trial 4000, reference and output.

77 References
J. D. Ratcliffe, J. J. Hätönen, P. L. Lewin, E. Rogers and D. H. Owens. Robustness analysis of an adjoint optimal iterative learning controller with experimental verification. International Journal of Robust and Nonlinear Control, 18(10), 2008.
D. H. Owens and E. Rogers (Guest Editors). Robust Iterative Learning Control: Theory and Experimental Verification. International Journal of Robust and Nonlinear Control, 18(10), 2008.

78 A Brief Visit to Nonlinear Model ILC

79 Nonlinear ILC Nonlinear ILC has received a very large amount of attention in the literature. Arguably far too much of this attention has gone on trial-to-trial error convergence proofs, all based on the same idea, just with different assumptions. In this section the background to methods that have gone beyond convergence proofs is briefly considered. The application of one such method, Newton ILC, is critical to the stroke rehabilitation research covered in the next part of the course. Nonlinear systems can, in general terms, be split into two groups: those that are affine in the control and those that are not. Those that are affine in the control are assumed to be in the following form

80 Nonlinear ILC
ẋ(t) = f(x(t)) + B(x(t))u(t), y(t) = g(x(t)) (81)
where x is the state vector, u is the input and y is the output. A special case is
M_r(x)ẍ + C_r(x, ẋ)ẋ + g_r(x) + d_r(x, ẋ) = τ (82)
where, for robotics, the vectors x, ẋ, ẍ are the positions, velocities and accelerations of the links, τ is the torque input, M_r(x) is the symmetric positive-definite matrix of link inertias, C_r(x, ẋ) is the Coriolis and centripetal acceleration matrix, g_r(x) is the gravitational force vector and d_r(x, ẋ) is the friction torque vector. ILC for affine nonlinear systems uses a wide variety of algorithms, but a critical common assumption is that the nonlinear system is smooth.
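As a concrete instance of (82), a single link of mass m and length l has M_r = m l², no Coriolis term, gravity torque g_r(x) = m g l cos x and viscous friction d_r = b ẋ. The sketch below simulates this with a simple Euler step; all parameter values are illustrative.

```python
import math

# Single-link instance of (82): m*l^2 * xdd + m*g*l*cos(x) + b*xd = tau
m, l, g, b = 1.0, 0.5, 9.81, 0.1   # illustrative parameters

def step(x, xd, tau, dt=1e-3):
    """One Euler integration step of the link dynamics."""
    M = m * l * l
    xdd = (tau - m * g * l * math.cos(x) - b * xd) / M
    return x + dt * xd, xd + dt * xdd

# Holding the link level (x = 0) with the exact gravity-compensating torque
# keeps it at rest, as (82) predicts:
x, xd = 0.0, 0.0
tau = m * g * l * math.cos(0.0)
for _ in range(1000):
    x, xd = step(x, xd, tau)
print(x, xd)   # stays at 0.0 0.0
```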

81 Nonlinear ILC This last requirement is often expressed as a global Lipschitz assumption on each of the functions in (81). These take the form
‖f(x_1) − f(x_2)‖ ≤ f_0 ‖x_1 − x_2‖
‖B(x_1) − B(x_2)‖ ≤ b_0 ‖x_1 − x_2‖
‖g(x_1) − g(x_2)‖ ≤ g_0 ‖x_1 − x_2‖ (83)
The constants f_0, b_0 and g_0 are used in a contraction mapping setting to obtain (sufficient) conditions for trial-to-trial error convergence and ILC law design. Nonaffine systems have the form
ẋ(t) = f(x(t)) + B(x(t), u(t)), y(t) = g(x(t)) (84)

82 Nonlinear ILC ILC has also been extended to nonlinear discrete-time systems. Obtaining discrete-time models for nonlinear systems is sometimes non-trivial but, for example, trial-to-trial error convergence proofs are simpler and the final design is directly compatible with digital implementation. Another way to study and design ILC for nonlinear systems is to treat the nonlinearities as perturbations to a linearized system model.

83 Newton ILC Use the full model in the computation of the next trial input. State-space model
x_k(t + 1) = f(x_k(t), v_k(t))
q_{u,k}(t) = h(x_k(t))
where t ∈ {0, 1, 2, ..., N − 1} is the sample number, x_k(t) is the state vector, and N = T/T_s + 1 with T_s the sampling period. Introduce the vectors
v_k = [v_k(0)^T, v_k(1)^T, ..., v_k(N − 1)^T]^T
q_{u,k} = [q_{u,k}(0)^T, q_{u,k}(1)^T, ..., q_{u,k}(N − 1)^T]^T (85)

84 Newton ILC Also introduce the reference vector
q_u = [q_u(0)^T, q_u(1)^T, ..., q_u(N − 1)^T]^T (86)
The Newton method based ILC update takes the form
v_{k+1} = v_k + g′(v_k)^{−1} e_k (87)
where e_k = q_u − q_{u,k} is the tracking error. The term g′(v_k) is equivalent to the system linearization around v_k, with the system q̃_u = g′(v_k)ṽ corresponding to the following linear time-varying (LTV) system

85 Newton ILC
x̃(t + 1) = A(t)x̃(t) + B(t)ṽ(t)
q̃_u(t) = C(t)x̃(t) + D(t)ṽ(t), t = 0, 1, ..., N − 1 (88)
with
A(t) = (∂f/∂x)|_{v_k(t), x_k(t)}, B(t) = (∂f/∂v)|_{v_k(t), x_k(t)}
C(t) = (∂h/∂x)|_{v_k(t), x_k(t)}, D(t) = (∂h/∂v)|_{v_k(t), x_k(t)}
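When analytic Jacobians of f and h are inconvenient, the matrices in (88) can be obtained numerically by central differences along the trial-k trajectory. The model, trajectory point and step size below are illustrative assumptions.

```python
import numpy as np

def jacobian(fun, z0, eps=1e-6):
    """Central-difference Jacobian of fun at z0."""
    z0 = np.asarray(z0, dtype=float)
    f0 = np.asarray(fun(z0))
    J = np.zeros((f0.size, z0.size))
    for i in range(z0.size):
        dz = np.zeros_like(z0)
        dz[i] = eps
        J[:, i] = (np.asarray(fun(z0 + dz)) - np.asarray(fun(z0 - dz))) / (2 * eps)
    return J

def linearize(f, h, x_traj, v_traj):
    """A(t), B(t), C(t), D(t) of (88) along the trial-k trajectory."""
    ABCD = []
    for x, v in zip(x_traj, v_traj):
        A = jacobian(lambda z: f(z, v), x)
        B = jacobian(lambda z: f(x, z), v)
        C = jacobian(lambda z: h(z), x)
        D = jacobian(lambda z: h(x), v)   # zero when h depends on x only
        ABCD.append((A, B, C, D))
    return ABCD

# Illustrative scalar model: x(t+1) = x + 0.01 sin(x) + 0.1 v, q = x^2
f = lambda x, v: x + 0.01 * np.sin(x) + 0.1 * v
h = lambda x: x ** 2
(A, B, C, D), = linearize(f, h, x_traj=[np.array([0.5])], v_traj=[np.array([0.0])])
print(round(A[0, 0], 4), round(B[0, 0], 4), round(C[0, 0], 4), round(D[0, 0], 4))
```

At x = 0.5 the analytic values are A = 1 + 0.01 cos(0.5) ≈ 1.0088, B = 0.1, C = 2x = 1.0 and D = 0, which the finite differences reproduce.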

86 Newton ILC The term g′(v_k)^{−1} in (87) is computationally expensive and may be singular, or contain excessive amplitudes and high frequencies. To overcome this difficulty, introduce
e_k = g′(v_k) Δv_{k+1} (89)
where Δv_{k+1} = v_{k+1} − v_k equals the input that forces the LTV system (88) to track the error e_k. This is itself an ILC problem and can be solved in between experimental trials using any ILC algorithm that converges globally. Here norm optimal ILC is considered, with the input and output on trial j denoted by Δv_{k+1,j} and e_{k,j}, respectively.
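The action of g′(v_k) on a lifted input can be written as a lower-triangular matrix built from the coefficients in (88), after which (89) can be solved (for modest N) by a linear solve rather than forming an explicit inverse. A sketch for the scalar case with illustrative time-varying coefficients:

```python
import numpy as np

def lifted_ltv(A, B, C, D):
    """Lifted matrix of the LTV system (88): entry (t, j) is the effect of
    v~(j) on q~(t). A, B, C, D are lists of scalar coefficients per sample."""
    N = len(A)
    G = np.zeros((N, N))
    for t in range(N):
        G[t, t] = D[t]
        phi = 1.0                    # state transition A(t-1)...A(j+1)
        for j in range(t - 1, -1, -1):
            G[t, j] = C[t] * phi * B[j]
            phi *= A[j]
    return G

# Illustrative coefficients for N = 4 samples:
N = 4
A = [0.9, 0.8, 0.95, 0.9]
B = [1.0] * N
C = [1.0] * N
D = [0.1] * N
G = lifted_ltv(A, B, C, D)

e_k = np.array([0.0, 0.5, 0.8, 1.0])   # error profile to be tracked
dv = np.linalg.solve(G, e_k)           # Dv_{k+1} satisfying (89)
print(np.allclose(G @ dv, e_k))        # True
```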

87 Newton ILC On trial j + 1, the trade-off between minimizing the tracking error, e_k − e_{k,j}, and the change in control input, Δv_{k+1,j+1} − Δv_{k+1,j}, is represented by the cost function
J_{j+1} = Σ_{t=0}^{N−1} (e_k − e_{k,j})(t)^T Q (e_k − e_{k,j})(t) + Σ_{t=0}^{N−1} (Δv_{k+1,j+1} − Δv_{k+1,j})(t)^T R (Δv_{k+1,j+1} − Δv_{k+1,j})(t)
The ILC computation is stopped after 100 trials or once the error reaches a preset threshold. The input obtained, Δv_{k+1,j}, is then used to approximate Δv_{k+1} in (87) to generate the control input for the next trial. Actual results from clinical trials with stroke patients where Newton ILC is used are given in the last section of this course.
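For a lifted linear model e = G Δv, minimizing this quadratic cost gives the standard norm optimal update Δv_{j+1} = Δv_j + (G^T Q G + R)^{−1} G^T Q (e_k − e_{k,j}). A small numeric sketch, with G, Q, R and the stopping thresholds chosen for illustration:

```python
import numpy as np

def norm_optimal_ilc(G, ref, Q, R, trials=100, tol=1e-8):
    """Norm optimal ILC on the lifted system e = G dv, tracking ref.
    Stops after `trials` iterations or when the error norm drops below tol."""
    dv = np.zeros(G.shape[1])
    K = np.linalg.solve(G.T @ Q @ G + R, G.T @ Q)   # update gain
    for _ in range(trials):
        err = ref - G @ dv                          # e_k - e_{k,j}
        if np.linalg.norm(err) < tol:
            break
        dv = dv + K @ err                           # minimizes J_{j+1}
    return dv, err

# Illustrative 3-sample lifted model and error profile to track:
G = np.tril(np.ones((3, 3)))
ref = np.array([0.2, 0.5, 0.9])
Q, R = np.eye(3), 0.01 * np.eye(3)
dv, err = norm_optimal_ilc(G, ref, Q, R)
print(np.linalg.norm(err) < 1e-6)   # True: converged well inside 100 trials
```

The R weighting keeps each input change small, trading a slightly slower trial-to-trial convergence for a smoother, better conditioned update.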


More information

Applied Differential Equation. November 30, 2012

Applied Differential Equation. November 30, 2012 Applied Differential Equation November 3, Contents 5 System of First Order Linear Equations 5 Introduction and Review of matrices 5 Systems of Linear Algebraic Equations, Linear Independence, Eigenvalues,

More information

Balanced realization and model order reduction for nonlinear systems based on singular value analysis

Balanced realization and model order reduction for nonlinear systems based on singular value analysis Balanced realization and model order reduction for nonlinear systems based on singular value analysis Kenji Fujimoto a, and Jacquelien M. A. Scherpen b a Department of Mechanical Science and Engineering

More information

I. D. Landau, A. Karimi: A Course on Adaptive Control Adaptive Control. Part 9: Adaptive Control with Multiple Models and Switching

I. D. Landau, A. Karimi: A Course on Adaptive Control Adaptive Control. Part 9: Adaptive Control with Multiple Models and Switching I. D. Landau, A. Karimi: A Course on Adaptive Control - 5 1 Adaptive Control Part 9: Adaptive Control with Multiple Models and Switching I. D. Landau, A. Karimi: A Course on Adaptive Control - 5 2 Outline

More information

Zeros and zero dynamics

Zeros and zero dynamics CHAPTER 4 Zeros and zero dynamics 41 Zero dynamics for SISO systems Consider a linear system defined by a strictly proper scalar transfer function that does not have any common zero and pole: g(s) =α p(s)

More information

A Hybrid Systems Approach to Trajectory Tracking Control for Juggling Systems

A Hybrid Systems Approach to Trajectory Tracking Control for Juggling Systems A Hybrid Systems Approach to Trajectory Tracking Control for Juggling Systems Ricardo G Sanfelice, Andrew R Teel, and Rodolphe Sepulchre Abstract From a hybrid systems point of view, we provide a modeling

More information

Information Structures Preserved Under Nonlinear Time-Varying Feedback

Information Structures Preserved Under Nonlinear Time-Varying Feedback Information Structures Preserved Under Nonlinear Time-Varying Feedback Michael Rotkowitz Electrical Engineering Royal Institute of Technology (KTH) SE-100 44 Stockholm, Sweden Email: michael.rotkowitz@ee.kth.se

More information

Nonlinear Optimization

Nonlinear Optimization Nonlinear Optimization (Com S 477/577 Notes) Yan-Bin Jia Nov 7, 2017 1 Introduction Given a single function f that depends on one or more independent variable, we want to find the values of those variables

More information

EML5311 Lyapunov Stability & Robust Control Design

EML5311 Lyapunov Stability & Robust Control Design EML5311 Lyapunov Stability & Robust Control Design 1 Lyapunov Stability criterion In Robust control design of nonlinear uncertain systems, stability theory plays an important role in engineering systems.

More information

4F3 - Predictive Control

4F3 - Predictive Control 4F3 Predictive Control - Lecture 3 p 1/21 4F3 - Predictive Control Lecture 3 - Predictive Control with Constraints Jan Maciejowski jmm@engcamacuk 4F3 Predictive Control - Lecture 3 p 2/21 Constraints on

More information

TMA4180 Solutions to recommended exercises in Chapter 3 of N&W

TMA4180 Solutions to recommended exercises in Chapter 3 of N&W TMA480 Solutions to recommended exercises in Chapter 3 of N&W Exercise 3. The steepest descent and Newtons method with the bactracing algorithm is implemented in rosenbroc_newton.m. With initial point

More information

Positioning Servo Design Example

Positioning Servo Design Example Positioning Servo Design Example 1 Goal. The goal in this design example is to design a control system that will be used in a pick-and-place robot to move the link of a robot between two positions. Usually

More information

Exam. 135 minutes, 15 minutes reading time

Exam. 135 minutes, 15 minutes reading time Exam August 6, 208 Control Systems II (5-0590-00) Dr. Jacopo Tani Exam Exam Duration: 35 minutes, 5 minutes reading time Number of Problems: 35 Number of Points: 47 Permitted aids: 0 pages (5 sheets) A4.

More information

A Robust Controller for Scalar Autonomous Optimal Control Problems

A Robust Controller for Scalar Autonomous Optimal Control Problems A Robust Controller for Scalar Autonomous Optimal Control Problems S. H. Lam 1 Department of Mechanical and Aerospace Engineering Princeton University, Princeton, NJ 08544 lam@princeton.edu Abstract Is

More information

Automatic Control Systems theory overview (discrete time systems)

Automatic Control Systems theory overview (discrete time systems) Automatic Control Systems theory overview (discrete time systems) Prof. Luca Bascetta (luca.bascetta@polimi.it) Politecnico di Milano Dipartimento di Elettronica, Informazione e Bioingegneria Motivations

More information

ECEN 420 LINEAR CONTROL SYSTEMS. Lecture 6 Mathematical Representation of Physical Systems II 1/67

ECEN 420 LINEAR CONTROL SYSTEMS. Lecture 6 Mathematical Representation of Physical Systems II 1/67 1/67 ECEN 420 LINEAR CONTROL SYSTEMS Lecture 6 Mathematical Representation of Physical Systems II State Variable Models for Dynamic Systems u 1 u 2 u ṙ. Internal Variables x 1, x 2 x n y 1 y 2. y m Figure

More information

Global stabilization of feedforward systems with exponentially unstable Jacobian linearization

Global stabilization of feedforward systems with exponentially unstable Jacobian linearization Global stabilization of feedforward systems with exponentially unstable Jacobian linearization F Grognard, R Sepulchre, G Bastin Center for Systems Engineering and Applied Mechanics Université catholique

More information

Principal Component Analysis

Principal Component Analysis Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used

More information

Efficient robust optimization for robust control with constraints Paul Goulart, Eric Kerrigan and Danny Ralph

Efficient robust optimization for robust control with constraints Paul Goulart, Eric Kerrigan and Danny Ralph Efficient robust optimization for robust control with constraints p. 1 Efficient robust optimization for robust control with constraints Paul Goulart, Eric Kerrigan and Danny Ralph Efficient robust optimization

More information

Balanced Truncation 1

Balanced Truncation 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 2004: MODEL REDUCTION Balanced Truncation This lecture introduces balanced truncation for LTI

More information

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44 Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)

More information

Variable-gain output feedback control

Variable-gain output feedback control 7. Variable-gain output feedback control 7.1. Introduction PUC-Rio - Certificação Digital Nº 611865/CA In designing control laws, the usual first step is to describe the plant at a given operating point

More information