Iterative Learning Control Analysis and Design I


1 Iterative Learning Control Analysis and Design I Electronics and Computer Science University of Southampton Southampton, SO17 1BJ, UK etar@ecs.soton.ac.uk

2 Contents: Basics; Representations for Design; Stability and Convergence Analysis; Robustness. Acknowledgement: The material in this section largely follows that in the following paper. D. A. Bristow, M. Tharayil and A. G. Alleyne, A Survey of Iterative Learning Control, IEEE Control Systems Magazine, 26(3):96-114, 2006.

3 Basics ILC has many aspects and design/analysis tools (and open research questions). To start with, it is assumed that the plant to be controlled is adequately modeled by either a linear continuous-time or discrete-time model in state-space or transfer-function terms. Continuous-time: plant state-space model in ILC notation

ẋ_k(t) = A x_k(t) + B u_k(t)
y_k(t) = C x_k(t)   (1)

Control task: the output y_k(t) is required to track the supplied reference signal over the fixed finite interval 0 ≤ t ≤ T.

4 Standard Assumptions The trial duration T has the same value for all trials. The initial condition is the same on all trials. The system dynamics are time-invariant. The dynamics are deterministic (noise-free). Many of these assumptions can be relaxed. Many ILC designs are for single-input single-output (SISO) systems. In this section the SISO case is considered; the multi-input multi-output (MIMO) case is noted where relevant.

5 Original Arimoto Algorithm

u_{k+1}(t) = u_k(t) + Γ ė_k(t)   (2)

where Γ is the learning gain. This ILC law will ensure that y_k → y_d, or e_k = y_d − y_k → 0, as k → ∞   (3)

if

‖I − CBΓ‖ < 1   (4)

where ‖·‖ is an appropriately chosen norm (what happens if CB = 0?). Note that this convergence condition places no constraints on the state matrix A!!
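
As a concrete illustration of (2) and (4), the sketch below applies the derivative-type update to a simple first-order plant; the plant, reference, gain and Euler discretisation are illustrative choices (not from these slides), and the error derivative is approximated by finite differences.

```python
import numpy as np

# Illustrative first-order plant xdot = -x + u, y = x (so C = B = 1 and CB = 1)
dt, T = 0.01, 2 * np.pi
t = np.arange(0.0, T, dt)
y_d = np.sin(t)                          # reference to be tracked on [0, T]
Gamma = 0.5                              # learning gain: |1 - CB*Gamma| = 0.5 < 1, so (4) holds

def simulate(u):
    """Euler simulation of one trial, identical zero initial state on every trial."""
    x, y = 0.0, np.zeros_like(u)
    for i in range(len(u)):
        y[i] = x
        x = x + dt * (-x + u[i])
    return y

u = np.zeros_like(t)                     # u_0 = 0
for k in range(30):                      # trials
    e = y_d - simulate(u)                # e_k = y_d - y_k
    u = u + Gamma * np.gradient(e, dt)   # Arimoto law (2): derivative-type update
    if k % 5 == 0:
        print(f"trial {k:2d}: max |e_k| = {np.max(np.abs(e)):.4f}")
```

The printed maximum error shrinks from trial to trial even though the update uses no model of the state matrix A, only CB.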

6 More ILC algorithms A PID-like ILC algorithm is

u_{k+1}(t) = u_k(t) + Φ e_k(t) + Ψ ∫ e_k(τ) dτ + Γ ė_k(t)   (5)

A higher order ILC (HOILC) version of the PID ILC is

u_{k+1}(t) = (1 − Λ) Σ_{i=1}^{N} P_i u_{k+1−i}(t) + Λ u_0(t) + Σ_{i=1}^{N} ( Φ_i e_{k+1−i}(t) + Ψ_i ∫ e_{k+1−i}(τ) dτ + Γ_i ė_{k+1−i}(t) )   (6)

7 More ILC algorithms If Σ_{i=1}^{N} P_i = 1, then proper choice of the learning gains ensures that e_k converges asymptotically to zero in k (trial-to-trial error convergence). A time-varying P-type (no derivative and integral effects) ILC law, cf. (5), is

u_{k+1}(t) = u_k(t) + Γ_k(t) e_k(t)   (7)

where Γ_k(t) is the proportional learning gain, which is both trial- and time-varying (the time-invariant special case is heavily used in applications).

8 More ILC algorithms In this simple structure ILC law, the critical feature is the use of information from the most recent trial to update the current trial input. Other time-varying HOILC laws include

u_{k+1}(t) = u_k(t) + Σ_{i=k−l}^{k} Γ_i(t) e_i(t)   (8)

or

u_{k+1}(t) = Σ_{i=k−l}^{k} Γ_i u_i(t) + Σ_{i=k−l}^{k} Γ_i(t) e_i(t)   (9)

If required, these make use of all available previous trials' information.

9 Discrete-time ILC algorithms Consider linear discrete-time (SISO) systems with state-space model in the ILC setting

x_k(p + 1) = A x_k(p) + B u_k(p),  0 ≤ p ≤ T
y_k(p) = C x_k(p) + D u_k(p),  x_k(0) = x_0   (10)

or (operator representation)

y_k(p) = G(q) u_k(p) + d(p)   (11)

where q is the forward time-shift operator, q x(p) = x(p + 1), and d(p) is an exogenous signal that repeats on each trial. Equivalently: the initial conditions are the same on each trial and (for simplicity) there are no external disturbances.

10 Discrete-time ILC algorithms To derive (11) from (10), write (with D = 0 for simplicity)

y_k(p) = C(qI − A)^{-1} B u_k(p) + C A^p x_0   (12)

G(q) = C(qI − A)^{-1} B,  d(p) = C A^p x_0   (13)

A widely used ILC algorithm is

u_{k+1}(p) = Q(q) [u_k(p) + L(q) e_k(p + 1)]   (14)

where Q(q) is termed the Q-filter and L(q) the learning function, respectively.
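
A sketch of one trial-to-trial update of the form (14), assuming the simplest practical choices: L(q) a constant gain acting on the time-shifted error and Q(q) a zero-phase (non-causal) moving-average filter applied over the stored trial; both choices and the helper name are illustrative, not prescribed by the slides.

```python
import numpy as np

def ilc_update(u_k, e_k, l_gain=0.5, q_len=5):
    """One application of u_{k+1}(p) = Q(q)[u_k(p) + L(q) e_k(p+1)].

    u_k, e_k : stored input and error of trial k (length T arrays).
    L(q) is taken as a constant gain l_gain on the one-step-ahead error,
    Q(q) as a zero-phase moving average of length q_len (non-causal is fine
    because the whole previous trial is available from memory).
    """
    e_shift = np.append(e_k[1:], 0.0)            # e_k(p+1), padded at the trial end
    v = u_k + l_gain * e_shift                   # learning-function part
    kernel = np.ones(q_len) / q_len              # zero-phase low-pass Q-filter
    return np.convolve(v, kernel, mode="same")   # Q(q) applied over the stored trial

# Example: one update from stored trial data
T = 50
u_k = np.zeros(T)
e_k = np.linspace(1.0, 0.0, T)                   # some stored error profile
u_next = ilc_update(u_k, e_k)
```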

11 Discrete-time ILC algorithms There are many variations of (14); these include time-varying, nonlinear and trial-varying functions. Also the order can be increased, i.e., use of information from more than the previous trial (HOILC as above). Current trial feedback is a method of incorporating feedback with ILC and in this case (14) is extended to

u_{k+1}(p) = Q(q) [u_k(p) + L(q) e_k(p + 1)] + C(q) e_{k+1}(p)   (15)

The term C(q) e_{k+1}(p) is feedback action on the current trial.

12 Discrete-time ILC algorithms Write (15) as

u_{k+1}(p) = w_{k+1}(p) + C(q) e_{k+1}(p)   (16)

Hence

w_{k+1}(p) = Q(q) [ w_k(p) + (L(q) + q^{-1} C(q)) e_k(p + 1) ]   (17)

Hence the feedforward part of current trial ILC is identical to (14) with learning function L(q) + q^{-1} C(q). The ILC law (14) with learning function L(q) + q^{-1} C(q) combined with a feedback controller in the parallel architecture is equivalent to the complete current trial ILC, see Figure 1.

13 Current trial feedback Figure 1: ILC with current trial feedback (block diagram: ILC memory blocks with L and Q, feedback controller C and plant G).

14 Assumptions/Implications of the model used The plant G(q) in (11) is a proper rational function of q and in general has a delay, or equivalently, a relative degree of m. Assumption: G(q) is stable (asymptotic stability); if not, it can be stabilized by applying a feedback controller and then applying ILC to the resulting system. The trial duration is finite, but in some analysis (transfer-function/frequency domain) an infinite duration is assumed (technical point). Discrete-time is a natural domain for ILC because this design method requires storage of past trial data, which is typically sampled. Temporary assumption: non-zero first Markov parameter (CB ≠ 0).

15 Assumptions/Implications of the model used The model (11) is sufficiently general to capture IIR and FIR plants. Repeating disturbances, repeated non-zero initial conditions and systems augmented with feedback and feedforward control can be included in the term d(p). Figure 2 illustrates the 2D systems nature of ILC: information propagation from trial-to-trial (k) and along a trial (p). 2D control systems analysis is well developed in theory and ILC provides an application area (see Figure 2), with advantages of this setting for design (more later).

16 2D systems structure of ILC Figure 2: Illustrating the 2D systems structure of ILC.

17 Representations for Design

18 Representations for Design The lifted description is heavily used in discrete-time ILC analysis and design. First expand G(q) from the model (11) as an infinite power series

G(q) = p_1 q^{-1} + p_2 q^{-2} + p_3 q^{-3} + ···   (18)

where the p_i are the Markov parameters. The p_i form the impulse response and, since CB ≠ 0 is assumed, p_1 ≠ 0. In the state-space description, p_j = C A^{j-1} B (still with D = 0 for simplicity). G(q) with relative degree greater than unity is a critical issue in ILC (more later).

19 Lifted Model Introduce the vectors

Y_k = [ y_k(1)  y_k(2)  ...  y_k(T) ]^T,
U_k = [ u_k(0)  u_k(1)  ...  u_k(T − 1) ]^T,
d = [ d(1)  d(2)  ...  d(T) ]^T   (19)

Then the system dynamics can be written as

Y_k = G U_k + d   (20)

G = [ p_1      0        ...  0
      p_2      p_1      ...  0
      ...      ...           ...
      p_T      p_{T−1}  ...  p_1 ]   (21)
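
A sketch of the construction (19)-(21), assuming an illustrative second-order state-space model with CB ≠ 0 (the matrices are not prescribed by the slides): the Markov parameters fill the lower-triangular Toeplitz matrix G, and the lifted relation (20) is cross-checked against a direct simulation of (10).

```python
import numpy as np

# Illustrative stable SISO model with CB != 0 (D = 0)
A = np.array([[1.8, -0.81], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
x0 = np.array([0.1, 0.0])
T = 20

# Markov parameters p_j = C A^{j-1} B, j = 1..T
p = np.array([(C @ np.linalg.matrix_power(A, j - 1) @ B).item() for j in range(1, T + 1)])

# Lower-triangular Toeplitz lifted matrix G of (21)
G = np.zeros((T, T))
for i in range(T):
    G[i, : i + 1] = p[i::-1]

# d(p) = C A^p x0 for p = 1..T (free response, repeated on every trial)
d = np.array([(C @ np.linalg.matrix_power(A, j) @ x0).item() for j in range(1, T + 1)])

# Check Y = G U + d against a direct simulation of (10)
u = np.random.randn(T)
x, y = x0.copy(), np.zeros(T)
for pidx in range(T):
    x = A @ x + B[:, 0] * u[pidx]
    y[pidx] = (C @ x).item()            # stores y(1), ..., y(T), matching the shift in (19)
print(np.allclose(y, G @ u + d))        # True
```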

20 Lifted Model, cont'd The entries in Y_k and d are shifted by one time step (relative degree is unity) to account for the one-step delay in the plant. This ensures that G is invertible. If there is an m > 1 step delay, the above construction generalizes in a natural manner. In the lifted form the time and trial domain dynamics are replaced by an algebraic updating in the trial index only. This means that the along-the-trial dynamics are hidden; the 2D systems approach is a way of avoiding this and enabling simultaneous design for trial-to-trial error convergence and control of the along-the-trial dynamics (more on this later).

21 Lifted Model, cont'd The learning law (14) can also be written in lifted form. The Q-filter Q(q) and learning function L(q) can be non-causal, with impulse responses

Q(q) = ··· + q_{−2} q^2 + q_{−1} q + q_0 + q_1 q^{-1} + q_2 q^{-2} + ···
L(q) = ··· + l_{−2} q^2 + l_{−1} q + l_0 + l_1 q^{-1} + l_2 q^{-2} + ···   (22)

In lifted form

U_{k+1} = Q (U_k + L E_k)   (23)
E_k = Y_d − Y_k   (24)
Y_d = [ y_d(1)  y_d(2)  ...  y_d(T) ]^T   (25)

22 Lifted Model, cont'd

Q = [ q_0      q_{−1}   ...  q_{−(T−1)}
      q_1      q_0      ...  q_{−(T−2)}
      ...      ...           ...
      q_{T−1}  q_{T−2}  ...  q_0 ]   (26)

L = [ l_0      l_{−1}   ...  l_{−(T−1)}
      l_1      l_0      ...  l_{−(T−2)}
      ...      ...           ...
      l_{T−1}  l_{T−2}  ...  l_0 ]   (27)
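
The lifted filter matrices (26)-(27) can be assembled directly with scipy.linalg.toeplitz; a small sketch, assuming the causal coefficients (q_0, q_1, ...) and non-causal coefficients (q_{−1}, q_{−2}, ...) are supplied as two separate arrays (the helper name and example filters are illustrative).

```python
import numpy as np
from scipy.linalg import toeplitz

def lifted_filter_matrix(causal, noncausal, T):
    """Build the T x T Toeplitz matrix of (26)/(27).

    causal    = [h_0, h_1, h_2, ...]    coefficients of q^0, q^-1, q^-2, ...
    noncausal = [h_-1, h_-2, ...]       coefficients of q^1, q^2, ...
    """
    col = np.zeros(T); col[:min(T, len(causal))] = causal[:T]           # first column
    row = np.zeros(T); row[0] = causal[0]
    row[1:1 + min(T - 1, len(noncausal))] = noncausal[:T - 1]           # first row
    return toeplitz(col, row)

# Example: causal L(q) = 0.5 (pure gain) and a symmetric, non-causal 3-tap Q(q)
T = 6
L = lifted_filter_matrix([0.5], [], T)                 # 0.5 times the identity
Q = lifted_filter_matrix([0.5, 0.25], [0.25], T)       # q_0 = 0.5, q_1 = q_{-1} = 0.25
print(Q)
```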

23 Lifted Model, cont'd When Q(q) and L(q) are causal functions

q_{−1} = q_{−2} = ... = 0,  l_{−1} = l_{−2} = ... = 0   (28)

and the matrices Q and L are lower triangular. The matrices G, Q and L are also Toeplitz, i.e., all entries along each diagonal are equal. This setting also extends to linear time-varying systems but the corresponding matrices do not have the Toeplitz structure. Next we introduce the z-transform description.

24 Lifted Model, cont'd The one-sided z-transform of a signal x(j), j = 0, 1, ..., is

X(z) = Σ_{j=0}^{∞} x(j) z^{-j}   (29)

and the z-domain description is obtained by replacing q by z. The frequency response is obtained by setting z = e^{jθ}, θ ∈ [−π, π]. To use the z-transform we need to assume T = ∞. This is not an issue!!

25 Lifted Model, cont'd In z-transform terms the plant and controller dynamics are

Y_k(z) = G(z) U_k(z) + D(z)   (30)
U_{k+1}(z) = Q(z) [U_k(z) + z L(z) E_k(z)]   (31)
E_k(z) = Y_d(z) − Y_k(z)   (32)

The z term in (31) emphasizes the forward time shift.

26 Causality Question: What is causal data for ILC? Definition (Bristow et al.): The ILC law (14) is causal if u_{k+1}(p) depends only on u_k(τ) and the error e_k(τ) for τ ≤ p. It is noncausal if u_{k+1}(p) is also a function of u_k(τ) or e_k(τ) for some τ > p. Critical Fact: Unlike the standard concept of causality, a noncausal ILC law is implementable in practice because the entire time sequence of data is available from all previous trials. Consider the noncausal ILC law

u_{k+1}(p) = u_k(p) + k_p e_k(p + 1)   (33)

and the causal ILC law

u_{k+1}(p) = u_k(p) + k_p e_k(p)   (34)

27 Causality, cont'd Moreover, a disturbance d(p) enters the error as

e_k(p) = y_d(p) − G(q) u_k(p) − d(p)   (35)

Hence the non-causal ILC anticipates the disturbance d(p + 1) and compensates with the control action u_{k+1}(p). The causal ILC law has no anticipation since u_{k+1}(p) compensates for the disturbance d(p) with the same time index p. Causality also has consequences for feedback equivalence, where the final, or converged, control, denoted u_∞, can instead be obtained by a feedback controller. It can be shown that there is a feedback equivalence for causal ILC laws and the equivalent controller can be obtained directly from the ILC law.

28 Causality, cont'd The assertion now is: causal ILC laws are of limited (or no!!) use since the same control action can be obtained by applying the equivalent feedback controller without the learning process. There are, however, critical limitations to this equivalence. The first limitation is the noise-free requirement. Another limitation is that as the ILC performance increases the equivalent feedback controller has increasing gain. In the presence of noise, use of high gain can lead to performance degradation and equipment damage. Hence causal ILC algorithms are still of interest and, in fact, this equivalence was already known in the repetitive process/2D systems literature.

29 Causality, cont'd Critical Fact: The equivalent feedback controller may not be stable!! There is no equivalence for non-causal ILC, as a feedback controller only reacts to errors. P. B. Goldsmith, On the equivalence of causal LTI iterative learning control and feedback control, Automatica, 38(4), 2002. D. H. Owens and E. Rogers, Comments on 'On the equivalence of causal LTI iterative learning control and feedback control', Automatica, 40(5), 2004.

30 Stability and Convergence Analysis

31 Stability and Convergence Analysis We consider the system formed by applying an ILC law of the form (14) to a system described by (11). Note again the stability assumption on the plant dynamics. Definition: The system formed by applying an ILC law of the form (14) to a plant described by (11) is asymptotically stable (AS) if there exists û ∈ R such that

|u_k(p)| ≤ û,  ∀ p = 0, 1, ..., T − 1,  ∀ k ≥ 0   (36)

and lim_{k→∞} u_k(p) exists. The symbol ∀ denotes 'for all'.

32 Stability and Convergence Analysis The limit u_∞ is termed the learned control. In lifted form the controlled dynamics are described by

U_{k+1} = Q(I − LG) U_k + QL(Y_d − d)   (37)

Maths/notation: let H be an h × h matrix with eigenvalues λ_i, 1 ≤ i ≤ h. Then r(H) = max_i |λ_i| is termed its spectral radius, and I denotes the identity matrix with compatible dimensions. Theorem: The system formed by applying an ILC law of the form (14) to plants described by (11) is AS if and only if

r(Q(I − LG)) < 1   (38)
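
A minimal check of the necessary and sufficient condition (38), assuming the lifted matrices G, Q and L are available as numpy arrays; the example G is an illustrative lower-triangular Toeplitz matrix, not a specific plant from the slides.

```python
import numpy as np

def is_asymptotically_stable(G, Q, L):
    """Necessary and sufficient AS test (38): r(Q(I - L G)) < 1."""
    M = Q @ (np.eye(G.shape[0]) - L @ G)
    r = np.max(np.abs(np.linalg.eigvals(M)))
    return r < 1.0, r

# Example with the causal choices Q = I, L = 0.5 I and an illustrative lifted G
T = 10
G = np.tril(np.ones((T, T)))             # lifted plant with p_1 = 1
stable, r = is_asymptotically_stable(G, np.eye(T), 0.5 * np.eye(T))
print(stable, r)                         # True, r close to 0.5 (the repeated eigenvalue q_0(1 - l_0 p_1))
```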

33 Stability and Convergence Analysis If Q and L are causal, Q(I − LG) is lower triangular and Toeplitz with repeated eigenvalues

λ = q_0 (1 − l_0 p_1)

Hence stability holds provided

|q_0 (1 − l_0 p_1)| < 1

Note: this condition does not hold if p_1 = 0 (CB = 0). In the z transfer-function domain

U_{k+1}(z) = Q(z)[1 − z L(z) G(z)] U_k(z) + z Q(z) L(z) [Y_d(z) − D(z)]   (39)

34 Stability and Convergence Analysis A sufficient condition for stability of the ILC scheme described by (39) can be obtained by requiring that Q(z)[1 − zL(z)G(z)] satisfies the contraction mapping condition (terminology: is a contraction mapping). For a given T(z) define

‖T(z)‖_∞ = sup_{θ ∈ [−π, π]} |T(e^{jθ})|

where sup denotes the least upper bound (maximum in many cases). Theorem: The system formed by applying an ILC law of the form (14) to a plant described by (11) is AS with T = ∞ if

‖Q(z)[1 − z L(z) G(z)]‖_∞ < 1   (40)
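
A sketch of the sufficient condition (40) evaluated on a dense frequency grid; the plant G(z), learning function L(z) and Q-filter used below are illustrative choices, not taken from the slides.

```python
import numpy as np

def linf_gain(Q, L, G, n=2048):
    """Evaluate sup over theta of |Q(e^{j theta})[1 - e^{j theta} L G]|, the left side of (40)."""
    z = np.exp(1j * np.linspace(-np.pi, np.pi, n))
    return np.max(np.abs(Q(z) * (1.0 - z * L(z) * G(z))))

# Illustrative choices: G(z) = 0.5/(z - 0.5), constant learning gain L = 1, no Q-filter
G = lambda z: 0.5 / (z - 0.5)
L = lambda z: np.ones_like(z)
Q = lambda z: np.ones_like(z)
gamma = linf_gain(Q, L, G)
print(gamma, gamma < 1.0)            # about 0.67 < 1, so the sufficient condition (40) holds
```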

35 Stability and Convergence Analysis When Q(z) and L(z) are causal this last condition also implies AS for finite-duration ILC. The condition (40) is sufficient but not necessary and in general can be much more conservative than the necessary and sufficient condition. The 2D systems setting (see later) will bring Linear Matrix Inequalities into the analysis. Next we examine performance, where there are two issues: trial-to-trial error convergence (k) and along-the-trial performance (p). One of many questions: what are the consequences of monotonic trial-to-trial error convergence?

36 Performance If the controlled system is AS, the error as k → ∞ (asymptotic error) is

e_∞(p) = lim_{k→∞} e_k(p) = lim_{k→∞} ( y_d(p) − G(q) u_k(p) − d(p) ) = y_d(p) − G(q) u_∞(p) − d(p)   (41)

One method of assessing performance is to compare e_∞(p) with e_0(p), either qualitatively or quantitatively by, for example, the Root Mean Square (RMS) error. If the controlled system is AS then for the lifted system

E_∞ = [ I − G(I − Q(I − LG))^{-1} QL ] (Y_d − d)   (42)
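
A sketch that evaluates the converged error (42) in lifted form, using the reconstruction above, and cross-checks it by iterating the lifted update (37) until it settles; all matrices and signals are small illustrative choices.

```python
import numpy as np

T = 8
p = 0.9 ** np.arange(T)                                  # illustrative Markov parameters, p_1 = 1
G = np.zeros((T, T))
for i in range(T):
    G[i, : i + 1] = p[i::-1]
L = 0.5 * np.eye(T)
Q = 0.95 * np.eye(T)                                     # Q != I, so a nonzero converged error is expected
Yd = np.ones(T)
d = np.zeros(T)
I = np.eye(T)

# Closed-form converged error (42)
E_inf = (I - G @ np.linalg.inv(I - Q @ (I - L @ G)) @ Q @ L) @ (Yd - d)

# Cross-check by iterating the lifted update (37)
U = np.zeros(T)
for _ in range(2000):
    U = Q @ (I - L @ G) @ U + Q @ L @ (Yd - d)
print(np.allclose(E_inf, Yd - d - G @ U))                # True
```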

37 Performance In z transfer-function terms

E_∞(z) = [ (1 − Q(z)) / (1 − Q(z)[1 − z L(z) G(z)]) ] [Y_d(z) − D(z)]   (43)

Essentially, these results can be obtained by replacing k with ∞ and then solving for e_∞ and E_∞(z). Is it possible to design for e_∞ = 0? Theorem: If G(q) and L(q) are not identically zero, then for the system formed by applying an ILC law of the form (14) to a plant described by (11), e_∞(p) = 0 for all p and for all y_d and d if and only if AS holds and Q(q) = 1.

38 Performance

39 Performance Many ILC laws set Q(q) = 1 and hence do not include Q-filtering. The last theorem shows this choice is required for trial-to-trial error convergence to zero (perfect tracking). Q-filtering can, however, improve transient learning and robustness. To explore further, consider selecting Q as an ideal low-pass filter with unity magnitude at low frequencies θ ∈ [0, θ_0] and zero magnitude for θ ∈ (θ_0, π]. For this ideal low-pass filter, using (43), E_∞(e^{jθ}) = 0 for θ ∈ [0, θ_0] and equal to Y_d(e^{jθ}) − D(e^{jθ}) for θ ∈ (θ_0, π]. For those frequencies where Q(e^{jθ}) = 1 perfect tracking results, and for those where Q(e^{jθ}) = 0, the ILC is effectively switched off. Hence the Q-filter can be used to determine which frequencies are emphasized in the design.

40 Transient Learning Here we are concerned with trial-to-trial error convergence. The following example is from Bristow et al. Plant dynamics and control law:

G(q) = q / (q − 0.9)^2,  u_{k+1}(p) = u_k(p) + 0.5 e_k(p + 1)   (44)

In this case p_1 = 1, q_0 = 1 and l_0 = 0.5. Q and L are causal and all eigenvalues of the lifted system are 0.5. Hence the controlled system is AS.

41 Transient Learning In this case Q = 1 and hence e_∞ = 0. Take the trial duration as T = 50. Running a simulation shows that over the first 12 trials the trial-to-trial error, measured by the Euclidean or 2-norm, grows by over nine orders of magnitude. This example shows the large trial-to-trial error growth that can arise in this form of ILC. This large growth is problematic since neither the rate nor the magnitude is closely related to the stability condition; the lifted system eigenvalue is well within the stability region.
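
A sketch reproducing the flavour of this example in lifted form: G built from the Markov parameters of G(q) = q/(q − 0.9)^2 with T = 50, the law (44) with Q = 1, and the 2-norm of the error over trials. The reference (a unit step) is an illustrative choice since the slides do not specify one, so the exact figures will differ from those quoted.

```python
import numpy as np

T = 50
j = np.arange(1, T + 1)
p = j * 0.9 ** (j - 1)                  # Markov parameters of G(q) = q/(q - 0.9)^2
G = np.zeros((T, T))
for i in range(T):
    G[i, : i + 1] = p[i::-1]            # lifted plant matrix (21)

M = np.eye(T) - G @ (0.5 * np.eye(T))   # error recursion for (44) with Q = 1: e_{k+1} = (I - 0.5 G) e_k
print("spectral radius:", np.max(np.abs(np.diag(M))))  # triangular matrix: eigenvalues are its diagonal, all 0.5

e0 = np.ones(T)                         # e_0 for an illustrative unit-step reference (u_0 = 0)
e = e0.copy()
for k in range(1, 13):
    e = M @ e
    print(f"trial {k:2d}: ||e_k||_2 = {np.linalg.norm(e):.3e}")
print("growth after 12 trials:", np.linalg.norm(e) / np.linalg.norm(e0))
# Huge trial-to-trial growth despite every eigenvalue being well inside the unit circle.
```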

42 Transient Learning/Monotonic Convergence It is also difficult to distinguish error growth from instability due to the very large initial growth rate and magnitude. Later we will see that the 2D systems based design can prevent this problem, but at the possible cost of a conservative design. To avoid these problems, monotonic convergence is desirable. For any given norm ‖·‖, the system considered is monotonically convergent if

‖e_∞ − e_{k+1}‖ ≤ η ‖e_∞ − e_k‖,  k = 1, 2, ...   (45)

where 0 ≤ η < 1 is the convergence rate.

43 Monotonic Error Convergence Write

E_∞ − E_{k+1} = G Q (I − LG) G^{-1} (E_∞ − E_k)   (46)

When G(q), Q(q) and L(q) are causal, the matrices G, Q and L commute and (46) becomes

E_∞ − E_{k+1} = Q (I − LG) (E_∞ − E_k)   (47)

In the z-domain

E_∞(z) − E_{k+1}(z) = Q(z)(1 − z L(z) G(z)) (E_∞(z) − E_k(z))   (48)

44 Monotonic Error Convergence The (non-zero) singular values of a matrix, say H, are given by taking the positive square roots of the eigenvalues of HH^T or H^T H. Let σ(·) denote the maximum singular value of a matrix. Then from (46) and (47) we have the following result. Theorem: If the following condition holds for the system formed by applying an ILC law of the form (14) to plants described by (11)

γ_1 = σ(G Q (I − LG) G^{-1}) < 1   (49)

then

‖e_∞ − e_{k+1}‖_2 < γ_1 ‖e_∞ − e_k‖_2   (50)

for all k = 1, 2, ..., where ‖·‖_2 denotes the Euclidean norm.
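
A sketch of the test (49): γ_1 computed as a maximum singular value with numpy, shown here for the transient-growth example above (Q = I, L = 0.5 I, T = 50), where it turns out to be well above one even though the spectral radius is only 0.5.

```python
import numpy as np

def gamma_1(G, Q, L):
    """Maximum singular value of G Q (I - L G) G^{-1}, the left side of (49)."""
    I = np.eye(G.shape[0])
    return np.linalg.norm(G @ Q @ (I - L @ G) @ np.linalg.inv(G), 2)

# Transient-growth example again: G(q) = q/(q - 0.9)^2, Q = I, L = 0.5 I
T = 50
j = np.arange(1, T + 1)
p = j * 0.9 ** (j - 1)
G = np.zeros((T, T))
for i in range(T):
    G[i, : i + 1] = p[i::-1]
print(gamma_1(G, np.eye(T), 0.5 * np.eye(T)))   # well above 1: AS holds but convergence is not monotonic
```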

45 Monotonic Error Convergence Theorem: If the following condition holds for the system formed by applying an ILC law of the form (14) to plants described by (11) with T = ∞

γ_2 = ‖Q(z)[1 − z L(z) G(z)]‖_∞ < 1   (51)

then for all k = 1, 2, ...

‖E_∞(z) − E_{k+1}(z)‖_∞ < γ_2 ‖E_∞(z) − E_k(z)‖_∞   (52)

If Q(z) and L(z) are causal then (51) also implies that

‖e_∞ − e_{k+1}‖_2 < γ_2 ‖e_∞ − e_k‖_2   (53)

for all k = 1, 2, ... and finite T.

46 Monotonic Error Convergence The z-domain monotonic convergence condition (51) is equivalent to the stability condition (40). Hence when Q(z) and L(z) are causal, this stability condition also guarantees monotonic trial-to-trial error convergence independent of T. The lifted system monotonic convergence condition is more strict than the stability condition and both are specific to T. In the presence of AS the worst-case learning transient can be bounded above by a decaying geometric function

‖e_∞ − e_k‖_2 ≤ k γ^k ‖e_∞ − e_0‖_2   (54)

with γ < 1. This is a well-known result in discrete-time linear systems theory and, in this setting, the bound is a function of T.

47 Robustness Model uncertainties are a fact of life in ILC as in all other areas. Robust ILC is a large problem area and we will revisit it later, after the initial discussion given next. Question: does a given AS ILC scheme remain AS under plant perturbations? Consider the case of Q(q) = 1, resulting in e_∞ = 0, and causal L(q). The stability condition in this case is

|1 − l_0 p_1| < 1

Hence if l_0 and p_1 are nonzero the ILC scheme is AS if and only if

sgn(p_1) = sgn(l_0) and l_0 p_1 < 2

48 Robustness As a consequence, ILC can achieve e_∞ = 0 using only knowledge of the sign of p_1 and an upper bound on p_1. Perturbations in the higher order Markov parameters do not destabilize!! Also, a large upper bound for p_1 can be accommodated by selecting l_0 suitably small. Hence ILC is robust to all perturbations that do not alter the sign of p_1. Fact: robust stability does not imply acceptable learning transients.

49 Robustness Consider the uncertain plant description (multiplicative uncertainty)

G(q) = Ĝ(q)[1 + W(q) Δ(q)]   (55)

where Ĝ(q) is the nominal model, W(q) is known and stable, and Δ(q) is unknown but stable with ‖Δ(z)‖_∞ < 1. Theorem: If

|W(e^{jθ})| ≤ ( γ − |Q(e^{jθ})[1 − e^{jθ} L(e^{jθ}) Ĝ(e^{jθ})]| ) / |Q(e^{jθ}) e^{jθ} L(e^{jθ}) Ĝ(e^{jθ})|

for all θ ∈ [−π, π], then the ILC system formed by applying an ILC law of the form (14) to plants described by (11) and (55) with T = ∞ is asymptotically convergent with convergence rate γ.
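
A sketch of the frequency-wise robustness test above, assuming the same illustrative nominal model as in the earlier frequency-domain sketch (Ĝ(z) = 0.5/(z − 0.5), L = 1, Q = 1) and a hypothetical constant uncertainty weight W; it simply checks the bound at every grid frequency for a target rate γ.

```python
import numpy as np

def robust_rate_ok(Q, L, Ghat, W, gamma, n=2048):
    """Check |W| <= (gamma - |Q(1 - z L Ghat)|) / |Q z L Ghat| at every grid frequency."""
    z = np.exp(1j * np.linspace(-np.pi, np.pi, n))
    nominal = np.abs(Q(z) * (1.0 - z * L(z) * Ghat(z)))      # nominal convergence factor
    margin = (gamma - nominal) / np.abs(Q(z) * z * L(z) * Ghat(z))
    return bool(np.all(np.abs(W(z)) <= margin))

Ghat = lambda z: 0.5 / (z - 0.5)
L = lambda z: np.ones_like(z)
Q = lambda z: np.ones_like(z)
W = lambda z: 0.1 * np.ones_like(z)      # hypothetical multiplicative-uncertainty weight

print(robust_rate_ok(Q, L, Ghat, W, gamma=0.9))   # True: rate 0.9 is robustly guaranteed for this W
```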

50 Robustness Unlike robust stability, the monotonic robustness condition also depends on the dynamics of G(q), Q(q) and L(q). The most direct means of increasing the allowable uncertainty bound |W(e^{jθ})| at any given θ is to decrease the Q-filter gain. There is a trade-off between performance and robustness! Other robustness issues, e.g., noise, will be covered later.
