Lifted approach to ILC/Repetitive Control


Okko H. Bosgra (TU Delft, Delft Centre for Systems and Control) and Maarten Steinbuch (TU/e, Control System Technology)
Dutch Institute of Systems and Control (DISC), winter semester 2003/2004

Contents
- Formulation of the lifted approach
- Internal model principle
- Design of ILC without Q-filter using LQ theory
- Analysis of existing design results
- Analysis of Repetitive Control: properties, convergence, performance

classical ILC structure 1

[Block diagram: feedback loop of plant P(z) and controller C(z) with reference y_ref, disturbance d, output y_k and error ε_k; the learning filters L(z) and Q(z) and a one-trial delay z^{-N} map f_k to f_{k+N}.]

Update in trial space: f_{k+N} = Q(z) (f_k + L(z) ε_k)

classical ILC structure 2

y_ref and d are assumed to be periodic inputs with period length N.

[Block diagram: feedback loop of C(z) and P(z) with control input f_k, disturbance input y_ref - d, and output ε_k.]

Input/output relationship: ε_k = (I + P C)^{-1} (y_ref - d) - (I + P C)^{-1} P f_k

System formulation 1

[Block diagram: y_ref - d enters through S(z), f_k enters through P(z), and together they form ε_k.]

Define P(z) := (I + P C)^{-1} P and S(z) := (I + P C)^{-1}.

System formulation 2

Better to represent these relations in the time domain. Let (A_p, B_p, C_p) be a minimal realization for P(z) and (A_c, B_c, C_c, D_c) a minimal realization for C(z):

P(z) = C_p (zI - A_p)^{-1} B_p
C(z) = C_c (zI - A_c)^{-1} B_c + D_c

i.e. P(z) strictly proper, C(z) proper.

State-space representation for the joint feedback system:

x_{k+1} = A x_k + B f_k + N r_k, where r_k = y_ref - d
ε_k = C x_k + r_k

System formulation 3

$$A = \begin{bmatrix} A_p - B_p D_c C_p & B_p C_c \\ -B_c C_p & A_c \end{bmatrix}, \quad
B = \begin{bmatrix} B_p \\ 0 \end{bmatrix}, \quad
N = \begin{bmatrix} B_p D_c \\ B_c \end{bmatrix}, \quad
C = \begin{bmatrix} -C_p & 0 \end{bmatrix}$$

[Block diagram: inputs r_k and f_k enter the system (A, B, C, N), which produces ε_k.]
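
A minimal numerical sketch of this construction (not part of the original slides): given discrete-time realizations (Ap, Bp, Cp) and (Ac, Bc, Cc, Dc), it assembles the block matrices above. Function and variable names are illustrative only.

```python
import numpy as np

def closed_loop_realization(Ap, Bp, Cp, Ac, Bc, Cc, Dc):
    """Joint feedback system: x_{k+1} = A x_k + B f_k + N r_k, eps_k = C x_k + r_k."""
    n_c = Ac.shape[0]
    A = np.block([[Ap - Bp @ Dc @ Cp, Bp @ Cc],
                  [-Bc @ Cp,          Ac     ]])
    B = np.vstack([Bp, np.zeros((n_c, Bp.shape[1]))])
    N = np.vstack([Bp @ Dc, Bc])
    C = np.hstack([-Cp, np.zeros((Cp.shape[0], n_c))])
    return A, B, C, N
```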

System formulation 4

Split-up of the system: r_k is periodic; its response r̄_k through (A, N, C) is asymptotically periodic.

[Block diagram: r_k drives (A, N, C) to give r̄_k; f_k drives (A, B, C); the two outputs add to give ε_k.]

Simplification: consider r̄_k as being periodic.

System formulation 5

[Block diagram: r̄_k and f_k drive (A, B, C), producing ε_k.]

Servo problem ILC: x_0 = 0, x_N = 0, x_2N = 0, x_3N = 0, ...

Typical pattern of r̄_k: pick and place.

[Plot: r̄_k versus time, repeating the same profile on the intervals [0, N), [N, 2N), [2N, 3N), ...]

System formulation 6

Periodic disturbance suppression: Repetitive Control.

Typical pattern of r̄_k:

[Plot: r̄_k versus time, a periodic disturbance over the intervals [0, N), [N, 2N), [2N, 3N), ...; e.g. drive control with a nonuniform torque pattern.]

x_0 = 0; x_N, x_2N, x_3N, ... follow from the RC algorithm.

Thus ILC and RC require slightly different formulations.

System formulation in trial domain 1

A causal LTI system (A, B, C) yields the impulse response h_k ∈ R^{m×l}:

h_k = C A^{k-1} B,  k = 1, 2, 3, ...
h_k = 0,  k ≤ 0

The system has m outputs and l inputs. Let j = 0, 1, 2, ... denote the trial number and define

$$y_j = \begin{bmatrix} y_{Nj} \\ y_{Nj+1} \\ y_{Nj+2} \\ \vdots \\ y_{Nj+N-1} \end{bmatrix}, \quad
f_j = \begin{bmatrix} f_{Nj} \\ f_{Nj+1} \\ f_{Nj+2} \\ \vdots \\ f_{Nj+N-1} \end{bmatrix}, \quad
r = \begin{bmatrix} r_0 \\ r_1 \\ r_2 \\ \vdots \\ r_{N-1} \end{bmatrix}, \quad
\varepsilon_j = \begin{bmatrix} \varepsilon_{Nj} \\ \varepsilon_{Nj+1} \\ \varepsilon_{Nj+2} \\ \vdots \\ \varepsilon_{Nj+N-1} \end{bmatrix}$$

System formulation in trial domain 2

The lifted system representation in the trial domain is now:

x_{N(j+1)} = F x_{Nj} + G f_j
y_j = H x_{Nj} + J f_j
ε_j = y_j + r

where

$$J = \begin{bmatrix} h_0 & 0 & 0 & \cdots & 0 \\ h_1 & h_0 & 0 & \cdots & 0 \\ h_2 & h_1 & h_0 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ h_{N-1} & \cdots & h_2 & h_1 & h_0 \end{bmatrix}, \quad
G = \begin{bmatrix} A^{N-1}B & A^{N-2}B & \cdots & AB & B \end{bmatrix}, \quad
H = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{N-1} \end{bmatrix}, \quad
F = A^N$$
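
A sketch of the lifting itself, assuming the closed-loop triple (A, B, C) and trial length N; h_0 is taken to be zero here, since the closed loop defined above is strictly proper. The helper name `lift` is illustrative.

```python
import numpy as np

def lift(A, B, C, N):
    """Lifted trial-domain matrices F, G, H and the lower-triangular Toeplitz J."""
    m, l = C.shape[0], B.shape[1]
    # Markov parameters: h_0 = 0 (strictly proper), h_k = C A^{k-1} B for k >= 1
    h = [np.zeros((m, l))] + [C @ np.linalg.matrix_power(A, k - 1) @ B
                              for k in range(1, N)]
    F = np.linalg.matrix_power(A, N)
    G = np.hstack([np.linalg.matrix_power(A, N - 1 - q) @ B for q in range(N)])
    H = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(N)])
    J = np.zeros((N * m, N * l))
    for i in range(N):                 # block row
        for q in range(i + 1):         # block column (lower triangular)
            J[i*m:(i+1)*m, q*l:(q+1)*l] = h[i - q]
    return F, G, H, J
```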

System formulation in trial domain 3

- time invariance ⇒ J is Toeplitz
- causality ⇒ J is lower triangular
- the Toeplitz matrix J represents a convolution
- (A, B, C) minimal ⇒ (F, G, H) minimal
- in the trial domain, r is a constant disturbance vector
- for ILC, x_k = 0 for k = Nj, j = 0, 1, 2, ..., so the system model reduces to ε_j = J f_j + r

Internal model principle 1

Asymptotic rejection of a constant disturbance is obtained provided that

1. an error-driven dynamic disturbance model is added to the controller;
2. each controller disturbance mode can propagate through the plant, i.e. is not cancelled against a transmission zero;
3. the dynamics of plant and controller are asymptotically stabilised by feedback.

The vector-valued constant disturbance r is generated in the trial domain by an integrator having initial value ic = r.

[Block diagram: delay z^{-1} in a unit feedback loop with initial condition ic = r, output r.]

Internal model principle 2

Adding the disturbance model in the feedback loop with gain L:

[Block diagram: trial-domain loop; the integrator z^{-1} with gain L produces f_j from ε_j, the lifted dynamics (F, G, H, J) with state x_j produce y_j, and ε_j = y_j + r.]

Asymptotic rejection: ε_j → 0 for j → ∞ while r = constant ≠ 0.
Then y_j must compensate r: y_j → -r for j → ∞.

Internal model principle 3

Only possible if the rank of the steady-state gain of (F, G, H, J) is Nm and if f_j has at least dimension Nm.

Closed-loop state-space model in the trial domain:

$$\begin{bmatrix} f_{j+1} \\ x_{j+1} \end{bmatrix} =
\begin{bmatrix} I_{Nl} - LJ & -LH \\ G & F \end{bmatrix}
\begin{bmatrix} f_j \\ x_j \end{bmatrix} +
\begin{bmatrix} -L \\ 0 \end{bmatrix} r, \qquad
\varepsilon_j = \begin{bmatrix} J & H \end{bmatrix} \begin{bmatrix} f_j \\ x_j \end{bmatrix} + r$$

Closed-loop system matrix:

$$\begin{bmatrix} I_{Nl} - LJ & -LH \\ G & F \end{bmatrix} =
\begin{bmatrix} I_{Nl} & 0 \\ G & F \end{bmatrix} -
\begin{bmatrix} I_{Nl} \\ 0 \end{bmatrix} L \begin{bmatrix} J & H \end{bmatrix}$$
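
A small check of this closed-loop structure (a sketch, not from the slides): given lifted F, G, H, J and a candidate gain L, it forms the trial-domain system matrix and returns its spectral radius.

```python
import numpy as np

def trial_domain_closed_loop(F, G, H, J, L):
    """System matrix [[I - LJ, -LH], [G, F]]; asymptotically stable iff rho < 1."""
    Nl = J.shape[1]
    A_cl = np.block([[np.eye(Nl) - L @ J, -L @ H],
                     [G,                  F     ]])
    rho = max(abs(np.linalg.eigvals(A_cl)))
    return A_cl, rho
```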

Internal model principle 4

The system is controllable if the controllability matrix has rank Nl + n:

$$\operatorname{rank}\begin{bmatrix} I_{Nl} & I_{Nl} & I_{Nl} & I_{Nl} & \cdots \\ 0 & G & G + FG & G + FG + F^2G & \cdots \end{bmatrix}
= \operatorname{rank}\begin{bmatrix} I_{Nl} & 0 & 0 & 0 & \cdots \\ 0 & G & FG & F^2G & \cdots \end{bmatrix} = Nl + n$$

The system is observable if the observability matrix has rank Nl + n:

$$\operatorname{rank}\begin{bmatrix} J & H \\ J + HG & HF \\ J + HG + HFG & HF^2 \\ \vdots & \vdots \end{bmatrix}
= \operatorname{rank}\left( \begin{bmatrix} I_{Nm} & 0 \\ I_{Nm} & H \\ I_{Nm} & H + HF \\ \vdots & \vdots \end{bmatrix}
\begin{bmatrix} J & H \\ G & F - I_n \end{bmatrix} \right)$$

This requires both factors to have rank at least Nl + n:

Internal model principle 5

$$\operatorname{rank}\begin{bmatrix} I_{Nm} & 0 \\ I_{Nm} & H \\ I_{Nm} & H + HF \\ I_{Nm} & H + HF + HF^2 \\ \vdots & \vdots \end{bmatrix}
= \operatorname{rank}\begin{bmatrix} I_{Nm} & 0 \\ 0 & H \\ 0 & HF \\ 0 & HF^2 \\ \vdots & \vdots \end{bmatrix} = Nm + n$$

This requires m ≥ l, i.e. the number of outputs ≥ the number of inputs.

$$\operatorname{rank}\begin{bmatrix} J & H \\ G & F - I_n \end{bmatrix}
= \operatorname{rank}\begin{bmatrix} J + H(I_n - F)^{-1}G & 0 \\ 0 & I_n \end{bmatrix}
= n + \operatorname{rank}\left( J + H(zI_n - F)^{-1}G \right)\Big|_{z=1}$$

The Nm × Nl plant steady-state gain matrix must have rank Nl.

Internal model principle 6

If (F, G, H) is of order zero (ILC case), then the Nm × Nl matrix J must have rank Nl, i.e. its columns must be linearly independent.

If (F, G, H) is of order greater than zero (RC case), then we may assume F = A^N ≈ 0. The requirement is then rank(J + HG) = Nl, where

$$HG = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{N-1} \end{bmatrix}
\begin{bmatrix} A^{N-1}B & \cdots & AB & B \end{bmatrix} =
\begin{bmatrix} h_N & \cdots & h_3 & h_2 & h_1 \\ h_{N+1} & \cdots & h_4 & h_3 & h_2 \\ \vdots & & & & \vdots \\ h_{2N-1} & \cdots & & h_{N+1} & h_N \end{bmatrix}$$

The contribution to the rank comes from the upper-right part: the influence of the previous trial.
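
A sketch of this rank test for the RC case, reusing the hypothetical `lift` helper from the earlier sketch and assuming F = A^N ≈ 0 so that the previous trial enters through HG.

```python
import numpy as np

def rc_rank_condition(A, B, C, N):
    """Check rank(J + H G) = N*l, the condition under the approximation F = A^N ~ 0."""
    F, G, H, J = lift(A, B, C, N)
    Nl = J.shape[1]
    return np.linalg.matrix_rank(J + H @ G) == Nl
```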

Internal model principle 7

Result: Suppose L asymptotically stabilizes the system. Then the disturbance r is asymptotically rejected if m ≤ l.

Proof: L asymptotically stabilizes all dynamics:

$$\left| \lambda_i \begin{bmatrix} I_{Nl} - LJ & -LH \\ G & F \end{bmatrix} \right| < 1, \qquad i = 1, 2, \ldots, Nl + n$$

which implies

$$\operatorname{rank}\begin{bmatrix} LJ & LH \\ -G & I_n - F \end{bmatrix} = Nl + n$$

i.e. this (Nl + n) × (Nl + n) matrix is invertible.

For an asymptotically stable transfer function matrix P(z), the steady-state gain matrix is P(z)|_{z=1}.

Internal model principle 8

Steady-state gain matrix between r and ε_j:

$$\Omega := I_{Nm} + \begin{bmatrix} J & H \end{bmatrix}
\left[ I_{Nl+n} - \begin{bmatrix} I_{Nl} - LJ & -LH \\ G & F \end{bmatrix} \right]^{-1}
\begin{bmatrix} -L \\ 0 \end{bmatrix}
= I_{Nm} - \begin{bmatrix} J & H \end{bmatrix}
\begin{bmatrix} LJ & LH \\ -G & I_n - F \end{bmatrix}^{-1}
\begin{bmatrix} L \\ 0 \end{bmatrix}$$

The rank of this steady-state gain matrix Ω follows from

$$\operatorname{rank}\begin{bmatrix} I_{Nm} & J & H \\ L & LJ & LH \\ 0 & -G & I_n - F \end{bmatrix}
= \operatorname{rank}\begin{bmatrix} I_{Nm} & \Gamma_1 & \Gamma_2 \\ L & I_{Nl} & 0 \\ 0 & 0 & I_n \end{bmatrix}$$

where

$$\Gamma = \begin{bmatrix} \Gamma_1 & \Gamma_2 \end{bmatrix} = \begin{bmatrix} J & H \end{bmatrix}
\begin{bmatrix} LJ & LH \\ -G & I_n - F \end{bmatrix}^{-1}$$

Internal model principle 9

The rank of this expression now equals

$$\operatorname{rank}\begin{bmatrix} \Omega & 0 & 0 \\ 0 & I_{Nl} & 0 \\ 0 & 0 & I_n \end{bmatrix} = \operatorname{rank} \Omega + Nl + n$$

From its structure, the rank of the first matrix satisfies

$$\operatorname{rank}\begin{bmatrix} I_{Nm} & J & H \\ L & LJ & LH \\ 0 & -G & I_n - F \end{bmatrix} \le Nm + n$$

(its second block row equals L times its first block row), so that rank Ω ≤ Nm - Nl.

There is asymptotic rejection of r if this rank is zero, i.e. for m ≤ l, or: the number of inputs ≥ the number of outputs.

Internal model principle: conclusion 10

J has dimension Nm × Nl. Feedback L requires:

- system controllable: always
- system observable: if m ≥ l and rank(J + HG) = Nl, i.e. HG can contribute to making J full rank
- disturbance rejection: if m ≤ l

Thus the requirements are: m = l and (J + HG) square and of full rank.

If they are not satisfied: not all modes are asymptotically stable, and r is not fully compensated.

Iterative learning control 1

For ILC, the initial state in each trial is zero, which applies in machine operations like pick-and-place tasks. Thus the system is

[Block diagram: trial-domain loop; the integrator z^{-1} with gain L produces f_j from ε_j, the static map J produces y_j, and ε_j = y_j + r.]

Iterative learning control 2

- the controller contains all dynamics in the system: no Q-filter, thus uncompromised convergence
- J is square, but not necessarily of full rank
- convergence if the eigenvalues of I - LJ are smaller than 1 in magnitude; faster convergence for smaller eigenvalues of I - LJ
- L can be time-varying and non-causal, i.e. there is no reason to restrict L to be Toeplitz
- additive noise disturbances may act on J
- if J is of full rank, the feedback system is first order with state feedback, so good robustness properties are attainable

Iterative learning control 3

System relations:

f_{j+1} = (I_{Nl} - LJ) f_j - L r    (1)
ε_j = J f_j + r    (2)

Asymptotically stable if |λ_i(I_{Nl} - LJ)| < 1, i = 1, 2, ..., Nl.

This implies that LJ is non-singular, so non-singularity of LJ is necessary for the system to be asymptotically stable.

Transfer function matrix between r and ε_j (sensitivity function):

ε_j = [I_{Nl} - J (zI_{Nl} - I_{Nl} + LJ)^{-1} L] r

For z = 1 we have the steady-state gain I_{Nl} - J (LJ)^{-1} L.
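
A minimal simulation of relations (1)-(2), a sketch assuming a square, invertible J; names are illustrative. With the dead-beat choice L = J^{-1}, the error is zero from the second trial onwards.

```python
import numpy as np

def ilc_iterate(J, L, r, n_trials=20):
    """Iterate f_{j+1} = (I - LJ) f_j - L r, eps_j = J f_j + r, from f_0 = 0."""
    Nl = J.shape[1]
    f = np.zeros(Nl)
    error_norms = []
    for _ in range(n_trials):
        eps = J @ f + r
        error_norms.append(np.linalg.norm(eps))
        f = (np.eye(Nl) - L @ J) @ f - L @ r
    # error_norms -> 0 when all eigenvalues of I - LJ lie inside the unit circle
    return error_norms

# example usage (dead-beat): ilc_iterate(J, np.linalg.inv(J), r)
```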

Iterative learning control 4

Thus LJ non-singular and zero steady-state gain require J and L to be square and non-singular.

If J is singular: stabilization and full rejection of r are not possible.

Two reasons for loss of rank of J:
- a strictly proper system, or additional delays
- non-minimum phase zeros in the i/o behaviour of the system

If the Markov parameters h_0, h_1, h_2, ..., h_{d-1} are zero, then rank J is at most (N - d)m.

In that case: no full stabilization, no full rejection.

ILC example 1

$$J = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 0 \end{bmatrix}, \qquad
\operatorname{im} J = \operatorname{im}\begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 1 & 1 \end{bmatrix}
= \operatorname{im}\begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad
r = \begin{bmatrix} 1 \\ 1 \\ 4 \end{bmatrix}$$

- J is the output matrix in a system with unit system matrix
- the third mode is unobservable
- feedback has no effect on the pole location of the third mode
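
A quick numerical check of this example (a sketch; the pair (I, J) is the trial-domain system with unit system matrix and output matrix J):

```python
import numpy as np

J = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [1., 1., 0.]])
# The observability matrix of the pair (I, J) is just J stacked on itself,
# so its rank equals rank(J) = 2 < 3: the third mode is unobservable,
# and feedback from the error cannot move its pole.
obs = np.vstack([J, J @ np.eye(3), J @ np.eye(3)])
print(np.linalg.matrix_rank(obs), np.linalg.matrix_rank(J))   # 2 2
```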

ILC example 2.1

$$H(z) = \frac{z+2}{z(z-1)} = \frac{-2}{z} + \frac{3}{z-1}$$

$$\begin{bmatrix} zI - A & B \\ C & 0 \end{bmatrix} =
\begin{bmatrix} z & 0 & -2 \\ 0 & z-1 & 3 \\ 1 & 1 & 0 \end{bmatrix}$$

zero at z = -2; impulse response h = (0, 1, 3, 3, 3, ...)

observability matrix

$$H = \begin{bmatrix} 1 & 1 \\ 0 & 1 \\ 0 & 1 \\ \vdots & \vdots \\ 0 & 1 \end{bmatrix}$$

ILC example 2.2

$$J = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 3 & 1 & 0 & 0 & 0 \\ 3 & 3 & 1 & 0 & 0 \\ 3 & 3 & 3 & 1 & 0 \end{bmatrix}$$

By the definition of zeros there exist an initial state x_0 and an exponential input signal containing the zero as exponential factor, such that the resulting output is zero:

$$\begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 3 & 1 & 0 & 0 & 0 \\ 3 & 3 & 1 & 0 & 0 \\ 3 & 3 & 3 & 1 & 0 \end{bmatrix}
\begin{bmatrix} 1 \\ -2 \\ 4 \\ -8 \\ 16 \end{bmatrix} +
\begin{bmatrix} 1 & 1 \\ 0 & 1 \\ 0 & 1 \\ 0 & 1 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 \\ -1 \end{bmatrix} = 0$$

ILC example 2.3

or J f_j + H x_0 = 0.

Then an upper bound on the smallest non-zero singular value of J is given by

σ(J) < ||H x_0|| / ||f_j||

For a non-minimum phase zero, ||f_j|| grows for larger dimensions N, forcing σ(J) to be small.

Thus delays and non-minimum phase zeros create loss of rank for J.
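
A sketch quantifying this effect for the example system H(z) = (z+2)/(z(z-1)) with the realization A = diag(0, 1), B = [-2, 3]^T, C = [1, 1] used above: one singular value of J is exactly zero because of the delay, and the next-smallest shrinks rapidly with N because of the zero at z = -2.

```python
import numpy as np

A = np.diag([0., 1.])
B = np.array([[-2.], [3.]])
C = np.array([[1., 1.]])

def markov(k):
    # h_0 = 0, h_k = C A^{k-1} B for k >= 1
    return 0.0 if k == 0 else (C @ np.linalg.matrix_power(A, k - 1) @ B).item()

for N in (5, 10, 15, 20):
    J = np.array([[markov(i - q) if i >= q else 0.0 for q in range(N)]
                  for i in range(N)])
    svals = np.linalg.svd(J, compute_uv=False)
    # svals[-1] ~ 0 (delay); svals[-2] shrinks rapidly with N (zero at z = -2)
    print(N, svals[-1], svals[-2])
```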

ILC singular values 1

If J is square but rank-deficient, the singular value decomposition provides the rank as the dimension of Σ_1:

$$J = \begin{bmatrix} U_1 & U_2 \end{bmatrix}
\begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix}
\begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix}$$

where Σ_2 is zero. Then decompose f, the input to J, as

J f = J f_1 + J f_2, where f_1 ∈ im V_1, f_2 ∈ im V_2.

Thus J f = U_1 Σ_1 V_1^T f_1, as V_1^T f_2 = 0 and V_2^T f_1 = 0.

ILC singular values 2

Replace f_1 by V_1 f, i.e. a dimension reduction of the state.

[Block diagram: trial-domain loop with input u_j, integrator z^{-1}, reduced state f_j, static map J V_1 producing y_j, error ε_j, and gain L.]

All poles can now be stabilised or assigned.

ILC pole assignment

f_{j+1} = (I - L J V_1) f_j - L r,  f_0 = 0
ε_j = J V_1 f_j + r

If

L J V_1 = α I   or   L U_1 Σ_1 V_1^T V_1 = α I

then

L = α Σ_1^{-1} U_1^T

If α = 1 the design is dead-beat. The output y follows r only in the subspace im J = im U_1.
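
A sketch of this assignment (assuming a possibly rank-deficient square J): the SVD supplies U_1, Σ_1 and V_1, and α = 1 gives the dead-beat gain on im V_1. Function and variable names are illustrative.

```python
import numpy as np

def svd_learning_gain(J, alpha=1.0, tol=1e-9):
    """L = alpha * Sigma_1^{-1} U_1^T, so that L J V_1 = alpha * I."""
    U, s, Vt = np.linalg.svd(J)
    r = int(np.sum(s > tol))              # numerical rank = dim(Sigma_1)
    U1, s1, V1 = U[:, :r], s[:r], Vt[:r, :].T
    L = alpha * (U1 / s1).T               # alpha * Sigma_1^{-1} U_1^T
    poles = np.linalg.eigvals(np.eye(r) - L @ J @ V1)   # all equal to 1 - alpha
    return L, V1, poles
```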

ILC LQ optimal control 1

Alternative for pole assignment: LQ optimal control.

f_{j+1} = f_j + u_j
y_j = J V_1 f_j

LQ criterion:

$$Cr = \sum_{k=1}^{\infty} \left( y_k^T Q\, y_k + u_k^T R\, u_k \right)
= \sum_{k=1}^{\infty} \left( f_k^T V_1^T J^T Q J V_1 f_k + u_k^T R\, u_k \right)$$

Useful choice: Q = I, R = βI.

ILC LQ optimal control 2

Solution to the LQ-optimal control problem with unit system and input matrices:

u_j = -(βI + P)^{-1} P f_j

where P is the stabilizing solution of the algebraic Riccati equation

P = P + V_1^T J^T J V_1 - P (βI + P)^{-1} P

or

0 = Σ_1^2 - P (βI + P)^{-1} P

Thus P is diagonal, with entries p_i on the diagonal, and Σ_1 = diag(σ_i):

$$p_i = \tfrac{1}{2} \sigma_i^2 \left( 1 + \sqrt{1 + \frac{4\beta}{\sigma_i^2}} \right)$$

ILC LQ optimal control 3

$$u_j = -(\beta I + \Sigma_1^2 + \beta I)^{-1} (\Sigma_1^2 + \beta I) f_j = -L J V_1 f_j$$

Solving for L gives the result

$$L = (2\beta I + \Sigma_1^2)^{-1} (\Sigma_1 + \beta \Sigma_1^{-1}) U_1^T$$
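
A sketch of this LQ design (not from the slides): it uses the diagonal Riccati solution p_i from the previous slide for the reduced system f_{j+1} = f_j + u_j, y_j = J V_1 f_j, with Q = I and R = βI. The parameter `beta` and the function name are illustrative.

```python
import numpy as np

def lq_learning_gain(J, beta, tol=1e-9):
    """LQ-optimal learning gain for the reduced trial-domain system (Q = I, R = beta*I)."""
    U, s, Vt = np.linalg.svd(J)
    r = int(np.sum(s > tol))
    U1, sigma = U[:, :r], s[:r]
    # stabilizing diagonal solution of 0 = Sigma_1^2 - P (beta*I + P)^{-1} P
    p = 0.5 * sigma**2 * (1.0 + np.sqrt(1.0 + 4.0 * beta / sigma**2))
    k = p / (beta + p)                    # state feedback: u_j = -diag(k) f_j
    L = (k / sigma)[:, None] * U1.T       # so that L J V_1 = diag(k)
    return L, p
```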