Robust Multivariable Control


Lecture 2
Anders Helmersson, anders.helmersson@liu.se
ISY/Reglerteknik, Linköpings universitet

Today's topics
- Norms
- Representation of dynamic systems
- Lyapunov equations
- Gramians
- Balancing
- Model reduction
- H2 and H∞ norms

Small gain theorem
If G is stable and ||G|| < 1, then the closed-loop system is stable. Which norm should we use?

Norms
p-norms for vectors:
||x||_p = ( Σ_i |x_i|^p )^(1/p)
Most common:
p = 1: sum of the absolute values of the elements;
p = 2: Euclidean norm of the vector x;
p = ∞: the maximum of the absolute values.
Induced norms, for instance the maximum singular value:
||A|| = sup_{x ≠ 0} ||Ax||_2 / ||x||_2 = σ_max(A)

Singular values
Singular value decomposition:
A = U Σ V*,  where  Σ = diag(σ_1, σ_2, ..., σ_n) ∈ R^{n×n},  U*U = I,  V*V = I,
assuming σ_1 ≥ σ_2 ≥ ... ≥ σ_n ≥ 0. Then
(A*A) V = (U Σ V*)* (U Σ V*) V = V Σ U* U Σ V* V = V Σ^2,
which means that the diagonal entries of Σ^2 are the eigenvalues of A*A. Numerically, this is not a good method to compute Σ.
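As a quick numerical check of the relation above, here is a small Python/NumPy sketch (illustration only; the slides themselves use Matlab) verifying that the squared singular values of A are the eigenvalues of A*A:

```python
import numpy as np

# Random test matrix (hypothetical data, just for the check)
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# SVD: A = U @ diag(s) @ Vt, singular values returned in descending order
U, s, Vt = np.linalg.svd(A)

# Eigenvalues of A^T A, sorted in descending order to match s
eigs = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]

print(np.allclose(s**2, eigs))   # True: Sigma^2 holds the eigenvalues of A^T A
```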

Singular values
Also,
[ 0   A ] [ U   U ]   [ UΣ  −UΣ ]   [ U   U ] [ Σ   0 ]
[ A*  0 ] [ V  −V ] = [ VΣ   VΣ ] = [ V  −V ] [ 0  −Σ ]
and, consequently, ±σ_i are the eigenvalues of
[ 0   A ]
[ A*  0 ]

Singular values and LMIs
We can also use a Linear Matrix Inequality (LMI) to find the maximum singular value: find the minimum γ for which
[ γI  A  ]
[ A*  γI ] ⪰ 0.
Here ⪰ 0 means positive semidefinite (all eigenvalues ≥ 0). Also,
min { γ : [ γI A; A* γI ] ⪰ 0 } = σ_max(A).
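This LMI characterization can be checked without an SDP solver by testing positive semidefiniteness directly. A NumPy sketch (illustration only, not part of the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
sigma_max = np.linalg.svd(A, compute_uv=False)[0]   # largest singular value

def lmi_feasible(gamma, A):
    """True if [[gamma*I, A], [A^T, gamma*I]] is positive semidefinite."""
    n = A.shape[0]
    M = np.block([[gamma * np.eye(n), A], [A.T, gamma * np.eye(n)]])
    return bool(np.min(np.linalg.eigvalsh(M)) >= -1e-9)

# The LMI is feasible exactly when gamma >= sigma_max(A)
feas_above = lmi_feasible(sigma_max + 0.01, A)
feas_below = lmi_feasible(sigma_max - 0.01, A)
print(feas_above, feas_below)   # True False
```

The eigenvalues of the block matrix are γ ± σ_i, so its smallest eigenvalue is γ − σ_max(A), which explains the threshold behavior.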

Representation of dynamic systems
State-space representation:
ẋ = Ax + Bu
y = Cx + Du
Transfer function:
G(s) = C(sI − A)^{-1} B + D    (1)
Sometimes we write this as
[ ẋ ]   [ A  B ] [ x ]
[ y ] = [ C  D ] [ u ]    (2)
or
G = [ A  B ]
    [ C  D ]    (3)

Serial and parallel connection
G_1 = [ A_1  B_1 ]    G_2 = [ A_2  B_2 ]
      [ C_1  D_1 ]          [ C_2  D_2 ]    (4)
Serial connection:
G_1 G_2 = [ A_1  B_1 C_2  | B_1 D_2 ]
          [ 0    A_2      | B_2     ]
          [ C_1  D_1 C_2  | D_1 D_2 ]    (5)
Parallel connection:
G_1 + G_2 = [ A_1  0    | B_1       ]
            [ 0    A_2  | B_2       ]
            [ C_1  C_2  | D_1 + D_2 ]    (6)
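A short NumPy sketch (illustrative, with hypothetical system data) confirms the serial and parallel formulas by comparing transfer-function values at a test frequency:

```python
import numpy as np

def tf_value(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D at a complex frequency s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Two small stable SISO systems (hypothetical data for illustration)
rng = np.random.default_rng(2)
A1 = np.array([[-1.0, 0.5], [0.0, -3.0]]); B1 = rng.standard_normal((2, 1))
C1 = rng.standard_normal((1, 2));          D1 = np.array([[0.2]])
A2 = np.array([[-2.0]]);                   B2 = np.array([[1.0]])
C2 = np.array([[1.5]]);                    D2 = np.array([[0.1]])

# Serial connection G1*G2 (the input passes through G2, then G1)
As = np.block([[A1, B1 @ C2], [np.zeros((1, 2)), A2]])
Bs = np.vstack([B1 @ D2, B2])
Cs = np.hstack([C1, D1 @ C2])
Ds = D1 @ D2

# Parallel connection G1 + G2
Ap = np.block([[A1, np.zeros((2, 1))], [np.zeros((1, 2)), A2]])
Bp = np.vstack([B1, B2])
Cp = np.hstack([C1, C2])
Dp = D1 + D2

s = 1j * 0.7
g1, g2 = tf_value(A1, B1, C1, D1, s), tf_value(A2, B2, C2, D2, s)
ok_series = np.allclose(tf_value(As, Bs, Cs, Ds, s), g1 @ g2)
ok_parallel = np.allclose(tf_value(Ap, Bp, Cp, Dp, s), g1 + g2)
print(ok_series, ok_parallel)   # True True
```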

Lyapunov equations
The linear equation
A^T X + X A + Q = 0    (7)
is a Lyapunov equation. It has a unique solution if A and −A do not have any common eigenvalues; for instance, if A has all its eigenvalues in the left half-plane. In Matlab we can solve the equation using
X = lyap(A', Q);
(note the transpose: lyap(A, Q) solves A X + X A^T + Q = 0).
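The same equation can be solved in Python with SciPy (shown only as an alternative to the Matlab call; solve_continuous_lyapunov(a, q) solves a X + X a^T = q):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # stable: eigenvalues -1 and -3
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q, so the equation
# A^T X + X A + Q = 0 is solved with a = A^T and q = -Q.
X = solve_continuous_lyapunov(A.T, -Q)

residual = A.T @ X + X @ A + Q
print(np.max(np.abs(residual)))   # essentially zero
```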

Observability gramian L_o
The observability gramian is defined by
A^T L_o + L_o A + C^T C = 0.    (8)
Regard the following system (u = 0):
ẋ = Ax
y = Cx
Introduce V_o = x^T L_o x and take its time derivative:
V̇_o = ẋ^T L_o x + x^T L_o ẋ = x^T A^T L_o x + x^T L_o A x
    = x^T (A^T L_o + L_o A) x = −x^T C^T C x = −y^T y = −||y||_2^2

Observability gramian L_o
Integrate V̇_o:
∫_0^T V̇_o dt = V_o(x(T)) − V_o(x(0)) = −∫_0^T ||y(t)||_2^2 dt.
If we assume that the system is stable and T → ∞, then x(T), and hence V_o(x(T)), approaches zero. Thus
V_o(x(0)) = x^T(0) L_o x(0) = ∫_0^∞ ||y(t)||_2^2 dt.
The gramian L_o contains information about how much a certain state becomes visible (as energy) in the output signal.
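This energy interpretation is easy to verify numerically. The Python/SciPy sketch below (illustration only) compares x(0)^T L_o x(0) with a trapezoid-rule approximation of the output energy for a two-state system (the same realization of 1/((s+1)(s+5)) that appears later in these slides):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

# Two-state stable system
A = np.array([[-5.0, 0.0], [1.0, -1.0]])
C = np.array([[0.0, 1.0]])

# Observability gramian: A^T Lo + Lo A + C^T C = 0
Lo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Simulate y(t) = C exp(A t) x0 and integrate ||y||^2 by the trapezoid rule
x0 = np.array([1.0, -0.5])
ts = np.linspace(0.0, 30.0, 3001)
ys = np.array([(C @ expm(A * t) @ x0).item() for t in ts])
energy = float(np.sum((ys[:-1]**2 + ys[1:]**2) / 2 * np.diff(ts)))

print(energy, float(x0 @ Lo @ x0))   # the two values agree closely
```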

Controllability gramian L_c
The controllability gramian is defined by
A L_c + L_c A^T + B B^T = 0.    (9)
Introduce V_c(x) = x^T L_c^{-1} x. This is a measure of the smallest input energy needed to reach a certain state x. As before, take the derivative of V_c along ẋ = Ax + Bu:
V̇_c(x) = ẋ^T L_c^{-1} x + x^T L_c^{-1} ẋ
       = (Ax + Bu)^T L_c^{-1} x + x^T L_c^{-1} (Ax + Bu)
       = u^T B^T L_c^{-1} x + x^T L_c^{-1} B u + x^T (A^T L_c^{-1} + L_c^{-1} A) x
where, multiplying (9) by L_c^{-1} from the left and right, A^T L_c^{-1} + L_c^{-1} A = −L_c^{-1} B B^T L_c^{-1}. Hence
V̇_c(x) = −(B^T L_c^{-1} x − u)^T (B^T L_c^{-1} x − u) + u^T u ≤ u^T u.

Controllability gramian L_c
The control law u_opt = B^T L_c^{-1} x gives the smallest control energy. The energy needed for reaching a certain state x (starting at x(−∞) = 0) is given by
||u||_2^2 = V_c(x) + ||u − B^T L_c^{-1} x||_2^2.    (10)
Thus ||u||_2^2 ≥ V_c(x) = x^T L_c^{-1} x, with equality when u = u_opt.

Balancing
The representation of a system is not unique:
G = [ A  B ]
    [ C  D ]    (11)
Using a similarity transformation (x̂ = Tx), it can also be written as
G = [ Â  B̂ ]   [ T A T^{-1}  T B ]
    [ Ĉ  D̂ ] = [ C T^{-1}    D   ]    (12)
The transformation also affects the gramians:
L̂_o = T^{-T} L_o T^{-1}    (13)
L̂_c = T L_c T^T    (14)

Balancing
Choose T in a clever way such that
L̂_o = L̂_c = Σ = diag(σ_1, σ_2, ..., σ_n).    (15)
Then
σ_i^2 = λ_i(L̂_o L̂_c) = λ_i(L_o L_c).    (16)
To obtain this, factor L_o and L_c, for instance using Cholesky decomposition: L_o = U_o^T U_o and L_c = U_c^T U_c. Then compute U Σ V^T = U_o U_c^T using singular value decomposition, where U U^T = V V^T = I. The transformation matrix is given by T = Σ^{-1/2} U^T U_o.
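The recipe above can be implemented in a few lines. This Python/SciPy sketch (for illustration; the slides use Matlab's balreal) applies it to the example on the next slide and recovers Σ:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Example: G(s) = 1/((s+1)(s+5)) as a series connection of two first-order systems
A = np.array([[-5.0, 0.0], [1.0, -1.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

# Gramians: A Lc + Lc A^T + B B^T = 0 and A^T Lo + Lo A + C^T C = 0
Lc = solve_continuous_lyapunov(A, -B @ B.T)
Lo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Factor Lo = Uo^T Uo and Lc = Uc^T Uc, then take the SVD of Uo Uc^T
Uo = cholesky(Lo)            # upper triangular, Lo = Uo.T @ Uo
Uc = cholesky(Lc)
U, sig, Vt = svd(Uo @ Uc.T)

# Balancing transformation T = Sigma^{-1/2} U^T Uo
T = np.diag(sig**-0.5) @ U.T @ Uo
Tinv = np.linalg.inv(T)

Lo_bal = Tinv.T @ Lo @ Tinv
Lc_bal = T @ Lc @ T.T
print(np.round(sig, 4))   # Hankel singular values: [0.1124 0.0124]
print(np.allclose(Lo_bal, np.diag(sig)), np.allclose(Lc_bal, np.diag(sig)))
```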

An example of balancing
G(s) = 1/((s + 1)(s + 5)) = [ −5   0 | 1 ]
                            [  1  −1 | 0 ]
                            [  0   1 | 0 ]
Balancing gives
[ −0.5946   1.336  | 0.3656 ]
[ −1.336   −5.405  | 0.3656 ]
[  0.3656  −0.3656 | 0      ]
with Σ = diag[σ_1, σ_2] = diag[0.1124, 0.0124]. In Matlab this is done by
g = tf(1,[1 1])*tf(1,[1 5]);
[gb,sig] = balreal(g);
Here σ_2 is small compared to σ_1. If we remove the corresponding state, x_2, the H∞ error is limited by 2σ_2:
||G − Ĝ_1||_∞ ≤ 2σ_2,   Ĝ_1(s) = [ −0.5946 | 0.3656 ]
                                 [  0.3656 | 0      ]    (17)

An example of balancing
[Figure: singular values (dB) versus frequency (rad/sec) of G, the error G − Ĝ_1, and the level 2σ_2; the error stays below the bound at all frequencies.]

Model reduction
Model reduction can be done by truncating the states that have the smallest Hankel singular values:
||G_n − Ĝ_r||_∞ ≤ 2 Σ_{i=r+1}^n σ_i    (18)
This upper bound can be halved by using a different scheme (optimal Hankel-norm approximation) that also modifies the D matrix:
σ_{r+1} ≤ ||G_n − Ĝ_r||_∞ ≤ Σ_{i=r+1}^n σ_i    (19)
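The bound (18) can be checked on the two-state example from the previous slides. This Python/SciPy sketch (illustration only) balances, truncates to one state, and sweeps a frequency grid:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# G(s) = 1/((s+1)(s+5)); truncate to one state and check ||G - G1|| <= 2*sigma_2
A = np.array([[-5.0, 0.0], [1.0, -1.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

Lc = solve_continuous_lyapunov(A, -B @ B.T)
Lo = solve_continuous_lyapunov(A.T, -C.T @ C)
Uo, Uc = cholesky(Lo), cholesky(Lc)
U, sig, Vt = svd(Uo @ Uc.T)
T = np.diag(sig**-0.5) @ U.T @ Uo
Ab, Bb, Cb = T @ A @ np.linalg.inv(T), T @ B, C @ np.linalg.inv(T)

def g(A, B, C, w):
    """Frequency response C (jwI - A)^{-1} B of a strictly proper system."""
    n = A.shape[0]
    return (C @ np.linalg.solve(1j * w * np.eye(n) - A, B))[0, 0]

ws = np.logspace(-2, 3, 500)
err = max(abs(g(A, B, C, w) - g(Ab[:1, :1], Bb[:1], Cb[:, :1], w)) for w in ws)
print(err, 2 * sig[1])   # the error stays below (and close to) the bound
```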

With modified D (optimal Hankel-norm approximation)
[Figure: singular values (dB) versus frequency (rad/sec) of G, the error G − Ĝ_1, and the level σ_2; with the modified D the error meets the smaller bound σ_2.]

H2 norm
Consider e → G → z, and let e = δ (Dirac pulse) or white noise. An impulse input,
ẋ = Ax + B δ(t),
is equivalent to the initial condition x(0) = B. The energy in the output is then
V(x(0)) = x^T(0) L_o x(0) = B^T L_o B = ||G||_2^2 = C L_c C^T.

H2 norm
In the MIMO case
||G||_2^2 = tr(B^T L_o B) = tr(C L_c C^T).
The H2 norm can thus be computed by solving a linear matrix (Lyapunov) equation.
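These trace formulas are easy to verify with SciPy; a sketch (illustration only), using the earlier two-state example:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Two-state example G(s) = 1/((s+1)(s+5))
A = np.array([[-5.0, 0.0], [1.0, -1.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

Lo = solve_continuous_lyapunov(A.T, -C.T @ C)   # A^T Lo + Lo A + C^T C = 0
Lc = solve_continuous_lyapunov(A, -B @ B.T)     # A Lc + Lc A^T + B B^T = 0

h2_obs = float(np.trace(B.T @ Lo @ B))   # tr(B^T Lo B)
h2_ctr = float(np.trace(C @ Lc @ C.T))   # tr(C Lc C^T)
print(h2_obs, h2_ctr)   # equal; both are ||G||_2^2 (= 1/60 for this example)
```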

H∞ norm
The H∞ norm is a measure of the maximum gain of a stable system G over all frequencies ω:
||G||_∞ = max_ω |G(jω)|    (20)
(for MIMO systems, max_ω σ_max(G(jω))). The L∞ norm is defined in the same way and also applies to unstable systems. The H∞ norm cannot be computed directly (as the H2 norm can), but we can test whether ||G||_∞ < γ.

H∞ norm
G(s) = [ A  B ]
       [ C  D ] = D + C(sI − A)^{-1} B
Assume that ||G||_∞ < γ. If G is SISO then
Φ(jω) = γ^2 − σ^2(G(jω)) = γ^2 − G(jω)* G(jω) > 0,  for all ω.
If γ ≤ ||G||_∞ then Φ(jω) has at least one zero for some ω ∈ R.

H∞ norm
Define
Φ(s) = γ^2 − G~(s) G(s) = γ^2 − G(−s)^T G(s),  where  G~(s) = [ −A^T  −C^T ]
                                                              [  B^T   D^T ]
This gives
Φ(s) = [ A        0     |  B            ]
       [ −C^T C  −A^T   | −C^T D        ]
       [ −D^T C  −B^T   | γ^2 I − D^T D ]
Requirement 1: γ > σ_max(D) = sqrt(λ_max(D^T D)).
If Φ(s) has a zero on the imaginary axis, then Φ^{-1}(s) must have a pole on the imaginary axis.

Inverse of systems
We need to compute the inverse of a state-space system. The inverse of a system can be written as
G^{-1}(s) = [ A  B ]^{-1}   [ A − B D^{-1} C   B D^{-1} ]
            [ C  D ]      = [ −D^{-1} C        D^{-1}   ]
since ...

Inverse of systems G 1 (s)g(s) = I I 0 = 0 I 0 0 0 I A BD 1 C BD 1 C BD 1 D 0 A B D 1 C D 1 C D 1 D A BD 1 C BD 1 C B 0 A B D 1 C D 1 C I A BD 1 C A BD 1 C 0 0 A B = D 1 C D 1 C I A BD 1 C 0 0 = 0 A B I D 1 C 0 I I I 0 0 I 0 0 0 I I I 0 0 I 0 0 0 I

H∞ norm
Recall that
[ A  B ]^{-1}   [ A − B D^{-1} C   B D^{-1} ]
[ C  D ]      = [ −D^{-1} C        D^{-1}   ]
Let H be the A matrix of the inverse of Φ(s), with R = γ^2 I − D^T D:
H = [ A        0    ] + [  B     ] R^{-1} [ D^T C   B^T ]
    [ −C^T C  −A^T ]    [ −C^T D ]
  = [ A + B R^{-1} D^T C                 B R^{-1} B^T           ]
    [ −(C^T C + C^T D R^{-1} D^T C)     −A^T − C^T D R^{-1} B^T ]
If H has eigenvalues on the imaginary axis, then γ ≤ ||G||_∞.
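This eigenvalue test suggests a bisection algorithm for ||G||_∞. A Python sketch (illustrative only; assumes a stable G and γ > σ_max(D)), applied to the example on the next slide:

```python
import numpy as np

def hinf_norm(A, B, C, D, tol=1e-6):
    """H-infinity norm by bisection on the Hamiltonian eigenvalue test."""
    def gamma_ok(g):
        R = g**2 * np.eye(D.shape[1]) - D.T @ D
        Ri = np.linalg.inv(R)
        H = np.block([
            [A + B @ Ri @ D.T @ C,               B @ Ri @ B.T],
            [-C.T @ C - C.T @ D @ Ri @ D.T @ C, -A.T - C.T @ D @ Ri @ B.T],
        ])
        # gamma > ||G||_inf iff H has no eigenvalues on the imaginary axis
        return not np.any(np.abs(np.linalg.eigvals(H).real) < 1e-8)

    lo = np.linalg.norm(D, 2)          # gamma must exceed sigma_max(D)
    hi = lo + 1.0
    while not gamma_ok(hi):            # grow hi until the test passes
        hi = 2 * hi + 1.0
    while hi - lo > tol:               # then bisect
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if gamma_ok(mid) else (mid, hi)
    return hi

# G(s) = 1/(s^2 + 0.1 s + 1), the example on the next slide
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
gamma_est = hinf_norm(A, B, C, D)
print(gamma_est)   # approx 10.0125
```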

Example of the H∞ norm
Let G(s) = 1/(1 + 0.1s + s^2). Then:
γ = 10       eig H: ±1j, ±0.9950j      (on the imaginary axis, so γ ≤ ||G||_∞)
γ = 10.1     eig H: ±0.0066 ± 0.9975j  (off the axis, so γ > ||G||_∞)
γ = 10.0125  eig H: ±ε ± 0.9974j
[Figure: singular values (dB) of 1/(s^2 + s/10 + 1) versus frequency (rad/sec); the gain peaks at about 20 dB near ω = 1 rad/s.]

Induced norms
The H∞ norm can also be described as an induced L2 or l2 norm. For a discrete-time system
[ x(t+1) ]   [ A  B ] [ x(t) ]
[ y(t)   ] = [ C  D ] [ u(t) ]
we have: if γ > ||G||_∞ then
max_u Σ_{t=0}^T ( y^T(t) y(t) − γ^2 u^T(t) u(t) ) < 0.

Induced norms
Introduce a Lyapunov function V(x) = x^T P x, where P = P^T ≻ 0 is positive definite. Then
Σ_{t=0}^T ( y^T(t) y(t) − γ^2 u^T(t) u(t) )
= Σ_{t=0}^T ( y^T y − γ^2 u^T u + V(x(t+1)) − V(x(t)) ) + V(x(0)) − V(x(T+1))
= Σ_{t=0}^T [ x(t) ]^T ( [ A^T P A − P   A^T P B        ] + [ C^T ] [ C  D ] ) [ x(t) ]
            [ u(t) ]     [ B^T P A       B^T P B − γ^2 I ]  [ D^T ]            [ u(t) ]
  + V(x(0)) − V(x(T+1))

LMIs
Ensure that
[ A^T P A − P   A^T P B        ] + [ C^T ] [ C  D ] ≺ 0
[ B^T P A       B^T P B − γ^2 I ]  [ D^T ]
by choosing an appropriate matrix P = P^T ≻ 0. The smallest γ for which we can find such a P gives an upper bound on ||G||_∞. This is a linear matrix inequality (LMI), and finding P is a convex problem.

LMIs
For continuous-time systems:
[ P A + A^T P   P B     ] + [ C^T ] [ C  D ] ≺ 0
[ B^T P         −γ^2 I  ]   [ D^T ]
The smallest γ for which we can find such a P = P^T ≻ 0 gives an upper bound on ||G||_∞.