The Tuning of Robust Controllers for Stable Systems in the CD-Algebra: The Case of Sinusoidal and Polynomial Signals


Timo Hämäläinen and Seppo Pohjolainen
Tampere University of Technology, Department of Mathematics
P.O. Box 692, FIN-33101 Tampere, Finland
hamalai@cc.tut.fi, Seppo.Pohjolainen@cc.tut.fi

Keywords: Infinite-Dimensional Systems, Low-Gain Control, Robust Control, CD-Algebra, Optimal Tuning

Abstract

In this paper we consider the asymptotically optimal tuning of the parameters of a robust controller for distributed parameter systems (DPS). The controller consists of a positive scalar gain ε and polynomial matrices K_k(s), k = 0, …, n. The cost function is the H∞-norm of the error between the reference signal and the measured output signal. It is shown that as ε → 0 the optimization problem reduces to a finite number of semi-infinite min-max optimization problems; each problem depends only on a single polynomial matrix K_k(s).

1 Introduction

In this paper the tuning of finite-dimensional robust multivariable controllers for stable infinite-dimensional systems in the Callier-Desoer algebra (CD-algebra) will be discussed. The following robust regulation problem will be considered: Given a stable plant in the CD-algebra and reference and disturbance signals of the form

Σ_{j=0}^{m_0−1} a_{0j} t^j + Σ_{k=1}^{n} Σ_{j=0}^{m_k−1} a_{kj} t^j sin(ω_k t + φ_{kj}),   a_{kj} ∈ R,   (1)

find a low-order finite-dimensional controller so that the outputs asymptotically track the reference signals, asymptotically reject the disturbance signals, and the closed-loop system is stable and robust with respect to a class of perturbations in the plant.

In a previous paper [1] the authors have shown that a controller of the form

C_ε(s) = Σ_{k=−n}^{n} ε^{m_k} K_k((s − iω_k)/ε)/(s − iω_k)^{m_k},

where the K_k(s) are polynomial matrices with deg K_k(s) < m_k, solves the robust regulation problem provided that the positive scalar gain ε is small enough and the matrices K_k(s) satisfy certain stability conditions (see Eq. (5) in Section 3.2). The stability conditions leave a great deal of freedom in the choice of the matrices K_k(s). Hence it is desirable to find values for the controller parameters ε and K_k(s) that are optimal in some sense.

In this paper we find an asymptotically (as ε → 0) globally optimal selection for the polynomial matrices K_k(s) when the cost function is the H∞-norm of the error between the reference signal and the output signal. More precisely, we wish to minimize the norm of the error signal over all frequencies and all reference and disturbance signals with bounded amplitudes. Including the reference and disturbance signal amplitudes in the cost function allows some of the amplitudes a_{kj} in (1) to be zero. In practice this means that the choice of the polynomial matrices remains optimal despite the presence or absence of particular sinusoidal or polynomial components of the reference and disturbance signals. This is useful if it is desired to track or reject signals which contain a variable subset of the frequencies ω_k or the powers t^j.

It is shown that as ε → 0 the optimization problem reduces to a finite number of semi-infinite min-max optimization problems, one for each frequency; each problem depends only on a single polynomial matrix K_k(s). The only information required of the plant is the value of the plant transfer matrix at the frequency ω_k. Thus the matrices K_k(s) can be tuned with input-output measurements made from the open-loop plant, without knowledge of the plant model. If the coefficients a_k are constant, the semi-infinite optimization problems can be solved in closed form. In this case the optimal polynomial matrices are constants K_k which can be expressed in terms of the values of the plant transfer matrix at the frequencies ω_k. The asymptotically optimal values are near-optimal for small positive values of the scalar gain ε. Hence, if the quality of the control is not satisfactory and a model of the plant is available, they provide a good starting point for numerical optimization methods.
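To make the controller structure above concrete, here is a minimal numerical sketch (not from the paper; the gains, frequencies and dimensions are assumed purely for illustration). It evaluates a SISO controller of the form C_ε(s) = Σ_k ε^{m_k} K_k((s − iω_k)/ε)/(s − iω_k)^{m_k} for a step component (ω_0 = 0) and one sinusoid, with constant K_k, and checks numerically that choosing K_{−1} = conj(K_1) (the realness condition discussed in Section 3.2) makes the controller real.

```python
import numpy as np

# Minimal sketch (assumed data, not from the paper): the controller
#   C_eps(s) = sum_k eps^{m_k} K_k((s - i*w_k)/eps) / (s - i*w_k)^{m_k}
# for a SISO example with a step component (w_0 = 0) and one sinusoid at
# w_1 = 2 rad/s; all multiplicities m_k = 1, so each K_k is a 1x1 constant.

eps = 0.05                                    # small positive scalar gain
omegas = [0.0, 2.0, -2.0]                     # w_0, w_1, w_{-1} = -w_1
K = {0.0: np.array([[1.0 + 0.0j]]),           # K_0 must be real
     2.0: np.array([[0.5 - 0.3j]]),
    -2.0: np.array([[0.5 + 0.3j]])}           # K_{-1} = conj(K_1)
m = {w: 1 for w in omegas}                    # multiplicities m_k

def C_eps(s):
    """Evaluate the controller transfer matrix at a complex point s."""
    val = np.zeros((1, 1), dtype=complex)
    for w in omegas:
        val += eps**m[w] * K[w] / (s - 1j*w)**m[w]
    return val

# With K_{-1} = conj(K_1) the controller is real on the real axis and
# conjugate-symmetric on the imaginary axis:
print(C_eps(1.0))                              # imaginary part ~ 0
print(C_eps(3.0j) - np.conj(C_eps(-3.0j)))     # ~ 0
```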

2 Notation

R and C are the fields of real and complex numbers, and C_+ is the open right half-plane. The complex conjugate of a complex number z is denoted by z̄. F[s] is the ring of polynomials in the indeterminate s with coefficients from the field F, and deg p(s) is the degree of the polynomial p(s) ∈ F[s]. If p(s) is a polynomial matrix or vector, deg p(s) is the maximum degree of the elements of p(s). R(s) is the field of rational functions with real coefficients in the indeterminate s. F^{p×m} is the class of p × m matrices with elements in the set F; F^p is the same as F^{p×1}. I_p is the p × p identity matrix and A* is the conjugate transpose of the matrix A. The spectrum of a linear operator A is denoted by σ(A). The norms of x ∈ C^n and A ∈ C^{m×n} are defined by ‖x‖² = Σ_{i=1}^{n} |x_i|² and ‖A‖ = σ_max(A), the largest singular value of A. The smallest singular value of A is denoted by σ_min(A). In the normed space X we denote the open ball with center x and radius r by B_X(x; r), or more simply by B(x; r) if the space X is clear from the context. A polynomial is stable if all its zeros are in the open left half-plane. The subset of stable elements of the Callier-Desoer algebra B̂(0) is denoted by Â_−(0). For the definition of the CD-algebra see [2]. The norm of P ∈ Â_−(0)^{p×m} is ‖P‖∞ = sup_{ω∈R} ‖P(iω)‖.

3 Problem Formulation

3.1 Assumptions

We assume that the plant P has m inputs and p outputs and that it is stable, i.e., P ∈ Â_−(0)^{p×m}. In addition the plant is assumed to satisfy the condition

P(−iω) = conj(P(iω)),   ω ∈ R,   (2)

where conj denotes the elementwise complex conjugate. Assumption (2) is only needed to guarantee that the controller is real, i.e., has elements in R(s).

The reference signal r and the disturbance signal w are assumed to be of the form (1). Expanding the Laplace transforms of r and w into partial fractions we get

r(s) = Σ_{k=−n}^{n} Σ_{j=1}^{m_k} r_{kj}/(s − iω_k)^j,   r_{kj} ∈ C^p,
w(s) = Σ_{k=−n}^{n} Σ_{j=1}^{m_k} w_{kj}/(s − iω_k)^j,   w_{kj} ∈ C^m.

To simplify notation we have defined

ω_0 = 0   and   ω_{−k} = −ω_k,   m_{−k} = m_k   for k = 1, …, n.   (3)

3.2 Previous Results

In a previous paper [1] the following problem was solved:

Problem 1. Given the feedback configuration in Fig. 1, find a controller C_ε such that

1. C_ε is finite-dimensional, has low order and robustly (in the sense of the graph topology [3]) stabilizes the closed-loop system.
2. The outputs asymptotically track the reference signals and asymptotically reject the disturbance signals.
3. There exists an ε* > 0 such that 1 and 2 hold for every ε ∈ (0, ε*).
4. Apart from the parameter ε, the controller's parameters can be tuned using information that can be measured from the original stable plant with input-output measurements.

Figure 1: The structure of the closed-loop system with stable plant P, reference signal r, disturbance signal w and controller C_ε.

The solution to Problem 1 is given by the following theorem, proved in [1].

Theorem 1. Let the controller C_ε be defined by

C_ε(s) = Σ_{k=−n}^{n} ε^{m_k} K_k((s − iω_k)/ε)/(s − iω_k)^{m_k},   (4)

K_k(s) = Σ_{l=0}^{m_k−1} K_{kl} s^l,   K_{kl} ∈ C^{m×p}.

Then there exists an ε* > 0 such that for every ε ∈ (0, ε*) the controller C_ε is a solution to Problem 1, provided that the polynomial matrices K_k(s) satisfy the stability conditions

det(s^{m_k} I_p + P(iω_k) K_k(s)) is stable for k = −n, …, n.   (5)

Furthermore, Problem 1 has a solution iff

rank P(iω_k) = p,   k = −n, …, n.   (6)

Let us define

Q_k(s) = s^{m_k} I_p + P(iω_k) K_k(s),   k = −n, …, n.   (7)

The controller is real if

K_{−k,l} = conj(K_{kl})   for k = 0, …, n,  l = 0, …, m_k − 1.   (8)

For k = 0, Eq. (8) implies that K_{0l} ∈ R^{m×p} for l = 0, …, m_0 − 1. It follows from condition (2) that if K_k(s) satisfies condition (5) and the coefficients of K_k(s) satisfy Eq. (8), then K_{−k}(s) also satisfies (5). Hence only K_0(s), K_1(s), …, K_n(s) need be assigned. In the sequel we assume that the plant satisfies condition (6), so that Problem 1 has a solution.
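As a small illustration of how conditions (5) and (6) can be checked directly from a single measured frequency-response value, consider the simple-pole case m_k = 1 with a constant K_k, so that Q_k(s) = s I_p + P(iω_k) K_k and det Q_k(s) is stable exactly when every eigenvalue of P(iω_k) K_k lies in the open right half-plane. The sketch below uses assumed plant data (not from the paper) and the simple candidate K_k = P(iω_k)*, for which P(iω_k) K_k = P(iω_k) P(iω_k)* is Hermitian and positive definite whenever (6) holds.

```python
import numpy as np

# Sketch with assumed data (not from the paper): checking conditions (6) and (5)
# at one frequency w_k when m_k = 1 and K_k is a constant matrix, so that
#   Q_k(s) = s*I_p + P(i w_k) K_k.

P_iwk = np.array([[1.0 - 0.5j, 0.2 + 0.1j],
                  [0.3 + 0.4j, 0.8 - 0.2j]])    # hypothetical measured P(i w_k)
p = P_iwk.shape[0]

# Condition (6): rank P(i w_k) = p.
print("rank condition (6):", np.linalg.matrix_rank(P_iwk) == p)

# Candidate K_k = P(i w_k)^*: then P(i w_k) K_k = P P^* is Hermitian positive
# definite when (6) holds, so det Q_k(s) = det(s*I_p + P P^*) is stable.
K_k = P_iwk.conj().T
eigs = np.linalg.eigvals(P_iwk @ K_k)
print("stability condition (5):", bool(np.all(eigs.real > 0)))
print("eigenvalues of P(i w_k) K_k:", np.round(eigs, 4))
```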

3.3 The Optimization Problem

In this subsection the optimization problem for the polynomial matrices K_k(s) will be formulated. Let the transfer matrix from the reference signal to the error signal be

H(s) = (I_p + P(s) C_ε(s))^{−1}.   (9)

Let us also define e_{kj} and e_k by

e_{kj}(s) = H(s)(r_{kj} − P(s) w_{kj})/(s − iω_k)^j,   e_k(s) = Σ_{j=1}^{m_k} e_{kj}(s).

Then the error signal e(s) can be written as

e(s) = H(s)(r(s) − P(s) w(s)) = Σ_{k=−n}^{n} e_k(s) = Σ_{k=−n}^{n} Σ_{j=1}^{m_k} e_{kj}(s).

To simplify notation let

K_k = (K_{kl})_{l=0}^{m_k−1} ∈ (C^{m×p})^{m_k},   r_k = (r_{kj})_{j=1}^{m_k} ∈ (C^p)^{m_k},   w_k = (w_{kj})_{j=1}^{m_k} ∈ (C^m)^{m_k},
K = (K_k)_{k=−n}^{n} ∈ (C^{m×p})^N,   r = (r_k)_{k=−n}^{n} ∈ (C^p)^N,   w = (w_k)_{k=−n}^{n} ∈ (C^m)^N,   N = Σ_{k=−n}^{n} m_k.

Let b_{Kkl}, b_{rkj} and b_{wkj} be nonnegative real parameters. For k = −n, …, n we define the constraint sets

S_{Kk} = { (K_{kl})_{l=0}^{m_k−1} : det Q_k(s) is stable and ‖K_{kl}‖ ≤ b_{Kkl} for l = 0, …, m_k − 1 },
S_{rk} = { (r_{kj})_{j=1}^{m_k} : ‖r_{kj}‖ ≤ b_{rkj} for j = 1, …, m_k },
S_{wk} = { (w_{kj})_{j=1}^{m_k} : ‖w_{kj}‖ ≤ b_{wkj} for j = 1, …, m_k },

and in terms of these

S_K = S_{K,−n} × ⋯ × S_{K,n},   S_r = S_{r,−n} × ⋯ × S_{r,n},   S_w = S_{w,−n} × ⋯ × S_{w,n}.

A natural cost function would be

J(K, ε) = sup_{r∈S_r} sup_{w∈S_w} ‖e‖∞ = sup_{r∈S_r} sup_{w∈S_w} sup_{ω∈R} ‖e(iω)‖.

However, consider the case n = 0. Then

e_{0j}(s) = (I_p + ε^{m_0} P(s) K_0(s/ε)/s^{m_0})^{−1} (r_{0j} − P(s) w_{0j})/s^j
          = (s^{m_0} I_p + ε^{m_0} P(s) K_0(s/ε))^{−1} s^{m_0−j} (r_{0j} − P(s) w_{0j}).

Clearly e_{0j} has a pole at zero when ε = 0. Hence for small ε the main contribution to e_{0j} comes from a neighbourhood of zero. If we now make the change of variable s = ε s̃ and assume that ε s̃ is small, we have

e_{0j}(ε s̃) = (s̃^{m_0} I_p + P(ε s̃) K_0(s̃))^{−1} (s̃^{m_0−j}/ε^j) (r_{0j} − P(ε s̃) w_{0j})
            ≈ (s̃^{m_0} I_p + P(0) K_0(s̃))^{−1} (s̃^{m_0−j}/ε^j) (r_{0j} − P(0) w_{0j}).

This heuristic consideration shows that because of the term ε^j in the denominator we should replace e_{kj}(s) with ε^j e_{kj}(s). Therefore we define e_{kε} and e_ε by

e_{kε}(s) = Σ_{j=1}^{m_k} ε^j e_{kj}(s),   e_ε(s) = Σ_{k=−n}^{n} e_{kε}(s),

and J(K, ε) by

J(K, ε) = sup_{r∈S_r} sup_{w∈S_w} ‖e_ε‖∞ = sup_{r∈S_r} sup_{w∈S_w} sup_{ω∈R} ‖e_ε(iω)‖.

The objective function is now defined to be

J(K) = lim_{ε→0} J(K, ε).   (10)

Now the optimization problem can be formulated as

Problem 2. Given the cost function (10), find

J_0 = inf_{K∈S_K} J(K).   (11)
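Before solving Problem 2, the following sketch (an assumed SISO example, not from the paper) shows the scaled cost J(K, ε) numerically. For the stable plant P(s) = 1/(s + 1), tracking sin(t) (n = 1, m_1 = 1, no disturbance), with a constant K_1 and K_{−1} = conj(K_1), the scaled error reduces to e_ε(s) = ε/((s² + 1) + ε P(s)(K_1(s + i) + conj(K_1)(s − i))), and the computed suprema approach the limit cost J(K) of Eq. (10) as ε decreases. The amplitudes are kept fixed instead of being maximized over S_r and S_w, to keep the sketch short.

```python
import numpy as np

# Sketch (assumed SISO data, not from the paper): the scaled cost J(K, eps) for
# tracking sin(t) with P(s) = 1/(s + 1), n = 1, m_1 = 1, K_1 constant and
# K_{-1} = conj(K_1).  For this data r(s) = 1/(s^2 + 1), w = 0, and
#   e_eps(s) = eps / ( (s^2 + 1) + eps*P(s)*(K1*(s + i) + conj(K1)*(s - i)) ).

P = lambda s: 1.0 / (s + 1.0)
K1 = 1.0 + 0.0j        # satisfies Re(P(i)*K1) = 0.5 > 0, cf. condition (5)

def J_of_eps(eps, omegas):
    s = 1j * omegas
    den = (s**2 + 1.0) + eps * P(s) * (K1 * (s + 1j) + np.conj(K1) * (s - 1j))
    return np.max(np.abs(eps / den))

# by conjugate symmetry it suffices to grid w >= 0; refine near w_1 = 1
omegas = np.concatenate([np.linspace(0.0, 5.0, 5001),
                         1.0 + np.linspace(-0.2, 0.2, 400001)])
for eps in (1e-1, 1e-2, 1e-3):
    print(f"eps = {eps:.0e}   J(K, eps) ~ {J_of_eps(eps, omegas):.4f}")

# The values approach |r_11| / Re(P(i)*K1) = 0.5/0.5 = 1.0, the limit cost
# J(K) of Eq. (10) for this data (r_11 = 1/(2i) in the partial fraction of r).
```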

4 Solution of the Optimization Problem

In this section we solve the optimization problem formulated in the previous section.

Lemma 2. Let ε ∈ (0, 1) be such that the intervals B_k = B_R(ω_k; √ε) are nonoverlapping for k = −n, …, n, and let B_c be the complement of ∪_{k=−n}^{n} B_k in R. Then

(a) The error signal e_ε satisfies

lim_{ε→0} sup_{r∈S_r} sup_{w∈S_w} sup_{ω∈B_c} ‖e_ε(iω)‖ = 0.

(b) For any k = −n, …, n and any l ≠ k we have

lim_{ε→0} sup_{r∈S_r} sup_{w∈S_w} sup_{ω∈B_k} ‖e_{lε}(iω)‖ = 0.   (12)

(c) For any k = −n, …, n we have

lim_{ε→0} sup_{r∈S_r} sup_{w∈S_w} sup_{ω∈B_k} ‖e_{kε}(iω)‖ = J_k(K_k),   (13)

where

J_k(K_k) = sup_{r_k∈S_{rk}} sup_{w_k∈S_{wk}} sup_{ω∈R} ‖Q_k(iω)^{−1} Σ_{j=1}^{m_k} (iω)^{m_k−j} (r_{kj} − P(iω_k) w_{kj})‖

and Q_k was defined in Eq. (7).

The next lemma shows that as ε → 0 the optimization problem can be decomposed into 2n + 1 simple subproblems, each depending on only one polynomial matrix K_k(s).

Lemma 3. Let J_k(K_k) be as in Lemma 2. Then the cost function J(K) can be written as

J(K) = lim_{ε→0} J(K, ε) = max_{−n≤k≤n} J_k(K_k).

Proof. Let B_k and B_c be as in Lemma 2. Then

sup_{ω∈R} ‖e_ε(iω)‖ = max{ sup_{ω∈B_{−n}} ‖e_ε(iω)‖, …, sup_{ω∈B_n} ‖e_ε(iω)‖, sup_{ω∈B_c} ‖e_ε(iω)‖ }.   (14)

Combining parts (b) and (c) of Lemma 2 we get

lim_{ε→0} sup_{r∈S_r} sup_{w∈S_w} sup_{ω∈B_k} ‖e_ε(iω)‖ = J_k(K_k),   k = −n, …, n.

This together with part (a) of Lemma 2 and decomposition (14) proves the lemma.

Now the main result of the paper is an easy consequence of Lemma 3.

Theorem 4. The minimum value J_0 of J(K) is

J_0 = max_{−n≤k≤n} J_{k0},   where   J_{k0} = inf_{K_k∈S_{Kk}} J_k(K_k)

and J_k(K_k) was defined in Lemma 2.

Proof. Because each J_k depends on only one K_k, it follows from Lemma 3 that

J_0 = inf_{K∈S_K} J(K) = inf_{K∈S_K} max_{−n≤k≤n} J_k(K_k) = max_{−n≤k≤n} inf_{K_k∈S_{Kk}} J_k(K_k) = max_{−n≤k≤n} J_{k0}.

The minimization of each J_k has to be done numerically. The definition of J_k shows that we have a min-max problem. Because of the constraint det Q_k(s) ≠ 0 for all s ∈ C_+, the problem is also semi-infinite. In a previous paper [4] the authors have considered the case m_k = 1, i.e., the case where the reference and disturbance signals have simple poles. In this case the optimal values J_{k0} can be computed in closed form.

Theorem 5. If m_k = 1, then the minimum value J_{k0} of J_k is

J_{k0} = (1/b_{Kk}) ( b_{rk}/σ_min(P(iω_k)) + b_{wk} ),

which is reached with the choice

K_{k,opt} = b_{Kk} P(iω_k)* (P(iω_k) P(iω_k)*)^{−1/2}

for k = −n, …, n. (A numerical sketch of this choice is given after the Conclusion.)

5 Conclusion

In this paper the optimal tuning of the polynomial-matrix parameters of a robust controller for stable plants in the CD-algebra has been discussed. The optimization is done in the frequency domain, using a cost function which is the maximum of the error signal over all frequencies and all reference and disturbance signals with bounded amplitudes. The amplitude bounds for the reference and disturbance signals and the bounds for the coefficients of the polynomial matrices are freely assignable and can be used as design parameters. The optimization problem reduces to a finite number of semi-infinite min-max problems, one for each frequency. The only information required of the plant is the value of the plant transfer matrix at the frequencies ω_k. To the authors' knowledge the main results are new even for finite-dimensional systems. Future work includes finding a tuning method for the parameter ε.
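As a closing illustration of Theorem 5, the following sketch (assumed plant data and bounds, not code from the paper) evaluates the closed-form choice K_{k,opt} = b_{Kk} P(iω_k)* (P(iω_k) P(iω_k)*)^{−1/2} from a single measured frequency-response matrix (the same assumed P(iω_k) as in the earlier sketch), verifies that it meets the norm bound ‖K_k‖ ≤ b_{Kk} with equality and satisfies the stability condition (5), and computes the corresponding optimal value J_{k0}.

```python
import numpy as np

# Sketch (assumed data, not from the paper): the closed-form choice of
# Theorem 5 for one frequency w_k in the simple-pole case m_k = 1:
#   K_k,opt = b_K * P(i w_k)^* (P(i w_k) P(i w_k)^*)^{-1/2},
#   J_k0    = (b_r / sigma_min(P(i w_k)) + b_w) / b_K.

P_iwk = np.array([[1.0 - 0.5j, 0.2 + 0.1j],
                  [0.3 + 0.4j, 0.8 - 0.2j]])     # hypothetical measured P(i w_k)
b_K, b_r, b_w = 2.0, 1.0, 0.5                    # freely assignable design bounds

# Hermitian inverse square root of P P^* via its eigendecomposition.
PPstar = P_iwk @ P_iwk.conj().T
evals, evecs = np.linalg.eigh(PPstar)
inv_sqrt = evecs @ np.diag(evals**-0.5) @ evecs.conj().T

K_opt = b_K * P_iwk.conj().T @ inv_sqrt

# Sanity checks: ||K_opt|| = b_K, and P(i w_k) K_opt = b_K (P P^*)^{1/2} is
# Hermitian positive definite, so det(s*I_p + P(i w_k) K_opt) is stable (Eq. (5)).
print("||K_opt|| =", np.linalg.norm(K_opt, 2))
print("eig(P(i w_k) K_opt) =", np.round(np.linalg.eigvalsh(P_iwk @ K_opt), 4))

sigma_min = np.linalg.svd(P_iwk, compute_uv=False).min()
print("J_k0 =", (b_r / sigma_min + b_w) / b_K)
```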

Appendix

To prove Lemma 2 we first need to prove Lemmas 6 and 7. For the proofs we need the following definitions:

U_k(s) = Q_k(s)/(s + 1)^{m_k},

U_{kε}(s) = U_k((s − iω_k)/ε) = ((s − iω_k)^{m_k} I_p + P(iω_k) K_{kε}(s))/(s − iω_k + ε)^{m_k},   (15)

Q_{kε}(s) = Q_k((s − iω_k)/ε) = (1/ε^{m_k}) ((s − iω_k)^{m_k} I_p + P(iω_k) K_{kε}(s)),   (16)

K_{kε}(s) = ε^{m_k} K_k((s − iω_k)/ε).

In [1] it is shown that U_k and U_{kε} are unimodular. We also have the bound ‖U_{kε}(iω)^{−1}‖ ≤ ‖U_k^{−1}‖∞ for ε > 0 and ω ∈ R.

Lemma 6. If |s − iω_k| ≥ √ε and ε ∈ (0, 1), then there is a constant M_{Kk} > 0 such that

‖K_{kε}(s)‖/|s − iω_k|^{m_k} ≤ M_{Kk} √ε.

Proof. From the definition of K_{kε}(s) we get

‖K_{kε}(s)‖/|s − iω_k|^{m_k} = ‖ε^{m_k} K_k((s − iω_k)/ε)‖/|s − iω_k|^{m_k}
  = ‖Σ_{l=0}^{m_k−1} K_{kl} (s − iω_k)^l ε^{m_k−l}‖/|s − iω_k|^{m_k}
  ≤ Σ_{l=0}^{m_k−1} ‖K_{kl}‖ ε^{m_k−l}/|s − iω_k|^{m_k−l}
  ≤ Σ_{l=0}^{m_k−1} ‖K_{kl}‖ ε^{(m_k−l)/2}
  ≤ √ε Σ_{l=0}^{m_k−1} ‖K_{kl}‖.

Hence we can take M_{Kk} = Σ_{l=0}^{m_k−1} ‖K_{kl}‖.

Lemma 7. For s = iω and ω ∈ B_k the transfer matrix H(s) can be written in the form

H(s) = (I_p + F_ε(s)) U_{kε}(s)^{−1} (s − iω_k)^{m_k}/(s − iω_k + ε)^{m_k},   (17)

and for sufficiently small ε > 0 and some constant M > 0 we have the bound

‖F_ε(iω)‖ ≤ M√ε/(1 − M√ε)   for all ω ∈ B_k.   (18)

Proof. Let s = iω and ω ∈ B_k. Then

I_p + P(s) C_ε(s) = I_p + (P(iω_k) K_{kε}(s) + (P(s) − P(iω_k)) K_{kε}(s))/(s − iω_k)^{m_k} + P(s) Σ_{l≠k} K_{lε}(s)/(s − iω_l)^{m_l}
  = ((s − iω_k)^{m_k} I_p + P(iω_k) K_{kε}(s) + A_ε(s))/(s − iω_k)^{m_k},

where

A_ε(s) = (P(s) − P(iω_k)) K_{kε}(s) + (s − iω_k)^{m_k} Σ_{l≠k} P(s) K_{lε}(s)/(s − iω_l)^{m_l}.   (19)

Substituting (s − iω_k)^{m_k} I_p + P(iω_k) K_{kε}(s) = (s − iω_k + ε)^{m_k} U_{kε}(s) from Eq. (15) into Eq. (19) results in

I_p + P(s) C_ε(s) = ((s − iω_k + ε)^{m_k}/(s − iω_k)^{m_k}) U_{kε}(s) (I_p + T_ε(s)),   (20)

where

T_ε(s) = (s − iω_k + ε)^{−m_k} U_{kε}(s)^{−1} A_ε(s)
  = U_{kε}(s)^{−1} (P(s) − P(iω_k)) K_{kε}(s)/(s − iω_k + ε)^{m_k}
    + (s − iω_k)^{m_k} U_{kε}(s)^{−1} Σ_{l≠k} P(s) K_{lε}(s)/((s − iω_k + ε)^{m_k} (s − iω_l)^{m_l})
  = U_{kε}(s)^{−1} (P(s) − P(iω_k)) G_k((s − iω_k)/ε)
    + (s − iω_k)^{m_k} U_{kε}(s)^{−1} Σ_{l≠k} P(s) K_{lε}(s)/((s − iω_k + ε)^{m_k} (s − iω_l)^{m_l}),   (21)

and G_k(s) = K_k(s)/(s + 1)^{m_k}. Using the inequality |iω − iω_k|/|iω − iω_k + ε| ≤ 1, the bound ‖G_k((iω − iω_k)/ε)‖ ≤ ‖G_k‖∞ and Lemma 6, we get

‖T_ε(iω)‖ ≤ ‖U_k^{−1}‖∞ ‖P(iω) − P(iω_k)‖ ‖G_k‖∞ + √ε ‖U_k^{−1}‖∞ ‖P‖∞ Σ_{l≠k} M_{Kl}.

Because P ∈ Â_−(0)^{p×m}, it is differentiable at iω_k and bounded on the imaginary axis. Therefore there is a constant M_{Pk} > 0 such that ‖P(iω) − P(iω_k)‖ ≤ M_{Pk} |iω − iω_k| for every ω ∈ R. Hence in B_k we have ‖T_ε(iω)‖ ≤ M√ε, where

M = M_{Pk} ‖U_k^{−1}‖∞ ‖G_k‖∞ + ‖U_k^{−1}‖∞ ‖P‖∞ Σ_{l≠k} M_{Kl}.

Thus for sufficiently small positive ε we can expand (I_p + T_ε(iω))^{−1} into a Neumann series,

(I_p + T_ε(iω))^{−1} = I_p + Σ_{n=1}^{∞} (−1)^n T_ε(iω)^n = I_p + F_ε(iω),

and we have the bound

‖F_ε(iω)‖ ≤ Σ_{n=1}^{∞} (M√ε)^n = M√ε/(1 − M√ε),

which proves (18). Taking the inverse of (20) gives (17).

Now we are ready to prove Lemma 2.

Proof of Lemma 2. (a) An arbitrary ω ∈ B_c satisfies |ω − ω_k| ≥ √ε for k = −n, …, n. Hence from Lemma 6 we have, for any ω ∈ B_c,

‖P(iω) C_ε(iω)‖ ≤ √ε ‖P(iω)‖ Σ_{k=−n}^{n} M_{Kk} ≤ M_c √ε,   where   M_c = ‖P‖∞ Σ_{k=−n}^{n} M_{Kk}.

Clearly, for sufficiently small ε we have ‖P(iω) C_ε(iω)‖ < 1. Hence H(iω) can be expanded into a Neumann series, giving the bound ‖H(iω)‖ ≤ 1/(1 − M_c √ε). We also have the bound

‖e_{kε}(iω)‖ ≤ Σ_{j=1}^{m_k} ε^j ‖e_{kj}(iω)‖ ≤ ‖H(iω)‖ Σ_{j=1}^{m_k} ε^j (‖r_{kj}‖ + ‖P(iω)‖ ‖w_{kj}‖)/|ω − ω_k|^j
  ≤ ‖H(iω)‖ Σ_{j=1}^{m_k} ε^{j/2} (b_{rkj} + ‖P‖∞ b_{wkj}) ≤ ‖H(iω)‖ √ε N_{ck},

where

N_{ck} = Σ_{j=1}^{m_k} (b_{rkj} + ‖P‖∞ b_{wkj}).   (22)

Thus

‖e_ε(iω)‖ ≤ Σ_{k=−n}^{n} ‖e_{kε}(iω)‖ ≤ ‖H(iω)‖ √ε Σ_{k=−n}^{n} N_{ck} ≤ (√ε/(1 − M_c √ε)) Σ_{k=−n}^{n} N_{ck}.   (23)

Because the right-hand side of (23) approaches zero as ε → 0 and does not depend on r_{kj}, w_{kj} or ω, the result follows.

(b) If l ≠ k, we have

e_{lε}(s) = (I_p + F_ε(s)) U_{kε}(s)^{−1} ((s − iω_k)^{m_k}/(s − iω_k + ε)^{m_k}) Σ_{j=1}^{m_l} ε^j (r_{lj} − P(s) w_{lj})/(s − iω_l)^j.

Let ω ∈ B_k be arbitrary. From the inequalities |ω − ω_l| ≥ √ε and |iω − iω_k|/|iω − iω_k + ε| ≤ 1, and the bound (18), we get

‖e_{lε}(iω)‖ ≤ (1 + ‖F_ε(iω)‖) ‖U_{kε}(iω)^{−1}‖ √ε N_{cl} ≤ √ε ‖U_k^{−1}‖∞ N_{cl}/(1 − M√ε),   (24)

where N_{cl} was defined in Eq. (22). Because the right-hand side of (24) approaches zero as ε → 0 and does not depend on r_{lj}, w_{lj} or ω, the claim follows.

(c) Using Eqs. (15) and (16) we have

U_{kε}(s)^{−1}/(s − iω_k + ε)^{m_k} = (1/ε^{m_k}) Q_{kε}(s)^{−1},

and hence

e_{kε}(s) = (I_p + F_ε(s)) U_{kε}(s)^{−1} ((s − iω_k)^{m_k}/(s − iω_k + ε)^{m_k}) Σ_{j=1}^{m_k} ε^j (r_{kj} − P(s) w_{kj})/(s − iω_k)^j
  = (I_p + F_ε(s)) Q_{kε}(s)^{−1} Σ_{j=1}^{m_k} ((s − iω_k)^{m_k−j}/ε^{m_k−j}) (r_{kj} − P(s) w_{kj}).

Let us define

Y_{kj}(s) = Q_k(s)^{−1} s^{m_k−j},
L_{kε}(s) = F_ε(s) Σ_{j=1}^{m_k} Y_{kj}((s − iω_k)/ε) (r_{kj} − P(s) w_{kj}) − Σ_{j=1}^{m_k} Y_{kj}((s − iω_k)/ε) (P(s) − P(iω_k)) w_{kj}.

Then e_{kε} can be written in the form

e_{kε}(s) = Σ_{j=1}^{m_k} Y_{kj}((s − iω_k)/ε) (r_{kj} − P(iω_k) w_{kj}) + L_{kε}(s).

It is clear that the elements of Y_{kj}(s) are strictly proper. Hence for each ω ∈ R and ε > 0 we have ‖Y_{kj}(i(ω − ω_k)/ε)‖ ≤ ‖Y_{kj}‖∞. Thus for ω ∈ B_k we have the bound

‖L_{kε}(iω)‖ ≤ (M√ε/(1 − M√ε)) Σ_{j=1}^{m_k} ‖Y_{kj}‖∞ (b_{rkj} + b_{wkj} ‖P‖∞) + Σ_{j=1}^{m_k} b_{wkj} ‖Y_{kj}‖∞ ‖P(iω) − P(iω_k)‖.

Because P ∈ Â_−(0)^{p×m}, it is continuous at iω_k. Therefore

lim_{ε→0} sup_{ω∈B_k} ‖P(iω) − P(iω_k)‖ = 0,

and hence

lim_{ε→0} sup_{r∈S_r} sup_{w∈S_w} sup_{ω∈B_k} ‖L_{kε}(iω)‖ = 0.

Making the change of variable (ω − ω_k)/ε = ω̃, we see that

lim_{ε→0} sup_{r,w} sup_{ω∈B_k} ‖Σ_{j=1}^{m_k} Y_{kj}(i(ω − ω_k)/ε) (r_{kj} − P(iω_k) w_{kj})‖
  = lim_{ε→0} sup_{r,w} sup_{|ω̃|<1/√ε} ‖Σ_{j=1}^{m_k} Y_{kj}(iω̃) (r_{kj} − P(iω_k) w_{kj})‖
  = sup_{r,w} sup_{ω̃∈R} ‖Σ_{j=1}^{m_k} Y_{kj}(iω̃) (r_{kj} − P(iω_k) w_{kj})‖ = J_k(K_k).

Using the inequalities

‖e_{kε}(iω)‖ ≤ ‖Σ_{j=1}^{m_k} Y_{kj}(i(ω − ω_k)/ε) (r_{kj} − P(iω_k) w_{kj})‖ + ‖L_{kε}(iω)‖,
‖e_{kε}(iω)‖ ≥ ‖Σ_{j=1}^{m_k} Y_{kj}(i(ω − ω_k)/ε) (r_{kj} − P(iω_k) w_{kj})‖ − ‖L_{kε}(iω)‖,

the result follows.

References

[1] T. Hämäläinen and S. Pohjolainen, "A finite-dimensional robust controller for systems in the CD-algebra," to be published in IEEE Transactions on Automatic Control, March 2000.

[2] F. M. Callier and C. A. Desoer, "An algebra of transfer functions for distributed linear time-invariant systems," IEEE Transactions on Circuits and Systems, vol. CAS-25, pp. 651-662, Sept. 1978. Corrections: vol. CAS-26, p. 360, 1979.

[3] M. Vidyasagar, Control System Synthesis: A Factorization Approach. Cambridge, Massachusetts: MIT Press, 1985.

[4] T. Hämäläinen and S. Pohjolainen, "On the asymptotically optimal tuning of robust controllers for systems in the CD-algebra," submitted to SIAM Journal on Control and Optimization.