FRTN10 Multivariable Control, Lecture 13
Anders Robertsson
Automatic Control LTH, Lund University

Course outline

L1-L5   Purpose, models and loop-shaping by hand
L6-L8   Limitations on achievable performance
L9-L11  Controller optimization: Analytic approach
L12-L14 Controller optimization: Numerical approach
        L12 Youla parameterization, Internal Model Control
        L13 Synthesis by convex optimization
        L14 Controller simplification
L15     Summary

The Q-parametrization (Youla)

[Block diagram: Plant with disturbances w and control inputs u entering, controlled variables z and measurements y leaving; a Controller maps y to u]

Idea for Lectures 12-14: The choice of controller generally corresponds to finding Q(s), to get desirable properties of the map from w to z:

    z = [P_zw(s) + P_zu(s) Q(s) P_yw(s)] w

Once Q(s) is determined, a corresponding controller is derived.

Lecture 13: Synthesis by Convex Optimization

Most of this lecture is based on source material from Boyd, Vandenberghe and coauthors. See http://www.control.lth.se/education/engineeringprogram/frtn.html

Example: Spring-mass system

[Figure: two masses M and m with positions d1 and d2, coupled by a spring with damping b; the control force u acts on the first mass]

[Figure: position of the first mass d1, and control signal u(t)]

The step response is not within its upper and lower bounds. The input stays within its amplitude bound |u(t)| <= 6.
Sensitivity function S

[Figure: Bode magnitude plot of the sensitivity function S versus frequency (rad/s)]

The sensitivity does not satisfy the magnitude bound |S(iw)| <= 1.3.

Introduction to convex optimization

Least-squares

    minimize ||Ax - b||_2^2

solving least-squares problems
- analytical solution: x = (A^T A)^{-1} A^T b
- reliable and efficient algorithms and software
- computation time proportional to n^2 k (A in R^{k x n}); less if structured
- a mature technology

using least-squares
- least-squares problems are easy to recognize
- a few standard techniques increase flexibility (e.g., including weights, adding regularization terms)

Linear programming

    minimize    c^T x
    subject to  a_i^T x <= b_i,  i = 1, ..., m

solving linear programs
- no analytical formula for the solution
- reliable and efficient algorithms and software
- computation time proportional to n^2 m if m >= n; less with structure
- a mature technology

using linear programming
- not as easy to recognize as least-squares problems
- a few standard tricks are used to convert problems into linear programs (e.g., problems involving l1- or l-infinity-norms, piecewise-linear functions)

Linear program (LP)

    minimize    c^T x + d
    subject to  Gx <= h
                Ax = b

- convex problem with affine objective and constraint functions
- feasible set is a polyhedron

Convex optimization problem

    minimize    f0(x)
    subject to  fi(x) <= bi,  i = 1, ..., m

- objective and constraint functions are convex:
      fi(ax + by) <= a fi(x) + b fi(y)   if a + b = 1, a >= 0, b >= 0
- includes least-squares problems and linear programs as special cases

solving convex optimization problems
- no analytical solution
- reliable and efficient algorithms
- computation time (roughly) proportional to max{n^3, n^2 m, F}, where F is the cost of evaluating the fi's and their first and second derivatives
- almost a technology

using convex optimization
- often difficult to recognize
- many tricks for transforming problems into convex form
- surprisingly many problems can be solved via convex optimization

Brief history of convex optimization

theory (convex analysis): ca 1900-1970

algorithms
- 1947: simplex algorithm for linear programming (Dantzig)
- 1960s: early interior-point methods (Fiacco & McCormick, Dikin, ...)
- 1970s: ellipsoid method and other subgradient methods
- 1980s: polynomial-time interior-point methods for linear programming (Karmarkar 1984)
- late 1980s-now: polynomial-time interior-point methods for nonlinear convex optimization (Nesterov & Nemirovski 1994)

applications
- before 1990: mostly in operations research; few in engineering
- since 1990: many new applications in engineering (control, signal processing, communications, circuit design, ...); new problem classes (semidefinite and second-order cone programming, robust optimization)
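The "mature technologies" above can be exercised directly from standard numerical software. A minimal sketch (assuming numpy and scipy are available; they are not part of the lecture material): it checks the analytic least-squares formula x = (A^T A)^{-1} A^T b against a library solver, and illustrates one of the standard tricks mentioned above, recasting an l-infinity-norm fitting problem as a linear program in the variables (x, t).

```python
import numpy as np
from scipy.optimize import linprog

# Random test data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)

# Least-squares: analytic solution via the normal equations.
x_analytic = np.linalg.solve(A.T @ A, A.T @ b)
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # library solution, same x

# LP trick: minimize ||Ax - b||_inf is equivalent to
#     minimize t  subject to  -t <= (Ax - b)_i <= t,
# which is linear in (x, t).
n = A.shape[1]
c = np.r_[np.zeros(n), 1.0]                       # objective: minimize t
G = np.block([[ A, -np.ones((20, 1))],            #   Ax - b <= t
              [-A, -np.ones((20, 1))]])           # -(Ax - b) <= t
h = np.concatenate([b, -b])
res = linprog(c, A_ub=G, b_ub=h, bounds=[(None, None)] * (n + 1))
x_inf, t_opt = res.x[:n], res.x[-1]
```

At the LP optimum, t equals the achieved l-infinity residual, and it can never exceed the l-infinity residual of the least-squares solution.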
Examples on R

convex:
- affine: ax + b on R, for any a, b in R
- exponential: e^{ax}, for any a in R
- powers: x^a on R++, for a >= 1 or a <= 0
- powers of absolute value: |x|^p on R, for p >= 1
- negative entropy: x log x on R++

concave:
- affine: ax + b on R, for any a, b in R
- powers: x^a on R++, for 0 <= a <= 1
- logarithm: log x on R++

Examples on R^n and R^{m x n}

affine functions are convex and concave; all norms are convex

examples on R^n
- affine function f(x) = a^T x + b
- norms: ||x||_p = (sum_{i=1}^n |x_i|^p)^{1/p} for p >= 1; ||x||_inf = max_k |x_k|

examples on R^{m x n} (m x n matrices)
- affine function f(X) = tr(A^T X) + b = sum_{i=1}^m sum_{j=1}^n A_ij X_ij + b
- spectral (maximum singular value) norm f(X) = ||X||_2 = sigma_max(X) = (lambda_max(X^T X))^{1/2}

Convex optimization problem

standard form convex optimization problem

    minimize    f0(x)
    subject to  fi(x) <= 0,      i = 1, ..., m
                a_i^T x = b_i,   i = 1, ..., p

- f0, f1, ..., fm are convex; equality constraints are affine
- the problem is quasiconvex if f0 is quasiconvex (and f1, ..., fm convex)

Quadratic program (QP)

    minimize    (1/2) x^T P x + q^T x + r
    subject to  Gx <= h

- P in S^n_+, so the objective is convex quadratic
- minimize a convex quadratic function over a polyhedron

[Figure: level sets of the objective over the polyhedron, with minimizer x* on the boundary]

Second-order cone programming (SOCP)

    minimize    f^T x
    subject to  ||A_i x + b_i||_2 <= c_i^T x + d_i,  i = 1, ..., m
                Fx = g

(A_i in R^{n_i x n}, F in R^{p x n})

Semidefinite program (SDP)

    minimize    c^T x
    subject to  x_1 F_1 + x_2 F_2 + ... + x_n F_n + G <= 0

with F_i, G in S^k; the inequality constraint is called a linear matrix inequality (LMI)

Newton's method

given a starting point x in dom f, tolerance eps > 0.
repeat
1. Compute the Newton step and decrement:
   dx_nt := -grad^2 f(x)^{-1} grad f(x);   lambda^2 := grad f(x)^T grad^2 f(x)^{-1} grad f(x).
2. Stopping criterion: quit if lambda^2 / 2 <= eps.
3. Line search: choose step size t by backtracking line search.
4. Update: x := x + t dx_nt.

Barrier method for constrained minimization
The constrained problem

    minimize    f0(x)
    subject to  fi(x) <= 0,  i = 1, ..., m
                Ax = b

is approximated via a logarithmic barrier:

    minimize    f0(x) - (1/t) sum_{i=1}^m log(-fi(x))
    subject to  Ax = b

- an equality constrained problem
- for t > 0, -(1/t) log(-u) is a smooth approximation of the indicator function of the constraint u <= 0
- the approximation improves as t grows

[Figure: -(1/t) log(-u) for increasing values of t, approaching the indicator function of u <= 0]
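The Newton and barrier algorithms above fit in a few lines of code. A minimal sketch on a hypothetical two-variable problem (chosen for illustration, not taken from the lecture): minimize f0(x) = (x1-2)^2 + (x2-1)^2 subject to x1 + x2 <= 1, whose optimum is x* = (1, 0), the projection of the unconstrained minimizer (2, 1) onto the halfplane. The inner loop is Newton's method with backtracking line search; the outer loop increases t by a factor mu = 10.

```python
import numpy as np

a = np.array([1.0, 1.0])                      # constraint: a @ x - 1 <= 0

def f0(x):
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def barrier(x, t):
    # t*f0(x) - log(-f1(x)); defined only for strictly feasible x
    return t * f0(x) - np.log(1.0 - a @ x)

def newton(x, t, eps=1e-10):
    """Newton's method with backtracking line search on the barrier objective."""
    for _ in range(100):
        s = 1.0 - a @ x                       # slack of the inequality
        g = t * np.array([2*(x[0] - 2), 2*(x[1] - 1)]) + a / s   # gradient
        H = t * 2 * np.eye(2) + np.outer(a, a) / s**2            # Hessian
        dx = -np.linalg.solve(H, g)           # Newton step
        lam2 = -g @ dx                        # Newton decrement squared
        if lam2 / 2.0 <= eps:                 # stopping criterion
            break
        step = 1.0                            # backtracking (alpha=0.25, beta=0.5)
        while step > 1e-12 and (1.0 - a @ (x + step*dx) <= 0.0 or
                barrier(x + step*dx, t) > barrier(x, t) + 0.25*step*(g @ dx)):
            step *= 0.5
        x = x + step * dx
    return x

# Barrier method: re-center for increasing t until the gap bound m/t is small.
x, t, mu, m = np.array([0.0, 0.0]), 1.0, 10.0, 1
while m / t > 1e-6:
    x = newton(x, t)
    t *= mu
```

The final iterate stays strictly feasible and is within the duality-gap bound m/t of the optimal value f0(x*) = 2.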
Scheme for numerical optimization of Q

Given some fixed set of basis functions phi_1(s), ..., phi_N(s), we will search numerically for matrices Q_1, ..., Q_N such that the closed-loop transfer matrix G_zw(s) satisfies given specifications when

    G_zw(s) = P_zw(s) + P_zu(s) Q(s) P_yw(s)   and   Q(s) = sum_{k=1}^N Q_k phi_k(s)

Once Q(s) has been determined, we will recover the desired controller from the formula

    C(s) = [I + Q(s) P_yu(s)]^{-1} Q(s)

It is possible to choose the sequence phi_1(s), phi_2(s), phi_3(s), ... such that every stable Q can be approximated arbitrarily well. Hence, in principle, every convex control design problem can be solved this way. But what specifications give a convex design problem?

Pulse response parameterization

We will use an intuitively simple parametrization of Q(s), where each parameter Q_k represents a point on the corresponding impulse response in the time domain.

[Figure: impulse response of Q with samples Q_k marked]

Mini-problem

Which specifications are convex constraints on Q?
1. Stability of the closed loop system
2. Lower bound on step response from w_i to z_j at time t_i
3. Upper bound on step response from w_i to z_j at time t_i
4. Lower bound on Bode amplitude from w_i to z_j at frequency w_i
5. Upper bound on Bode amplitude from w_i to z_j at frequency w_i
6. Interval bound on Bode phase from w_i to z_j at frequency w_i

Lower and upper bounds on step response

[Figure: step responses with lower and upper bounds marked]

The step response depends linearly on Q, so every time t with a lower bound gives a linear constraint. Every time t with an upper bound also gives a linear constraint.

Bounds on Bode amplitude

[Figure: Bode magnitude diagrams of |G_a(iw)| and |G_b(iw)| with amplitude bounds]

An upper amplitude bound |G(iw_i)| < c is a convex quadratic constraint. A lower bound on |G(iw_i)| is a non-convex quadratic constraint. This should be avoided in optimization.
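The claim that the step response depends linearly (affinely) on the parameters Q_k can be checked numerically. A minimal discrete-time sketch, using made-up first-order pulse responses p_zw, p_zu, p_yw (hypothetical data, not the lecture's spring-mass or DC-servo examples) and an FIR Q with coefficients Q_0, ..., Q_{N-1}: it verifies the affine dependence by superposition and extracts the matrix that turns every time-domain bound into a linear inequality in Q.

```python
import numpy as np

# Closed loop (discrete-time analogue): g_zw = p_zw + p_zu * q * p_yw,
# where * denotes convolution and q[k] = Q_k for k = 0..N-1 is an FIR filter.
T, N = 50, 8
k = np.arange(T)
p_zw = 0.8**k                      # assumed pulse responses (hypothetical)
p_zu = 0.5 * 0.9**k
p_yw = 0.6**k

def step_response(Q):
    q = np.zeros(T); q[:N] = Q
    g = p_zw + np.convolve(np.convolve(p_zu, q)[:T], p_yw)[:T]
    return np.cumsum(g)            # step response = cumulative sum of pulse response

# Affine check: step(a*Q1 + (1-a)*Q2) == a*step(Q1) + (1-a)*step(Q2)
rng = np.random.default_rng(1)
Q1, Q2, a = rng.standard_normal(N), rng.standard_normal(N), 0.3
lhs = step_response(a*Q1 + (1 - a)*Q2)
rhs = a*step_response(Q1) + (1 - a)*step_response(Q2)

# Since step(Q) = s0 + M Q, the bound l(t) <= step(t) <= u(t) becomes a pair
# of linear inequalities in Q; M is extracted column by column:
s0 = step_response(np.zeros(N))
M = np.column_stack([step_response(e) - s0 for e in np.eye(N)])
```

Stability (item 1 in the mini-problem) is automatic here: any finite Q gives a stable closed loop, which is exactly what the Youla parametrization buys.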
Synthesis by convex optimization

A general control synthesis problem can be stated as a convex optimization problem in the variables Q_1, ..., Q_m. The problem has a quadratic objective, with linear and quadratic constraints:

    Minimize    integral || P_zw(iw) + P_zu(iw) [sum_k Q_k phi_k(iw)] P_yw(iw) ||^2 dw   (quadratic objective)
    subject to  step response w_i -> z_j is smaller than f_ij at time t                  (linear constraints)
                step response w_i -> z_j is bigger than g_ij at time t                   (linear constraints)
                Bode magnitude w_i -> z_j is smaller than h_ij at w                      (quadratic constraints)

Once the variables Q_1, ..., Q_m have been optimized, the controller is obtained as

    C(s) = [I + Q(s) P_yu(s)]^{-1} Q(s)

Example: DC-servo with time domain bounds

[Block diagram: input disturbance w_1 and reference w_2 enter the loop with controller C(s) and plant P(s); outputs z_1 (position) and z_2 (control signal)]

The transfer matrix from (w_1, w_2) to (z_1, z_2) is

    G_zw(s) = [  P/(1+PC)    PC/(1+PC)
                -PC/(1+PC)    C/(1+PC) ]

with P(s) = 1/(s(s+1)). We will choose C(s) to minimize

    integral trace[ G_zw(iw)* G_zw(iw) ] dw

subject to time-domain bounds.

[Figure: optimized responses to an input step disturbance and a reference step, shown together with their time-domain bounds]

What if we remove the upper bound on the response to input disturbances?

[Figure: responses after removing the upper bound on the input-disturbance response]

Summary

There are efficient algorithms for convex optimization, e.g.
- Linear programming (LP)
- Quadratic programming (QP)
- Second-order cone programming (SOCP)
- Semi-definite programming (SDP)

The Youla parametrization allows us to use these algorithms for control synthesis.

Resulting controllers have high order. Order reduction will be studied in the next lecture.

The integral action in the controller is lost, just as in Lecture 12!
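A toy version of the synthesis problem above can be solved end to end. The sketch keeps linear time-domain constraints but swaps the lecture's integral-quadratic objective for an l-infinity tracking objective, so that a plain LP solver suffices; the discrete-time pulse responses are hypothetical, and numpy/scipy are assumed available.

```python
import numpy as np
from scipy.optimize import linprog

# Toy synthesis problem in FIR parameters Q (hypothetical plant data):
#   minimize   max_t |step(t) - 1|               (tracking, l-infinity)
#   subject to step(t) <= 1.1 for all t          (overshoot bound)
# where step(Q) = s0 + M Q is affine in Q.
T, N = 50, 8
k = np.arange(T)
p_zw, p_zu, p_yw = 0.8**k, 0.5 * 0.9**k, 0.6**k   # assumed open-loop data

def step_response(Q):
    q = np.zeros(T); q[:N] = Q
    g = p_zw + np.convolve(np.convolve(p_zu, q)[:T], p_yw)[:T]
    return np.cumsum(g)

s0 = step_response(np.zeros(N))
M = np.column_stack([step_response(e) - s0 for e in np.eye(N)])

# LP in the variables (Q, t): minimize t subject to
#   s0 + M Q - 1 <= t,  1 - s0 - M Q <= t,  s0 + M Q <= 1.1
ones = np.ones((T, 1))
A_ub = np.vstack([np.hstack([ M, -ones]),
                  np.hstack([-M, -ones]),
                  np.hstack([ M, np.zeros((T, 1))])])
b_ub = np.concatenate([1.0 - s0, s0 - 1.0, 1.1 - s0])
c = np.r_[np.zeros(N), 1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (N + 1))
s_opt = s0 + M @ res.x[:N]
```

The optimized step response respects the overshoot bound, and the optimal t equals the worst-case tracking error, mirroring how the full problem trades the objective against the time-domain constraints.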