
Advanced Control, State Regulator

Scope: design of controllers using pole placement and LQ design rules
Keywords: pole placement, optimal control, LQ regulator, weighting matrices
Prerequisites: state space description
Contact: Markus Kottmann, markus.kottmann@hsr.ch
Date: September 23, 2009

Preliminary Remarks

The path to be followed in this and the next document is the following: The first point to be discussed concerns the design of regulators which make use of the state vector x to control a plant. It is assumed that the plant is controllable. The next point concerns the estimation of the state vector x in case it is not completely measured. To design such estimators (observers), it is assumed that the plant is observable. The third theme concerns the cooperation between the regulator and the observer: instead of using the true state vector, the regulator then utilizes the estimated state vector. Assuming that the two separate designs of regulator and observer meet their specific requirements, are these requirements still fulfilled if the regulator and the observer are used together?

The assumptions about the plant to be controlled are as follows:

Preferably, the plant is controllable and observable. If the plant lacks one or both of these properties (see the examples in Fig. 1), then the non-controllable or non-observable parts can be removed from the model; our methods will work for the remaining model, which is controllable and observable.

Figure 1: Examples of systems that are not controllable

If the removed parts are e.g. unstable or badly damped, then the control system will not be very successful. In that case, additional actuators or sensors are needed.

The plant is assumed to be a low-pass system with a feed-through matrix D = 0. The case D ≠ 0 is not difficult; it just produces larger equations and diagrams.

In the sequel, the plant is often given as a SISO system. Of course, that is not a restriction. In principle, it is easier to control a plant with more sensors and more actuators.

1 Feedback of the Full State Vector

Figure 2: Control structure with feedforward term and full state feedback

In Fig. 2, the control system has two components: the feedback part K and the feedforward part K_vf. The state feedback K is used to determine the dynamics of the closed-loop system, while the prefilter K_vf is used to guarantee the static gain (y_∞ = r).¹ Note that the error signal e = r − y, which is used in classical output feedback, does not appear in this control structure.

In the feedback signal u_R, the state variables x_1(t), x_2(t), ..., x_n(t) are weighted by the coefficients k_1, k_2, ..., k_n:

\[ u_R = k_1 x_1 + k_2 x_2 + \dots + k_n x_n = \begin{bmatrix} k_1 & k_2 & \dots & k_n \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = Kx \]

The control signal u is

\[ u = w - u_R = K_{vf}\, r - Kx \]

The matrix equations are

\[ \dot x = Ax + B(K_{vf}\, r - Kx) = Ax - BKx + BK_{vf}\, r \]
\[ \dot x = (A - BK)x + BK_{vf}\, r \qquad \text{and} \qquad y = Cx \]

The whole system can be described by a new set of matrices A_g, B_g, C_g:

\[ A_g = A - BK, \qquad B_g = BK_{vf}, \qquad C_g = C \]

¹ The prefilter, as used here, is just a gain. More sophisticated prefilters might be used to shape the dynamics of Y(s)/R(s).

The I/O-description becomes

\[ G(s) = \frac{Y(s)}{R(s)} = C_g (sI - A_g)^{-1} B_g = C (sI - A + BK)^{-1} B K_{vf} \]

For the moment we assume that K is known. Then the static gain can be adjusted by setting G(0) = 1, i.e.

\[ K_{vf} = \bigl( C(-A + BK)^{-1} B \bigr)^{-1} \]

This method is sensitive to parameter variations, so in section 4 we will discuss an alternative which includes integral action. Sections 2 and 3 deal with the question of how to calculate K.

2 Pole Placement

According to Definition B of controllability, the controllability of (A, B) ensures that the eigenvalues λ_1, ..., λ_n of A_g = A − BK can be placed arbitrarily by proper choice of K. The eigenvalues of A_g are given by the roots of det(λI − A + BK). To achieve some desired eigenvalues λ_1, ..., λ_n, the following polynomials must coincide:

\[ \det(\lambda I - A + BK) = (\lambda - \lambda_1)(\lambda - \lambda_2)\dots(\lambda - \lambda_n) \]

The question which arises is about the choice of good pole locations. Some considerations are:

- Obviously, the poles must be chosen in the left half plane for stability reasons.
- Transients of the signals should decay towards zero faster than some C e^{−σ_1 t}.
- The transients should be well damped, e.g. ζ > 1/√2.
- Forcing transients to decay fast towards zero needs energy. Therefore, an upper bound σ_2 > σ_1 is meaningful.

Figure 3: Qualitative sketch of the region of good pole locations in the s-plane (bounded by σ_1, σ_2 and the damping cone ζ = 1/√2)

While the placement of poles is simple for low order systems, it gets somewhat unsatisfying when handling systems of order e.g. 4 or higher.
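As a quick illustration of the formulas above, the following MATLAB sketch places the poles of a hypothetical second-order plant and computes the prefilter K_vf for unit static gain; the numerical values are assumptions chosen only for this example.

```matlab
% Pole placement with a static prefilter -- illustrative sketch only;
% the plant matrices below are assumed example values, not from the text.
A = [0 1; -2 -3];
B = [0; 1];
C = [1 0];

p = [-4+3j, -4-3j];               % desired closed-loop eigenvalues
K = place(A, B, p);               % det(sI - A + B*K) has the roots p

Kvf = 1/(C*((-A + B*K)\B));       % K_vf = (C(-A + BK)^(-1) B)^(-1)

Gcl = ss(A - B*K, B*Kvf, C, 0);   % closed-loop system (A_g, B_g, C_g)
dcgain(Gcl)                       % returns 1: unit static gain
step(Gcl)                         % step response with the chosen dynamics
```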

3 Optimal Control

An alternative way to determine K makes use of the fundamental idea of optimality. Changing the value of K results in a different behavior of the control system. If the behavior of the control system is to be judged quantitatively, it is convenient to have a scalar performance index (or cost function) J = f(K). Normally, better performance corresponds to smaller values of J. The optimal value of K can then be found by minimizing J, e.g. by varying K systematically. In control theory, systems that are adjusted to provide a minimum performance index are called optimal control systems.

The function f is often defined based on some intermediary variables such as the state vector x or the input u:

\[ J = f(K) = \int_0^{t_f} g(x, u)\, dt \]

where, of course, x and u depend on K. Since it is useless to look at the values of x and u at some specific time only, the index J is formulated as an integral over time. For theoretical considerations, it is convenient to set the final time t_f to infinity. Quadratic terms for the function g are prevalent: on one hand these are meaningful measures, and on the other hand they often yield nice analytical results.

Figure 4: Regulator problem

Consider the following regulator problem, see Fig. 4. The initial state of the system is x(0) = x_0. The intention is to take x towards zero as fast as possible by using state feedback. If the norm of x is taken,

\[ \|x\|^2 = x_1^2 + x_2^2 + \dots + x_n^2 = \begin{bmatrix} x_1 & x_2 & \dots & x_n \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = x^T x \]

and integrated over time, we get the performance index

\[ J = \int_0^{t_f} x^T x\, dt \]

In the following derivation the time horizon is set to infinity, and a more general weighting scheme for the state variables is used, with a symmetric, positive semidefinite matrix Q:

\[ J = \int_0^{\infty} x^T Q x\, dt \tag{1} \]

Note that the general form of the performance index also incorporates a term containing u to weigh the control energy. This will be discussed in section 3.1.

To obtain the minimum value of J, we follow [1] and postulate the existence of an exact differential with a constant symmetric matrix P such that

\[ \frac{d}{dt}\bigl(x^T P x\bigr) = -x^T Q x \]

where P is to be determined. Applying the product rule for differentiation,

\[ \frac{d}{dt}\bigl(x^T P x\bigr) = \dot x^T P x + x^T P \dot x \]

and substituting \(\dot x = (A - BK)x = Hx\), we get

\[ \frac{d}{dt}\bigl(x^T P x\bigr) = (Hx)^T P x + x^T P (Hx) = x^T H^T P x + x^T P H x = x^T \underbrace{(H^T P + P H)}_{-Q} x = -x^T Q x \]

which is the exact differential we are seeking. Substituting that expression in (1) results in the performance index

\[ J = -\int_0^{\infty} \frac{d}{dt}\bigl(x^T P x\bigr)\, dt = -x^T P x \Big|_0^{\infty} = -\bigl(x^T(\infty) P x(\infty) - x^T(0) P x(0)\bigr) = x^T(0) P x(0) \]

In the evaluation of the limit at t = ∞, we have assumed that the system is stable and hence x(∞) = 0, as desired. Therefore, to minimize the performance index J, we consider the two equations

\[ J = \int_0^{\infty} x^T Q x\, dt = x^T(0) P x(0) \qquad \text{and} \qquad H^T P + P H = -Q \]

The design steps are then as follows:

1. Determine the matrix P from the Lyapunov equation H^T P + P H = −Q, where H and Q are known.
2. Minimize J = x^T(0) P x(0) by adjusting one or more unspecified system parameters.
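These two design steps translate directly into a short computation. The sketch below evaluates J for a given gain using MATLAB's lyap; the system, gain and initial state are assumed example values, not taken from the text.

```matlab
% Evaluating J = x0'*P*x0 for a given feedback gain K -- illustrative sketch;
% the numbers below are assumptions, not from the text.
A  = [0 1; 0 0];   B = [0; 1];
Q  = eye(2);       x0 = [1; 0];
K  = [1 2];                      % candidate gain to be judged

H = A - B*K;                     % closed-loop system matrix
P = lyap(H', Q);                 % solves H'*P + P*H + Q = 0
J = x0'*P*x0                     % value of the performance index for this K
```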

3.1 Linear Quadratic Regulator (LQR)

In a more general case, we also consider the control energy of u. A method suitable for computer calculation is stated without proof in the following. Consider the uncompensated MIMO system

\[ \dot x = Ax + Bu \]

with feedback u = −Kx. The performance index is

\[ J = \int_0^{\infty} \bigl( x^T Q x + u^T R u \bigr)\, dt \tag{2} \]

With Q and R both positive definite, it can be shown that (2) is minimized if

\[ K = R^{-1} B^T P \]

To calculate the symmetric n × n matrix P, an algebraic Riccati equation (ARE) must be solved:

\[ A^T P + P A - P B R^{-1} B^T P = -Q \]

Since the ARE is a quadratic equation, it has multiple solutions. Equation (2) demands the (only) solution P which is positive definite. The resulting optimal controller is called the linear quadratic regulator (MATLAB command: lqr).

The condition on Q can be relaxed: if Q = C^T C is positive semidefinite and if (A, C) is observable, then P and K can be calculated in the same way. The reason is that x must be observable through the virtual output y = Cx.

3.2 Performance of the Linear Quadratic Regulator

If the control loop is opened at the input u of the plant (see Fig. 4), then the open-loop transfer function is given by

\[ G_0(s) = K(sI - A)^{-1} B \]

It can be shown that G_0(jω) has at least a phase margin of ±60° and a gain margin of [0.5, ∞[ on each channel. This makes the LQ regulator a considerably robust controller.

4 State Regulator Including Integral Part

In this section we discuss the problem of designing a compensator that provides asymptotic tracking of a constant reference input r(t) = r_0 with zero steady-state error. From classical control it is known that this can be achieved if the open loop is at least of type 1, i.e. if it has at least one open integrator. This idea is formalized here by introducing an internal model of the reference input in the compensator.

Figure 5: Block diagram of a regulator with integral part

In Fig. 5 the tracking error e is defined as

\[ e = y - r \]

(note the sign!), and since r is constant, its time derivative yields

\[ \dot e = \dot y - \dot r = \dot y = C \dot x \]

Defining two intermediate variables, z = \dot x and w = \dot u, a new system results:

\[ \underbrace{\begin{bmatrix} \dot e \\ \dot z \end{bmatrix}}_{\dot{\hat x}} = \underbrace{\begin{bmatrix} 0 & C \\ 0 & A \end{bmatrix}}_{\hat A} \underbrace{\begin{bmatrix} e \\ z \end{bmatrix}}_{\hat x} + \underbrace{\begin{bmatrix} 0 \\ B \end{bmatrix}}_{\hat B} \underbrace{w}_{\hat u} \tag{3} \]

If the system (\hat A, \hat B) is controllable, then some feedback gain \hat K which stabilizes (3) can be designed, e.g. by pole placement or using the LQR method:

\[ \hat u = -\hat K \hat x \]

Since the error e is a state variable of (3), it converges towards zero, i.e. the requirement of asymptotic tracking with zero steady-state error is fulfilled. Splitting the feedback term yields

\[ \hat u = w = -\hat K \begin{bmatrix} e \\ z \end{bmatrix} = -\begin{bmatrix} K_1 & K_2 \end{bmatrix} \begin{bmatrix} e \\ z \end{bmatrix} = -K_1 e - K_2 z \]

and by integration the final control law

\[ u(t) = -K_1 \int_0^t e(\tau)\, d\tau - K_2 x(t) \]

which is illustrated in Fig. 5.
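A possible way to carry out this design numerically is sketched below for an assumed second-order SISO plant; the plant data and the LQR weights are illustrative assumptions only, not taken from the text.

```matlab
% State regulator with integral part -- illustrative sketch; the plant and
% the weights are assumed example values.
A = [0 1; -2 -3];   B = [0; 1];   C = [1 0];
n = size(A, 1);

Ahat = [0, C; zeros(n,1), A];            % augmented system (3), states [e; z]
Bhat = [0; B];

Khat = lqr(Ahat, Bhat, diag([50 1 1]), 1);
K1 = Khat(1);                            % gain on the integrated error
K2 = Khat(2:end);                        % gain on the plant state x

% Closed loop with integrator state q = integral of e, states [x; q]:
% xdot = A*x + B*u,  u = -K1*q - K2*x,  qdot = e = C*x - r
Acl = [A - B*K2, -B*K1; C, 0];
Bcl = [zeros(n,1); -1];
Ccl = [C, 0];
step(ss(Acl, Bcl, Ccl, 0))               % y converges to r with zero steady-state error
```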

Bibliography

[1] Richard C. Dorf and Robert H. Bishop. Modern Control Systems. Pearson, 2008.

[2] William S. Levine, editor. The Control Handbook. The Electrical Engineering Handbook Series. CRC Press/IEEE Press, 1996.

Solved Exercises

1 Pole Placement

Given is a system in state space description

\[ A = \begin{bmatrix} 0 & 1 \\ 0 & -a \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ K_s \end{bmatrix}, \qquad C = \begin{bmatrix} c_1 & 0 \end{bmatrix}, \qquad D = 0 \]

with a = 4, K_s = 8, c_1 = 6.4.

For each of the following cases, find a controller u = −kx + k_vf r such that the closed-loop system has a unitary static gain and the poles p_1 and p_2:

1. p_1 = p_2 = −15
2. p_{1,2} = −15 ± 15j
3. p_1 = p_2 = −5
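A numerical cross-check of the three cases can be done along the following lines; this is a sketch only, it relies on the state-space matrices as reconstructed above and is not the worked solution of the original document.

```matlab
% Numerical check for the pole-placement exercise -- sketch only, based on
% the matrices as reconstructed above.
a = 4; Ks = 8; c1 = 6.4;
A = [0 1; 0 -a];   B = [0; Ks];   C = [c1 0];

cases = {[-15 -15], [-15+15j -15-15j], [-5 -5]};
for i = 1:numel(cases)
    k   = acker(A, B, cases{i});     % acker, since place rejects repeated poles
    kvf = 1/(C*((-A + B*k)\B));      % prefilter for unitary static gain
    fprintf('case %d: k = [%g %g], kvf = %g\n', i, k(1), k(2), kvf);
end
```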

2 Optimal Control 1

Consider the control system shown below. The state vector is x = [x_1  x_2]^T. The open-loop system is unstable, therefore state feedback is introduced.

Figure 6: Block diagram of an LQ regulator

The performance index is

\[ J = \int_0^{\infty} x^T Q x\, dt \]

Additional conditions are: Q = I, k_1 = k_2 = k, and |u| < 1 at the initial condition x(0) = [1  0]^T.

1. Find the plant's matrices A, B, C and D.
2. Find the matrices H and P.
3. Find k by minimizing the performance index J.
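Since Figure 6 is not reproduced in this transcription, the sketch below assumes a double-integrator plant with u = −k(x_1 + x_2) purely for illustration; it only shows how the minimization over k can be checked numerically once A and B have been read off the block diagram.

```matlab
% Sketch for the optimal-control exercise -- the plant below (double
% integrator) is an assumption, since Figure 6 is not reproduced here.
A = [0 1; 0 0];   B = [0; 1];
Q = eye(2);       x0 = [1; 0];

k = linspace(0.1, 1, 50);            % candidate gains; here |u(0)| = k
J = zeros(size(k));
for i = 1:numel(k)
    H = A - B*[k(i) k(i)];           % closed-loop matrix for k1 = k2 = k
    P = lyap(H', Q);                 % H'*P + P*H = -Q
    J(i) = x0'*P*x0;
end
plot(k, J), xlabel('k'), ylabel('J') % under this assumption, J decreases with k
```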

3 Optimal Control 2

Given is a scalar system ẋ = ax + bu with the simplified algebraic Riccati equation

\[ 2ap - \frac{1}{r}\, b^2 p^2 + q = 0 \]

1. Find a controller u = −kx by minimizing the performance index

\[ J = \int_0^{\infty} \bigl( q x^2 + r u^2 \bigr)\, dt, \qquad (q, r > 0) \]

2. Find a controller u = −kx which stabilizes the system using minimal control energy (q = 0).
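For orientation only, the scalar Riccati equation above can be solved in closed form by specializing the formulas of section 3.1 to A = a, B = b; this is a sketch, not the worked solution of the original document.

```latex
% Sketch: positive solution of the scalar ARE and the resulting gain.
\[
  \frac{b^2}{r}\,p^2 - 2ap - q = 0
  \;\Rightarrow\;
  p = \frac{r}{b^2}\Bigl(a + \sqrt{a^2 + \tfrac{b^2 q}{r}}\,\Bigr) \ge 0,
  \qquad
  k = r^{-1} b\, p = \frac{a + \sqrt{a^2 + b^2 q / r}}{b}.
\]
```

In the limit q → 0 this tends to k = (a + |a|)/b, i.e. k = 2a/b for an unstable plant (a > 0) and k = 0 for a plant that is already stable; the closed-loop pole lies at a − bk = −√(a² + b²q/r).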

4 LQ-Regulator of an Inverted Pendulum

Consider an inverted pendulum, where the input signal u(t) is a force applied to the cart, and the output signal y(t) is the position of the cart. The state vector [x_1, x_2, x_3, x_4]^T is taken as [y, ẏ, θ, θ̇]^T. The parameters are given by M = 1 kg, m = 0.1 kg, g = 10 m/s², l = 1 m.

Figure 7: Inverted pendulum

Thus, the following system description can be found:

\[ A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 11 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \\ 0 \\ -1 \end{bmatrix}, \qquad C = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}, \qquad D = 0 \]

1. Compute some optimal control regulators using the Matlab command lqr and plot the step responses of the closed-loop systems for different weights r of the control energy. Find also suitable prefilters K_vf.
2. For the same weights r of the control energy, plot the Bode diagram of the closed-loop systems and discuss the results.
3. Discuss the robustness of the system.
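One possible way to set up part 1 in MATLAB is sketched below; it uses the model as reconstructed above, and both the weight Q = C^T C and the particular values of r are assumptions chosen for illustration.

```matlab
% LQ regulators for the inverted pendulum -- sketch; Q and the values of r
% are assumptions, and the model is the one reconstructed above.
A = [0 1 0 0; 0 0 -1 0; 0 0 0 1; 0 0 11 0];
B = [0; 1; 0; -1];
C = [1 0 0 0];   D = 0;

Q = C'*C;                               % weight the cart position only
figure, hold on
for r = [0.1 1 10]                      % different weights of the control energy
    K   = lqr(A, B, Q, r);
    Kvf = 1/(C*((-A + B*K)\B));         % prefilter for unit static gain
    step(ss(A - B*K, B*Kvf, C, D), 35)
end
legend('r = 0.1', 'r = 1', 'r = 10')

% Robustness (part 3): loop opened at the plant input, cf. section 3.2
L0 = ss(A, B, K, 0);                    % G0(s) = K(sI - A)^(-1)B for the last r
[Gm, Pm] = margin(L0)                   % gain margin (absolute) and phase margin (deg)
```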

Solutions

1 Solution Pole Placement

2 Solution Optimal Control (1)

3 Solution Optimal Control (2)

4 Solution LQ-Regulator of an Inverted Pendulum

[Plots for Solution 4: step responses of the closed-loop systems, Bode diagrams (magnitude and phase) of the closed-loop systems, and open-loop Nyquist diagrams, each for three weights r of the control energy.]