Intermediate Process Control
CHE576 Lecture Notes #2
B. Huang
Department of Chemical & Materials Engineering, University of Alberta, Edmonton, Alberta, Canada
February 4, 2008
Chapter 2

Introduction to Digital Control

2.1 Introduction

References:
Ljung, L., Automatica, 24, 4, pp. 573-583, 1988.
Morari, M., Chem. Eng. Prog., Oct., pp. 60-67, 1988.
Palmor, Z.J., and R. Shinnar, I&EC Proc. Des. Dev., 18, 1, pp. 8-30, 1979.

[Figure: closed-loop block diagram with controller G_c and process G_p; signals r (setpoint), e (error), u (input), y (output), v (disturbance), illustrating the controller design / process identification loop.]

Once the process has been adequately identified, the control engineer's job is to synthesize a process controller which exploits the process characteristics to meet the control objectives.
Design of any process controller must consider:

Process characteristics
- structure (e.g. order, dead-time, ...)
- parameters (e.g. gain, time constants, ...)

Control objectives
- setpoint tracking (servo)
- disturbance rejection (regulatory)
- variance of outputs

Controller structure & tuning
- structure selection
- initial tuning and simulation
- fine tuning

Implementation issues
- limits (actuator, process variables, ...)
- hardware/software platforms
- operator interfaces

2.2 Digital PID Controllers

References:
Kuo, B.C., Digital Control Systems, HRW, 1980.
Ogata, K., Discrete-Time Control Systems, Prentice-Hall, 1995.
Clarke, D.W., Trans. Instr. Meas. Contr., 6, 6, pp. 305-316, 1985.
Jutan, A., Can. J. Chem. Eng., 67, June, pp. 485-493, 1989.
Zervos, C., P.R. Belanger, G.A. Dumont, Automatica, 24, 2, pp. 165-175, 1988.

2.2.1 Digital PID

Recall from your introductory control courses that the continuous-time (analog) Proportional-Integral-Derivative (PID) controller has the form

u(t) = K_c \left[ e(t) + \frac{1}{T_i} \int_0^t e(\zeta)\,d\zeta + T_D \frac{de(t)}{dt} \right] + u_R    (2.1)

where the three terms in brackets are the P, I, and D actions, respectively.
For fast sampling, if we approximate the integral term with a rectangular sum and the derivative term with a backward difference, this controller can be written in its Position Form:

u_t = K_c \left[ e_t + \frac{T_s}{T_i} \sum_{i=0}^{t} e_i + \frac{T_D}{T_s} (e_t - e_{t-1}) \right] + u_R    (2.2)

(The final controller form depends upon the approximations used for the integral and derivative terms.)

This is called the Position Form since it directly calculates the actuator (e.g. valve) position. It is simple to implement, but there are some difficulties:
- we must know the initial control action (u_R),
- since integral action requires that we maintain the accumulated error, the summation term can become large when persistent errors are present (reset or integral wind-up).

An alternative is the Velocity Form of the digital PID controller. This is developed as follows:

u_t = K_c \left[ e_t + \frac{T_s}{T_i} \sum_{i=0}^{t} e_i + \frac{T_D}{T_s} (e_t - e_{t-1}) \right] + u_R

u_{t-1} = K_c \left[ e_{t-1} + \frac{T_s}{T_i} \sum_{i=0}^{t-1} e_i + \frac{T_D}{T_s} (e_{t-1} - e_{t-2}) \right] + u_R

Subtracting the second equation from the first gives the controller in velocity form as

\Delta u_t = K_c \left[ (e_t - e_{t-1}) + \frac{T_s}{T_i} e_t + \frac{T_D}{T_s} (e_t - 2e_{t-1} + e_{t-2}) \right]

Finally we get the discrete-time transfer function as

G_c(q^{-1}) = \frac{U(q^{-1})}{E(q^{-1})} = K_c \left[ 1 + \frac{T_s}{T_i} \frac{1}{1 - q^{-1}} + \frac{T_D}{T_s} (1 - q^{-1}) \right]

If the actuator accepts incremental changes, then \Delta u_t can be directly output. However, when the actuator needs a position signal, either the Position Form of the PID controller should be used or the incremental changes \Delta u_t must be summed. This requires an additional piece of code be added to the control algorithm:

u_t = u_{t-1} + \Delta u_t
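The velocity-form algorithm above, together with the summation back to a position signal, can be sketched in Python as follows (a minimal sketch; the class name and any tuning values are illustrative, not from the notes):

```python
# Velocity-form digital PID: computes du_t and accumulates it into a
# position output u_t = u_{t-1} + du_t.
class VelocityPID:
    def __init__(self, Kc, Ti, Td, Ts, u0=0.0):
        self.Kc, self.Ti, self.Td, self.Ts = Kc, Ti, Td, Ts
        self.u = u0      # last actuator position u_{t-1}
        self.e1 = 0.0    # e_{t-1}
        self.e2 = 0.0    # e_{t-2}

    def step(self, e):
        # du_t = Kc[(e_t - e_{t-1}) + (Ts/Ti) e_t
        #           + (Td/Ts)(e_t - 2 e_{t-1} + e_{t-2})]
        du = self.Kc * ((e - self.e1)
                        + (self.Ts / self.Ti) * e
                        + (self.Td / self.Ts) * (e - 2 * self.e1 + self.e2))
        self.u += du                 # sum increments into a position signal
        self.e2, self.e1 = self.e1, e
        return self.u
```

Note that only the last two errors need to be stored, which is exactly why the velocity form avoids the accumulated-error difficulties of the position form.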
There are two major implementation issues we will discuss for the digital PID controller before moving on to controller tuning. The first deals with controllers which have integral action and output actuator position. In this situation, problems can arise when the actuator reaches one of its physical limits.

2.2.2 Anti-Reset Wind-Up

Reference: Segall, N.L., P.A. Taylor, I&EC Proc. Des. Dev., 25, pp. 495-498, 1986.

Consider the situation where an actuator is operating near one of its physical limits and a disturbance enters the process which causes the controller to attempt to drive the actuator past this limit. (Note that we will not limit our discussion here to only PID controllers.)

- t_0: a disturbance enters the process and causes an error. The controller responds and the actuator becomes saturated.
- t_0 to t_1: persistent error is accumulated by the controller, causing integral wind-up.
- t_1: another disturbance enters the process.
- t_1 to t_2: the controller does not seem to respond to this new disturbance because of the enlarged integral term. No action is taken until this integral term is reduced.
- t > t_2: the controller works to eliminate the error.
The problem here is that once the actuator saturates, the persistent error continues to be accumulated by the integration term in the controller. The actuator will remain saturated until an error in the opposite direction persists for a significant amount of time. The solution to this integral wind-up is to halt integration when the actuator is saturated due to integral action. (Proportional and derivative actions are always implemented to the fullest extent possible since they only depend on e_t and e_{t-1}.)

Position Form Algorithm:

1. calculate the Proportional and Derivative actions:

u_{PD} = K_c \left[ e_t + \frac{T_D}{T_s} (e_t - e_{t-1}) \right] + u_R

2. calculate the Integral action:

u_I = u_I^0 + K_c \frac{T_s}{T_i} e_t

where u_I^0 is the integral term accumulated up to the previous step; the update is accepted only if the resulting control action does not saturate the actuator.

3. calculate the total control action:

u_t = u_{PD} + u_I

2.2.3 Derivative Kick

The derivative term is included in a controller to provide compensation for trends in the output variables (e.g. ramps, sinusoids). Since the term is defined as

\frac{de(t)}{dt} = \frac{d[r(t) - y(t)]}{dt} = \frac{dr(t)}{dt} - \frac{dy(t)}{dt}

it operates not just on trends in the output variable y(t), but also on setpoint changes r(t). In the process industries, such setpoint changes are often implemented as step changes. Note that the derivative of a step function is an impulse function. Thus, implementing the derivative term as it is usually presented can cause the controller to output a large impulse called a derivative kick. Such large, short-duration changes are generally considered undesirable. As an alternative, since the objective of the derivative term is elimination of trends in the process output variables, consider the modified PID controller:

Continuous time:

u(t) = K_c \left[ e(t) + \frac{1}{T_i} \int_0^t e(\zeta)\,d\zeta - T_D \frac{dy(t)}{dt} \right] + u_R

In the velocity form:

\Delta u_t = K_c \left[ (e_t - e_{t-1}) + \frac{T_s}{T_i} e_t - \frac{T_D}{T_s} (y_t - 2y_{t-1} + y_{t-2}) \right]
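Both ideas of the last two subsections can be combined in a position-form controller that halts integration at actuator saturation and applies derivative action to the measurement rather than the error. A minimal sketch (the class name, actuator limits, and tunings are illustrative assumptions, not from the notes):

```python
# Position-form PID with anti-reset wind-up and derivative-on-measurement.
class AntiWindupPID:
    def __init__(self, Kc, Ti, Td, Ts, uR=0.0, umin=0.0, umax=100.0):
        self.Kc, self.Ti, self.Td, self.Ts = Kc, Ti, Td, Ts
        self.uR, self.umin, self.umax = uR, umin, umax
        self.uI = 0.0    # accumulated integral action (u_I^0)
        self.y1 = None   # y_{t-1}

    def step(self, sp, y):
        e = sp - y
        dy = 0.0 if self.y1 is None else (y - self.y1)
        self.y1 = y
        # P and D actions (D acts on y to avoid derivative kick)
        uPD = self.Kc * (e - (self.Td / self.Ts) * dy) + self.uR
        # tentative integral update; keep it only if u stays in range
        uI_new = self.uI + self.Kc * (self.Ts / self.Ti) * e
        u = uPD + uI_new
        if self.umin <= u <= self.umax:
            self.uI = uI_new              # accept the integration step
        else:
            u = uPD + self.uI             # halt integration when saturated
            u = min(max(u, self.umin), self.umax)
        return u
```

With a persistent error driving the actuator to its limit, the integral term stays frozen, so the controller responds immediately once the error reverses.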
Note: In some industrial settings, when the Velocity Form of the digital PID controller is used, proportional kick is also considered undesirable. In these cases, provided the controller has integral action, the proportional term (e_t - e_{t-1}) can be replaced by (y_{t-1} - y_t).

2.2.4 Controller Tuning

There are two alternatives:
- using continuous-time approaches (e.g. Ziegler-Nichols, Cohen-Coon, Fertik, ...),
- direct digital synthesis.

Continuous-Time Methods

There is a major difference between the continuous-time PID controller and its digital implementation, caused by sampling and the Zero-Order Hold. The Zero-Order Hold introduces a time delay equivalent to one half of the sampling period. This suggests that we can use continuous-time tuning approaches, but we should modify the amount of dead-time in the process model:

\theta' = \theta + \frac{\Delta t}{2}    (2.3)

Marlin (1995) gives a tuning procedure for digital PID controllers:

1. obtain a continuous-time process model,
2. determine the sample period for your system (often it is fixed),
3. use a standard tuning method with the augmented dead-time,
4. implement these initial settings and fine tune.

Remember, any method you use to determine controller tuning parameters only gives you guidelines for these settings. A major portion of the control engineer's job is to tune the controller for a specific implementation.

Reference: Marlin, T.E., Process Control: Designing Processes and Control Systems for Dynamic Performance, McGraw-Hill, 1995.

Direct Digital Synthesis

The closed-loop transfer function between the output and the setpoint can be expressed as

\frac{Y(z)}{R(z)} = \frac{G_c(z) G(z)}{1 + G_c(z) G(z)}    (2.4)
Note that in the discrete-time formulation, the transfer function G(z) contains the hold device along with the actuator and measurement device. Now the controller design problem can be stated as follows: find a controller G_c(z) that yields the desired closed-loop response (Y/R)_d. Solving for G_c gives

G_c(z) = \frac{1}{G(z)} \cdot \frac{(Y/R)_d}{1 - (Y/R)_d}    (2.5)

There are a number of direct synthesis algorithms. Here we will discuss one of them, namely Dahlin's method.

Dahlin's Method

Dahlin's method specifies that the closed-loop system should behave like a continuous first-order process with time delay:

\left( \frac{Y}{R} \right)_d = \frac{e^{-Ls}}{\lambda s + 1}    (2.6)

where \lambda and L are the time constant and the time delay of the closed-loop transfer function, respectively. Selecting L = \theta = N \Delta t (the process time delay), the discrete form of eqn (2.6) with a zero-order hold is

\left( \frac{Y}{R} \right)_d = \frac{(1 - A) z^{-N-1}}{1 - A z^{-1}}    (2.7)

where A = e^{-\Delta t / \lambda}. Substitution of the expression for the desired closed-loop response into the controller synthesis formula gives the general form of Dahlin's control algorithm as

G_{DC} = \frac{1}{G(z)} \cdot \frac{(1 - A) z^{-N-1}}{1 - A z^{-1} - (1 - A) z^{-N-1}}    (2.8)

Special case: when G(z) is a first-order plus time delay transfer function

G(z) = \frac{K (1 - a_1) z^{-N-1}}{1 - a_1 z^{-1}}    (2.9)

Dahlin's controller is

G_{DC} = \frac{(1 - A)(1 - a_1 z^{-1})}{K (1 - a_1) \left[ 1 - A z^{-1} - (1 - A) z^{-N-1} \right]}    (2.10)
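The desired closed-loop behavior in eqn (2.7) is easy to check numerically: it is the difference equation y_t = A y_{t-1} + (1 - A) r_{t-N-1}. A short sketch (the values of \lambda, \Delta t, and N used in the test are illustrative assumptions):

```python
import math

# Step response of Dahlin's desired closed loop, eqn (2.7), simulated
# as the difference equation y_t = A*y_{t-1} + (1 - A)*r_{t-N-1}.
def dahlin_step_response(lam, dt, N, n_steps):
    A = math.exp(-dt / lam)
    y = [0.0]
    r = 1.0  # unit setpoint step applied at t = 0
    for t in range(1, n_steps):
        r_delayed = r if t - N - 1 >= 0 else 0.0
        y.append(A * y[-1] + (1 - A) * r_delayed)
    return y
```

The response stays at zero for N + 1 samples (the delay plus the hold) and then rises as a first-order lag toward the setpoint, which is exactly the behavior Dahlin's method prescribes.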
2.3 Linear Quadratic Control

2.3.1 Introduction

Pole placement and linear quadratic control methods are convenient and effective ways to design advanced controllers for complex processes such as integrating or unstable processes. One natural way to design a digital controller is to design the controller in the continuous-time domain first and then transfer it to the discrete form, as we did for the PID controller. Some design techniques, such as pole placement and linear quadratic control, are considerably simpler in continuous time; it is therefore advantageous to design the controller in the continuous-time domain first and then convert it to a discrete controller.

The basic character of the transient response of a closed-loop system is closely related to the location of the closed-loop poles. If the system has a variable loop gain, then the location of the closed-loop poles depends on the value of the loop gain chosen. It is important, therefore, that the designer know how the closed-loop poles move in the s-plane as the loop gain is varied. From the design viewpoint, in some systems simple gain adjustment may move the closed-loop poles to the desired locations, and the design problem then becomes the selection of an appropriate gain value. If gain adjustment alone does not yield the desired result, adding a compensator to the system becomes necessary.

The design of this additional compensator is greatly simplified if all states of the system are measurable. If not all states are measurable, then under certain conditions the states may be estimated from the output measurements. The process of estimating the states from the output measurements is known as the design of a state observer.
Once all states are available, the closed-loop poles may be placed at any locations by simple state feedback control under certain conditions, known as the controllability condition. This type of advanced design is conveniently performed in the state-space framework.

2.3.2 State Space Model

Example. The process under consideration is the pair of simple cylindrical liquid tanks in series shown in Figure 2.1. Liquid flows into the first tank at the rate F_i and is permitted to flow out at a possibly different flow rate F_1, which, in turn, flows into the second tank. The outlet flow rate of the second tank is F_2, which can also be different from F_1. If we assume that the outlet flows from the two tanks are directly proportional to their liquid levels, with constants of proportionality (the valve resistances) c_1 and c_2 for the two tanks respectively, then from the mass balances we can show the following model:

A_1 \frac{dh_1}{dt} = F_i - c_1 h_1

A_2 \frac{dh_2}{dt} = c_1 h_1 - c_2 h_2
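The two mass balances above can be integrated numerically to sanity-check the model. A minimal forward-Euler sketch (the tank areas and valve resistances are illustrative values, not from the notes):

```python
# Forward-Euler simulation of the two-tank mass balances:
#   A1 dh1/dt = Fi - c1*h1,   A2 dh2/dt = c1*h1 - c2*h2
A1, A2, c1, c2 = 1.0, 1.0, 0.5, 0.5

def simulate_tanks(Fi, t_end, dt=0.01):
    h1 = h2 = 0.0
    for _ in range(int(t_end / dt)):
        h1 += dt * (Fi - c1 * h1) / A1
        h2 += dt * (c1 * h1 - c2 * h2) / A2
    return h1, h2
```

At steady state F_i = c_1 h_1 = c_2 h_2, so the levels settle at h_1 = F_i/c_1 and h_2 = F_i/c_2, which the simulation reproduces.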
Figure 2.1: Two tanks in series (inflow F_i, levels h_1 and h_2, outflows F_1 and F_2)

These equations can be written in a standard form as

\frac{dh_1}{dt} = -\frac{c_1}{A_1} h_1 + \frac{1}{A_1} F_i

\frac{dh_2}{dt} = \frac{c_1}{A_2} h_1 - \frac{c_2}{A_2} h_2

The two differential equations can also be written in matrix form:

\begin{bmatrix} \dot{h}_1 \\ \dot{h}_2 \end{bmatrix} = \begin{bmatrix} -c_1/A_1 & 0 \\ c_1/A_2 & -c_2/A_2 \end{bmatrix} \begin{bmatrix} h_1 \\ h_2 \end{bmatrix} + \begin{bmatrix} 1/A_1 \\ 0 \end{bmatrix} F_i

This equation is called the State Space representation of the process model and is often denoted as

\dot{x} = Ax + Bu

where x is the state of the system and is written as

x = \begin{bmatrix} h_1 \\ h_2 \end{bmatrix}

If the level of the second tank is the output variable, i.e. y = h_2, then one more equation can be written:

y = \begin{bmatrix} 0 & 1 \end{bmatrix} x

The general form of this output equation is

y = Cx

2.3.3 Pole Placement

Given the state space system (either multivariable or single input-output):

\dot{x} = Ax + Bu
The stability of the open-loop system is determined by the eigenvalues of the matrix A. The eigenvalues of A are in fact the poles of the system transfer function when the system is expressed as a transfer function model. The MATLAB function ss can be used to convert a transfer function model to a state space model; tf can be used to convert a state space model to a transfer function model.

We know that the location of the system poles determines the dynamic behavior of the process. Therefore, we want to design the eigenvalues of the closed-loop system such that they are all in the desired locations. Let u = -Kx, i.e. we consider proportional feedback control with the state x as the measured variable. Then the closed-loop system can be written as

\dot{x} = Ax - BKx = (A - BK)x

The closed-loop system's eigenvalues are now given by the eigenvalues of the matrix A - BK. By varying the matrix K, one can move the eigenvalues to the desired locations. This technique is called pole placement.

Now the question is whether we can arbitrarily move the eigenvalues to the desired locations. The answer is that if the following matrix has rank n, then we can:

Co = \begin{bmatrix} B & AB & A^2 B & \cdots & A^{n-1} B \end{bmatrix}

where n is the dimension of the state x. This matrix is known as the controllability matrix, and this rank test is known as the controllability test. The MATLAB function ctrb can be used to find the controllability matrix Co, and the MATLAB function rank can then be used to determine its rank. Once it is determined that the system is controllable, we can always use the proportional state feedback control law u = -Kx to place the closed-loop eigenvalues (poles) at arbitrary locations. The MATLAB command K = place(A, B, p) can be used to calculate the matrix K for a given pole location vector p, where p is a vector of dimension n.
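The controllability test and the pole-placement step can be sketched for the two-tank model of Section 2.3.2, using illustrative numeric values; the notes use MATLAB's ctrb, rank, and place, and scipy's place_poles plays the same role here:

```python
import numpy as np
from scipy.signal import place_poles

# Two-tank state-space model with illustrative values
# (A1 = A2 = 1, c1 = c2 = 0.5; input u = Fi).
A = np.array([[-0.5, 0.0],
              [0.5, -0.5]])
B = np.array([[1.0],
              [0.0]])

# Controllability matrix Co = [B, AB]; the system is controllable
# when rank(Co) equals n = 2.
Co = np.hstack([B, A @ B])
rank_Co = np.linalg.matrix_rank(Co)

# Place the closed-loop poles of A - B K at -2 and -3.
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix
cl_poles = np.linalg.eigvals(A - B @ K)
```

Recomputing the eigenvalues of A - BK confirms that the feedback gain moves the open-loop poles (both at -0.5) to the requested locations.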
2.3.4 Linear Quadratic Control

Although the pole placement technique is convenient for control design, it is often difficult to determine where we should place the poles. Modern control theory, such as optimal control, provides us with desired pole locations. Linear quadratic control (also known as LQ or LQR control) minimizes the following objective function by finding the matrix K:

J = \int_0^{\infty} \left( x^T Q x + u^T R u \right) dt

where Q is an n \times n symmetric matrix known as the state weighting matrix. It can be chosen as diagonal; the magnitude of each diagonal element determines the importance of the corresponding state. For example, if we want to control the third state
x_3 well, then we need to give it more weighting than the other states. Similarly, R is a weighting matrix on the input, with dimension equal to the number of inputs. For example, if the system has two inputs, then R is a 2 \times 2 symmetric matrix. The magnitude of each diagonal element of R determines the cost of the corresponding control variable. The control law u = -Kx which minimizes the objective function can be found by solving the Riccati equation

A^T S + S A - S B R^{-1} B^T S + Q = 0

for S and then solving

K = R^{-1} B^T S

for K. The MATLAB function [K, S, e] = lqr(A, B, Q, R) can be used to calculate the optimal control law K.

2.3.5 Inverted Pendulum Control Problem

Many readers are familiar with the challenge of balancing a broom (or rod) on the tip of one's finger. Common experience indicates that this is a difficult control task. Many universities around the world have built inverted-pendulum systems to demonstrate control issues. The problem is interesting from a control perspective because it illustrates many of the difficulties associated with real-world control problems; for example, the model is very similar to that used in rudder-roll stabilization of a ship. A simplified figure of the inverted pendulum is shown in Figure 2.2, with the following parameter definitions:

M: mass of the cart, 0.5 kg
m: mass of the pendulum, 0.5 kg
l: length to the pendulum center of mass, 1 m
F: force applied to the cart
x: cart position coordinate
\theta: pendulum angle from vertical

A state space equation can be derived as

A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & -\frac{mg}{M} & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & \frac{(M+m)g}{Ml} & 0 \end{bmatrix}; \quad B = \begin{bmatrix} 0 \\ \frac{1}{M} \\ 0 \\ -\frac{1}{Ml} \end{bmatrix}; \quad C = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}
Figure 2.2: Inverted pendulum

where the states are defined as follows:

x_1(t) = x(t)
x_2(t) = \dot{x}(t)
x_3(t) = \theta(t)
x_4(t) = \dot{\theta}(t)

The task is to design stabilizing controllers using the pole placement and the optimal LQ design techniques, respectively.
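Both requested designs can be sketched numerically, using the parameter values given above (M = m = 0.5 kg, l = 1 m, g = 9.81 m/s^2). The target poles and the LQ weights Q and R below are illustrative choices, and scipy stands in for MATLAB's place and lqr:

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.signal import place_poles

# Linearized inverted pendulum on a cart (Section 2.3.5); states are
# x, xdot, theta, thetadot, and the input is the cart force F.
M, m, l, g = 0.5, 0.5, 1.0, 9.81
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, -m * g / M, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, (M + m) * g / (M * l), 0.0]])
B = np.array([[0.0], [1.0 / M], [0.0], [-1.0 / (M * l)]])

# Design 1: pole placement, moving all closed-loop poles into the
# left half-plane (target locations are an illustrative choice).
K_pp = place_poles(A, B, [-1.0, -2.0, -3.0, -4.0]).gain_matrix

# Design 2: LQ control, solving the Riccati equation
# A^T S + S A - S B R^{-1} B^T S + Q = 0 and forming K = R^{-1} B^T S.
Q = np.diag([10.0, 1.0, 100.0, 1.0])  # penalize cart position and angle
R = np.array([[1.0]])
S = solve_continuous_are(A, B, Q, R)
K_lq = np.linalg.solve(R, B.T @ S)
```

The open-loop A has an unstable eigenvalue at +sqrt((M+m)g/(Ml)); both state-feedback laws u = -Kx move every closed-loop eigenvalue of A - BK into the left half-plane, i.e. they stabilize the pendulum in the upright position.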