Contents

Part I Discrete-time systems

1 Discrete-time signals
2 Difference equations
  2.1 Difference equation models
  2.2 Calculating responses in difference equation models
3 Discretizing continuous-time models
  3.1 Discretization methods
  3.2 Discretizing a simulator of a dynamic system
  3.3 Discretizing a signal filter
  3.4 Discretizing a PID controller
    Computer-based control loop
    Development of discrete-time PID controller
    Some practical features of the PID controller
    Selecting the sampling time of the control system
4 Discrete-time state space models
  4.1 General form of discrete-time state space models
  4.2 Linear discrete-time state space models
  4.3 Discretization of linear continuous-time state space models
  4.4 Linearization of nonlinear state space models
  4.5 Calculating responses in discrete-time state space models
    Calculating dynamic responses
    Calculating static responses
5 The z-transform
  5.1 Definition of the z-transform
  5.2 Properties of the z-transform
  5.3 z-transform pairs
  5.4 Inverse z-transform
6 z-transfer functions
  Introduction
  How to calculate z-transfer functions
  From z-transfer function to difference equation
  From state space model to transfer function
  Calculating time responses for z-transfer function models
  Static transfer function and static response
  Poles and zeros
7 Frequency response of discrete-time systems
8 Stability analysis of discrete-time systems
  Definition of stability properties
  Stability analysis of transfer function models
  Stability analysis of state space models
  Stability analysis of feedback (control) systems
    Defining the feedback system
    Pole placement based stability analysis
    Nyquist's stability criterion for feedback systems
    Nyquist's special stability criterion
    Stability margins: Gain margin and phase margin
    Stability margin: Maximum of sensitivity function
    Stability analysis in a Bode diagram

Part II Stochastic variables and systems

9 Stochastic variables and systems
  Introduction
  Stochastic signals
    Realizations of stochastic processes
    Probability distribution of a stochastic variable
    The expectation value and the mean value
    Variance. Standard deviation
    Auto-covariance. Cross-covariance
  White and coloured noise
    White noise
    Coloured noise
Propagation of mean value and co-variance through systems
  Introduction
  Propagation of mean value and co-variance through static systems
  Propagation of mean value and co-variance through state space systems

Part III Estimation of model parameters and states

Estimation of model parameters
  Introduction
  Parameter estimation of static models with the LS method
    The standard regression model
    The LS problem
    The LS solution
    Criterion for convergence of estimate towards the true value
    Model validation: How to compare and select among several estimates?
  Parameter estimation of dynamic models with the LS method
    Good excitation is necessary!
    How to select a proper model?
    Differential equation models
    s-transfer function models
    Black-box modeling
State estimation with Kalman Filter
  Introduction
  Observability
  The Kalman Filter algorithm
  Estimating parameters and disturbances with Kalman Filter
  State estimators for deterministic systems: Observers
  Kalman Filter or Observer?

Part IV Model-based control

Testing a model-based control system for robustness
Feedforward control
  Introduction
  Designing feedforward control from differential equation models
  Designing feedforward control from experimental data
Dead-time compensator (Smith predictor)
Feedback linearization
  Introduction
  Deriving the control function
  Case 1: All state variables are controlled
  Case 2: Not all state variables are controlled
LQ (Linear Quadratic) optimal control
  Introduction
  The basic LQ controller
  LQ controller with integral action
Model-based predictive control

A Skogestad's method for PID tuning

Foreword

This compendium is part of the syllabus for the course IA5106 Automatiseringsteknikk og praktisk modellering at Høgskolen i Telemark in the autumn semester. It is assumed that the reader has basic knowledge of complex numbers, differential equations, continuous-time state space models, the Laplace transform, s-transfer functions, and frequency response of continuous-time systems. Copying of this compendium is not permitted.

Siv.ing. Finn Haugen
TechTeach
Skien, August

Part I
Discrete-time systems

Chapter 1
Discrete-time signals

Assume that an AD converter (analog-digital) at discrete points of time converts an analog signal y_a(t), which can be a voltage signal from a temperature or speed sensor, to an equivalent digital signal, y_d(t_k), in the form of a number to be used in operations in the computer, see Figure 1.1.

Figure 1.1: Sampling. h is the time step between the samplings, or the sampling interval, and the sampling frequency is f_s [Hz] = 1/h. (The AD converter is a part of the interface between the computer and the external equipment, e.g. sensors.)

As indicated in Figure 1.1, the resulting discrete-time signal is a sequence, or a series, of signal values defined at discrete points of time. Figure 1.2 shows this signal in more detail. The discrete points of time may be denoted t_k, where k is an integer time index. The time series can be written in various ways:

{x(t_k)} = {x(kh)} = {x(k)} = x(0), x(1), x(2), ... (1.1)

Figure 1.2: Discrete-time signal y(k) = y(kh), plotted against the time t_k = kh [s], with h = 0.2 s.

To make the notation simple, we can write the signal in one of the following ways:

x(t_k) (1.2)
x(kh) (1.3)
x(k) (1.4)
x_k (1.5)

In the example above, the discrete-time signal originated from sampling of a continuous-time signal. However, discrete-time signals exist in many other circumstances, for example:

- the output signal from a discrete-time (computer-based) signal filter, for example a lowpass filter,
- the output from a discrete-time (computer-based) controller which controls a physical process,
- the response in a dynamic system as calculated by a (computer-based) simulator.
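As a small illustration of the notation above, the following sketch samples a continuous-time signal with the sampling interval h = 0.2 s used in Figure 1.2. The choice of a sine signal for y_a(t) is mine, purely for illustration:

```python
import math

h = 0.2  # sampling interval [s], as in Figure 1.2

# y(k) = y_a(kh): the discrete-time signal is the continuous-time
# signal y_a(t) = sin(t) evaluated at the sampling instants t_k = k*h.
y = [math.sin(k * h) for k in range(5)]
print(y)
```

The list y is exactly the time series {y(kh)} of (1.1), indexed by the integer time index k.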

Chapter 2
Difference equations

2.1 Difference equation models

The basic model type of continuous-time dynamic systems is the differential equation. Analogously, the basic model type of discrete-time dynamic systems is the difference equation. Here is an example of a linear second order difference equation with u as input variable and y as output variable:

y(t_{k+2}) + a_1 y(t_{k+1}) + a_0 y(t_k) = b_0 u(t_k) (2.1)

which may be written somewhat simpler as

y(k+2) + a_1 y(k+1) + a_0 y(k) = b_0 u(k) (2.2)

where a_i and b_j are coefficients of the difference equation, or model parameters. Note that this difference equation has unique coefficients since the coefficient of y(k+2) is 1.

One equivalent form of (2.2) is

y(k) + a_1 y(k-1) + a_0 y(k-2) = b_0 u(k-2) (2.3)

where there are no time-advanced terms (no positive time indexes), only present and time-delayed terms (zero or negative time indexes). This form can be obtained from (2.2) by reducing each time index in (2.2) by 2.

In most cases we want to write the difference equation as a formula for the output variable. In our example the formula for the output y(k) can be obtained by solving for y(k) in (2.3):

y(k) = -a_1 y(k-1) - a_0 y(k-2) + b_0 u(k-2) (2.4)

(2.4) says that the output y(k) is given as a linear combination of the output one time step back in time, y(k-1), the output two time steps back in time, y(k-2), and the input two time steps back in time, u(k-2).

2.2 Calculating responses in difference equation models

For example, (2.4) is a formula for calculating dynamic (time-varying) responses of the output, y(k). The formula must be calculated once per time step, and it can be implemented in a While loop or a For loop in a computer program. Assume as an example that y(1), y(0) and u(0) are zero. Then (2.4) gives

y(2) = -a_1 y(1) - a_0 y(0) + b_0 u(0) (2.5)
y(3) = -a_1 y(2) - a_0 y(1) + b_0 u(1) (2.6)
y(4) = -a_1 y(3) - a_0 y(2) + b_0 u(2) (2.7)

and so on.

The static response, which is the (steady-state) response of the system when all variables are assumed to have constant values, can be calculated from the static version of the difference equation. The static version is found by neglecting all time-dependencies in the difference equation, setting y(k) = y_s, y(k-1) = y_s etc., where subindex s is for static. For example, the static version of (2.4) is

y_s = -a_1 y_s - a_0 y_s + b_0 u_s (2.8)

The static response is

y_s = b_0 / (1 + a_1 + a_0) * u_s (2.9)
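To make the loop implementation concrete, here is a small Python sketch of (2.4). The coefficient values and the constant input are my own illustrative choices (picked to give a stable system), not values from the text; the final simulated value is compared with the static response formula (2.9):

```python
# Illustrative coefficients (assumed values, chosen to give a stable system):
a1, a0, b0 = -0.5, 0.06, 0.1
u = 1.0          # constant input
y = [0.0, 0.0]   # initial conditions: y(0) = y(1) = 0

# (2.4): y(k) = -a1*y(k-1) - a0*y(k-2) + b0*u(k-2), computed once per time step
for k in range(2, 200):
    y.append(-a1 * y[k - 1] - a0 * y[k - 2] + b0 * u)

y_static = b0 / (1 + a1 + a0) * u   # static response from (2.9)
print(round(y[-1], 6), round(y_static, 6))
```

With these coefficients the simulated output settles at exactly the value predicted by (2.9).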

Chapter 3
Discretizing continuous-time models

3.1 Discretization methods

Typical applications of difference equation models are computer-based implementations of simulators, signal filters, and controllers. Often the simulator model, the filter, or the controller is originally given as a continuous-time model in the form of a differential equation or a Laplace transfer function. To obtain a corresponding difference equation (ready for implementation in a computer program), you have to discretize the continuous-time model. This approximation can be made in many ways. My experience is that in most applications it is sufficient to apply one of the following (simple) methods, which are based on approximations of the time derivatives of the differential equation:

- (Euler's) Forward differentiation method, which is commonly used in developing simple simulators.
- (Euler's) Backward differentiation method, which is commonly used in discretizing simple signal filters and industrial controllers.

The Forward differentiation method and the Backward differentiation method will be explained in detail below. But you should at least have heard about some other methods as well, see below [3]:

- Zero Order Hold (ZOH) method: It is assumed that the system has a zero order hold element on the input side of the system. This is the case when the physical system is controlled by a computer via a DA converter (digital to analog). Zero order hold means that the physical input signal to the system is held fixed between the discrete points of time. The discretization method is relatively complicated, and in practice you have to use a computer tool (e.g. MATLAB or LabVIEW) to do the job.

- Tustin's method: This method is based on an integral approximation where the integral is interpreted as the area between the integrand and the time axis, and this area is approximated with trapezoids. (The Euler methods approximate this area with a rectangle.)

- Tustin's method with frequency prewarping, or bilinear transformation: This is Tustin's method, but with a modification so that the original continuous-time system and the resulting discrete-time system have exactly the same frequency response at one or more specified frequencies.

Sometimes you want to go the opposite way: transform a discrete-time model into an equivalent continuous-time model. Such methods will however not be described in this document.

The Forward differentiation method is somewhat less accurate than the Backward differentiation method, but it is simpler to use. Particularly, with nonlinear models the Backward differentiation method may give problems, since it results in an implicit equation for the output variable, while the Forward differentiation method always gives an explicit equation (the nonlinear case will be demonstrated in an example). Figure 3.1 illustrates both the Forward and the Backward differentiation methods.
The Forward differentiation method can be seen as the following approximation of the time derivative of a time-valued function, which here is denoted x:

Forward differentiation method: ẋ(t_k) ≈ [x(t_{k+1}) - x(t_k)] / h (3.1)

h is the time step, i.e. the time interval between two subsequent points of time. The name Forward differentiation method stems from the x(t_{k+1}) term in (3.1).

Figure 3.1: The Forward differentiation method and the Backward differentiation method. The exact slope ẋ(t_k) is approximated by the slope between x(t_{k-1}) and x(t_k) (Backward) or between x(t_k) and x(t_{k+1}) (Forward).

The Backward differentiation method is based on the following approximation of the time derivative:

Backward differentiation method: ẋ(t_k) ≈ [x(t_k) - x(t_{k-1})] / h (3.2)

The name Backward differentiation method stems from the x(t_{k-1}) term in (3.2), see Figure 3.1.

The examples in the subsequent sections demonstrate the application of the Forward differentiation method and the Backward differentiation method. It is also demonstrated how to get an equivalent difference equation from an original transfer function model or an integral equation (the time derivatives of the differential equation are then approximated with the Forward differentiation method or the Backward differentiation method).

3.2 Discretizing a simulator of a dynamic system

A simulator of a dynamic system, e.g. a motor, a liquid tank, a ship etc., must of course be based on the mathematical model of the system. Typically, the model is in the form of a nonlinear differential equation model. The Forward differentiation method may be used to discretize such nonlinear models. As an example, let us discretize the following nonlinear model:

ẋ(t) = -K_1 √(x(t)) + K_2 u(t) (3.3)

where u is the input, x is the output, and K_1 and K_2 are parameters.¹

We will now derive a simulator algorithm, or formula, for x(t_k). Let us first try applying the Backward differentiation method with time step h to the time derivative in (3.3):

[x(t_k) - x(t_{k-1})] / h = -K_1 √(x(t_k)) + K_2 u(t_k) (3.4)

x(t_k) appears on both sides of (3.4). We say that x(t_k) is given implicitly, not explicitly, by (3.4). Solving for x(t_k) in this implicit equation is possible, but a little difficult because of the nonlinear function (the square root). (If the difference equation were linear, it would be much easier to solve for x(t_k).) In other cases, nonlinear functions may cause big problems in solving for the output variable, here x(t_k).

Since we got some problems with the Backward differentiation method in this case, let us instead apply the Forward differentiation method to the time derivative of (3.3):

[x(t_{k+1}) - x(t_k)] / h = -K_1 √(x(t_k)) + K_2 u(t_k) (3.5)

Solving for x(t_{k+1}) is easy:

x(t_{k+1}) = x(t_k) + h [-K_1 √(x(t_k)) + K_2 u(t_k)] (3.6)

Reducing each time index by one and using the simplifying notation x(t_k) = x(k) etc. finally gives the simulation algorithm:

x(k) = x(k-1) + h [-K_1 √(x(k-1)) + K_2 u(k-1)] (3.7)

¹ This can be the model of a liquid tank with pump inflow and valve outflow. x is the level. u is the pump control signal. The square root stems from the valve.
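A minimal simulator based on (3.7) might look as follows. The parameter values, the input value, and the initial level are illustrative assumptions of mine, and the sign convention (the square-root outflow term negative) follows the liquid-tank interpretation in the footnote:

```python
import math

# Assumed parameter values (not from the text):
K1, K2 = 0.02, 0.05
h = 0.1          # time step [s]
u = 0.5          # constant pump control signal
x = 1.0          # initial level

# (3.7): x(k) = x(k-1) + h * [-K1*sqrt(x(k-1)) + K2*u(k-1)]
for _ in range(20000):
    x = x + h * (-K1 * math.sqrt(x) + K2 * u)

# At steady state -K1*sqrt(x) + K2*u = 0, i.e. x settles at (K2*u/K1)**2:
print(round(x, 4), round((K2 * u / K1) ** 2, 4))
```

Note that the loop body is an explicit formula, which is exactly why the Forward differentiation method was preferred for this nonlinear model.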

In general it is important that the time step h of the discrete-time function is relatively small, so that the discrete-time function behaves approximately like the original continuous-time system. For the Forward differentiation method a (too) large time step may even result in an unstable discrete-time system! For simulators the time step h should be selected according to

h ≤ 0.1 / |λ|_max (3.8)

Here |λ|_max is the largest of the absolute values of the eigenvalues of the model, which are the eigenvalues of the system matrix A in the state space model ẋ = Ax + Bu. For transfer function models you can consider the poles instead of the eigenvalues (the poles and the eigenvalues are equal for most systems not having pole-zero cancellations). If the model is nonlinear, it must be linearized before calculating eigenvalues or poles.

Instead of, or as a supplement to, this rule, you may also use a trial-and-error method for choosing h (or f_s): Reduce h until there is a negligible change of the response of the system if h is further reduced. If possible, you should use a simulator of your system to test the importance of the value of h before implementation.

3.3 Discretizing a signal filter

A lowpass filter is used to smooth out high-frequent or random noise in a measurement signal. A very common lowpass filter in computer-based control systems is the discretized first order filter, or time-constant filter. You can derive such a filter by discretizing the Laplace transfer function of the filter. A common discretization method in control applications is the Backward differentiation method. We will now derive a discrete-time filter using this method.

The Laplace transform transfer function, also denoted the continuous-time transfer function, of a first order lowpass filter is

H(s) = y(s)/u(s) = 1/(T_f s + 1) = 1/(s/ω_b + 1) = 1/(s/(2πf_b) + 1) (3.9)

Here, u is the filter input, and y is the filter output. T_f [s] is the time-constant. ω_b is the filter bandwidth in rad/s, and f_b is the filter bandwidth in Hz. (In the following, the time-constant will be used as the filter parameter, since this is the parameter typically used in filter implementations for control systems.)

Cross-multiplying in (3.9) gives

(T_f s + 1) y(s) = u(s) (3.10)

Resolving the parenthesis gives

T_f s y(s) + y(s) = u(s) (3.11)

Taking the inverse Laplace transform of both sides of this equation gives the following differential equation (because multiplying by s means time-differentiation in the time domain):

T_f ẏ(t) + y(t) = u(t) (3.12)

Let us use t_k to represent the present point of time, or discrete time:

T_f ẏ(t_k) + y(t_k) = u(t_k) (3.13)

Substituting the time derivative by the Backward differentiation approximation gives

T_f [y(t_k) - y(t_{k-1})] / h + y(t_k) = u(t_k) (3.14)

Solving for y(t_k) gives

y(t_k) = T_f/(T_f + h) * y(t_{k-1}) + h/(T_f + h) * u(t_k) (3.15)

which is commonly written as

y(t_k) = (1 - a) y(t_{k-1}) + a u(t_k) (3.16)

with filter parameter

a = h/(T_f + h) (3.17)

which has a given value once the filter time-constant T_f and the time step h are specified. (3.16) is the formula for the filter output. It is ready for being programmed.

This filter is denoted the exponentially weighted moving average (EWMA) filter, but we can simply denote it a first order lowpass filter.

It is important that the filter time step h is considerably smaller than the filter time-constant T_f, otherwise the filter may behave quite differently from the original continuous-time filter (3.9) from which it is derived. A rule of thumb for the upper limit of h is

h ≤ T_f / 5 (3.18)
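The filter (3.16)-(3.17) is a one-liner per time step. Below is a sketch with assumed values T_f = 2 s and h = 0.1 s (my choices, satisfying the rule of thumb (3.18)); the step response approaches the input value without overshoot, as expected of a first order lowpass filter:

```python
# Assumed filter parameters (satisfying h <= Tf/5):
Tf, h = 2.0, 0.1
a = h / (Tf + h)              # filter parameter, (3.17)

y = 0.0
out = []
for u in [1.0] * 400:         # unit step input
    y = (1 - a) * y + a * u   # filter output, (3.16)
    out.append(y)

print(round(out[0], 4), round(out[-1], 4))
```

The first output sample equals a (the filter only takes a small step towards the new input), and after many time steps the output has converged to the input value, which is the static gain 1 of (3.9).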

3.4 Discretizing a PID controller

3.4.1 Computer-based control loop

Figure 3.2 shows a control loop where the controller is implemented in a computer. The computer registers the process measurement signal via an AD converter (from analog to digital). The AD converter produces a numerical value which represents the measurement. As indicated in the block diagram, this value may also be scaled, for example from volts to percent. The resulting digital signal, y(t_k), is used in the control function, which is in the form of a computer algorithm or program calculating the value of the control signal, u(t_k).

Figure 3.2: Control loop where the controller function is implemented in a computer

The control signal is scaled, for example from percent to milliamperes, and sent to the DA converter (from digital to analog), where it is held constant during the present time step. Consequently the control signal becomes a staircase signal. The time step, or the sampling interval, h [s], is usually small compared to the time constant of the actuator (e.g. a valve), so the actuator does not feel the staircase form of the control signal. A typical value of h in commercial controllers is 0.1 s.

3.4.2 Development of discrete-time PID controller

The starting point of deriving the discrete-time PID controller is the continuous-time PID (proportional + integral + derivative) controller:

u(t) = K_p e(t) + (K_p/T_i) ∫₀ᵗ e dτ + K_p T_d ė_f(t) (3.19)

where u is the controller output (the control variable), and e is the control error:

e(t) = r(t) - y(t) (3.20)

where r is the reference or setpoint, and y is the process measurement. e_f is the filtered control error. It is the output of the following lowpass filter:

e_f(s) = 1/(T_f s + 1) * e(s) (3.21)

where T_f is the filter time-constant, which is typically selected as

T_f = a T_d (3.22)

where typically a = 0.1.

We will now derive a discrete-time formula for u(t_k), the value of the control signal for the present time step. The discretization can be performed in a number of ways. Probably the simplest way is as follows: Differentiating both sides of (3.19) gives²

u̇(t) = K_p ė(t) + (K_p/T_i) e(t) + K_p T_d ë_f(t) (3.23)

Applying the Backward differentiation method (3.2) to u̇, ė, and ë_f gives

[u(t_k) - u(t_{k-1})] / h = K_p [e(t_k) - e(t_{k-1})] / h + (K_p/T_i) e(t_k) + K_p T_d [ė_f(t_k) - ė_f(t_{k-1})] / h (3.24)

Applying the Backward differentiation method to ė_f(t_k) and ė_f(t_{k-1}) in (3.24) gives

[u(t_k) - u(t_{k-1})] / h = K_p [e(t_k) - e(t_{k-1})] / h + (K_p/T_i) e(t_k) + K_p T_d [e_f(t_k) - 2 e_f(t_{k-1}) + e_f(t_{k-2})] / h² (3.25)

² The time derivative of an integral is the integrand.

Solving for u(t_k) finally gives the discrete-time PID controller:

u(t_k) = u(t_{k-1}) + K_p [e(t_k) - e(t_{k-1})] + (K_p h / T_i) e(t_k) + (K_p T_d / h) [e_f(t_k) - 2 e_f(t_{k-1}) + e_f(t_{k-2})] (3.28)

The discrete version of the filter (3.21) can be derived as described in Section 3.3.

The discrete-time PID controller algorithm (3.28) is denoted the absolute or positional algorithm. Automation devices typically implement the incremental or velocity algorithm, because it has some benefits. The incremental algorithm is based on splitting the calculation of the control value into two steps:

1. First the incremental control value Δu(t_k) is calculated.
2. Then the total or absolute control value is calculated with u(t_k) = u(t_{k-1}) + Δu(t_k).

Thus, the incremental PID algorithm is

Δu(t_k) = K_p [e(t_k) - e(t_{k-1})] + (K_p h / T_i) e(t_k) + (K_p T_d / h) [e_f(t_k) - 2 e_f(t_{k-1}) + e_f(t_{k-2})] (3.30)

u(t_k) = u(t_{k-1}) + Δu(t_k) (3.32)

The summation (3.32) implements the (numerical) integral action of the PID controller. The incremental PID control function is particularly useful if the actuator is controlled by an incremental signal. A step motor is such an actuator. The motor itself implements the numerical integration (3.32). It is (only) Δu(t_k) that is sent to the motor.

3.4.3 Some practical features of the PID controller

A practical PID controller must have certain features to be functional:

Integrator anti-windup: Large excitations of the control system, typically large disturbances or large setpoint changes, may cause the

control signal to reach its maximum or minimum limit with the control error being different from zero. The summation in (3.32), which is actually a numerical integration, will then cause u to increase (or decrease) steadily (this is denoted integral windup), so that u may get a very high (or low) value. When the excitations are back to normal values, it may take a very long time before the large value of u is integrated back to a normal value (i.e. within 0-100 %), causing the process output to deviate largely from the setpoint. Preventing the windup is (not surprisingly) denoted anti-windup, and it can be realized as follows:

1. Calculate an intermediate value of the control variable u(t_k) according to (3.32), but do not send this value to the DA (digital-to-analog) converter.
2. Check if this intermediate value is greater than the maximum value u_max (typically 100 %) or less than the minimum value u_min (typically 0 %). If it exceeds one of these limits, set Δu(t_k) in (3.32) to zero.
3. Write u(t_k) to the DA converter.

Bumpless transfer: Suppose the controller is switched from automatic to manual mode, or from manual to automatic mode (this will happen during maintenance, for example). The transfer between modes should ideally be bumpless. Bumpless transfer can be realized as follows:

- Bumpless transfer from automatic to manual mode: In manual mode it is the manual (or nominal) control signal u_0 that controls the process. We assume here that the control signal u(t_k) in (3.32) has a proper value, say u_good, so that the control error is small immediately before the switch to manual mode. To implement bumpless transfer, set u_0 equal to u_good at the switching moment.
- Bumpless transfer from manual to automatic mode: While the controller is in manual mode, Δu(t_k) in (3.32) is set (forced) to zero, and both u(t_k) and u(t_{k-1}) are set to u_0, the manual control value (which may be changed during the manual mode, of course).
After the switch from manual to automatic mode, allow Δu(t_k) to be calculated according to (3.30) (hence, do not force Δu(t_k) to be zero any longer).
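The incremental algorithm (3.30)-(3.32), including the simple anti-windup logic from step 2 above, can be sketched as the Python class below. All tuning values, and the first order process model in the usage part, are my own illustrative assumptions:

```python
class IncrementalPID:
    """Incremental (velocity) PID, (3.30)-(3.32), with anti-windup:
    the increment is discarded if the candidate control value would
    exceed the limits u_min..u_max."""

    def __init__(self, Kp, Ti, Td, h, u_min=0.0, u_max=100.0):
        self.Kp, self.Ti, self.Td, self.h = Kp, Ti, Td, h
        self.u_min, self.u_max = u_min, u_max
        Tf = 0.1 * Td                  # derivative filter time-constant, (3.22)
        self.a = h / (Tf + h)          # filter parameter, (3.17)
        self.e_prev = 0.0
        self.ef = [0.0, 0.0, 0.0]      # e_f(k), e_f(k-1), e_f(k-2)
        self.u = 0.0

    def update(self, r, y):
        e = r - y
        # filtered control error: discrete version of (3.21) via (3.16)
        self.ef = [(1 - self.a) * self.ef[0] + self.a * e,
                   self.ef[0], self.ef[1]]
        # incremental control value, (3.30):
        du = (self.Kp * (e - self.e_prev)
              + self.Kp * self.h / self.Ti * e
              + self.Kp * self.Td / self.h
                * (self.ef[0] - 2 * self.ef[1] + self.ef[2]))
        candidate = self.u + du        # (3.32)
        if self.u_min <= candidate <= self.u_max:
            self.u = candidate         # otherwise anti-windup: keep old u
        self.e_prev = e
        return self.u

# Usage: PI control of an assumed first order process x' = (-x + u)/T,
# simulated with the Euler Forward method:
pid = IncrementalPID(Kp=2.0, Ti=5.0, Td=0.0, h=0.1)
T, x = 5.0, 0.0
for _ in range(3000):
    u = pid.update(r=50.0, y=x)
    x = x + 0.1 * (-x + u) / T
print(round(x, 2))
```

Since the controller has integral action, the process output converges to the setpoint, and the control signal stays within 0-100 % throughout.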

3.4.4 Selecting the sampling time of the control system

The DA converter (digital to analog), which is always between the discrete-time control function and the continuous-time process to be controlled, implements holding of the calculated control signal during the time step (sampling interval). This holding implies that the control signal is time delayed approximately h/2, see Figure 3.3.

Figure 3.3: The DA converter holds the calculated control signal throughout the sampling interval, thereby introducing an approximate time delay of h/2.

The delay influences the stability of the control loop. Suppose we have tuned a continuous-time PID controller, and apply these PID parameters on a discrete-time PID controller. Then the control loop will get reduced stability because of the approximate delay of h/2. As a rule of thumb (this can be confirmed from a frequency response based analysis), the stability reduction is small and tolerable if the time delay is less than one tenth of the response time of the control system as it would have been with a continuous-time controller, or a controller having a very small sampling time:

h/2 ≤ T_r/10 (3.33)

which gives

h ≤ T_r/5 (3.34)

The response time T_r is here the 63 % rise time, which can be read off from the setpoint step response. For a system having a dominating time constant T, the response time is approximately equal to this time constant.

Chapter 4
Discrete-time state space models

4.1 General form of discrete-time state space models

The general form of a discrete-time state space model is

x(k+1) = f[x(k), u(k)] (4.1)
y(k) = g[x(k), u(k)] (4.2)

where x is the state variable, and u is the input variable, which may consist of control variables and disturbances (in this model definition there is no difference between these two kinds of input variables). y is the output variable. f and g are functions, linear or nonlinear. x(k+1) in (4.1) means the state one time step ahead (relative to the present state x(k)). Thus, the state space model expresses how the system's state variables and output variables evolve along the discrete time axis.

The variables in (4.1)-(4.2) may actually be vectors, e.g.

x = [x_1, x_2, ..., x_n]ᵀ (4.3)

where x_i is a (scalar) state variable, and if so, f and/or g are vector-valued functions.

4.2 Linear discrete-time state space models

A special case of the general state space model presented above is the linear state space model:

x(k+1) = A x(k) + B u(k) (4.4)
y(k) = C x(k) + D u(k) (4.5)

where A is the transition matrix, B is the input gain matrix, C is the output gain matrix or measurement gain matrix, and D is the direct output gain matrix (in most cases, D = 0).

4.3 Discretization of linear continuous-time state space models

You need to discretize a continuous-time state space model in the following situations:

- In design of a Kalman Filter state estimator, cf. Chapter 11
- In design of state space model based controllers, such as optimal controllers

There are several principles of discretization of continuous-time state space models:

- Discretization based on having a zero order hold element on the input of the system, which is the case when the physical process is controlled by a computer via a DA converter (digital to analog).
- Using a numerical integration approximation, typically the Euler Forward method, the Euler Backward method or Tustin's method.

The simplest, the most commonly used, and the most flexible method is the Euler Forward method, so only this method will be described here. Given the continuous-time state space model

ẋ(t) = f_c[x(t), u(t)] (4.6)

Applying the Euler Forward approximation to the time derivative gives

[x(t_{k+1}) - x(t_k)] / h = f_c[x(t_k), u(t_k)] (4.7)

or

x(t_{k+1}) = x(t_k) + h f_c[x(t_k), u(t_k)] (4.8)
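For the linear model ẋ = Ax + Bu, (4.8) specializes to x(k+1) = (I + hA) x(k) + hB u(k), so the discrete-time matrices are A_d = I + hA and B_d = hB. A sketch with illustrative matrices (my values, not from the text), using NumPy:

```python
import numpy as np

# Assumed continuous-time model matrices:
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
h = 0.01  # time step [s]

# Euler Forward discretization, (4.8) applied to f_c(x, u) = A x + B u:
Ad = np.eye(A.shape[0]) + h * A
Bd = h * B
print(Ad)
print(Bd)
```

Note that this is only an approximation of the exact zero order hold discretization; the smaller h is (relative to the rule (3.8)), the closer the two become.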

4.4 Linearization of nonlinear state space models

The formulas for linearizing nonlinear discrete-time state space models are presented without derivation below. They can be derived in the same way as for linearizing nonlinear continuous-time models [1]. In the formulas below a second order system is assumed. I guess it is clear how the formulas can be generalized to higher orders.

Given the following discrete-time nonlinear state space model:

x_1(k+1) = f_1[x_1(k), x_2(k), u_1(k), u_2(k)]
x_2(k+1) = f_2[x_1(k), x_2(k), u_1(k), u_2(k)] (4.9)

y_1(k) = g_1[x_1(k), x_2(k), u_1(k), u_2(k)]
y_2(k) = g_2[x_1(k), x_2(k), u_1(k), u_2(k)] (4.10)

where f_1 and/or f_2 are nonlinear functions. The corresponding linear model, which defines the system's dynamic behaviour about a specific operating point, is

Δx_1(k+1) = ∂f_1/∂x_1|_op Δx_1(k) + ∂f_1/∂x_2|_op Δx_2(k) + ∂f_1/∂u_1|_op Δu_1(k) + ∂f_1/∂u_2|_op Δu_2(k)
Δx_2(k+1) = ∂f_2/∂x_1|_op Δx_1(k) + ∂f_2/∂x_2|_op Δx_2(k) + ∂f_2/∂u_1|_op Δu_1(k) + ∂f_2/∂u_2|_op Δu_2(k) (4.11)

Δy_1(k) = ∂g_1/∂x_1|_op Δx_1(k) + ∂g_1/∂x_2|_op Δx_2(k) + ∂g_1/∂u_1|_op Δu_1(k) + ∂g_1/∂u_2|_op Δu_2(k)
Δy_2(k) = ∂g_2/∂x_1|_op Δx_1(k) + ∂g_2/∂x_2|_op Δx_2(k) + ∂g_2/∂u_1|_op Δu_1(k) + ∂g_2/∂u_2|_op Δu_2(k) (4.12)

or

Δx(k+1) = A Δx(k) + B Δu(k) (4.13)
Δy(k) = C Δx(k) + D Δu(k) (4.14)

where

Δx(k) = [Δx_1(k), Δx_2(k)]ᵀ (4.15)

and similarly for Δu(k) and Δy(k). The system matrices are¹

A = [∂f_1/∂x_1  ∂f_1/∂x_2; ∂f_2/∂x_1  ∂f_2/∂x_2]|_op = ∂f/∂xᵀ|_op (4.16)
B = [∂f_1/∂u_1  ∂f_1/∂u_2; ∂f_2/∂u_1  ∂f_2/∂u_2]|_op = ∂f/∂uᵀ|_op (4.17)
C = [∂g_1/∂x_1  ∂g_1/∂x_2; ∂g_2/∂x_1  ∂g_2/∂x_2]|_op = ∂g/∂xᵀ|_op (4.18)
D = [∂g_1/∂u_1  ∂g_1/∂u_2; ∂g_2/∂u_1  ∂g_2/∂u_2]|_op = ∂g/∂uᵀ|_op (4.19)

In the formulas above the subindex op is for operating point, which is a particular set of values of the variables. Often, the operating point is assumed to be an equilibrium (or static) operating point, which means that all variables have constant values there.

Example 1: Linearization

Given the following nonlinear state space model:

x_1(k+1) = a x_1(k) + b x_1(k) x_2(k) + c x_1(k) u_1(k)  (= f_1) (4.20)
x_2(k+1) = d x_2(k)  (= f_2) (4.21)
y_1(k) = x_1(k)  (= g_1) (4.22)

The system matrices of the corresponding linear model are

A = [∂f_1/∂x_1  ∂f_1/∂x_2; ∂f_2/∂x_1  ∂f_2/∂x_2]|_op = [a + b x_2(k) + c u_1(k)  b x_1(k); 0  d]|_op (4.23)

¹ Partial derivative matrices are denoted Jacobians.

B = [∂f_1/∂u_1; ∂f_2/∂u_1]|_op = [c x_1(k); 0]|_op (4.24)

C = [∂g_1/∂x_1  ∂g_1/∂x_2]|_op = [1  0] (4.25)

D = [∂g_1/∂u_1]|_op = [0] (4.26)

[End of Example 1]

4.5 Calculating responses in discrete-time state space models

4.5.1 Calculating dynamic responses

Calculating responses in discrete-time state space models is quite easy. The reason is that the model is the algorithm! For example, assume that the Euler Forward method has been used to get the following discrete-time state space model:

x(k) = x(k-1) + h f[x(k-1), u(k-1)] (4.27)

This model constitutes the algorithm for calculating the response x(k).

4.5.2 Calculating static responses

The static response is the response when all input variables have constant values and all output variables have converged to constant values. Assume the following, possibly nonlinear, state space model:

x(k+1) = f_1[x(k), u(k)] (4.28)

where f_1 is a possibly nonlinear function of x and u. Let us write x_s and u_s for static values. Under static conditions (4.28) becomes

x_s = f_1[x_s, u_s] (4.29)

which is an algebraic equation from which we can try to solve for the unknown variables. If the model is linear:

x(k+1) = A x(k) + B u(k) (4.30)

a formula for the steady-state solution can be calculated as follows. The steady-state version of (4.30) is

$$x_s = A x_s + B u_s \quad (4.31)$$

Solving for x_s gives

$$x_s = (I - A)^{-1} B u_s \quad (4.32)$$

Note that only asymptotically stable systems have a feasible static operating point. Stability analysis of discrete-time systems is described in Chapter 8.
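The steady-state formula (4.32) is easy to check numerically. A minimal sketch in Python; the matrices below are illustrative examples, not taken from the text:

```python
import numpy as np

# Steady state of x(k+1) = A x(k) + B u(k):  x_s = (I - A)^(-1) B u_s, cf. (4.32).
# A has eigenvalues 0.8 and 0.5, so the system is asymptotically stable.
A = np.array([[0.8, 0.1],
              [0.0, 0.5]])
B = np.array([[0.0],
              [0.5]])
u_s = np.array([[1.0]])

x_s = np.linalg.solve(np.eye(2) - A, B @ u_s)

# Cross-check by iterating the difference equation until it has converged.
x = np.zeros((2, 1))
for _ in range(500):
    x = A @ x + B @ u_s
```

Since the system is asymptotically stable, the iterated response converges to the same value as the formula gives.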

Chapter 5

The z-transform

5.1 Definition of the z-transform

The z-transform of discrete-time signals plays much the same role as the Laplace transform for continuous-time systems. The z-transform of the discrete-time signal {y(k)}, or just y(k), is defined as follows:

$$\mathcal{Z}\{y(k)\} = \sum_{k=0}^{\infty} y(k) z^{-k} \quad (5.1)$$

For simplicity, I will use the symbol y(z) for Z{y(k)} when it can not be misunderstood. Strictly, a different variable name should be used, for example Y(z).

Example 2 z-transform of a constant

Assume that the signal y(k) has constant value A. This signal can be regarded as a step of amplitude A at time-step 0. z-transforming y(k) gives

$$y(z) = \sum_{k=0}^{\infty} y(k) z^{-k} = \sum_{k=0}^{\infty} A z^{-k} = \frac{A}{1 - z^{-1}} = \frac{Az}{z-1} \quad (5.2)$$

[End of Example 2]

5.2 Properties of the z-transform

Below are the most important properties of the z-transform. These properties can be used when calculating the z-transform of composite signals.

Linearity:

$$k_1 y_1(k) + k_2 y_2(k) \iff k_1 y_1(z) + k_2 y_2(z) \quad (5.3)$$

Time delay: Multiplication by z^{-n} means a time delay of n time-steps:

$$y(k-n) \iff z^{-n} y(z) \quad (5.4)$$

Time advancing: Multiplication by z^{n} means a time advance of n time-steps:

$$y(k+n) \iff z^{n} y(z) \quad (5.5)$$

5.3 z-transform pairs

Below are several important z-transform pairs showing discrete-time functions and their corresponding z-transforms. The time functions are defined for k ≥ 0.

Unity impulse at time-step k = n:

$$\delta(k-n) \iff z^{-n} \quad (5.6)$$

Unity impulse at time-step k = 0:

$$\delta(k) \iff 1 \quad (5.7)$$

Unity step at time-step k = 0:

$$1 \iff \frac{z}{z-1} \quad (5.8)$$

Time exponential:

$$a^k \iff \frac{z}{z-a} \quad (5.9)$$

Example 3 z-transformation of a composite signal

Given the following discrete-time function:

$$y(k) = B a^{k-n} \quad (5.10)$$

(which is a time delayed time exponential). The z-transform of y(k) can be calculated using (5.9) together with (5.3) and (5.4). The result becomes

$$y(z) = B z^{-n} \frac{z}{z-a} = \frac{B z^{1-n}}{z-a} \quad (5.11)$$

[End of Example 3]
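A transform pair such as (5.9) can be sanity-checked numerically by truncating the defining sum (5.1) and comparing with the closed form. A small sketch with arbitrary test values:

```python
# Check the pair a^k <-> z/(z-a) from (5.9): the truncated sum
# sum_{k=0}^{N} a^k z^{-k} approaches z/(z-a) for |z| > |a|.
# The values of a, z, and N below are arbitrary test choices.
a = 0.7
z = 1.5 + 0.5j          # evaluation point outside the circle |z| = |a|
N = 200                 # truncation index; the tail is of order |a/z|^N

series = sum(a**k * z**(-k) for k in range(N + 1))
closed_form = z / (z - a)
```

The same truncation idea works for any of the pairs above, as long as the evaluation point lies in the region of convergence.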

5.4 Inverse z-transform

Inverse z-transformation of a given z-domain function, say Y(z), means calculating the corresponding time function, say y(k). The inverse transform may be calculated using a complex integral¹, but this method is not very practical. Another method is to find a proper combination of precalculated z-transform pairs, possibly in combination with some of the z-transform properties defined above.

In most cases where you need to calculate a time signal y(k), its z-transform Y(z) stems from a transfer function excited by some discrete-time input signal. You may then calculate y(k) by first transforming the transfer function into a corresponding difference equation, and then calculating y(k) iteratively from this difference equation as explained in Section 2.2.

¹ $y(k) = \frac{1}{2\pi j} \oint Y(z) z^{k-1} dz$, where the integration path must be in the area of convergence of Y(z). [3]

Chapter 6

z-transfer functions

6.1 Introduction

Models in the form of difference equations can be z-transformed to z-transfer functions, which play the same role in discrete-time systems theory as s transfer functions do in continuous-time systems theory. More specifically:

- The combined model of systems in a serial connection can be found by simply multiplying the individual z-transfer functions.
- The frequency response can be calculated from the transfer function.
- The transfer function can be used to represent the system in a simulator or in computer tools for analysis and design (such as SIMULINK, MATLAB, or LabVIEW).

6.2 How to calculate z-transfer functions

As an example we will calculate the z-transfer function from the input u to the output y for the difference equation (2.4), which is repeated here:

$$y(k) = -a_1 y(k-1) - a_0 y(k-2) + b_0 u(k-2) \quad (6.1)$$

First we take the z-transform of both sides of the difference equation:

$$\mathcal{Z}\{y(k)\} = \mathcal{Z}\{-a_1 y(k-1) - a_0 y(k-2) + b_0 u(k-2)\} \quad (6.2)$$

Using the linearity property (5.3) and the time delay property (5.4), (6.2) can be written as

$$\mathcal{Z}\{y(k)\} = -\mathcal{Z}\{a_1 y(k-1)\} - \mathcal{Z}\{a_0 y(k-2)\} + \mathcal{Z}\{b_0 u(k-2)\} \quad (6.3)$$

and

$$y(z) = -a_1 z^{-1} y(z) - a_0 z^{-2} y(z) + b_0 z^{-2} u(z) \quad (6.4)$$

which can be written as

$$y(z) + a_1 z^{-1} y(z) + a_0 z^{-2} y(z) = b_0 z^{-2} u(z) \quad (6.5)$$

or

$$\left(1 + a_1 z^{-1} + a_0 z^{-2}\right) y(z) = b_0 z^{-2} u(z) \quad (6.6)$$

giving

$$y(z) = \underbrace{\frac{b_0 z^{-2}}{1 + a_1 z^{-1} + a_0 z^{-2}}}_{H(z)}\, u(z) \quad (6.7)$$

$$= \underbrace{\frac{b_0}{z^2 + a_1 z + a_0}}_{H(z)}\, u(z) \quad (6.8)$$

where H(z) is the z-transfer function from u to y. Hence, z-transfer functions can be written both with positive and with negative exponents of z.¹

6.3 From z-transfer function to difference equation

In the Section above we derived a z-transfer function from a difference equation. We may go the opposite way to derive a difference equation from a given z-transfer function. Some applications of this are:

- Deriving a filtering algorithm from a filtering transfer function
- Deriving a control function from a given controller transfer function
- Deriving a simulation algorithm from the transfer function of the system to be simulated

¹ In signal processing theory transfer functions are usually written with negative exponents of z, while in control theory they are usually written with positive exponents.
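The last application listed above — deriving a simulation algorithm — amounts to iterating the difference equation directly. A minimal Python sketch for the difference equation (6.1); the parameter values are illustrative (they give poles 0.7 and 0.8 and static gain 1):

```python
# Simulate y(k) = -a1*y(k-1) - a0*y(k-2) + b0*u(k-2), cf. (6.1).
# Parameter values are illustrative, not from the text.
def simulate(a1, a0, b0, u, n_steps):
    """Iterate the difference equation for a given input function u(k)."""
    y = [0.0, 0.0]     # y(-2), y(-1): zero initial conditions
    up = [0.0, 0.0]    # u(-2), u(-1)
    out = []
    for k in range(n_steps):
        yk = -a1 * y[-1] - a0 * y[-2] + b0 * up[-2]
        y.append(yk)
        up.append(u(k))
        out.append(yk)
    return out

# Unit step response of a stable example system: poles 0.7 and 0.8,
# static gain b0/(1 + a1 + a0) = 0.06/0.06 = 1.
response = simulate(a1=-1.5, a0=0.56, b0=0.06, u=lambda k: 1.0, n_steps=400)
```

Since both poles lie inside the unit circle, the step response settles at the static gain times the step amplitude.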

The procedure will be illustrated via a concrete example. Assume given the following transfer function:

$$H(z) = \frac{b_0}{z^2 + a_1 z + a_0} = \frac{y(z)}{u(z)} \quad (6.9)$$

We start by cross multiplying (6.9):

$$\left(z^2 + a_1 z + a_0\right) y(z) = b_0 u(z) \quad (6.10)$$

which can be written as

$$z^2 y(z) + a_1 z y(z) + a_0 y(z) = b_0 u(z) \quad (6.11)$$

Taking the inverse transform of the above expression gives

$$\underbrace{z^2 y(z)}_{y(k+2)} + \underbrace{a_1 z y(z)}_{a_1 y(k+1)} + \underbrace{a_0 y(z)}_{a_0 y(k)} = \underbrace{b_0 u(z)}_{b_0 u(k)} \quad (6.12)$$

Reducing each of the time indexes by 2 yields

$$y(k) + a_1 y(k-1) + a_0 y(k-2) = b_0 u(k-2) \quad (6.13)$$

Usually it is practical to have the output variable alone on the left side:

$$y(k) = -a_1 y(k-1) - a_0 y(k-2) + b_0 u(k-2) \quad (6.14)$$

6.4 From state space model to transfer function

Given the following linear discrete-time state space model:

$$x(k+1) = A_d x(k) + B_d u(k) \quad (6.15)$$

$$y(k) = C_d x(k) + D_d u(k) \quad (6.16)$$

We will now calculate a general formula for the z-transfer function H(z) from the input u to the output y. (If u and/or y are vectors, the transfer function we calculate is actually a transfer function matrix.) Taking the z-transform of (6.15) yields

$$z x(z) = z I x(z) = A_d x(z) + B_d u(z) \quad (6.17)$$

where I is the identity matrix having the same order as the number of state variables. Solving for x(z) gives

$$x(z) = (zI - A_d)^{-1} B_d u(z) \quad (6.18)$$

Combining with (6.16) gives the following formula(s) for the transfer function:

$$H(z) = \frac{y(z)}{u(z)} = C_d (zI - A_d)^{-1} B_d + D_d \quad (6.19)$$

$$= C_d \frac{\mathrm{adj}(zI - A_d)}{\det(zI - A_d)} B_d + D_d \quad (6.20)$$

which is the formula for H(z). adj means adjugate (the classical adjoint), and det means determinant. (zI − A_d) is denoted the characteristic matrix of the system.

Example 4 From state-space model to transfer function

Given the following discrete-time state space model:

$$x(k+1) = \underbrace{\begin{bmatrix} 1 & h \\ 0 & 1 \end{bmatrix}}_{A_d} x(k) + \underbrace{\begin{bmatrix} \frac{h^2}{2} \\ h \end{bmatrix}}_{B_d} u(k) \quad (6.21)$$

$$y(k) = \underbrace{\begin{bmatrix} 1 & 0 \end{bmatrix}}_{C_d} x(k) + \underbrace{[0]}_{D_d} u(k) \quad (6.22)$$

(This model is the discrete-time state space model corresponding to a zero order hold discretization of the following continuous-time state space model:

$$\dot{x} = \underbrace{\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}}_{A_c} x + \underbrace{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}_{B_c} u \quad (6.23)$$

$$y = \underbrace{\begin{bmatrix} 1 & 0 \end{bmatrix}}_{C_c} x + \underbrace{[0]}_{D_c} u \quad (6.24)$$

which is a state-space model representation of the differential equation ÿ = u with y = x_1. This model is a so-called double integrator since the output y is obtained by integrating the input u twice.)

We will now calculate the z-transfer function from u to y of (6.21), (6.22) using (6.20). We start by calculating the characteristic matrix (zI − A_d):

$$(zI - A_d) = \begin{bmatrix} z & 0 \\ 0 & z \end{bmatrix} - \begin{bmatrix} 1 & h \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} z-1 & -h \\ 0 & z-1 \end{bmatrix} \quad (6.25)$$

which gives

$$\mathrm{adj}(zI - A_d) = \mathrm{adj}\begin{bmatrix} z-1 & -h \\ 0 & z-1 \end{bmatrix} = \begin{bmatrix} z-1 & h \\ 0 & z-1 \end{bmatrix} \quad (6.26)$$

and

$$\det(zI - A_d) = \det\begin{bmatrix} z-1 & -h \\ 0 & z-1 \end{bmatrix} = (z-1)^2 = z^2 - 2z + 1 \quad (6.27)$$

The transfer function becomes (note that D_d = [0] here):

$$\frac{y(z)}{u(z)} = C_d (zI - A_d)^{-1} B_d + D_d = C_d \frac{\mathrm{adj}(zI - A_d)}{\det(zI - A_d)} B_d \quad (6.28)$$

$$= \begin{bmatrix} 1 & 0 \end{bmatrix} \frac{1}{z^2 - 2z + 1} \begin{bmatrix} z-1 & h \\ 0 & z-1 \end{bmatrix} \begin{bmatrix} \frac{h^2}{2} \\ h \end{bmatrix} \quad (6.29)$$

$$= \frac{\frac{h^2}{2}(z+1)}{z^2 - 2z + 1} \quad (6.30)$$

[End of Example 4]

6.5 Calculating time responses for z-transfer function models

Assume given a transfer function, say H(z), with input variable u and output variable y. Then,

$$y(z) = H(z) u(z) \quad (6.31)$$

If u(z) is given, the corresponding time response in y can be calculated in several ways:

1. By finding a proper transformation pair in Section 5.3, possibly combined with some of the z-transform properties in Section 5.2.

2. By deriving a difference equation corresponding to the transfer function and then calculating y(k) iteratively according to the difference equation. The procedure of deriving a difference equation corresponding to a given transfer function is explained in Section 6.3, and the calculation of time responses for a difference equation is described in Section 2.2.

6.6 Static transfer function and static response

The static version H_s of a given transfer function H(z) will now be derived. Using the static transfer function, the static response can easily be

calculated. Assume that the input variable u is a step of amplitude U. The stationary response can be found using the final value theorem:

$$\lim_{k \to \infty} y(k) = \lim_{z \to 1}(z-1)\, y(z) \quad (6.32)$$

$$= \lim_{z \to 1}(z-1) H(z) u(z) \quad (6.33)$$

$$= \lim_{z \to 1}(z-1) H(z) \frac{zU}{z-1} \quad (6.34)$$

$$= H(1)\, U \quad (6.35)$$

Thus, we have the following static transfer function:

$$H_s = \frac{y_s}{u_s} = \lim_{z \to 1} H(z) = H(1) \quad (6.36)$$

Using the static transfer function the static response can be calculated by

$$y_s = H_s U \quad (6.37)$$

Example 5 Static transfer function

Let us consider the following transfer function:

$$H(z) = \frac{y(z)}{u(z)} = \frac{az}{z - (1-a)} \quad (6.38)$$

which is the transfer function of the lowpass filter (3.16), which is repeated here:

$$y(k) = (1-a)\, y(k-1) + a u(k) \quad (6.39)$$

The corresponding static transfer function is

$$H_s = \frac{y_s}{u_s} = \lim_{z \to 1} H(z) = \frac{a \cdot 1}{1 - (1-a)} = \frac{a}{a} = 1 \quad (6.40)$$

Thus,

$$y_s = H_s u_s = u_s \quad (6.41)$$

Can we find the same correspondence between u_s and y_s from the difference equation (6.39)? Setting y(k) = y(k-1) = y_s and u(k) = u_s gives

$$y_s = (1-a)\, y_s + a u_s \quad (6.42)$$

giving

$$\frac{y_s}{u_s} = \frac{a}{1 - (1-a)} = 1 \quad (6.43)$$

which is the same as (6.40).

[End of Example 5]
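The result of Example 5 can be confirmed by simply iterating the filter with a constant input. A minimal sketch; the values of a and the input level are arbitrary:

```python
# Static gain of the lowpass filter y(k) = (1-a)*y(k-1) + a*u(k), cf. (6.39).
# The static transfer function is H_s = H(1) = 1, so y should converge to
# the (constant) input level. Parameter values are illustrative.
a = 0.2
u_s = 3.5          # constant input

y = 0.0
for _ in range(300):
    y = (1 - a) * y + a * u_s
```

After enough iterations, y equals u_s to within numerical precision, in agreement with (6.41).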

6.7 Poles and zeros

Poles and zeros of z-transfer functions are defined in the same way as for s transfer functions: the zeros of the transfer function are the z-roots of the numerator polynomial, and the poles are the z-roots of the denominator polynomial. One important application of poles is stability analysis, cf. Chapter 8.

Example 6 Poles and zeros

Given the following z-transfer function:

$$H(z) = \frac{z - b}{(z - a_1)(z - a_2)} \quad (6.44)$$

The poles are a_1 and a_2, and the zero is b.

[End of Example 6]
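For higher-order transfer functions the poles and zeros are found as polynomial roots, which is easy to do numerically. A sketch for the structure of Example 6, with illustrative parameter values:

```python
import numpy as np

# Poles and zeros of H(z) = (z - b)/((z - a1)(z - a2)), cf. (6.44):
# roots of the numerator and denominator polynomials.
# The values of a1, a2, b below are illustrative.
a1, a2, b = 0.5, -0.3, 0.9

numerator = [1.0, -b]                              # z - b
denominator = np.polymul([1.0, -a1], [1.0, -a2])   # (z - a1)(z - a2)

zeros = np.roots(numerator)
poles = np.roots(denominator)
```

As expected, the computed poles are a_1 and a_2 and the computed zero is b.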

Chapter 7

Frequency response of discrete-time systems

As for continuous-time systems, the frequency response of a discrete-time system can be calculated from the transfer function: Given a system with z-transfer function H(z). Assume that the input signal exciting the system is the sinusoid

$$u(t_k) = U \sin(\omega t_k) = U \sin(\omega k h) \quad (7.1)$$

where ω is the signal frequency in rad/s. It can be shown that the stationary response on the output of the system is

$$y(t_k) = Y \sin(\omega k h + \phi) \quad (7.2)$$

$$= U A \sin(\omega k h + \phi) \quad (7.3)$$

$$= U \underbrace{\left|H(e^{j\omega h})\right|}_{A} \sin\big(\omega t_k + \underbrace{\arg H(e^{j\omega h})}_{\phi}\big) \quad (7.4)$$

where H(e^{jωh}) is the frequency response, which is calculated with the following substitution:

$$H(e^{j\omega h}) = H(z)\big|_{z = e^{j\omega h}} \quad (7.5)$$

where h is the time-step. The amplitude gain function is

$$A(\omega) = \left|H(e^{j\omega h})\right| \quad (7.6)$$

The phase lag function is

$$\phi(\omega) = \arg H(e^{j\omega h}) \quad (7.7)$$

A(ω) and φ(ω) can be plotted in a Bode diagram. As an example, Figure 7.1 shows the Bode plot of the frequency response of the following transfer function (the time-step is 0.1 s):

$$H(z) = \frac{b}{z - a} = \frac{0.0952}{z - 0.9048} \quad (7.8)$$

Note that the plots in Figure 7.1 are drawn only up to the Nyquist frequency, which in this case is

$$\omega_N = \frac{\omega_s}{2} = \frac{2\pi/h}{2} = \frac{\pi}{h} = \frac{\pi}{0.1} = 10\pi \approx 31.4 \text{ rad/s} \quad (7.9)$$

Figure 7.1: Bode plot of the transfer function (7.8). ω_N = 31.4 rad/s is the Nyquist frequency (the sampling time h is 0.1 s).

The plots are not drawn (but they exist!) above the Nyquist frequency because of the symmetry of the frequency response, as explained in the following section.

Example 7 Calculating the frequency response manually from the z-transfer function

Given the z-transfer function

$$H(z) = \frac{b}{z - a} \quad (7.10)$$

The frequency response becomes

$$H(e^{j\omega h}) = \frac{b}{e^{j\omega h} - a} \quad (7.11)$$

$$= \frac{b}{\cos\omega h + j\sin\omega h - a} \quad (7.12)$$

$$= \frac{b}{\underbrace{(\cos\omega h - a)}_{\mathrm{Re}} + j\underbrace{\sin\omega h}_{\mathrm{Im}}} \quad (7.13)$$

$$= \frac{b}{\sqrt{(\cos\omega h - a)^2 + (\sin\omega h)^2}\; e^{\,j\arctan[(\sin\omega h)/(\cos\omega h - a)]}} \quad (7.14)$$

$$= \frac{b}{\sqrt{(\cos\omega h - a)^2 + (\sin\omega h)^2}}\, e^{-j\arctan\left(\frac{\sin\omega h}{\cos\omega h - a}\right)} \quad (7.15)$$

The amplitude gain function is

$$A(\omega) = \left|H(e^{j\omega h})\right| = \frac{b}{\sqrt{(\cos\omega h - a)^2 + (\sin\omega h)^2}} \quad (7.16)$$

and the phase lag function is

$$\phi(\omega) = \arg H(e^{j\omega h}) = -\arctan\left(\frac{\sin\omega h}{\cos\omega h - a}\right) \text{ [rad]} \quad (7.17)$$

[End of Example 7]

Even for the simple example above, the calculations are cumbersome and prone to errors. Therefore you should use a computer tool for calculating the frequency response, such as MATLAB's Control System Toolbox or LabVIEW's Control Design Toolkit.

Symmetry of the frequency response

It can be shown that the frequency response is symmetric as follows: |H(e^{jωh})| and arg H(e^{jωh}) are unique functions in the frequency interval [0, ω_N], where ω_N is the Nyquist frequency. In the intervals [mω_s, (m+1)ω_s] (m is an integer) the functions are mirrored as indicated in Figure 7.2, which has a logarithmic frequency axis. (The Bode plots in this section are for the transfer function (7.8).) The symmetry appears

Figure 7.2: Bode plots of the frequency response of (7.8). The frequency axis is logarithmic.

clearer in the Bode plots in Figure 7.3, where the frequency axis is linear. Due to the symmetry of the frequency response, it is strictly not necessary to draw more of the frequency response plots than the frequency interval [0, ω_N].
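Both the manual calculation in Example 7 and the mirroring described above can be checked numerically with the substitution z = e^{jωh}. A sketch for the filter (7.8); the test frequency is arbitrary:

```python
import cmath
import math

# Frequency response of H(z) = b/(z - a) via the substitution z = e^{jwh},
# checked against the hand-derived gain and phase (7.16), (7.17), and
# against the mirroring about the Nyquist frequency. Filter values from (7.8).
b, a, h = 0.0952, 0.9048, 0.1
w = 2.0                          # test frequency [rad/s], below Nyquist
w_s = 2 * math.pi / h            # sampling frequency [rad/s]

H = lambda z: b / (z - a)
Hw = H(cmath.exp(1j * w * h))

A_direct, phi_direct = abs(Hw), cmath.phase(Hw)
A_formula = b / math.sqrt((math.cos(w * h) - a) ** 2 + math.sin(w * h) ** 2)
phi_formula = -math.atan2(math.sin(w * h), math.cos(w * h) - a)

# Symmetry: the gain at w_s - w mirrors the gain at w.
A_mirrored = abs(H(cmath.exp(1j * (w_s - w) * h)))
```

The direct evaluation, the hand formulas, and the mirrored frequency all agree, which is exactly the symmetry that makes plotting above ω_N unnecessary.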

Figure 7.3: Bode plots of the frequency response of (7.8). The frequency axis is linear to make the symmetries of the frequency responses clearer.

Chapter 8

Stability analysis of discrete-time systems

8.1 Definition of stability properties

Assume given a dynamic system with input u and output y. The stability property of a dynamic system can be defined from the impulse response¹ of a system as follows:

Asymptotically stable system: The steady state impulse response is zero:

$$\lim_{k \to \infty} y_\delta(k) = 0 \quad (8.1)$$

Marginally stable system: The steady state impulse response is different from zero, but limited:

$$0 < \lim_{k \to \infty} \left|y_\delta(k)\right| < \infty \quad (8.2)$$

Unstable system: The steady state impulse response is unlimited:

$$\lim_{k \to \infty} \left|y_\delta(k)\right| = \infty \quad (8.3)$$

The impulse responses for the different stability properties are illustrated in Figure 8.1. (The simulated system is defined in Example 8.)

¹ An impulse δ(k) is applied at the input.

Figure 8.1: Impulse response and stability properties

8.2 Stability analysis of transfer function models

In the following we will base the analysis on the following fact: The transfer function is the z-transformed impulse response. Here is the proof of this fact: Given a system with transfer function H(z). Assume that the input u is an impulse, which is a signal having value 1 at time index k = 0 and value zero at all other points of time. According to (5.7), u(z) = 1. Then the z-transformed impulse response is

$$y(z) = H(z) u(z) = H(z) \cdot 1 = H(z) \quad (8.4)$$

(as stated). Now, we proceed with the stability analysis of transfer functions. The impulse response y_δ(k), which defines the stability property of the system, is determined by the system's poles and zeros, since the impulse response is the inverse z-transform of the transfer function:

$$y_\delta(k) = \mathcal{Z}^{-1}\{H(z)\} \quad (8.5)$$
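The fact that the transfer function is the z-transformed impulse response can be verified numerically: simulate the impulse response of a difference equation and z-transform the (truncated) result. A sketch for the first-order system H(z) = b/(z − p), with illustrative parameter values:

```python
# Verify (8.4) numerically for H(z) = b/(z - p), realized as the
# difference equation y(k) = p*y(k-1) + b*u(k-1). Feed in a unit impulse
# and z-transform the truncated impulse response. Values are illustrative.
b, p = 0.5, 0.8
N = 400

y_prev, u_prev = 0.0, 0.0
impulse_response = []
for k in range(N):
    y = p * y_prev + b * u_prev
    impulse_response.append(y)
    u_prev = 1.0 if k == 0 else 0.0   # unit impulse at k = 0
    y_prev = y

z = 1.5 + 0.5j                        # evaluation point outside |z| = |p|
H_from_response = sum(yk * z**(-k) for k, yk in enumerate(impulse_response))
H_direct = b / (z - p)
```

The truncated sum agrees with the transfer function evaluated at the same point, as (8.4) says it must.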

Consequently, the stability property is determined by the poles and zeros of H(z). However, we will soon see that only the poles determine the stability. We will now derive the relation between the stability and the poles by studying the impulse response of the following system:

$$H(z) = \frac{y(z)}{u(z)} = \frac{bz}{z - p} \quad (8.6)$$

The pole is p. Do you think that this system is too simple as a basis for deriving general conditions for stability analysis? Actually, it is sufficient, because we can always think of a given z-transfer function as partial fractionated into a sum of partial transfer functions, or terms, each having one pole. Using the superposition principle we can then conclude about the stability of the original transfer function. In the following, cases of multiple (coinciding) poles will not be discussed in detail, but the results regarding stability analysis will be given.

The system given by (8.6) has the impulse response calculated below. It is assumed that the pole in general is a complex number which may be written in polar form as

$$p = m e^{j\theta} \quad (8.7)$$

where m is the magnitude and θ the phase. The impulse response is

$$y_\delta(k) = \mathcal{Z}^{-1}\left\{\frac{bz}{z-p}\right\} \quad (8.8)$$

$$= \mathcal{Z}^{-1}\left\{\frac{b}{1 - pz^{-1}}\right\} \quad (8.9)$$

$$= \mathcal{Z}^{-1}\left\{b \sum_{k=0}^{\infty} p^k z^{-k}\right\} \quad (8.10)$$

$$= b\, p^k \quad (8.11)$$

$$= b\, m^k e^{jk\theta} \quad (8.12)$$

From (8.12) we see that it is the magnitude m which determines whether the steady state impulse response converges towards zero or not. From (8.12) we can now state the following relations between stability and pole placement (the statements about multiple poles have, however, not been derived here):

Asymptotically stable system: All poles lie inside (none are on) the unit circle, or, what is the same: all poles have magnitude less than 1.

Marginally stable system: One or more poles, but no multiple poles, are on the unit circle.

Unstable system: At least one pole is outside the unit circle, or there are multiple poles on the unit circle.

The stability areas in the complex plane are shown in Figure 8.2.

Figure 8.2: The different stability property areas of the complex plane: the area inside the unit circle is the pole area of asymptotic stability, and the area outside the unit circle is the pole area of instability.

Let us return to the question about the relation between the zeros and the stability. We consider the following system:

$$H_1(z) = \frac{y(z)}{u(z)} = \frac{b(z-c)}{z-p} = \left(1 - cz^{-1}\right) H(z) \quad (8.13)$$

where H(z) is the original system (without zero) which was analyzed above. The zero is c. H_1(z) can be written as

$$H_1(z) = \frac{bz}{z-p} - \frac{bc}{z-p} \quad (8.14)$$

$$= H(z) - cz^{-1} H(z) \quad (8.15)$$

The impulse response of H_1(z) becomes

$$y_{\delta 1}(k) = y_\delta(k) - c\, y_\delta(k-1) \quad (8.16)$$

where y_δ(k) is the impulse response of H(z). We see that the zero does not influence whether the steady state impulse response converges towards zero or not. We draw the conclusion that the zeros of the transfer function do not influence the stability of the system.
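The three pole-placement cases can be illustrated directly from the impulse response (8.11), y_δ(k) = b p^k. A sketch with illustrative pole values:

```python
# Impulse response y(k) = b*p^k of H(z) = bz/(z - p), cf. (8.11):
# the pole magnitude |p| decides the stability property.
# The pole values below are illustrative.
def impulse_response(b, p, n):
    return [b * p**k for k in range(n)]

y_stable = impulse_response(b=1.0, p=0.9, n=200)     # |p| < 1: decays to zero
y_marginal = impulse_response(b=1.0, p=-1.0, n=200)  # |p| = 1: bounded, nonzero
y_unstable = impulse_response(b=1.0, p=1.05, n=200)  # |p| > 1: grows without bound
```

Running the three cases reproduces the qualitative pictures in Figure 8.1: decay, sustained oscillation, and divergence.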

Example 8 Stability analysis of discrete-time systems

The three responses shown in Figure 8.1 are actually the impulse responses of three systems, each having a transfer function of the form

$$\frac{y(z)}{u(z)} = H(z) = \frac{b_1 z + b_0}{z^2 + a_1 z + a_0} \quad (8.17)$$

The parameters of the systems are given below:

1. Asymptotically stable system: b_1 = 0.019, b_0 = 0.0190, a_1 = −1.88 and a_0 = 0.92. The poles are

$$z_{1,2} = 0.94 \pm j0.19 \quad (8.18)$$

They are shown in Figure 8.3 (the zero is indicated by a circle). The poles are inside the unit circle.

2. Marginally stable system: b_1 = 0.020, b_0 = 0.020, a_1 = −1.96 and a_0 = 1.00. The poles are

$$z_{1,2} = 0.98 \pm j0.20 \quad (8.19)$$

They are shown in Figure 8.3. The poles are on the unit circle.

3. Unstable system: b_1 = 0.021, b_0 = 0.021, a_1 = −2.04 and a_0 = 1.08. The poles are

$$z_{1,2} = 1.02 \pm j0.20 \quad (8.20)$$

They are shown in Figure 8.3. The poles are outside the unit circle.

[End of Example 8]

8.3 Stability analysis of state space models

Assume that the system has the following state space model:

$$x(k+1) = A x(k) + B u(k) \quad (8.21)$$

$$y(k) = C x(k) + D u(k) \quad (8.22)$$

Figure 8.3: Example 8: Poles (and zeros) for the three systems, each having a different stability property

In Section 6.4 we derived a formula for the transfer function from u(z) to y(z). It is repeated here:

$$H(z) = \frac{y(z)}{u(z)} = C(zI - A)^{-1} B + D \quad (8.23)$$

$$= C \frac{\mathrm{adj}(zI - A)}{\det(zI - A)} B + D \quad (8.24)$$

The poles are the roots of the characteristic equation:

$$\det(zI - A) = 0 \quad (8.25)$$

The stability property is determined by the placement of the poles in the complex plane. Therefore, to determine the stability property of a state

space model, we may just compute the poles of the system, and conclude about the stability as explained in Section 8.2. The equation (8.25) actually defines the eigenvalues of A, eig(A): the eigenvalues are the z-solutions of det(zI − A) = 0. Therefore, the poles are equal to the eigenvalues, and the relation between stability properties and eigenvalues is the same as the relation between stability properties and poles, cf. Section 8.2.

Example 9 Stability analysis of a state-space model

Given a state-space model of the form (8.21)–(8.22). It can be shown that the eigenvalues of its A matrix are 0.7 and 0.8. Both lie inside the unit circle, and hence the system is asymptotically stable.

[End of Example 9]

8.4 Stability analysis of feedback (control) systems

Although it is assumed here that the feedback system is a control system, the stability analysis methods described can be applied to any (linear) feedback system.

8.4.1 Defining the feedback system

Stability analysis is important in control theory. Figure 8.4 shows a block diagram of a feedback control system. To analyze the stability of the feedback system (loop) it is sufficient to study an extracted part of the block diagram shown in Figure 8.4. Figure 8.5 shows this extracted part and how it can be made into a compact block diagram. The stability analysis will be based on this compact block diagram. It has one single

Figure 8.4: Feedback control system where the subsystems are represented by transfer functions (setpoint y_SP, sensor model with scaling H_sm, controller H_c, actuator H_u, disturbance transfer function H_d, sensor with scaling H_s, process output y, measurement y_m)

transfer function block containing the loop transfer function, L(z), which is the product of the transfer functions in the loop:

$$L(z) = H_c(z) \underbrace{H_u(z) H_s(z)}_{H_p(z)} = H_c(z) H_p(z) \quad (8.28)$$

The closed loop transfer function is the transfer function from the input (setpoint) y_mSP to the output y_m (process measurement), and it is denoted the tracking transfer function:

$$T(z) = \frac{L(z)}{1 + L(z)} = \frac{\frac{n_L(z)}{d_L(z)}}{1 + \frac{n_L(z)}{d_L(z)}} = \frac{n_L(z)}{d_L(z) + n_L(z)} \quad (8.29)$$

where n_L(z) and d_L(z) are the numerator and denominator polynomials of L(z), respectively. The stability of the feedback system is determined by the stability of T(z).

8.4.2 Pole placement based stability analysis

The characteristic polynomial of the tracking transfer function (8.29) is

$$c(z) = d_L(z) + n_L(z) \quad (8.30)$$

Figure 8.5: Converting an extracted part of the detailed block diagram in Figure 8.4 into a compact block diagram. L(z) is the loop transfer function.

Hence, the stability of the control system is determined by the placement of the roots of (8.30) in the complex plane.

Example 10 Pole based stability analysis of a feedback system

Assume given a control system where the P controller

$$H_c(z) = K_p \quad (8.31)$$

controls the process (which is actually an integrating process)

$$H_p(z) = \frac{K_i h}{z - 1} \quad (8.32)$$

We assume that K_i = 1 and h = 1. The loop transfer function becomes

$$L(z) = H_c(z) H_p(z) = \frac{K_p}{z-1} = \frac{n_L(z)}{d_L(z)} \quad (8.33)$$
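The closed loop defined by (8.31)–(8.33) is easy to simulate directly, which gives a quick empirical check of stability for a given K_p. A sketch (setpoint step, K_i = h = 1; the K_p values are illustrative):

```python
# Closed-loop simulation for a P controller on the integrating process of
# Example 10: y(k+1) = y(k) + u(k) with u(k) = Kp*(y_sp - y(k)).
# The closed-loop pole is 1 - Kp.
def step_response(Kp, n):
    y_sp, y, out = 1.0, 0.0, []
    for _ in range(n):
        u = Kp * (y_sp - y)       # P controller
        y = y + u                 # integrating process (K_i*h = 1)
        out.append(y)
    return out

y_stable = step_response(Kp=1.5, n=60)    # pole -0.5: damped oscillation
y_unstable = step_response(Kp=2.5, n=60)  # pole -1.5: diverges
```

With K_p = 1.5 the output converges to the setpoint through a damped oscillation, while K_p = 2.5 gives a diverging response.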

We will calculate the range of values of K_p that ensures asymptotic stability of the control system. The characteristic polynomial is, cf. (8.30),

$$c(z) = d_L(z) + n_L(z) = z - 1 + K_p \quad (8.34)$$

The pole is

$$p = 1 - K_p \quad (8.35)$$

The feedback system is asymptotically stable if p is inside the unit circle, i.e. has magnitude less than one:

$$|p| = |1 - K_p| < 1 \quad (8.36)$$

which is satisfied with

$$0 < K_p < 2 \quad (8.37)$$

Assume as an example that K_p = 1.5. Figure 8.6 shows the step response in y_m for this value of K_p.

Figure 8.6: Example 10: Step response in y_m. There is a step in y_mSP.

[End of Example 10]

8.4.3 Nyquist's stability criterion for feedback systems

Nyquist's stability criterion is a graphical tool for stability analysis of feedback systems. The traditional stability analysis based on the frequency

response of L(z) in a Bode diagram is based on Nyquist's stability criterion. In the following, Nyquist's stability criterion for discrete-time systems will be described briefly. Fortunately, the principles and methods of the stability analysis are much the same as for continuous-time systems [2].

In Section 8.4.2 we found that the characteristic polynomial of a feedback system is

$$c(z) = d_L(z) + n_L(z) \quad (8.38)$$

The poles of the feedback system are the roots of the equation c(z) = 0. These roots are the same as the roots of the following equation:

$$\frac{d_L(z) + n_L(z)}{d_L(z)} = 1 + \frac{n_L(z)}{d_L(z)} = 1 + L(z) = 0 \quad (8.39)$$

which we denote the characteristic equation. In the discrete-time case, as in the continuous-time case, the stability analysis is about determining the number of unstable poles of the feedback system. Such poles lie outside the unit circle in the z-plane. (8.39) is the equation from which Nyquist's stability criterion will be derived. In the derivation we will use the Argument Variation Principle:

Argument Variation Principle: Given a function f(z) where z is a complex number. Then f(z) is a complex number, too. As with all complex numbers, f(z) has an angle or argument. If z follows a closed contour Γ (gamma) in the complex z-plane which encircles a number of poles and a number of zeros of f(z), see Figure 8.7, then the following applies:

$$\arg_\Gamma f(z) = 360^\circ \cdot (\text{number of zeros minus number of poles of } f(z) \text{ inside } \Gamma) \quad (8.40)$$

where arg_Γ f(z) means the change of the angle of f(z) when z has followed Γ once in the positive direction of circulation.

For the purpose of stability analysis of feedback systems, we let the function f(z) in the Argument Variation Principle be

$$f(z) = 1 + L(z) \quad (8.41)$$

The Γ contour must encircle the entire complex plane outside the unit circle in the z-plane, so that we are certain that all poles and zeros of 1 + L(z) are encircled. From the Argument Variation Principle we have (below UC is an abbreviation of unit circle):

Figure 8.7: In Nyquist's stability criterion for discrete-time systems, the Γ contour in the Argument Variation Principle must encircle the whole area outside the unit circle. The letters A and B identify parts of the Γ contour (cf. the text): A₁ and A₂ are the upper and lower halves of the unit circle, and B is a circle of infinite radius. P_OL is the number of poles of the open loop system outside the unit circle, and P_CL is the number of poles of the closed loop system outside the unit circle.

$$\arg_\Gamma[1 + L(z)] = \arg_\Gamma \frac{d_L(z) + n_L(z)}{d_L(z)}$$

$$= 360^\circ \cdot (\text{number of roots of } (d_L + n_L) \text{ outside UC minus number of roots of } d_L \text{ outside UC}) \quad (8.42)$$

$$= 360^\circ \cdot (\text{number of poles of the closed loop system outside UC minus number of poles of the open system outside UC}) \quad (8.43)$$

$$= 360^\circ \cdot (P_{CL} - P_{OL}) \quad (8.44)$$

By the open system we mean the (imaginary) system having transfer function L(z) = n_L(z)/d_L(z), i.e., the original feedback system with the feedback broken. The poles of the open system are the roots of d_L(z) = 0.

Finally, we can formulate Nyquist's stability criterion. But before we do that, let us recall what we are after, namely to be able to determine the number of poles P_CL of the closed loop system outside the unit circle. These

poles determine whether the closed loop system (the control system) is asymptotically stable or not. If P_CL = 0, the closed loop system is asymptotically stable.

Nyquist's Stability Criterion: Let P_OL be the number of poles of the open system outside the unit circle, and let arg_Γ[1 + L(z)] be the angular change of the vector [1 + L(z)] as z has followed the Γ contour once in the positive direction of circulation. Then, the number of poles P_CL of the closed loop system outside the unit circle is

$$P_{CL} = \frac{\arg_\Gamma[1 + L(z)]}{360^\circ} + P_{OL} \quad (8.45)$$

If P_CL = 0, the closed loop system is asymptotically stable.

Let us take a closer look at the terms on the right side of (8.45): P_OL is the number of roots of d_L(z) outside the unit circle, and there should not be any problem calculating that number. What about determining the angular change of the vector 1 + L(z)? Figure 8.8 shows how the vector (or complex number) 1 + L(z) appears in a Nyquist diagram for a typical plot of L(z). A Nyquist diagram is simply a Cartesian diagram of the complex plane in which L is plotted. 1 + L(z) is the vector from the point (−1, 0j), which is denoted the critical point, to the Nyquist curve of L(z).

More about the Nyquist curve of L(z)

Let us take a more detailed look at the Nyquist curve of L as z follows the Γ contour in the z-plane, see Figure 8.7. In practice, the denominator polynomial of L(z) has higher order than the numerator polynomial. This implies that L(z) is mapped to the origin of the Nyquist diagram when z = ∞, which corresponds to contour B in Figure 8.7. The unit circle, contour A₁ plus contour A₂ in Figure 8.7, constitutes (most of) the rest of the Γ contour. There, z has the value

$$z = e^{j\omega h} \quad (8.46)$$

Consequently, the loop transfer function becomes L(e^{jωh}) when z is on the unit circle. For z on A₁ in Figure 8.7, ω is positive, and hence L(e^{jωh}) is the frequency response of L.
A consequence of this is that we can determine the stability property of a feedback system by just looking at the frequency response of the loop transfer function, L(e^{jωh}). For z on A₂ in Figure 8.7, ω is negative, and the frequency response has a purely mathematical meaning. From general properties of complex functions,

$$\left|L(e^{-j\omega h})\right| = \left|L(e^{j\omega h})\right| \quad (8.47)$$

Figure 8.8: Typical Nyquist curve of L(z). The vector 1 + L(z) is drawn from the critical point (−1, 0j) to the Nyquist curve. Curve A₂ maps to the negative-frequency branch, curve B maps to the origin, and curve A₁ maps to the positive-frequency branch.

and

$$\arg L(e^{-j\omega h}) = -\arg L(e^{j\omega h}) \quad (8.48)$$

Therefore the Nyquist curve of L(z) for ω < 0 will be identical to the Nyquist curve for ω > 0, but mirrored about the real axis. Thus, we only need to know how L(e^{jωh}) is mapped for ω ≥ 0. The rest of the Nyquist curve then comes by itself! Actually we need not draw more of the Nyquist curve (for ω > 0) than what is sufficient for determining if the critical point is encircled or not.

If L(z) has poles on the unit circle, i.e. if d_L(z) has roots on the unit circle, we must let the Γ contour pass outside these poles, otherwise the function 1 + L(z) is not analytic on Γ. Assume the common case that L(z) contains a pure integrator (which may be the integrator of the PID controller). This implies that L(z) contains 1/(z − 1) as a factor. We let the Γ contour pass just outside the point z = 1 in such a way that the point is not encircled by Γ. It may be shown that this passing maps z onto an infinitely large semicircle encircling the right half plane in Figure 8.8.

8.4.4 Nyquist's special stability criterion

In most cases the open system is stable, that is, P_OL = 0. (8.45) then becomes

$$P_{CL} = \frac{\arg_\Gamma[1 + L(z)]}{360^\circ} \quad (8.49)$$

This implies that the feedback system is asymptotically stable if the Nyquist curve does not encircle the critical point. This is Nyquist's special stability criterion, or Nyquist's stability criterion for open-loop stable systems.

Nyquist's special stability criterion can also be formulated as follows: The feedback system is asymptotically stable if the Nyquist curve of L has the critical point on its left side for increasing ω.

Another way to formulate Nyquist's special stability criterion involves the following characteristic frequencies: the amplitude crossover frequency ω_c and the phase crossover frequency ω_180. ω_c is the frequency at which the L(e^{jωh}) curve crosses the unit circle, while ω_180 is the frequency at which the L(e^{jωh}) curve crosses the negative real axis. In other words:

$$\left|L(e^{j\omega_c h})\right| = 1 \quad (8.50)$$

and

$$\arg L(e^{j\omega_{180} h}) = -180^\circ \quad (8.51)$$

See Figure 8.9. Note: the Nyquist diagram contains no explicit frequency axis. We can now determine the stability properties from the relation between these two crossover frequencies:

- Asymptotically stable closed loop system: ω_c < ω_180
- Marginally stable closed loop system: ω_c = ω_180
- Unstable closed loop system: ω_c > ω_180

8.4.5 Stability margins: Gain margin and phase margin

An asymptotically stable feedback system may become marginally stable if the loop transfer function changes. The gain margin GM and the phase margin PM [radians or degrees] are stability margins which, in their own ways, express how large parameter changes can be tolerated before an asymptotically stable system becomes marginally stable. Figure 8.10 shows the stability margins defined in the Nyquist diagram. GM is the

Figure 8.9: Definition of the amplitude crossover frequency ω_c and the phase crossover frequency ω_180.

(multiplicative, not additive) increase of the gain that L can tolerate at ω_180 before the L curve (in the Nyquist diagram) passes through the critical point. Thus,

|L(e^{jω_180 h})| · GM = 1  (8.52)

which gives

GM = 1/|L(e^{jω_180 h})| = 1/|Re L(e^{jω_180 h})|  (8.53)

(The latter expression in (8.53) holds because at ω_180, Im L = 0, so that the amplitude equals the absolute value of the real part.)

If we use decibel as the unit (as in the Bode diagram, which we will soon encounter), then

GM [dB] = −|L(e^{jω_180 h})| [dB]  (8.54)

The phase margin PM is the phase reduction that the L curve can tolerate at ω_c before the L curve passes through the critical point. Thus,

arg L(e^{jω_c h}) − PM = −180°  (8.55)

which gives

PM = 180° + arg L(e^{jω_c h})  (8.56)

Figure 8.10: Gain margin GM and phase margin PM defined in the Nyquist diagram.

We can now state the following: The feedback (closed) system is asymptotically stable if

GM > 1 = 0 dB and PM > 0°  (8.57)

This criterion is often denoted the Bode-Nyquist stability criterion. Reasonable ranges of the stability margins are

2 (= 6 dB) ≤ GM ≤ 4 (= 12 dB)  (8.58)

and

30° ≤ PM ≤ 60°  (8.59)

The larger the values, the better the stability, but at the same time the system becomes dynamically more sluggish. If you are to use the stability margins as design criteria, you can use the following values (unless you have reasons for specifying other values):

GM ≥ … dB and PM ≥ 45°  (8.60)

For example, the controller gain K_p can be adjusted until one of the inequalities becomes an equality.³ It can be shown that for PM ≤ 70°, the damping of the feedback system approximately corresponds to that of a second order system with relative damping factor

ζ ≈ PM/100  (8.61)

³ But you should definitely check the behaviour of the control system by simulation, if possible.

For example, PM = 50° gives ζ ≈ 0.5.

Example 11: Stability analysis in the Nyquist diagram

Given the following continuous-time process transfer function:

H_p(s) = y_m(s)/u(s) = K / [(s/ω_0)² + 2ζ(s/ω_0) + 1] · e^{−τs}  (8.62)

with parameter values

K = 1; ζ = 1; ω_0 = 0.5 rad/s; τ = 1 s  (8.63)

The process is controlled by a discrete-time PI controller having the following z-transfer function, which can be derived by taking the z-transform of the PI control function:

H_c(z) = [K_p(1 + h/T_i)z − K_p] / (z − 1)  (8.64)

where the time-step (or sampling interval) is

h = 0.2 s  (8.65)

Tuning the controller with the Ziegler-Nichols closed-loop method [2] in a simulator gave the following controller parameter settings:

K_p = 2.0; T_i = 5.6 s  (8.66)

To perform the stability analysis of the discrete-time control system, H_p(s) is discretized assuming zero order hold (using MATLAB or LabVIEW). The result is a z-transfer function

H_pd(z) = n_pd(z)/d_pd(z)  (8.67)

whose numerical coefficients are produced by the discretization tool. The loop transfer function is

L(z) = H_c(z)H_pd(z)  (8.68)

Figure 8.11 shows the Nyquist plot of L(z). From the Nyquist diagram we read off

ω_180 = 0.835 rad/s  (8.69)

and

Re L(e^{jω_180 h}) = −0.559  (8.70)

Figure 8.11: Example 11: Nyquist diagram of L(z).

which gives the following gain margin, cf. (8.53),

GM = 1/|Re L(e^{jω_180 h})| = 1/0.559 = 1.79 = 5.1 dB  (8.71)

The phase margin can be found to be

PM = 35°  (8.72)

Figure 8.12 shows the step response in y_m (unity step in the setpoint y_msp).

[End of Example 11]

Stability margin: Maximum of sensitivity function

The sensitivity (transfer) function S is frequently used to analyze feedback control systems in various respects [2]. It is

S(z) = 1 / [1 + L(z)]  (8.73)

where L(z) is the loop transfer function. S has various interpretations: One is being the transfer function from the setpoint y_msp to the control error e in

Figure 8.12: Example 11: Step response in y_m (unity step in the setpoint y_msp).

the block diagram in Figure 8.4:

S(z) = e_m(z) / y_msp(z)  (8.74)

A Bode plot of S(z) can be used in frequency response analysis of the setpoint tracking property of the control system.

Another interpretation of S is as the ratio of the z-transform of the control error in closed loop control to the control error in open loop control, when this error is caused by an excitation in the disturbance d, cf. Figure 8.4. Thus,

S(z) = e_disturb(z)|closed loop system / e_disturb(z)|open loop system  (8.75)

(8.75) expresses the disturbance compensation property of the control system.

Back to the stability issue: A measure of a stability margin which is an alternative to the gain margin and the phase margin is the minimum distance from the L(e^{jωh}) curve to the critical point. This distance is |1 + L(e^{jωh})|, see Figure 8.13. So, we can use the minimal value of |1 + L(e^{jωh})| as a stability margin. However, it is more common to take the inverse of the distance: Thus, the stability margin is the maximum value of 1/|1 + L(e^{jωh})|. And since 1/[1 + L(z)] is the sensitivity function S(z), |S(e^{jωh})|_max represents a stability margin. Reasonable values of |S(e^{jωh})|_max are around 2.0 = 6 dB or lower  (8.76)

Figure 8.13: The minimum distance between the L(e^{jωh}) curve and the critical point can be interpreted as a stability margin. This distance is |1 + L|_min = 1/|S|_max.

If you use |S(e^{jωh})|_max as a criterion for adjusting controller parameters, you can use the following criterion (unless you have reasons for some other specification):

|S(e^{jωh})|_max = 2.0 = 6 dB  (8.77)

Example 12: |S|_max as stability margin

See Example 11. It can be shown that

|S|_max = 8.9 dB = 2.79 = 1/|1 + L|_min  (8.78)

[End of Example 12]

Frequency and period of the sustained oscillations

There are sustained oscillations in a marginally stable system. The frequency of these oscillations is ω_c = ω_180. This can be explained as follows: In a marginally stable system, L(e^{j(±ω_180)h}) = L(e^{j(±ω_c)h}) = −1. Therefore, d_L(e^{j(±ω_180)h}) + n_L(e^{j(±ω_180)h}) = 0, which is the characteristic equation of the closed loop system with e^{j(±ω_180)h} inserted for z. Therefore, the system has e^{j(±ω_180)h} among its poles. The system usually

has additional poles, but they lie inside the unit circle. The poles e^{j(±ω_180)h} lead to sustained sinusoidal oscillations. Thus, ω_180 (or ω_c) [rad/s] is the frequency of the sustained oscillations in a marginally stable system. The period of the oscillations becomes

T_p = 1/[ω_180/(2π)] = 2π/ω_180 [s]  (8.79)

This information can be used for tuning a PID controller with the Ziegler-Nichols method: Assume first that the control system is controlled by a P controller, that the ultimate gain of the controller is K_pu, and that the ultimate period is T_p as found from (8.79). Then we can calculate the parameter values of e.g. a PID controller (and of a PI and a P controller).

Stability analysis in a Bode diagram

It is most common to use a Bode diagram, not a Nyquist diagram, for frequency response based stability analysis of feedback systems. Nyquist's stability criterion says: The closed loop system is marginally stable if the Nyquist curve (of L) goes through the critical point, which is the point (−1, 0). But where is the critical point in the Bode diagram? The critical point has phase (angle) −180° and amplitude 1 = 0 dB. The critical point therefore constitutes two lines in a Bode diagram: the 0 dB line in the amplitude diagram and the −180° line in the phase diagram. Figure 8.14 shows typical L curves for an asymptotically stable closed loop system. In the figure, GM, PM, ω_c and ω_180 are indicated.

Example 13: Stability analysis in a Bode diagram

See Example 11. Figure 8.15 shows a Bode plot of L(e^{jωh}). The stability margins are shown in the figure. They are

GM = 5.12 dB = 1.80  (8.80)

PM = 35.3°  (8.81)

which is in accordance with Example 11.

[End of Example 13]
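The stability analysis of Examples 11-13 can be reproduced numerically. The sketch below (assuming SciPy; variable and function names are our own) rebuilds H_pd(z) by ZOH-discretizing the delay-free part 1/(4s² + 4s + 1) of the process and representing the delay e^{−τs}, τ = 1 s, as exactly τ/h = 5 samples. The crossover frequencies and margins found this way agree with the readings in the examples to within a few percent.

```python
import numpy as np
from scipy import signal

h = 0.2                              # sampling time, cf. (8.65)
Kp, Ti = 2.0, 5.6                    # PI settings, cf. (8.66)

# ZOH discretization of 1/((s/0.5)^2 + 2(s/0.5) + 1) = 1/(4s^2 + 4s + 1)
numd, dend, _ = signal.cont2discrete(([1.0], [4.0, 4.0, 1.0]), h, method='zoh')
b = np.squeeze(numd)                          # numerator of H_pd(z)
a = np.r_[np.squeeze(dend), np.zeros(5)]      # denominator incl. z^(-5) delay

def L_of(w):
    """Loop frequency response L(e^{jwh}) = H_c * H_pd, cf. (8.64), (8.68)."""
    z = np.exp(1j * np.asarray(w) * h)
    Hc = (Kp * (1.0 + h / Ti) * z - Kp) / (z - 1.0)
    return Hc * np.polyval(b, z) / np.polyval(a, z)

w = np.linspace(0.01, np.pi / h, 200_000)     # up to the Nyquist frequency
Lw = L_of(w)
phase = np.unwrap(np.angle(Lw))
mag = np.abs(Lw)

def first_crossing(y, target):
    """Frequency where the decreasing curve y first falls below target."""
    i = int(np.argmax(y < target))
    return w[i - 1] + (target - y[i - 1]) * (w[i] - w[i - 1]) / (y[i] - y[i - 1])

w180 = first_crossing(phase, -np.pi)          # phase crossover, (8.51)
wc = first_crossing(mag, 1.0)                 # amplitude crossover, (8.50)
GM = 1.0 / abs(L_of(w180))                    # gain margin, (8.53)
PM = 180.0 + np.degrees(np.angle(L_of(wc)))   # phase margin, (8.56)
S_max = np.max(1.0 / np.abs(1.0 + Lw))        # sensitivity peak, Example 12
# Results land near the values read off in the examples:
# GM around 1.8, PM around 35 degrees, |S|_max around 2.8.
```

Since ω_c < ω_180 here, the criterion of the previous section also confirms that the closed loop system is asymptotically stable.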

Figure 8.14: Typical L curves of an asymptotically stable closed loop system with GM, PM, ω_c and ω_180 indicated.

Figure 8.15: Example 13: Bode plot of L.

Part II Stochastic variables and systems

Chapter 9 Stochastic variables and systems

9.1 Introduction

In practical systems there are signals that vary more or less randomly. For example, measurement signals contain random noise, and process disturbances have some random component. Consequently, control signals and controlled variables, i.e. process output variables, have some random behaviour. The future value of a random signal cannot be predicted precisely, i.e. such signals are non-deterministic, while steps, ramps and sinusoids are deterministic. Instead, random signals can be described with statistical measures, typically the expectation value and the standard deviation or variance (the standard deviation is the square root of the variance). Random signals may be denoted stochastic signals.

This chapter shows how random or stochastic signals can be represented mathematically, and how such signals propagate through static systems and dynamic systems. These representations will be used in Chapter 11 and in Chapter 10 about parameter estimation.

9.2 Stochastic signals

Realizations of stochastic processes

A stochastic process may be characterized by its mean and its standard deviation or variance. The stochastic process can be observed via one or more realizations of the process in the form of a sequence or time series of samples, say {x(0), x(1), x(2), ...}. Another realization of the same stochastic process will certainly show different sample values, but the mean value and the variance will be almost the same (the longer the realization sequence is, the more equal the mean values and the variances will be). Figure 9.1 shows as an example two different realizations (sequences) of the same stochastic process, which in this case is Gaussian (normally) distributed with expectation (mean) value 0 and standard deviation 1. We see that the sequences are not equal.

Probability distribution of a stochastic variable

As known from statistics, a stochastic variable can be described by its probability distribution function, PDF. Figure 9.2 shows two commonly used PDFs, the normal (Gaussian) PDF and the uniform PDF. With the normal PDF, the probability that the variable has a value in the range [−σ, +σ], where σ is the standard deviation, is approximately 68% (the standard deviation is defined below). With the uniform PDF, the probability density is uniform (constant) over the range [−A, +A], and the variable cannot take any value outside this range.

A stochastic process is stationary if the PDF is time independent (constant), or in other words, if the statistical properties are time independent.

The expectation value and the mean value

The expectation value, E(x), of the stochastic variable x is the mean (or average) value of x calculated from an infinite number of samples of x. For a limited number N of samples, the mean value can be calculated from

m_x = (1/N) Σ_{k=0}^{N−1} x(k)  (9.1)
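Equation (9.1) and the claim about realizations can be illustrated with a short sketch (assuming NumPy; the process is the same zero-mean, unit-variance Gaussian one as in Figure 9.1):

```python
import numpy as np

# Two realizations of the same Gaussian process (expectation 0, standard
# deviation 1): different sample values, almost equal mean values (9.1).
rng = np.random.default_rng(0)
N = 100_000
x1 = rng.normal(0.0, 1.0, N)       # first realization
x2 = rng.normal(0.0, 1.0, N)       # second realization of the same process

m1 = x1.sum() / N                  # mean value, eq. (9.1)
m2 = x2.sum() / N
assert not np.array_equal(x1, x2)  # the sample values differ...
assert abs(m1 - m2) < 0.02         # ...but the means are almost the same
```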

Figure 9.1: Two different realizations of the same stochastic process, which in this case is Gaussian (normally) distributed with expectation value 0 and standard deviation 1. (Created with the Gaussian White Noise function in LabVIEW.)

Often these two terms (expectation value and mean value) are used interchangeably. If x is a vector, say

x(k) = [x_1(k); x_2(k)]  (9.2)

then the mean value of x has the form

m_x = [m_x1; m_x2] = [(1/N) Σ_{k=0}^{N−1} x_1(k); (1/N) Σ_{k=0}^{N−1} x_2(k)]  (9.3)

Figure 9.2: The normal (Gaussian) PDF and the uniform PDF.

Variance. Standard deviation

The variance of a stochastic variable is the mean or expected value of the squared difference between the value and its mean value:

Var(x) = E{[x(k) − m_x]²}  (9.4)

The variance can be calculated from a sequence of samples as follows:

Var(x) = 1/(N − 1) Σ_{k=0}^{N−1} [x(k) − m_x]²  (9.5)

(Statistically it is better to divide by N − 1 and not by N, since the estimate of the statistical variance becomes unbiased using N − 1.) The variance is sometimes denoted the power of the signal.

The standard deviation may give a more meaningful value of the variation of a signal. The standard deviation is the square root of the variance:

σ = √Var(x)  (9.6)

In many situations σ² is used as a symbol for the variance.
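A minimal sketch of (9.5)-(9.6), assuming NumPy:

```python
import numpy as np

# Sample variance (9.5) with the unbiased 1/(N-1) scaling, and the standard
# deviation (9.6) as its square root.
rng = np.random.default_rng(1)
x = rng.normal(5.0, 2.0, 50_000)              # true mean 5, true std 2

m_x = np.mean(x)                              # mean value (9.1)
var_x = np.sum((x - m_x)**2) / (len(x) - 1)   # variance (9.5)
sigma = np.sqrt(var_x)                        # standard deviation (9.6)

assert np.isclose(var_x, np.var(x, ddof=1))   # same as numpy's ddof=1 estimate
assert abs(sigma - 2.0) < 0.05                # close to the true value
```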

Auto-covariance. Cross-covariance

Sometimes it is useful to express how a stochastic variable, say x(k), co-varies along the time-index axis (k). This type of covariance can be expressed by the auto-covariance:

R_x(L) = E{[x(k + L) − m_x][x(k) − m_x]}  (9.7)

where L is the lag. Note that the argument of the auto-covariance function is the lag L. Figure 9.3 shows R_x(L) for a signal x where the covariance decreases as the lag increases (this is typical). As indicated in Figure 9.3, the auto-covariance usually has its peak value at lag L = 0.

Figure 9.3: The auto-covariance for a signal x where the covariance decreases as the lag increases (this is typical).

If L = 0, the auto-covariance becomes the variance:

R_x(0) = E{[x(k + 0) − m_x][x(k) − m_x]}  (9.8)
       = E{[x(k) − m_x]²} = Var(x) = σ²  (9.9)

In some applications x is a vector, say

x(k) = [x_1(k); x_2(k)]  (9.10)

What does the auto-covariance look like in this case? For simplicity, assume that each of the variables above has zero mean. The

auto-covariance then becomes

R_x(L) = E{x(k + L)x(k)^T}  (9.11)
       = E{[x_1(k + L); x_2(k + L)] [x_1(k)  x_2(k)]}  (9.12)
       = [E[x_1(k+L)x_1(k)]  E[x_1(k+L)x_2(k)]; E[x_2(k+L)x_1(k)]  E[x_2(k+L)x_2(k)]]  (9.13)

If L = 0, the auto-covariance becomes

R_x(0) = [E{[x_1(k)]²}  E[x_1(k)x_2(k)]; E[x_2(k)x_1(k)]  E{[x_2(k)]²}]  (9.14)

where the diagonal elements equal Var(x_1) and Var(x_2). Hence, the variances are on the diagonal.

The cross-covariance between two different scalar signals, say x and y, is

R_xy(L) = E{[x(k + L) − m_x][y(k) − m_y]}  (9.15)

The cross-covariance can be estimated from sequences of sample values of x and y of length N with

R_xy(L) = S Σ_{k=0}^{N−1−L} [x(k + L) − m_x][y(k) − m_y],  L = 0, 1, 2, ...  (9.16)

and, for negative lags,

R_xy(L) = R_yx(−L),  L = −1, −2, ...  (9.17)

where S is a scaling factor which is defined below.

Raw estimate:

S = 1  (9.18)

Normalized estimate:

S = 1/R_xy(0), with R_xy(0) taken from the raw estimate  (9.19)

This gives R_xy(0) = 1.

Unbiased estimate:

S = 1/(N − L)  (9.20)

This calculates R_xy(L) as an average value of the product [x(k + L) − m_x][y(k) − m_y]. However, R_xy(L) may be very noisy if L is large, since the summation is then calculated with only a few additive terms (it is assumed that there are noise or random components in x and/or y).

Biased estimate:

S = 1/N  (9.21)

With this option R_xy(L) is not an average value of the product [x(k + L) − m_x][y(k) − m_y], since the sum of terms is divided by N no matter how many additive terms there are in the summation. Although this makes R_xy(L) biased, it reduces the noise in R_xy(L) because the noisy terms are weighted by 1/N instead of 1/(N − L). Unless you have reasons for some other selection, you may use the biased estimate as the default option.

Correlation (auto/cross) is the same as covariance (auto/cross) except that the mean value, e.g. m_x, is removed from the formulas. Hence, the cross-correlation is, cf. (9.15),

r_xy(L) = E{x(k + L)y(k)}  (9.22)

9.3 White and coloured noise

White noise

An important type of stochastic signal is the so-called white noise signal (or process). "White" is used because, in some sense, white noise contains equally much of all frequency components, analogously to white light, which contains all colours. White noise has zero mean value:

m_x = 0  (9.23)

There is no covariance or relation between sample values at different time indexes, and hence the auto-covariance is zero for all lags L except L = 0. Thus, the auto-covariance is the pulse function shown in Figure 9.4.

Figure 9.4: The auto-covariance function of white noise is a pulse function with value Var(x) = σ² = V at L = 0.

Mathematically, the auto-covariance function of white noise is

R_x(L) = Var(x)δ(L) = σ²δ(L) = Vδ(L)  (9.24)

Here, the short-hand symbol V has been introduced for the variance. δ(L) is the unit pulse, defined as follows:

δ(L) = 1 when L = 0; δ(L) = 0 when L ≠ 0  (9.25)

White noise is an important signal in estimation theory because the random noise which is always present in measurements can be represented by white noise. For example, the variance of the assumed white measurement noise is used as an input parameter in the Kalman Filter design, cf. Chapter 11.

If you calculate the auto-covariance of a white noise sequence of finite length, the auto-covariance function will not be exactly the ideal function shown in Figure 9.4, but the main characteristic, a relatively large value at lag L = 0, is there.

Example 14: White noise

Figure 9.5 shows a simulated white noise signal x and its auto-covariance R_x(L) (normalized) calculated from the most recent N = 50 samples of x.¹ The white noise characteristic of the signal is clearly indicated by R_x(L).

[End of Example 14]

¹ Implemented in LabVIEW.
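The pulse-shaped auto-covariance (9.24) can be checked with a small estimator implementing (9.16) for x = y with the biased scaling S = 1/N of (9.21) (a sketch assuming NumPy; the function name is ours):

```python
import numpy as np

def auto_cov(x, L):
    """Biased estimate of R_x(L): eq. (9.16) with y = x and S = 1/N (9.21)."""
    N = len(x)
    m = np.mean(x)
    return np.sum((x[L:] - m) * (x[:N - L] - m)) / N

# White noise with variance V: the estimated auto-covariance approximates
# the pulse R_x(L) = V*delta(L) of (9.24).
rng = np.random.default_rng(2)
V = 4.0
x = rng.normal(0.0, np.sqrt(V), 20_000)

assert abs(auto_cov(x, 0) - V) < 0.2   # R_x(0) is close to Var(x) = V
assert abs(auto_cov(x, 5)) < 0.2       # R_x(L) is close to 0 for L != 0
```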

Figure 9.5: Example 14: Simulated white noise signal x and its auto-covariance R_x(L) (normalized) calculated from the most recent N = 50 samples of x.

Coloured noise

In contrast to white noise, coloured noise does not vary completely randomly. In other words, there is a covariance between the sample values at different time indexes. As a consequence, the auto-covariance R_x(L) is non-zero for lags L ≠ 0. R_x(L) has its maximum value at L = 0, and R_x(L) decreases as L increases.

You may generate coloured noise from white noise by sending the white noise through a dynamic system, typically a lowpass filter. Such a system is denoted a shaping filter. The output signal of the shaping filter will be coloured noise. You can tune the colour of the coloured noise by adjusting the parameters of the shaping filter.

Example 15: Coloured noise

Figure 9.6 shows a simulated coloured noise signal x and its

auto-covariance R_x(L) (normalized) calculated from the most recent N = 50 samples of x.² The coloured noise is the output of this shaping filter:

x(k + 1) = ax(k) + (1 − a)v(k)  (9.26)

which is a discrete-time first order lowpass filter. The filter input v(k) is white noise. The filter parameter is a = 0.98. (If the filter parameter is 0, the filter performs no filtering, and the output is just white noise.)

Figure 9.6: Example 15: Simulated coloured noise signal x and its auto-covariance R_x(L) (normalized) calculated from the most recent N = 50 samples of x.

The coloured noise characteristic of the signal is shown both in the plot of the

² Implemented in LabVIEW.

signal x(k) in the upper diagram of Figure 9.6 and in the auto-covariance R_x(L) shown in the lower diagram of Figure 9.6.

[End of Example 15]

9.4 Propagation of mean value and co-variance through systems

Introduction

A stochastic system is a system, static (non-dynamic) or dynamic, which is excited by a stochastic signal. It is useful in many situations to be able to calculate how stochastic signals are transformed through a system. More precisely, it may be useful to calculate how the mean value and the co-variance propagate through the system.

Propagation of mean value and co-variance through static systems

Given the following static linear system:

y(k) = Gv(k) + C  (9.27)

where v is a stationary stochastic input signal with mean value m_v and co-variance R_v(L). y is the output of the system, G is the gain of the system, and C is a constant. In a multivariable system G is a matrix and C is a vector, but in the following we will assume that G and C are scalars, which is the most usual case. Let us calculate the mean value and the auto-covariance of the output y. The mean value becomes

m_y = E[y(k)] = E[Gv(k) + C] = GE[v(k)] + C  (9.28)
    = Gm_v + C  (9.29)

The auto-covariance of the output becomes

R_y(L) = E{[y(k + L) − m_y][y(k) − m_y]}  (9.30)
       = E{([Gv(k + L) + C] − [Gm_v + C])([Gv(k) + C] − [Gm_v + C])}  (9.31)
       = E{(Gv(k + L) − Gm_v)(Gv(k) − Gm_v)}  (9.32)
       = E{(G[v(k + L) − m_v])(G[v(k) − m_v])}  (9.33)
       = G² E{[v(k + L) − m_v][v(k) − m_v]}  (9.34)
       = G² R_v(L)  (9.35)

If the system (9.27) is multivariable, that is, if v and y are vectors, G is a matrix and C is a vector. In this case we get

R_y(L) = GR_v(L)G^T  (9.36)

Let us sum it up for a scalar system (9.27). Mean of the output of a stochastic static system:

m_y = Gm_v + C  (9.37)

Co-variance of the output of a stochastic static system:

R_y(L) = G²R_v(L)  (9.38)

The variance, which is equal to R_y(0), becomes

σ²_y = G²σ²_v  (9.39)

And the standard deviation becomes

σ_y = |G|σ_v  (9.40)

Example 16: Mathematical operations to achieve specified output mean and variance

Assume that you have a signal generator available in a computer tool that can generate a white noise signal v having mean m_v = 0 and variance σ²_v = 1, and that you want to generate a signal y of mean m_y = M and

variance σ²_y = V. Find proper mathematical operations on v that create this y.

The gain G can be calculated from (9.39):

G = √(σ²_y/σ²_v) = √(V/1) = √V = σ_y  (9.41)

The constant C can be calculated from (9.37):

C = m_y − Gm_v = M − G·0 = M  (9.42)

So, the mathematical operation is

y(k) = Gv(k) + C = √V v(k) + M = σ_y v(k) + M  (9.43)

In words: Multiply the input by the specified standard deviation and add the specified mean value to this result.

[End of Example 16]

Propagation of mean value and co-variance through state space systems

How do stochastic signals propagate through a dynamic system? We will here assume that the system is a linear state space system:

x(k + 1) = Ax(k) + Gv(k)  (9.44)

where v is supposed to be a white random signal (it can be a random process disturbance) with mean value m_v and auto-covariance R_v(L) = Qδ(L), where Q is the variance. x is the state variable or state vector. Assume that the output of the system is given by

y(k) = Cx(k) + w(k)  (9.45)

where w is white random noise (it can be measurement noise) which is supposed to be uncorrelated with v and x. w has mean value m_w and auto-covariance R_w(L) = Rδ(L), where R is the variance.

In most situations we are mainly interested in the stationary (constant) values of the mean values and the co-variances. Therefore, we will assume stationary conditions. This also simplifies the derivation of the formulas. Stationary conditions exist only for asymptotically stable systems, so we actually assume that the model (9.44) is asymptotically stable.
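Before turning to the state space case, the static propagation rules (9.37) and (9.39), together with the scaling operation from Example 16, can be checked empirically (a sketch assuming NumPy):

```python
import numpy as np

# The static system y = G*v + C of (9.27), with G and C chosen as in
# Example 16 to turn unit white noise into mean M = 10, variance V = 9.
rng = np.random.default_rng(3)
v = rng.normal(0.0, 1.0, 200_000)   # m_v = 0, sigma_v^2 = 1

M, V = 10.0, 9.0
G, C = np.sqrt(V), M                # from (9.41) and (9.42)
y = G * v + C                       # the operation (9.43)

assert abs(np.mean(y) - (G * 0.0 + C)) < 0.05  # m_y = G*m_v + C, cf. (9.37)
assert abs(np.var(y) - G**2 * 1.0) < 0.2       # sigma_y^2 = G^2*sigma_v^2, (9.39)
```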

Mean values

As said above, we assume that the mean values are constant. From (9.44) we get

m_x = E{x(k + 1)}  (9.46)
    = E{Ax(k) + Gv(k)}  (9.47)
    = AE{x(k)} + GE{v(k)}  (9.48)
    = Am_x + Gm_v  (9.49)

Solving for m_x gives the mean value of the state variable:

m_x = (I − A)⁻¹Gm_v  (9.50)

where I is the identity matrix. From (9.45) we now get the following expression for the mean value of the output variable:

m_y = Cm_x + m_w = C(I − A)⁻¹Gm_v + m_w  (9.51)

Covariances

The auto-covariance of x is given by

P_x = E{[x(k + 1) − m_x][x(k + 1) − m_x]^T}  (9.52)
    = E{[Ax(k) + Gv(k) − Am_x − Gm_v][Ax(k) + Gv(k) − Am_x − Gm_v]^T}  (9.53)
    = E{[A(x(k) − m_x)][(x(k) − m_x)^T A^T]}  (9.54)
    + E{[A(x(k) − m_x)][(v(k) − m_v)^T G^T]}  (9.55)
    + E{[G(v(k) − m_v)][(x(k) − m_x)^T A^T]}  (9.56)
    + E{G[v(k) − m_v][v(k) − m_v]^T G^T}  (9.57)
    = AP_x A^T + GQG^T  (9.58)

The cross terms (9.55) and (9.56) are zero because x(k) − m_x and v(k) − m_v are assumed uncorrelated. To sum it up, the stationary auto-covariance of the state variable satisfies

P_x = AP_x A^T + GQG^T  (9.59)

(9.59) is a discrete Lyapunov equation. (Lyapunov equations can be solved with the dlyap function in MATLAB or LabVIEW/MathScript.) From (9.36) we get the following formula for the stationary auto-covariance of the output variable:

P_y = CP_x C^T + R  (9.60)

where P_x is given by (9.59).

Example 17: Standard deviation propagation through a lowpass filter

Here is the difference equation of a discrete-time lowpass filter:

x(k + 1) = ax(k) + (1 − a)v(k)  (9.61)
         = Ax(k) + Gv(k)  (9.62)

where a is a filter parameter, which must have a value between 0 and 1 (to give an asymptotically stable filter). Assume that the input v has standard deviation σ_v. Let us denote the standard deviation of the filter output x under stationary conditions by σ_x. (9.61) is on the form of (9.44). Therefore the stationary auto-covariance of x is given by (9.59):

P_x = AP_x A^T + GQG^T  (9.63)
    = a²P_x + (1 − a)²σ²_v  (9.64)

Solving for P_x gives

P_x = σ²_x = (1 − a)²/(1 − a²) σ²_v = (1 − a)²/[(1 − a)(1 + a)] σ²_v  (9.65)
    = (1 − a)/(1 + a) σ²_v  (9.66)

Let us define the ratio of the output standard deviation to the input standard deviation as

K_σ := σ_x/σ_v  (9.67)

From (9.66) we get

K_σ = σ_x/σ_v = √[(1 − a)/(1 + a)]  (9.68)

Solving for a gives

a = (1 − K²_σ)/(1 + K²_σ)  (9.69)

which can be used as a design formula for the lowpass filter: You specify how much the standard deviation is to be reduced through the filter, and (9.69) then gives the value of the filter parameter a.

Figure 9.7 shows the front panel of a LabVIEW program implementing the lowpass filtering, with the filter designed for random noise attenuation as described above.

Figure 9.7: Example 17: Simulation of a lowpass filter designed for random noise attenuation.

The actual standard deviations of the input and output signals were computed with the Standard Deviation PtByPt.vi in LabVIEW. As seen in the figure, the actual ratio of the standard deviations is not much different from the specified ratio.

[End of Example 17]
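Example 17 can be verified in a few lines (a sketch assuming NumPy; function names are ours): the scalar Lyapunov equation (9.63) is solved in closed form, the design formula (9.69) is applied, and the result is checked by simulating the filter on white noise.

```python
import numpy as np

def filter_variance(a, var_v):
    """Stationary variance of x(k+1) = a x(k) + (1-a) v(k): the closed-form
    solution (9.66) of the scalar Lyapunov equation (9.63)."""
    return (1.0 - a) / (1.0 + a) * var_v

def design_a(K_sigma):
    """Filter parameter from the specified ratio sigma_x/sigma_v, (9.69)."""
    return (1.0 - K_sigma**2) / (1.0 + K_sigma**2)

a = design_a(0.1)                       # ask for a 10x std reduction
assert np.isclose(filter_variance(a, 1.0), 0.1**2)

# Empirical check: simulate the filter driven by unit-variance white noise.
rng = np.random.default_rng(4)
v = rng.normal(0.0, 1.0, 200_000)
x = np.zeros(len(v))
for k in range(len(v) - 1):
    x[k + 1] = a * x[k] + (1.0 - a) * v[k]
assert abs(np.std(x) - 0.1) < 0.01      # close to the specified sigma_x
```

Note that design_a(1.0) = 0, matching the remark in Example 15 that a filter parameter of 0 performs no filtering.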

Part III Estimation of model parameters and states

Chapter 10 Estimation of model parameters

10.1 Introduction

Mathematical models are the basis of simulators and of theoretical analysis of dynamic systems, such as control systems. The model can be in the form of differential equations developed from physical principles, or of transfer function models, which can be regarded as "black-box" models expressing the input-output property of the system. Some of the parameters of the model can have unknown or uncertain values, for example a heat transfer coefficient in a thermal process or the time constant in a transfer function model. We can try to estimate such parameters from measurements taken during experiments on the system, see Figure 10.1.

Many methods for parameter estimation are available [6]. In this chapter the popular least squares method, or the LS method, is described. You will see how the LS method can be used for parameter estimation of both static models and dynamic models of a given structure.

In some cases, for example in analysis and design of feedback control systems and in signal modeling, you may want a black-box model which is a dynamic model in the form of a state-space model or a transfer function with non-physical parameters. Developing such black-box models is briefly described in the final section of this chapter.

Figure 10.1: Estimation of parameters of a mathematical model from time series of the input variable and the output variable.

10.2 Parameter estimation of static models with the LS method

In the following, the least squares method (the LS method) for calculating a solution of a set of linear equations is briefly described, using terminology suitable for parameter estimation. After the LS method is presented, a number of examples of parameter estimation follow.

The standard regression model

Assume the following model:

y = φ_1θ_1 + ... + φ_nθ_n  (10.1)
  = [φ_1 φ_2 ... φ_n] [θ_1; ...; θ_n]  (10.2)
  = φθ  (10.3)

which is called the regression model. φ_i is a regression variable (of known value). φ is the regression vector (of known value). y is the

observed variable (of known value). θ is an unknown parameter vector, and we will use the LS method to estimate a value of θ. Note that the regression model is linear in the parameter vector θ.

Assume that we have m corresponding values of y and φ. Then we can write the following m equations according to the model:

y_1 = φ_11θ_1 + ... + φ_1nθ_n = φ_1θ  (10.4)
...
y_m = φ_m1θ_1 + ... + φ_mnθ_n = φ_mθ  (10.5)

These m stacked equations can be written more compactly as

[y_1; ...; y_m] = [φ_11 ... φ_1n; ...; φ_m1 ... φ_mn] [θ_1; ...; θ_n]  (10.6)
               = [φ_1; ...; φ_m] θ  (10.7)

or, even more compactly, as

y = Φθ  (10.8)

which is a set of equations from which we will calculate or estimate a value of the unknown θ using the LS method.¹

¹ In mathematical literature (10.8) is more often written on the form b = Ax. I have used symbols which are common in the field of system identification.

The LS problem

We define the equation-error vector or prediction-error vector² as the difference between the left side and the right side of (10.8):

e = [e_1; ...; e_m]  (10.9)
  = [y_1 − φ_1θ; ...; y_m − φ_mθ]  (10.10)
  = y − Φθ  (10.11)

Figure 10.2 illustrates the equation errors or prediction errors for the case of the model

y = φθ  (10.12)

to be fitted to two data points.

Figure 10.2: Equation errors or prediction errors e_1 and e_2 for a simple case.

The problem is to calculate a value, an estimate, of the unknown parameter vector θ so that the following quadratic criterion function, V(θ),

² The name prediction-error vector is used because the term Φθ can be regarded as a prediction of the observed (known) output y.

is minimized:

V(θ) = e_1² + e_2² + ... + e_m²  (10.13)
     = e^T e  (10.14)
     = (y − Φθ)^T (y − Φθ)  (10.15)
     = (y^T − θ^TΦ^T)(y − Φθ)  (10.16)
     = y^T y − y^TΦθ − θ^TΦ^T y + θ^TΦ^TΦθ  (10.17)

In other words, the problem is to estimate θ so that the sum of squared prediction errors is minimized.

The LS solution

Since V(θ) is a quadratic function of the unknown parameters θ, the minimum value of V(θ) can be calculated by setting the derivative of V with respect to θ equal to zero. This is illustrated in Figure 10.3. Using the differentiation rules for expressions containing vectors and then setting the derivative equal to zero yields

dV(θ)/dθ = 2Φ^TΦθ − 2Φ^T y = 0 (vector)  (10.18)

or

Φ^TΦθ = Φ^T y  (10.19)

(10.19) is called the normal equation. The θ-solution of (10.19) can be found by pre-multiplying (10.19) by (Φ^TΦ)⁻¹. The result is

θ_LS = (Φ^TΦ)⁻¹Φ^T y  (10.20)

which is the LS solution of (10.8). (The combined matrix (Φ^TΦ)⁻¹Φ^T is called the pseudo-inverse of Φ. All terms on the right side of (10.20) are known.)

Note: To apply the LS method, the model must be written on the regression model form (10.8), which consists of m stacked models of the form (10.3).

Example 18: LS estimation of two parameters

Given the model

z = xc + d  (10.21)

Figure 10.3: The LS solution θ_LS corresponds to the minimum value of the quadratic function V(θ), and can be calculated by setting the derivative dV(θ)/dθ to zero.

where the parameters c and d will be estimated from three given sets of corresponding values of z and x, as shown here:

0.8 = 1c + d  (10.22)
3.0 = 2c + d  (10.23)
4.0 = 3c + d  (10.24)

which can be written on the form

[0.8; 3.0; 4.0] = [1 1; 2 1; 3 1] [c; d]  (10.25)

with y = [0.8; 3.0; 4.0], Φ = [1 1; 2 1; 3 1] and θ = [c; d]  (10.26)

which is on the required regression model form (10.8). We can now calculate the LS estimate of θ using (10.20):

θ_LS = (Φ^TΦ)⁻¹Φ^T y  (10.27)
     = [14 6; 6 3]⁻¹ [18.8; 7.8]  (10.28)
     = [c_LS; d_LS] = [1.6; −0.6]  (10.29)

[End of Example 18]

Example 19: LS estimation of a valve parameter

Figure 10.4 shows a valve. We assume that the flow q is given by the model

q = K_v u √p  (10.30)

where u is the control signal and p is the pressure drop.

Figure 10.4: Valve for which the valve parameter K_v shall be estimated.

The valve parameter K_v will be estimated from m corresponding values of q, u and p.

We start by writing the model on the regression model form y = φθ:

q = [u√p][K_v]  (10.31)

The stacked regression model (10.8) becomes

[q_1; ...; q_m] = [u_1√p_1; ...; u_m√p_m][K_v]  (10.32)

The LS estimate K_v,LS can now be calculated from (10.20):

θ_LS = K_v,LS = (Φ^TΦ)⁻¹Φ^T y  (10.33)
     = ([u_1√p_1 ... u_m√p_m][u_1√p_1; ...; u_m√p_m])⁻¹ [u_1√p_1 ... u_m√p_m][q_1; ...; q_m]  (10.34)
     = (q_1u_1√p_1 + ... + q_mu_m√p_m) / [(u_1√p_1)² + ... + (u_m√p_m)²]  (10.36)

[End of Example 19]

Criterion for convergence of the estimate towards the true value

The prediction-error vector is

e = y − Φθ_LS  (10.37)

It can be shown [6] that the LS estimate θ_LS converges towards the true value θ_0 of the parameter vector as the number m of sets of observations goes to infinity, only if e is white noise. White noise means that the elements of e are random numbers with zero mean value. The opposite of white noise is coloured noise. A constant having a value different from zero is an (extreme) example of coloured noise. e becomes coloured if there are systematic equation errors. Such systematic equation errors can be reduced or eliminated by choosing a more accurate model structure, for example by assuming the model y = ax + b instead of y = ax if the observed values of y contain a constant term b.
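Examples 18 and 19 can be reproduced with a few lines (a sketch assuming NumPy; the Example 19 data values are made up for illustration, since the text leaves them unspecified):

```python
import numpy as np

# Example 18: theta_LS via the normal equation (10.19) / solution (10.20).
Phi = np.array([[1.0, 1.0],
                [2.0, 1.0],
                [3.0, 1.0]])
y = np.array([0.8, 3.0, 4.0])
theta_LS = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
assert np.allclose(theta_LS, [1.6, -0.6])        # c_LS = 1.6, d_LS = -0.6

# Example 19: scalar LS estimate (10.36) of the valve parameter K_v. The
# data values are invented; q is generated with a hypothetical true K_v = 2.0.
u = np.array([0.5, 0.7, 0.9])
p = np.array([4.0, 4.0, 2.25])
phi = u * np.sqrt(p)                             # regression vector of (10.31)
q = 2.0 * phi
K_v_LS = (phi @ q) / (phi @ phi)                 # cf. (10.36)
assert np.isclose(K_v_LS, 2.0)
```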

Model validation: How to compare and select among several estimates?

If you are to compare and select the best model among a number of estimated models, you may use some mathematical criterion as the basis of your selection. The criterion function V given by (10.13) can be used. The procedure is to calculate the value of V with the estimated parameter vector θ_LS inserted for θ. In general, the smaller V, the better the estimate.

V given by (10.13) depends on the number m of observations or equations in the regression model. To compare values of V from different estimates, a normalized criterion should be used, for example

    V_norm = (1/m) V(θ_LS) = (1/m) e^T e    (10.38)

where e is calculated using (10.37).

You can expect an estimate to give a smaller value of the criterion V if the number of parameters n is increased. This is because increasing n introduces more degrees of freedom in the model. If we increased n to be equal to m (giving equal numbers of unknown parameters and equations), we could achieve perfect modeling and hence V = 0! The drawback of choosing a large n is that the random noise in the experimental data will be modeled, causing the model to lose its generality, since the noise in one experiment is not equal to the noise in another experiment. A good strategy is to use approximately one half of the logged data (time-series) for model development (estimation) and the rest of the data for validation (calculating V). In this way you can avoid ending up with a model which includes a noise model.

If you must choose among models with different numbers of parameters, you can apply the Parsimony Principle: Among a number of proper models, choose the simplest one, i.e., the one having the smallest number of parameters, n. Proper models are those models which have normalized criterion functions V of roughly equal value. The parsimony principle expresses that a high number of parameters, n, should be punished. This can be stated mathematically as follows.
The FPE index (Final Prediction Error) is defined as [6]

    FPE = [(1 + n/m) / (1 − n/m)] V    (10.39)

The fraction in (10.39) increases as n increases, and consequently, large values of n are punished. So, the proper model is chosen as the model having the smallest value of FPE.
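As a sketch, the normalized criterion (10.38) and the FPE index (10.39) could be computed like this in Python (the function names are my own; V is taken as e^T e, consistent with (10.38)):

```python
import numpy as np

def v_norm(e):
    """Normalized criterion (10.38): V_norm = (1/m) * e^T e."""
    e = np.asarray(e, dtype=float)
    return float(e @ e) / len(e)

def fpe(e, n):
    """FPE index (10.39): (1 + n/m)/(1 - n/m) * V, with V = e^T e."""
    e = np.asarray(e, dtype=float)
    m = len(e)
    return (1 + n / m) / (1 - n / m) * float(e @ e)

e = [1.0, -1.0, 2.0, 0.0]     # example prediction-error vector
print(v_norm(e))               # 1.5
print(fpe(e, n=1))             # approx. 10.0
```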

Parameter estimation of dynamic models with the LS method

The LS-method can be used to estimate unknown parameters in dynamic models, for example the gain K and the time-constant T of a first order transfer function:

    H(s) = y(s)/u(s) = K/(Ts + 1)    (10.40)

You will soon see concrete examples of how to use the LS-method to estimate dynamic models, but let us first reflect on proper excitation and model validation (cf. the following two sections).

Good excitation is necessary!

Assume, as an example, that you want to estimate the time-constant T of the transfer function (10.40). You cannot estimate T, which is related to the dynamics of the system, if y and u have constant values all the time. Thus, it is necessary that the excitation signal, u(t), has sufficiently rich variation to give the LS-method enough information about the dynamics of the system to produce an accurate estimate of T. Several good excitation signals are described in the following.

PRB-signal (Pseudo-Random Binary signal)

Figure 10.5 shows a realization of a PRB-signal. This signal can have only one of two possible values, say U and −U, with probability p for the signal to change value from one time-step to the next. You (the user) must choose p (and the signal amplitudes, of course). The smaller p, the more seldom the signal changes value. p should be chosen small enough for the system to have a chance to react with a significant response before the input value changes.

Below is a pseudo-program which realizes a PRB-signal with 1 and −1 as possible values. In the program it is assumed that the rand-function returns a uniformly distributed random number between 0 and 1. (In MATLAB the function rand does this; in LabVIEW the function Random Number does the same.)

p=0.1; %Probability is set to 0.1.

Figure 10.5: PRB-signal with probability of shift equal to p = 0.1. The timestep of the signal is h = 0.1 sec.

u=1;
while not stop {
  if rand<p {
    u=-u;
  }
}

Chirp signal

See Figure 10.6. This is a sine wave where the frequency increases linearly with time:

    u(t) = U sin(2πft)    (10.41)

where the frequency f [Hz] is

    f = K_f t    (10.42)

where the frequency coefficient K_f is chosen so that a proper frequency range is covered and the dynamics of the system is excited sufficiently.

Up-down-up signal

See Figure 10.7. This signal is simple to generate manually during the experiment. This signal gives in many cases enough excitation for the

estimator to calculate accurate parameter estimates, but the period of the signal shift must be large enough to give the system output a chance to approximately stabilize between the steps.

Figure 10.6: The chirp signal is a sine wave with constantly increasing frequency.

Figure 10.7: Up-down-up signal

How to select a proper model?

To select a proper model, you can apply the principles described on page 95. In the following are described some ways to check that a model candidate is a proper model:

Check prediction error: The prediction error e given by (10.37) should be white, that is, it should be similar to a random signal. The less coloured the prediction error, the better the estimate. The colour of the prediction error can be checked by calculating (and plotting) the auto-correlation of e.

Compare simulated signal with measured signal: You can simulate using the estimated model and compare the simulated output signal with the logged or measured output signal. Of course, the simulator model and the real system must be excited by the same input signal. If possible, this input signal should not be the same as that used for model estimation. As mentioned on page 95, one strategy is to use approximately one half of the logged data (time-series) for model development and the rest of the data for validation (here, simulation). If the error between the simulated and the measured output signals is small, you may conclude that the model gives a good representation of the true system. A large error may come from poor (weak) excitation or a wrong model structure.

Frequency response; Poles; Time constants etc.: These quantities can be calculated from an estimated model, and they must be in accordance with assumed or observed properties of the system. For example, estimated poles should match the dynamic properties of the system to be modeled.

In a practical experiment, it may be sufficient to do only the simulation-test described above.

Differential equation models

Assume as an example the following nonlinear differential equation:

    A ḣ(t) = −K_1 √(h(t)) + K_2 u(t)    (10.43)

(which can be the model describing the level h in a liquid tank with outflow through a valve with fixed opening and inflow via a pump controlled by the signal u). Suppose that K_1 and K_2 are to be estimated, while A is known. (10.43) written on the regression model form (10.3) becomes

    A ḣ(t) = [−√(h(t))  u(t)] [K_1]    (10.44)
       y             ϕ        [K_2]
                                θ
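To make the tank example concrete, here is a sketch of the whole procedure in Python/NumPy: simulate the tank (standing in for logged data), differentiate h numerically, stack the regression model (10.44), and solve for K_1 and K_2 with LS. All numerical values (A = 1, K_1 = 0.5, K_2 = 2, the input profile) are made up for illustration:

```python
import numpy as np

# Hypothetical "true" tank: A*dh/dt = -K1*sqrt(h) + K2*u (values made up).
A_tank, K1_true, K2_true = 1.0, 0.5, 2.0
dt = 0.001
t = np.arange(0.0, 10.0, dt)
u = np.where(t < 5.0, 1.0, 2.0)              # step in the pump signal

# Forward-Euler simulation standing in for logged measurements of h.
h = np.empty_like(t)
h[0] = 1.0
for k in range(len(t) - 1):
    h[k + 1] = h[k] + dt * (-K1_true * np.sqrt(h[k]) + K2_true * u[k]) / A_tank

# Regression form (10.44): y = A*dh/dt, phi = [-sqrt(h), u], theta = [K1, K2].
hdot = np.gradient(h, dt)                    # numerical time-derivative
y = A_tank * hdot
Phi = np.column_stack([-np.sqrt(h), u])
(K1_est, K2_est), *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(K1_est, K2_est)   # close to 0.5 and 2.0
```

Note that the step in u is what makes both parameters identifiable; with a constant input, the two columns of Φ would be nearly collinear.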

Time-delayed signals cause no problems for the regression model form. Here is an example of a model with time-delay:

    ẋ(t) = a x(t) + b u(t − τ)    (10.45)

which written on regression model form (10.3) is

    ẋ(t) = [x(t)  u(t − τ)] [a]    (10.46)
      y          ϕ          [b]
                             θ

Time-derivatives, as ḣ(t) in (10.44), must be calculated numerically when used in a regression model. Two simple numerical approximations to the time-derivative are Euler's forward method:

    ẋ(t_k) = ẋ_k ≈ (x_{k+1} − x_k)/h    (10.47)

where h is the time-step between the discrete points of time, and Euler's backward method:

    ẋ_k ≈ (x_k − x_{k−1})/h    (10.48)

A more accurate approximation to time-derivatives is the center difference method, which is the average of the two Euler methods described above:

    ẋ_k ≈ [(x_{k+1} − x_k)/h + (x_k − x_{k−1})/h]/2 = (x_{k+1} − x_{k−1})/(2h)    (10.49)

The center difference method is illustrated in Figure 10.8.

Higher order derivatives can be calculated numerically by using the center difference method in cascade, as follows:

    ẍ(t_k) = ẍ_k ≈ (ẋ_{k+1} − ẋ_{k−1})/(2h) = [(x_{k+2} − x_k)/(2h) − (x_k − x_{k−2})/(2h)]/(2h) = (x_{k+2} − 2x_k + x_{k−2})/(4h²)    (10.50)

Example 20: Estimation of gain and time constant

We will estimate the gain K and the time constant T in the following model

    H(s) = y(s)/u(s) = K/(Ts + 1)    (10.51)

Figure 10.8: The center difference method as an approximation to a time-derivative: ẋ(t_k) ≈ (x_{k+1} − x_{k−1})/(2h). (The line through the points x_{k−1} and x_{k+1} has almost the same slope as the tangent to the x(t)-curve at t_k, whose slope is ẋ(t_k).)

from time-series of values of y and u. The true system is in the form of a simulator for the system

    H_0(s) = y(s)/u(s) = 1.5/(s + 1) = K_0/(T_0 s + 1)    (10.52)

where K_0 = 1.5 and T_0 = 1.0 are the true parameter values. (The simulator here takes the place of a physical system.)

We start by writing (10.51) on the regression model form y = ϕθ. From (10.51) we get

    (Ts + 1) y(s) = K u(s)    (10.53)

which inverse transformed becomes

    T ẋ(t) + x(t) = K u(t)    (10.54)

which, written on regression model form, becomes

    x(t) = [u(t)  −ẋ(t)] [K]    (10.55)
      y          ϕ       [T]
                          θ

We can use the center difference method (10.49) to approximate ẋ(t_k) = ẋ_k. (10.55) then becomes

    x_k = [u_k  −(x_{k+1} − x_{k−1})/(2h)] [K]    (10.56)
    y_k                ϕ_k                 [T]
                                            θ

Assume that there are values of u_k and x_k for time-indices k = 1, 2, ..., L−1, L. The stacked regression model (10.8) becomes

    [ x_2     ]   [ u_2      −(x_3 − x_1)/(2h)     ]
    [  ⋮      ] = [  ⋮                 ⋮           ] [ K ]    (10.57)
    [ x_{L−1} ]   [ u_{L−1}  −(x_L − x_{L−2})/(2h) ] [ T ]
        y                        Φ                     θ

Here we have used only data which are available. For example, neither x_0 nor x_{L+1} exist, and hence they are not used in (10.57).

Let us see how this works. Figure 10.9 shows the front panel of a simulator including the LS-estimator. The excitation signal u(t) is adjusted manually as an up-down-up signal. Simulated random noise is added to the simulated output signal x(t). The estimator seems to work well, since K is estimated to 1.50 (true value is 1.50) and T is estimated to 1.03 (true value is 1.00). Due to the random noise, which is generated from a different random generator seed each time the simulator is run, the estimates of K and T are not exactly the same in repeated experiments.

[End of Example 20]

s-transfer function models

Transfer function models can be transformed to a regression model form by first finding a differential equation which is equivalent to the transfer function and then writing this differential equation on regression model form. Here is an example. Given the transfer function model

    H(s) = x(s)/u(s) = 1/(s + a)    (10.58)

Figure 10.9: Example 20: The front panel of a simulator for a first order system including the LS-estimator

where a is to be estimated. An equivalent differential equation is found by first cross-multiplying in (10.58) to get

    (s + a) x(s) = s x(s) + a x(s) = u(s)    (10.59)

Taking the inverse Laplace transform gives

    ẋ(t) + a x(t) = u(t)    (10.60)

which can be written on regression model form as follows:

    u(t) − ẋ(t) = [x(t)] [a]    (10.61)
         y          ϕ     θ

Here ẋ(t) must be calculated numerically, preferably with the center difference method described above.
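As an illustration of the whole chain in Example 20 (simulate, differentiate with the center difference, stack, solve), here is a sketch in Python/NumPy. The input profile and step size are my own choices, and the data are noise-free, unlike in the figure:

```python
import numpy as np

# Assumed true first-order system K0/(T0*s + 1) with K0 = 1.5, T0 = 1.0.
K0, T0, h = 1.5, 1.0, 0.01
t = np.arange(0.0, 20.0, h)
u = np.where((t > 2.0) & (t < 10.0), 1.0, 0.5)   # up-down-up type input

# Exact zero-order-hold discretization used as the "true" simulator.
a = np.exp(-h / T0)
b = K0 * (1.0 - a)
x = np.zeros_like(t)
for k in range(len(t) - 1):
    x[k + 1] = a * x[k] + b * u[k]

# Regression form (10.56): x_k = [u_k, -xdot_k] [K; T], with xdot_k
# approximated by the center difference (x_{k+1} - x_{k-1}) / (2h).
xdot = (x[2:] - x[:-2]) / (2.0 * h)
Phi = np.column_stack([u[1:-1], -xdot])
y = x[1:-1]
(K_est, T_est), *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(K_est, T_est)   # close to 1.5 and 1.0
```

With measurement noise added to x, the estimates would scatter around the true values, as observed in the simulator experiment above.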

Black-box modeling

A black-box model is a dynamic model in the form of a state-space model or a transfer function with non-physical parameters. A black-box model represents the dynamic properties of the system but does not contain explicit information about the physics of the system. Such models are useful in many situations, for example in model-based analysis and design (such as controller tuning) of control systems, and in signal modeling.

It is common to have black-box models in the form of discrete-time models, not continuous-time models. This is because discrete-time models are more directly related to the discrete-time nature of the sampled data (time-series) from experiments from which the black-box model is developed. The most basic discrete-time black-box model form is the state-space model

    x(t_{k+1}) = A x(t_k) + B u(t_k) + K e(t_k)    (10.62)
    y(t_k) = C x(t_k) + D u(t_k) + e(t_k)    (10.63)

with initial state x(0). x(t_k) is the state-vector, u(t_k) is the input-vector, y(t_k) is the output-vector, e(t_k) is the (white or random) noise-vector, and A, B, C, D, and K are coefficient matrices. t_k means the time at time-step no. k, where k is an integer time-index: t_k = kh, where h [sec] is the time-step length or interval. The order n of this model is the number of states, which is the number of elements in the state-vector.

From this state-space model a z-transfer function from u to y can be calculated, for example by using the formula (6.19), which is repeated here for convenience:

    H_{y,u}(z) = C(zI − A)^(-1) B + D    (10.64)

In (10.64), z is a variable which plays in many respects the same role for discrete-time systems as the Laplace-variable s plays for continuous-time systems.

Once you have a discrete-time model, a transfer function model or a state-space model, you can analyze it in many ways:

Run simulations.
Plot frequency responses.
Plot poles or eigenvalues.
Transform to an equivalent continuous-time model to find e.g. time-constants.

The above operations can be executed in e.g. MATLAB or LabVIEW.

How can you estimate a black-box model in practice? Several estimation methods are available [6], for example the least squares method described earlier in this chapter. However, I suggest you initially consider using one of the subspace methods, which is a relatively new class of estimation methods. Subspace methods are effective and easy to use in practice. A black-box state-space model on the form (10.62)–(10.63) is estimated from the experimental data. In more detail: all five matrices in (10.62)–(10.63) are estimated, and even the initial state is estimated, from known time-series of the input u and the corresponding output y. Once a state-space model is estimated, a transfer function may be calculated using (10.64).

In practical implementations of subspace methods it is typical that the best order n is indicated to the user, so that the user can accept this order or tell the method to use some other order. In some implementations the choice of order n is done automatically (the choice is made by the algorithm, hidden from the user). If so, the modeling is fully automated, and all the user has to do is to feed the subspace method or algorithm with the time-series of input u and output y. Still, the user should check whether the estimated model is reasonable, for example by doing simulations with the model or by studying the frequency response of the model.

Note that the black-box state-space model which is estimated is linear. Therefore, the model should be regarded as a model describing the deviations about an operating point. This means that the input u and the output y should be regarded as deviation variables. Consequently, you should remove trends in these data before feeding them to the estimation algorithm. In most cases it is sufficient to remove the mean value from the data, so that the resulting data, which you feed to the estimation algorithm, vary about zero (roughly).
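Removing the mean to obtain deviation variables is a one-liner; a sketch (Python/NumPy, with made-up numbers):

```python
import numpy as np

def remove_mean(series):
    """Return the series as deviation variables: subtract the mean value."""
    series = np.asarray(series, dtype=float)
    return series - series.mean()

u_dev = remove_mean([2.0, 3.0, 4.0])
print(u_dev)   # [-1.  0.  1.]
```

For removing a linear trend (not just a constant offset), a function like scipy.signal.detrend can be used instead.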
If you are quite sure about the order of the model, it is possible to tell the subspace method which order it should use for the model to be estimated. If the sampling interval is very small relative to the time-constants of the process, you should be careful not to tell the estimator to use a high model order. In general, the higher the model order, the more difficult it is to tune a model to fit the data.

Computer tools for black-box modeling

There are many functions in MATLAB (System Identification Toolbox) and in LabVIEW (System Identification Toolkit) for estimation of dynamic models. A subspace method is available in both tools. All these functions can produce state-space models, from which we can get a transfer function model as explained above.

Example 21: Automatic black-box modeling

This example shows how the n4sid-function in MATLAB's System Identification Toolbox can be used to develop a black-box model. A simulator is implemented (in LabVIEW) for a dynamic system given by the following s-transfer function model (second order with time-delay):

    y_process(s) = [K / ((T_1 s + 1)(T_2 s + 1))] e^(−τs) u(s)    (10.65)

where the transfer function is denoted H(s), and K = 1, T_1 = 1 s, T_2 = 0.8 s, τ = 0.4 s. Uniformly distributed random noise, w, of amplitude 1 is added to y_process to give a simulated measurement signal:

    y(t_k) = y_process(t_k) + w(t_k)    (10.66)

The sampling (and simulation) interval is h = 0.1 s. Time-series of u and y, but with the mean values subtracted, are used in the n4sid-function to estimate a black-box model. Figure 10.10 shows the time-series. Below is the MATLAB-code which estimates the model.

timeseries=[dy du]; %dy and du are MATLAB vectors.
%Estimation of model (on an internal theta-format):
H_theta_n4sid=n4sid(timeseries);
%The th2tf-function gets transfer function data from the theta-model:
[numerator,denominator]=th2tf(H_theta_n4sid);
Ts=0.1; %Sampling interval
%Generates z-transfer function LTI-object with the tf-function
%from Control System Toolbox:
H1=tf(numerator,denominator,Ts)

The result of the code shown above is the LTI-object (linear, time-invariant) named H1. An LTI-object is a special object representing

Figure 10.10: Time-series of process input u and process measurement y with mean values removed.

dynamic systems in MATLAB. LTI-objects can be used in many functions in MATLAB, for example step (step response simulation), bode (plots frequency response in a Bode diagram), and pzmap (calculates poles and zeros). LTI-objects can also be used in SIMULINK via the LTI-block. For the data used in this example, the following z-transfer function (H_1(z) is represented by H1) was created:

    H_1(z) = (… z + …) / (z² − 1.83 z + …)    (10.67)

(Although z-transfer functions are not described in this book, it may be interesting here just to see the z-transfer function resulting from the system identification.)

Figure 10.11 shows simulated step responses for the true system, which is given by (10.65)–(10.66), and for the estimated model, which is given by (10.67). The step responses do not differ much. The time-delay in (10.65) is not estimated explicitly, but it is represented implicitly by the numerator and denominator of (10.67). The inverse response at the beginning of the step response in Figure 10.11 seems to express the time-delay.

[End of Example 21]
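To see what one does with such a z-transfer function, here is a sketch in Python of a step-response simulation via the corresponding difference equation. The coefficients below are made up for illustration; they are not the ones n4sid produced in (10.67):

```python
import numpy as np

# Hypothetical estimated model (made-up coefficients, sampling time 0.1 s):
#   H(z) = (0.05 z + 0.04) / (z^2 - 1.6 z + 0.69)
# As a difference equation:
#   y[k] = 1.6*y[k-1] - 0.69*y[k-2] + 0.05*u[k-1] + 0.04*u[k-2]
N = 100
u = np.ones(N)                    # unit step input
y = np.zeros(N)
for k in range(2, N):
    y[k] = 1.6*y[k-1] - 0.69*y[k-2] + 0.05*u[k-1] + 0.04*u[k-2]

# Static gain is H(1) = (0.05 + 0.04)/(1 - 1.6 + 0.69) = 1.0,
# so the step response settles near 1.
print(y[-1])
```

This is the same kind of computation MATLAB's step function performs on the LTI-object H1.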

Figure 10.11: Simulated step responses for the true system, which is given by (10.65)–(10.66), and for the estimated model, which is given by (10.67). Estimation is done with the n4sid-function.

Choosing among model order alternatives

In the n4sid-function you can define a range of model orders. The function will then show you a plot of some singular values as a function of the model order, from which you choose a proper order, before the subspace-function estimates the model with that order. Below is the MATLAB-code which gives you (the user) the opportunity to choose among orders 1–5.

H_theta_n4sid=n4sid(timeseries,[1:5]);

(This code is used instead of H_theta_n4sid=n4sid(timeseries), see above.) Figure 10.12 shows a singular values plot generated by n4sid. The time-series of data are the same as in Example 21. In this case, model order 2 is suggested, because the singular values are significantly smaller for orders greater than 2. The user can accept this order, or select some other order from the MATLAB command line.
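The idea behind the singular-value plot can be sketched as follows (Python/NumPy; the second-order dynamics are made up). The impulse response of the system is stacked into a Hankel matrix, whose singular values drop sharply beyond the true order:

```python
import numpy as np

# Made-up stable 2nd-order system: g[k] = 1.6*g[k-1] - 0.69*g[k-2],
# impulse response started with g[1] = 1.
g = np.zeros(40)
g[1] = 1.0
for k in range(2, 40):
    g[k] = 1.6*g[k-1] - 0.69*g[k-2]

# Hankel matrix of the impulse response (Markov parameters).
H = np.array([[g[i + j + 1] for j in range(10)] for i in range(10)])
s = np.linalg.svd(H, compute_uv=False)
# Two dominant singular values -> a 2nd-order model is suggested.
print(s[:4])
```

With noisy data the trailing singular values are small rather than exactly zero, which is why the plot is shown to the user for a judgment call.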

Figure 10.12: Singular values plot generated by the n4sid function. Model order 2 is suggested by n4sid, because the singular values are significantly smaller for orders greater than 2.

Chapter 11

State estimation with Kalman Filter

11.1 Introduction

This chapter describes how to estimate the values of state variables of a dynamic system. Why can such state estimators be useful?

Supervision: State estimates can provide valuable information about important variables in a physical process, for example feed composition to a reactor, flow variations to a separator, etc.

Control: In general, the more information the controller has about the process it controls, the better (more accurately) it can control it. State estimators can be a practical or economical alternative to real measurements. For example, the environmental forces acting on a vessel can be used by the controller to better control the position of the vessel, as in Dynamic Positioning systems.

Because state estimators are a kind of sensor implemented in software (computer programs), they are sometimes denoted soft sensors.

This chapter describes the Kalman Filter, which is the most important algorithm for state estimation. The Kalman Filter was developed by Rudolf E. Kalman around 1960 [7]. There is a continuous-time version of the Kalman Filter and several discrete-time versions. (The discrete-time versions are immediately ready for implementation in a computer program.) Here the predictor-corrector version of the discrete-time Kalman

Filter will be described. This version seems to be the most commonly used version.

11.2 Observability

A necessary condition for the Kalman Filter to work correctly is that the system for which the states are to be estimated is observable. Therefore, you should check for observability before applying the Kalman Filter. (There may still be other problems that prevent the Kalman Filter from producing accurate state estimates, such as a faulty or inaccurate mathematical model.)

Observability for discrete-time systems can be defined as follows [17]: The discrete-time system

    x(k+1) = A x(k) + B u(k)    (11.1)
    y(k) = C x(k) + D u(k)    (11.2)

is observable if there is a finite number of time-steps k such that knowledge about the input sequence u(0), ..., u(k−1) and the output sequence y(0), ..., y(k−1) is sufficient to determine the initial state of the system, x(0).

Let us derive a criterion for the system to be observable. Since the influence of the input u on the state x is known from the model, let us for simplicity assume that u(k) = 0. From the model (11.1)–(11.2) we then get

    y(0) = C x(0)    (11.3)
    y(1) = C x(1) = C A x(0)    (11.4)
    ⋮
    y(n−1) = C A^(n−1) x(0)    (11.5)

which can be expressed compactly as

    [ C        ]          [ y(0)   ]
    [ CA       ] x(0)  =  [ y(1)   ]    (11.6)
    [ ⋮        ]          [ ⋮      ]
    [ CA^(n−1) ]          [ y(n−1) ]
       M_obs                  Y
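The condition above is easy to check numerically. A sketch in Python/NumPy (the test matrices are made up for illustration; the helper name is mine):

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, C*A, ..., C*A^(n-1), cf. the derivation above."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    C = np.atleast_2d(np.asarray(C, dtype=float))
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Made-up 2nd-order example: x1 is measured and is influenced by x2.
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
M_obs = observability_matrix(A, C)
observable = np.linalg.matrix_rank(M_obs) == A.shape[0]
print(observable)   # True
```

Using the numerical rank (instead of the determinant) also works for non-square M_obs, i.e. when there is more than one measurement.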

Let us make a definition:

Observability matrix:

    M_obs = [ C; CA; …; CA^(n−1) ]    (11.7)

(11.6) has a unique solution only if the rank of M_obs is n. Therefore:

Observability Criterion: The system (11.1)–(11.2) is observable if and only if the observability matrix M_obs has rank equal to n, where n is the order of the system model (the number of state variables).

The rank can be checked by calculating the determinant of M_obs. If the determinant is non-zero, the rank is full, and hence the system is observable. If the determinant is zero, the system is non-observable.

Non-observability has several consequences:

The transfer function from the input variable u to the output variable y has an order that is less than the number of state variables (n).

There are state variables, or linear combinations of state variables, that do not show any response.

The steady-state value of the Kalman Filter gain can not be computed. This gain is used to update the state estimates from measurements of the (real) system.

Example 22: Observability

Given the following state space model:

    [ x_1(k+1) ]   [ 1  a ] [ x_1(k) ]
    [ x_2(k+1) ] = [ 0  1 ] [ x_2(k) ] + B u(k)    (11.8)
                      A

    y(k) = [ c_1  0 ] [ x_1(k) ] + [0] u(k)    (11.9)
               C      [ x_2(k) ]    D

The observability matrix is (with n = 2)

    M_obs = [ C  ] = [ c_1  0     ]    (11.10)
            [ CA ]   [ c_1  a c_1 ]

The determinant of M_obs is

    det(M_obs) = c_1 · a c_1 − 0 · c_1 = a c_1²    (11.11)

The system is observable only if a c_1² ≠ 0.

Assume that a ≠ 0, which means that the first state variable, x_1, contains some non-zero information about the second state variable, x_2. Then the system is observable if c_1 ≠ 0, i.e. if x_1 is measured.

Assume that a = 0, which means that x_1 contains no information about x_2. In this case the system is non-observable even though x_1 is measured.

[End of Example 22]

11.3 The Kalman Filter algorithm

The Kalman Filter is a state estimator which produces an optimal estimate in the sense that the mean value of the sum (actually of any linear combination) of the squared estimation errors gets a minimal value. In other words, the Kalman Filter gives the following sum of squared errors a minimal value:

    E[e_x^T(k) e_x(k)] = E[e_{x1}²(k) + ⋯ + e_{xn}²(k)]    (11.12)

Here,

    e_x(k) = x_est(k) − x(k)    (11.13)

is the estimation error vector. (The Kalman Filter estimate is sometimes denoted the least mean-square estimate.) This actually assumes that the model is linear, so it is not fully correct for nonlinear models.

It is assumed that the system for which the states are to be estimated is excited by random ("white") disturbances (or process noise) and that the

measurements (there must be at least one real measurement in a Kalman Filter) contain random ("white") measurement noise.

The Kalman Filter has many applications, e.g. in dynamic positioning of ships, where the Kalman Filter estimates the position and the speed of the vessel and also the environmental forces. These estimates are used in the positional control system of the ship. The Kalman Filter is also used in soft-sensor systems used for supervision, in fault-detection systems, and in Model-based Predictive Controllers (MPCs), which is an important type of model-based controller.

The Kalman Filter algorithm was originally developed for systems assumed to be represented by a linear state-space model. However, in many applications the system model is nonlinear. Furthermore, the linear model is just a special case of a nonlinear model. Therefore, I have decided to present the Kalman Filter for nonlinear models, but comments are given about the linear case. The Kalman Filter for nonlinear models is denoted the Extended Kalman Filter, because it is an extended use of the original Kalman Filter. However, for simplicity we can just denote it the Kalman Filter, dropping "extended" in the name. The Kalman Filter will be presented without derivation.

The Kalman Filter presented below assumes that the system model consists of this discrete-time (possibly nonlinear) state space model:

    x(k+1) = f[x(k), u(k)] + G w(k)    (11.14)

and this (possibly nonlinear) measurement model:

    y(k) = g[x(k), u(k)] + H w(k) + v(k)    (11.15)

A linear model is just a special case:

    x(k+1) = A x(k) + B u(k) + G w(k)    (11.16)

where f = A x(k) + B u(k), and

    y(k) = C x(k) + D u(k) + H w(k) + v(k)    (11.17)

where g = C x(k) + D u(k).

The models above contain the following variables and functions:

x is the state vector of n state variables:

    x = [x_1, x_2, …, x_n]^T    (11.18)

u is the input vector of m input variables:

    u = [u_1, u_2, …, u_m]^T    (11.19)

It is assumed that the value of u is known. u includes control variables and known disturbances.

f is the system vector function:

    f = [f_1(·), f_2(·), …, f_n(·)]^T    (11.20)

where f_i(·) is any nonlinear or linear function.

w is a random ("white") disturbance (or process noise) vector:

    w = [w_1, w_2, …, w_q]^T    (11.21)

with auto-covariance

    R_w(L) = Q δ(L)    (11.22)

where Q (a q × q matrix of constants) is the auto-covariance of w at lag L = 0, and δ(L) is the unit pulse function, cf. (9.25). A standard assumption is that Q is diagonal:

    Q = diag(Q_11, Q_22, …, Q_nn)    (11.23)

Hence, the number q of process disturbances is assumed to be equal to the number n of state variables. Q_ii is the variance of w_i.

G is the process noise gain matrix relating the process noise to the state variables. It is common to assume that q = n, making G square:

    G = diag(G_11, G_22, …, G_nn)    (11.24)

In addition it is common to set the elements of G equal to one:

    G_ii = 1    (11.25)

making G an identity matrix:

    G = I_n    (11.26)

y is the measurement vector of r measurement variables:

    y = [y_1, y_2, …, y_r]^T    (11.27)

g is the measurement vector function:

    g = [g_1(·), g_2(·), …, g_r(·)]^T    (11.28)

where g_i(·) is any nonlinear or linear function. Typically, g is a linear function on the form

    g(x) = C x    (11.29)

where C is the measurement gain matrix.

H is a gain matrix relating the disturbances directly to the measurements (there will in addition be an indirect relation, because the disturbances act on the states, and some of the states are measured). It is however common to assume that H is a zero matrix of dimension r × q:

    H = 0_{r×q}    (11.30)

v is a random ("white") measurement noise vector:

    v = [v_1, v_2, …, v_r]^T    (11.31)

with auto-covariance

    R_v(L) = R δ(L)    (11.32)

where R (an r × r matrix of constants) is the auto-covariance of v at lag L = 0. A standard assumption is that R is diagonal:

    R = diag(R_11, R_22, …, R_rr)    (11.33)

Hence, R_ii is the variance of v_i.

Note: If you need to adjust the strength or power of the process noise w or the measurement noise v, you can do it by increasing the variances, Q and R, respectively.

Here you have the Kalman Filter. (The formulas (11.35)–(11.37) below are represented by the block diagram shown in Figure 11.1.)

Calculation of the Kalman Filter state estimate:

1. This step is the initial step, and the operations here are executed only once. Assume that the initial guess of the state is x_init. The initial value x_p(0) of the predicted state estimate x_p (which is calculated continuously as described below) is set equal to this initial value:

    Initial state estimate: x_p(0) = x_init    (11.34)

2. Calculate the predicted measurement estimate from the predicted state estimate:

    Predicted measurement estimate: y_p(k) = g[x_p(k)]    (11.35)

(It is assumed that the noise terms H w(k) and v(k) are not known or are unpredictable (since they are white noise), so they can not be used in the calculation of the predicted measurement estimate.)

3. Calculate the so-called innovation process or variable (it is actually the measurement estimate error) as the difference between the measurement y(k) and the predicted measurement y_p(k):

    Innovation variable: e(k) = y(k) − y_p(k)    (11.36)

4. Calculate the corrected state estimate x_c(k) by adding the corrective term K e(k) to the predicted state estimate x_p(k):

    Corrected state estimate: x_c(k) = x_p(k) + K e(k)    (11.37)

Here, K is the Kalman Filter gain. The calculation of K is described below.

Note: It is x_c(k) that is used as the state estimate in applications.

About terminology: The corrected estimate is also denoted the posteriori estimate, because it is calculated after the present measurement is taken. It is also denoted the measurement-updated estimate.

Due to the measurement-based correction term of the Kalman Filter, you can expect the errors of the state estimates to be smaller than if there were no such correction term. This correction can be regarded as a feedback correction of the estimates, and it is well known from dynamic systems theory, and in particular control systems theory, that feedback from measurements reduces errors. This feedback is indicated in Figure 11.1.

5. Calculate the predicted state estimate for the next time-step, x_p(k+1), using the present corrected state estimate x_c(k) and the known input u(k) in the process model:

    Predicted state estimate: x_p(k+1) = f[x_c(k), u(k)]    (11.38)

(It is assumed that the noise term G w(k) is not known or is unpredictable, since it is random, so it can not be used in the calculation of the state estimate.)

About terminology: The predicted estimate is also denoted the priori estimate, because it is calculated before the present measurement is taken. It is also denoted the time-updated estimate.

Figure 11.1: The Kalman Filter algorithm (11.35)–(11.38) represented by a block diagram. (The diagram shows the real system (process), with known inputs u(k), process noise w(k) and measurement noise v(k), and the Kalman Filter, with the system function f(), the measurement function g(), the Kalman gain K, a unit delay 1/z, the innovation variable e(k), and the feedback correction of the estimate.)

(11.35)–(11.38) can be represented by the block diagram shown in Figure 11.1.

The Kalman Filter gain is a time-varying gain matrix. It is given by the algorithm presented below. In the expressions below the following matrices are used:

Auto-covariance matrix (for lag zero) of the estimation error of the corrected estimate:

    P_c = R_{e_xc}(0) = E{[x − x_c][x − x_c]^T}    (11.39)

Auto-covariance matrix (for lag zero) of the estimation error of the predicted estimate:

P_p = R_exp(0) = E{(x - x_p)(x - x_p)^T} (11.40)

The transition matrix A of a linearized model of the original nonlinear model (11.14), calculated with the most recent state estimate, which is assumed to be the corrected estimate x_c(k):

A = df(.)/dx evaluated at x_c(k), u(k) (11.41)

The measurement gain matrix C of a linearized model of the original nonlinear model (11.15), calculated with the most recent state estimate:

C = dg(.)/dx evaluated at x_c(k), u(k) (11.42)

However, it is very common that the measurement function g(.) is already linear in the states, and in these cases no linearization is necessary.

The Kalman Filter gain is calculated as follows (these calculations are repeated each program cycle):

Calculation of Kalman Filter gain:

1. This step is the initial step, and the operations here are executed only once. The initial value P_p(0) can be set to some guessed value (matrix), e.g. to the identity matrix (of proper dimension).

2. Calculation of the Kalman Filter gain:

K(k) = P_p(k)C^T [CP_p(k)C^T + R]^(-1) (11.43)

3. Calculation of the auto-covariance of the corrected state estimate error:

P_c(k) = [I - K(k)C] P_p(k) (11.44)

4. Calculation of the auto-covariance of the predicted state estimate error of the next time step:

P_p(k+1) = AP_c(k)A^T + GQG^T (11.45)
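For a time-invariant model, the recursion (11.43)-(11.45) can be iterated offline until P_p converges. A minimal sketch; the matrices below are illustrative assumptions only:

```python
import numpy as np

# Iterating (11.43)-(11.45) until P_p settles gives the steady-state gain.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
G = np.eye(2)
C = np.array([[1.0, 0.0]])
Q = np.diag([0.01, 0.01])
R = np.array([[0.001]])

P_p = np.eye(2)                                          # step 1: initial guess P_p(0)
for _ in range(1000):
    K = P_p @ C.T @ np.linalg.inv(C @ P_p @ C.T + R)     # (11.43) gain
    P_c = (np.eye(2) - K @ C) @ P_p                      # (11.44) corrected covariance
    P_p_next = A @ P_c @ A.T + G @ Q @ G.T               # (11.45) predicted covariance
    if np.allclose(P_p_next, P_p, atol=1e-12):
        break
    P_p = P_p_next
```

At convergence, K is the steady-state gain and P_p is a fixed point of the recursion.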

Here are comments to the Kalman Filter algorithm presented above:

1. Order of formulas in the program cycle: The Kalman Filter formulas can be executed in the following order:

(11.43), Kalman Filter gain
(11.36), innovation process (variable)
(11.37), corrected state estimate, which is the state estimate to be used in applications
(11.38), predicted state estimate of next time step
(11.44), auto-covariance of error of corrected estimate
(11.41), transition matrix of the linearized model
(11.42), measurement matrix of the linearized model
(11.45), auto-covariance of error of predicted estimate of next time step

2. Steady-state Kalman Filter gain: If the model is linear and time invariant (i.e. the system matrices are not varying with time), the auto-covariances P_c and P_p will converge towards steady-state values. Consequently, the Kalman Filter gain will converge towards a steady-state Kalman Filter gain value, K_s,2 which can be pre-calculated. It is quite common to use only the steady-state gain in applications. For a nonlinear system K_s may vary with the operating point (if the system matrix A of the linearized model varies with the operating point). In practical applications K_s may be re-calculated as the operating point changes. Figure 11.2 illustrates the information needed to compute the steady-state Kalman Filter gain, K_s.

3. Model errors: There are always model errors, since no model can give a perfect description of a real (practical) system. It is possible to analyze the implications of the model errors on the state estimates calculated by the Kalman Filter, but this will not be done in this book. However, in general, the estimation errors are smaller with the Kalman Filter than with a so-called ballistic state estimator, which is the state estimator that you get if the Kalman Filter gain K is set to zero. In the latter case there is no correction of the estimates. It is only the predictions (11.38) that make up the estimates. The Kalman Filter then just runs as a simulator.

2 MATLAB and LabVIEW have functions for calculating the steady-state Kalman Filter gain.

Figure 11.2: Illustration of what information is needed to compute the steady-state Kalman Filter gain, K_s: the transition matrix A, the process noise gain matrix G, the measurement gain matrix C, the process noise auto-covariance Q, and the measurement noise auto-covariance R.

Note that you can try to estimate model errors by augmenting the states with states representing the model errors. The augmented Kalman Filter is described in Section 11.4.

4. How to tune the Kalman Filter: Usually it is necessary to fine-tune the Kalman Filter when it is connected to the real system. The process disturbance (noise) auto-covariance Q and/or the measurement noise auto-covariance R are commonly used for the tuning. However, since R is relatively easy to calculate from a time series of measurements (using some variance function in for example LabVIEW or MATLAB), we only consider adjusting Q here.

What is good behaviour of the Kalman Filter, and how can good behaviour be observed? It is when the estimates seem to have reasonable values as you judge from your physical knowledge about the physical process. In addition, the estimates must not be too noisy. What is the cause of the noise? In real systems it is mainly the measurement noise that introduces noise into the estimates.

How do you tune Q to avoid too noisy estimates? The larger Q, the stronger the measurement-based updating of the state estimates, because a large Q tells the Kalman Filter that the variations in the real state variables are assumed to be large (remember that the process noise influences the state variables, cf. (11.15)). Hence, the larger Q, the larger the Kalman Filter gain K and the stronger the updating of the estimates. But this causes more measurement noise to be added to the estimates, because the measurement noise is a term in the innovation process e, which is multiplied by K:

x_c(k) = x_p(k) + Ke(k) (11.46)
       = x_p(k) + K{g[x(k)] + v(k) - g[x_p(k)]} (11.47)

where v is real measurement noise.
So, the main tuning rule is as follows: Select as large Q as possible without the state estimates becoming too noisy.
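As mentioned above, R can be computed from a logged time series of the measurement. A sketch, with synthetic data (assumed noise level) standing in for a real steady-state log:

```python
import random
import statistics

# Estimating the measurement noise variance R from a recorded time series,
# taken while the process is at steady state so that all variation in the
# signal is measurement noise. Synthetic data stand in for a real log here.
random.seed(0)
y_log = [5.0 + random.gauss(0.0, 0.02) for _ in range(20_000)]

R_est = statistics.variance(y_log)   # sample variance, used as R
```

With Q as the remaining tuning knob, R_est can then be held fixed during tuning.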

But Q is a matrix! How do we select it "large" or "small"? Since the process disturbances are typically assumed to act on their respective state variables independently, Q can be set as a diagonal matrix:

Q = diag(Q_11, Q_22, ..., Q_nn) (11.48)

where each of the diagonal elements can be adjusted independently. If you do not have any idea about numerical values, you can start by setting all the diagonal elements to one and the same value, and hence Q is

Q = Q_0 I (11.49)

where Q_0 is the only tuning parameter. If you do not have any idea about a proper value of Q_0, you may initially try

Q_0 = 0.01 (11.50)

Then you may adjust Q_0, or try to fine-tune each of the diagonal elements individually.

5. The error-model: Assuming that the system model is linear and that the model is correct (giving a correct representation of the real system), it can be shown that the behaviour of the error of the corrected state estimate, e_xc(k), cf. (11.13), is given by the following error-model:3

Error-model of the Kalman Filter:

e_xc(k+1) = (I - KC)A e_xc(k) + (I - KC)G w(k) - K v(k+1) (11.51)

This model can be used to analyze the Kalman Filter.

Note: (11.13) is not identical to the auto-covariance of the estimation error, which is

P_c(k) = E{[e_xc(k) - m_xc(k)][e_xc(k) - m_xc(k)]^T} (11.52)

But (11.13) is the trace of P_c (the trace is the sum of the diagonal elements):

e_xc = trace[P_c(k)] (11.53)

3 You can derive this model by subtracting the model describing the corrected state estimate from the model that describes the real state (the latter is simply the process model).
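The homogeneous dynamics of the error-model (11.51) are governed by the matrix (I - KC)A, which can be examined numerically. The matrices below are illustrative values only, not from a particular design:

```python
import numpy as np

# System matrix of the error-model (11.51) and its spectral radius.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
K = np.array([[0.5], [0.2]])

A_e = (np.eye(2) - K @ C) @ A            # error-model system matrix
rho = max(abs(np.linalg.eigvals(A_e)))   # spectral radius
stable = rho < 1.0                       # estimation errors decay iff True
```

A spectral radius below one means the estimation errors die out; the closer to one, the slower the decay.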

6. The dynamics of the Kalman Filter: The error-model (11.51) of the Kalman Filter represents a dynamic system. The dynamics of the Kalman Filter can be analyzed by calculating the eigenvalues of the system matrix of (11.51). These eigenvalues are

{lambda_1, lambda_2, ..., lambda_n} = eig[(I - KC)A] (11.54)

7. The stability of the Kalman Filter: It can be shown that the Kalman Filter is always an asymptotically stable dynamic system (otherwise it could not give an optimal estimate). In other words, the eigenvalues defined by (11.54) are always inside the unit circle.

8. Testing the Kalman Filter before practical use: As with every model-based algorithm (such as controllers), you should test your Kalman Filter with a simulated process before applying it to the real system. You can implement a simulator in e.g. LabVIEW or MATLAB/Simulink, since you already have a model (the Kalman Filter is model-based). Firstly, you should test the Kalman Filter with the nominal model in the simulator, including process and measurement noise. This is the model that you are basing the Kalman Filter on. Secondly, you should introduce some reasonable model errors by making the simulator model somewhat different from the Kalman Filter model, and observe if the Kalman Filter still produces usable estimates.

9. Predictor type Kalman Filter: In the predictor type of the Kalman Filter there is only one formula for the calculation of the state estimate:

x_est(k+1) = Ax_est(k) + K[y(k) - Cx_est(k)] (11.55)

Thus, there is no distinction between the predicted state estimate and the corrected estimate (it is the same variable). (Actually, this is the original version of the Kalman Filter [7].)
The predictor type Kalman Filter has the drawback that there is a time delay of one time step between the measurement y(k) and the calculated state estimate x_est(k+1).

11.4 Estimating parameters and disturbances with the Kalman Filter

In many applications the Kalman Filter is used to estimate parameters and/or disturbances in addition to the ordinary state variables. One

example is dynamic positioning systems for ship position control, where the Kalman Filter is used to estimate environmental forces acting on the ship (these estimates are used in the controller as a feedforward control signal).

These parameters and/or disturbances must be represented as state variables. They represent additional state variables. The original state vector is augmented with these new state variables, which we may denote the augmentative states. The Kalman Filter is used to estimate the augmented state vector, which consists of both the original state variables and the augmentative state variables. But how can you model these augmentative state variables? The augmentative model must be in the form of a difference equation, because that is the model form in which state variables are defined. To set up an augmentative model you must make an assumption about the behaviour of the augmentative state. Let us look at some augmentative models.

Augmentative state is (almost) constant: The most common augmentative model is based on the assumption that the augmentative state variable x_a is slowly varying, almost constant. The corresponding differential equation is

dx_a(t)/dt = 0 (11.56)

Discretizing this differential equation with the Euler Forward method gives

x_a(k+1) = x_a(k) (11.57)

which is a difference equation ready for the Kalman Filter algorithm. It is however common to assume that the state is driven by some noise, hence the augmentative model becomes:

x_a(k+1) = x_a(k) + w_a(k) (11.58)

where w_a is white process noise with assumed auto-covariance of the form R_wa(L) = Q_a delta(L). As pointed out in Section 11.3, the variance Q_a can be used as a tuning parameter of the Kalman Filter.

Augmentative state has (almost) constant rate: The corresponding differential equation is

d^2 x_a/dt^2 = 0 (11.59)

or, in state space form, with x_a1 defined as x_a,

dx_a1/dt = x_a2 (11.60)
dx_a2/dt = 0 (11.61)

where x_a2 is another augmentative state variable. Applying Euler Forward discretization with sampling interval h [s] to (11.60)-(11.61) and including white process noise in the resulting difference equations gives

x_a1(k+1) = x_a1(k) + h x_a2(k) + w_a1(k) (11.62)
x_a2(k+1) = x_a2(k) + w_a2(k) (11.63)

Once you have defined the augmented model, you can design and implement the Kalman Filter in the usual way. The Kalman Filter then estimates both the original states and the augmentative states. The following example shows how the state augmentation can be done in a practical application. The example also shows how to use functions in LabVIEW and in MATLAB to calculate the steady-state Kalman Filter gain.

Example 23 Kalman Filter

Figure 11.3 shows a liquid tank. We will design a steady-state Kalman Filter to estimate the outflow F_out. The level h is measured. The mass balance of the liquid in the tank is (the mass is rho*A*h)

rho A_tank dh(t)/dt = rho F_in(t) - rho F_out(t) (11.64)
                    = rho K_p u - rho F_out(t) (11.65)

After cancelling the density rho, the model is

dh(t)/dt = (1/A_tank) [K_p u - F_out(t)] (11.66)

We assume that the unknown outflow is slowly changing, almost constant. We define the following augmentative model:

dF_out(t)/dt = 0 (11.67)

The model of the system is given by (11.66)-(11.67). Although it is not necessary, it is convenient to rename the state variables using standard names. So we define

x_1 = h (11.68)
x_2 = F_out (11.69)

Figure 11.3: Example 23: Liquid tank, with control signal u [V], pump gain K_p, inflow F_in [m^3/s], level h [m] (measured by a level sensor), cross-sectional area A [m^2], outflow F_out [m^3/s], and a Kalman Filter estimating F_out.

The model (11.66)-(11.67) is now

dx_1(t)/dt = (1/A_tank) [K_p u(t) - x_2(t)] (11.70)
dx_2(t)/dt = 0 (11.71)

Applying Euler Forward discretization with time step T and including white disturbance noise in the resulting difference equations yields

x_1(k+1) = x_1(k) + (T/A_tank) [K_p u(k) - x_2(k)] + w_1(k) (11.72)
x_2(k+1) = x_2(k) + w_2(k) (11.73)

where the noise-free right-hand sides define f_1(.) and f_2(.), or compactly

x(k+1) = f[x(k), u(k)] + w(k) (11.74)

w_1 and w_2 are independent (uncorrelated) white process noises with assumed variances R_w1(L) = Q_1 delta(L) and R_w2(L) = Q_2 delta(L), respectively. Here, Q_1 and Q_2 are variances. The multivariable noise model is then

w = [w_1; w_2] (11.75)

with auto-covariance

R_w(L) = Q delta(L) (11.76)

where

Q = [Q_11, 0; 0, Q_22] (11.77)

Assuming that the level x_1 is measured, we add the following measurement equation to the state space model:

y(k) = g[x_p(k), u(k)] + v(k) = x_1(k) + v(k) (11.78)

where v is white measurement noise with assumed variance

R_v(L) = R delta(L) (11.79)

where R is the measurement variance. The following numerical values are used:

Sampling time: T = 0.1 s (11.80)
A_tank = 0.1 m^2 (11.81)
K_p = 0.001 (m^3/s)/V (11.82)
Q: (initially chosen value, may be adjusted) (11.83)
R = 0.001 m^2 (Gaussian white noise) (11.84)

We will set the initial estimates as follows:

x_1p(0) = x_1(0) = y(0) (from the sensor) (11.85)
x_2p(0) = 0 (assuming no information about the initial value) (11.86)

The Kalman Filter algorithm is as follows: The predicted level measurement is calculated according to (11.35):

y_p(k) = g[x_p(k), u(k)] = x_1p(k) (11.87)

with initial value as given by (11.85). The innovation variable is calculated according to (11.36), where y is the level measurement:

e(k) = y(k) - y_p(k) (11.88)

The corrected state estimate is calculated according to (11.37):

x_c(k) = x_p(k) + Ke(k) (11.89)

or, in detail:

[x_1c(k); x_2c(k)] = [x_1p(k); x_2p(k)] + [K_11; K_21] e(k) (11.90)

where [K_11; K_21] is the Kalman Filter gain K. This is the applied estimate!

The predicted state estimate for the next time step, x_p(k+1), is calculated according to (11.38):

x_p(k+1) = f[x_c(k), u(k)] (11.91)

or, in detail:

[x_1p(k+1); x_2p(k+1)] = [f_1; f_2] = [x_1c(k) + (T/A_tank)[K_p u(k) - x_2c(k)]; x_2c(k)] (11.92)

To calculate the steady-state Kalman Filter gain K_s the following information is needed, cf. Figure 11.2:

A = df(.)/dx evaluated at x_p(k), u(k)
  = [df_1/dx_1, df_1/dx_2; df_2/dx_1, df_2/dx_2]
  = [1, -T/A_tank; 0, 1] (11.93)-(11.95)

G = I_2 (identity matrix) (11.96)

C = dg(.)/dx evaluated at x_p(k), u(k) = [1, 0] (11.97)-(11.98)

Q = (initially chosen value, may be adjusted) (11.99)
R = 0.001 m^2 (11.100)

Figure 11.4: Example 23: Front panel of a LabVIEW simulator of this example.

Figure 11.4 shows the front panel of a LabVIEW simulator of this example. The outflow F_out = x_2 was changed during the simulation, and the Kalman Filter estimates the correct steady-state value (the estimate is however noisy). During the simulations, I found that (11.99) gave a too noisy estimate of x_2 = F_out. I ended up with the adjusted value

Q = (11.101)

as a proper value. Figure 11.5 shows how the steady-state Kalman Filter gain K_s is calculated using the Kalman Gain function. The figure also shows how to check for observability with the Observability Matrix function. Figure 11.6 shows the implementation of the Kalman Filter equations in a Formula Node.

Figure 11.5: Example 23: Calculation of the steady-state Kalman Filter gain with the Kalman Gain function, and checking for observability with the Observability Matrix function.

Below is MATLAB code that calculates the steady-state Kalman Filter gain. It is calculated with the dlqe4 function (in Control System Toolbox).

A = [1,-1; 0,1]
G = [1,0; 0,1]
C = [1,0]
Q = [0.01,0; 0,1e-6]
R = [0.0001];
[K,Pp,Pc,E] = dlqe(A,G,C,Q,R)

K is the steady-state Kalman Filter gain. (Pp and Pc are the steady-state estimation error auto-covariances, cf. (11.45) and (11.44). E is a vector containing the eigenvalues of the Kalman Filter, cf. (11.54).)

4 dlqe = Discrete-time Linear Quadratic Estimator.
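As a complementary check, the filter equations (11.87)-(11.92) can be simulated. The gain K_s, the noise level, and the control signal used below are assumed values chosen for illustration, not results computed in the example:

```python
import numpy as np

# Simulation sketch of the tank Kalman Filter (11.87)-(11.92). The true
# outflow steps from 0.001 to 0.002 m^3/s and is estimated from the noisy
# level measurement alone. Ks, the noise level, and u are assumptions.
T, A_tank, Kp = 0.1, 0.1, 0.001
Ks = np.array([0.4, -0.02])               # assumed steady-state gain [K11, K21]
rng = np.random.default_rng(1)

def f(x, u):                               # process function, cf. (11.92)
    return np.array([x[0] + (T / A_tank) * (Kp * u - x[1]), x[1]])

x_true = np.array([0.5, 0.001])            # true [level h, outflow F_out]
x_p = np.array([0.5, 0.0])                 # initial estimates, (11.85)-(11.86)
u = 1.5                                    # constant control signal [V]
est_log = []

for k in range(3000):
    if k == 1000:
        x_true[1] = 0.002                  # outflow disturbance step
    y = x_true[0] + 0.001 * rng.standard_normal()   # noisy level measurement
    e = y - x_p[0]                         # innovation, (11.88)
    x_c = x_p + Ks * e                     # corrected estimate, (11.90)
    est_log.append(x_c[1])                 # log the estimated outflow
    x_p = f(x_c, u)                        # prediction, (11.92)
    x_true = f(x_true, u)                  # noise-free process propagation

f_out_hat = float(np.mean(est_log[-500:]))  # settled outflow estimate
```

The estimated outflow settles at the new true value after the step, which is the point of the augmentative state.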

Figure 11.6: Example 23: Implementation of the Kalman Filter equations in a Formula Node. (The Kalman Filter gain is fetched from the While loop in the Block diagram using local variables.)

MATLAB answers with a gain K which is the same as that calculated by the LabVIEW function Kalman Gain, cf. Figure 11.5.

[End of Example 23]

11.5 State estimators for deterministic systems: Observers

Assume that the system (process) for which we are to calculate the state estimate has no process disturbances:

w(k) = 0 (11.102)

and no measurement noise signals:

v(k) = 0 (11.103)

Then the system is not a stochastic system any more. Instead we can say it is non-stochastic, or deterministic. An observer is a state estimator for a deterministic system. It is common to use the same estimator formulas in an observer as in the Kalman Filter, namely (11.35), (11.36), (11.37) and (11.38). For convenience these formulas are repeated here:

Predicted measurement estimate:
y_p(k) = g[x_p(k)] (11.104)

Innovation variable:
e(k) = y(k) - y_p(k) (11.105)

Corrected state estimate:
x_c(k) = x_p(k) + Ke(k) (11.106)

Predicted state estimate:
x_p(k+1) = f[x_c(k), u(k)] (11.107)

The estimator gain K (corresponding to the Kalman Filter gain in the stochastic case) is calculated from a specification of the eigenvalues of the error-model of the observer. The error-model is given by (11.51) with w(k) = 0 and v(k) = 0, hence:

Error-model of observers:
e_xc(k+1) = (I - KC)A e_xc(k) = A_e e_xc(k) (11.108)

where A_e = (I - KC)A is the transition matrix of the error-model. The error-model shows how the state estimation error develops as a function of time from an initial error e_xc(0). If the error-model is asymptotically stable, the

steady-state estimation errors will be zero (because (11.108) is an autonomous system). The dynamic behaviour of the estimation errors is given by the eigenvalues of the transition matrix A_e in (11.108).

But how do we calculate a proper value of the estimator gain K? We can no longer use the formulas (11.43)-(11.45). Instead, K can be calculated from the specified eigenvalues {z_1, z_2, ..., z_n} of A_e:

eig[A_e] = eig[(I - KC)A] = {z_1, z_2, ..., z_n} (11.109)

As is known from mathematics, the eigenvalues are the roots of the characteristic equation:

det[zI - (I - KC)A] = (z - z_1)(z - z_2) ... (z - z_n) = 0 (11.110)

In (11.110) all terms except K are known or specified. To actually calculate K from (11.109) you should use computer tools such as MATLAB or LabVIEW. In MATLAB you can use the function place (in Control System Toolbox). In LabVIEW you can use the function CD Place.vi (in Control Design Toolkit), and in MathScript (LabVIEW) you can use place.

But how do we specify the eigenvalues {z_i}? One possibility is to specify discrete-time Butterworth eigenvalues {z_i}. Alternatively, you can specify continuous-time eigenvalues {s_i}, which you transform to discrete-time eigenvalues using the transformation

z_i = e^(s_i h) (11.111)

where h [s] is the time step.

Example 24 Calculating the observer gain K

The following MathScript or MATLAB script calculates the estimator gain K using the place function. Note that place calculates the gain K_1 so that the eigenvalues of the matrix (A_1 - B_1 K_1) are as specified. place is used as follows:

K = place(A, B, z)

But we need to calculate K so that the eigenvalues of (I - KC)A = A - KCA are as specified. The eigenvalues of A - KCA are the same as the eigenvalues of

(A - KCA)^T = A^T - (CA)^T K^T (11.112)

Therefore we must use place as follows:

K1 = place(A', (C*A)', z);
K = K1'

Here is the script:

h = 0.05;                    %Time step
s = [-2+2j, -2-2j];          %Specified s-eigenvalues
z = exp(s*h);                %Corresponding z-eigenvalues
A = [1,0.05; 0,0.95];        %Transition matrix
B = [0; 0.05];               %Input matrix
C = [1,0];                   %Output matrix
D = [0];                     %Direct-output matrix
K1 = place(A',(C*A)',z);     %Calculating gain K1 using place
K = K1'                      %Transposing K1 to get the estimator gain K

Running the script gives the resulting gain K.

The function CD Place.vi on the Control Design Toolkit palette in LabVIEW can also be used to calculate the estimator gain K. Figure 11.7 shows the front panel and Figure 11.8 shows the block diagram of a LabVIEW program that calculates K for the same system as above. The same K is obtained.

Figure 11.7: Example 24: Front panel of the LabVIEW program to calculate the estimator gain K.

[End of Example 24]
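The same transpose trick can be sketched in Python using Ackermann's formula applied to the dual pair (A^T, (CA)^T), with the system and eigenvalues from the script above. Ackermann's formula is used here as a stand-in for place:

```python
import numpy as np

# Observer gain by Ackermann's formula on the dual pair (A^T, (CA)^T),
# so that eig[(I - KC)A] matches the specified z-plane eigenvalues.
h = 0.05
s = np.array([-2 + 2j, -2 - 2j])
z = np.exp(s * h)                        # desired discrete-time eigenvalues

A = np.array([[1.0, 0.05], [0.0, 0.95]])
C = np.array([[1.0, 0.0]])

At = A.T
Bt = (C @ A).T                           # dual "input" matrix, shape (2, 1)

# Desired characteristic polynomial phi(x) = (x - z1)(x - z2), real coefficients.
coeffs = np.real(np.poly(z))             # [1, a1, a0]
phi = At @ At + coeffs[1] * At + coeffs[2] * np.eye(2)

ctrb = np.hstack([Bt, At @ Bt])          # controllability matrix of the dual pair
Kt = np.array([[0.0, 1.0]]) @ np.linalg.inv(ctrb) @ phi
K = Kt.T                                 # estimator gain, shape (2, 1)
```

The placed eigenvalues of (I - KC)A can then be checked against the specified z-values.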

Figure 11.8: Example 24: Block diagram of the LabVIEW program to calculate the estimator gain K.

11.6 Kalman Filter or Observer?

Now you have seen two ways to calculate the estimator gain K of a state estimator:

K is the Kalman Filter gain in a Kalman Filter.
K is the estimator gain in an observer.

The formulas for calculating the state estimate are the same in both cases:

Kalman Filter: (11.35), (11.36), (11.37), (11.38).
Observer: (11.104), (11.105), (11.106), (11.107).

The observer may seem simpler than the Kalman Filter, but there is a potential danger in using the observer: The observer does not calculate an optimal state estimate in the least mean square sense, and hence the state estimate may become too noisy due to the inevitable measurement noise if you specify too fast error-model dynamics (i.e. error-model eigenvalues corresponding to very fast responses). This is because fast dynamics require a large estimator gain K, causing the measurement noise to have a large influence on the state estimates. In my opinion, the Kalman Filter gives a better approach to state estimation, because it is easier and more intuitive to tune the filter in terms of process and measurement noise variances than in terms of eigenvalues of the error-model.

Part IV Model-based control

Chapter 12 Testing a model-based control system for robustness

This part of the compendium describes several model-based controllers. A model-based controller contains the mathematical model of the process to be controlled, either explicitly (as in Feedback Linearization, cf. Chapter 15) or implicitly (as in LQ Optimal Control, cf. Chapter 16). Of course, a process model can never give a perfect description of a physical process. Hence, there are model errors. The model errors are in the form of an erroneous model structure and/or erroneous parameter values. So, the controller is based on more or less erroneous information about the process. If the model errors are large, the real control system may behave quite differently from what was specified during the design. The control system may even become unstable.

A control system that is supposed to work in real life must be sufficiently robust. How can you check if the system is robust (before implementation)? You can simulate the control system. In the simulation you include reasonable model errors. How do you include model errors in a simulator? By using different models in the control function and in the process in the simulator. This is illustrated in Figure 12.1. You may use the initial model, M_0, in the control function, while you use a changed model, M_1, for the process. You must thoroughly plan which model errors (changes) you will make, and whether the changes are additive or

multiplicative.

Figure 12.1: Testing the control system with model errors. Models M_0 and M_1 are made different on purpose: the control function uses model M_0, while the simulated process uses model M_1.

A parameter, say K, is changed additively if it is changed as

K_1 = K_0 + dK (12.1)

A multiplicative change is implemented with

K_1 = F K_0 (12.2)

where the factor F may be set to e.g. 1.2 (a 20% increase) or 0.8 (a 20% decrease).

Even if the process model is accurate, the behaviour of the controller can be largely influenced by measurement noise. Therefore, you get a more realistic picture if you also include measurement noise in the simulator. Typically the measurement noise is a random signal.1

Finally, I will mention that there are control system design methods which ensure robustness of the control system. In the design phase you specify assumed maximum model errors, together with performance specifications. The control function is typically a non-standard controller, which has to be simplified before implementation. It is however beyond the scope of this compendium to describe these design methods. More information can be found in e.g. [12].

1 Simulation tools such as LabVIEW and Simulink contain signal generators for random signals.
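A minimal sketch of such a robustness test: a P controller tuned for the nominal process gain K_0 is simulated against a process with the multiplicatively perturbed gain F*K_0, cf. (12.2). The process, controller, and numbers are illustrative assumptions, not from this compendium:

```python
# Robustness-test sketch: the controller is tuned against the nominal
# process gain K0, while the simulated process uses the perturbed gain
# F*K0 (12.2), i.e. models M0 and M1 are made different on purpose.
def closed_loop_error(F, steps=2000, dt=0.1):
    K0, tau = 2.0, 5.0        # nominal process gain and time constant
    Kc = 1.0                  # P-controller gain, tuned against K0
    K_real = F * K0           # perturbed gain used for the "real" process
    y, sp = 0.0, 1.0
    for _ in range(steps):
        u = Kc * (sp - y)                     # control function (based on M0)
        y += dt * (-y + K_real * u) / tau     # Euler step of the process (M1)
    return abs(sp - y)                        # steady-state control error

errors = {F: closed_loop_error(F) for F in (0.8, 1.0, 1.2)}
```

Sweeping F over a few values shows how sensitive the closed-loop behaviour is to the assumed parameter error.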

Chapter 13 Feedforward control

13.1 Introduction

We know from previous chapters that feedback control can bring the process output variable to, or close to, the setpoint. Feedback control is in most cases a sufficiently good control method. But improvements can be made, if required. A problem with feedback is that there is no adjustment of the control variable before the control error is different from zero, since the control variable is adjusted as a function of the control error. This problem does not exist in feedforward control, which may be used as the only control method or, more typically, as a supplement to feedback control.

In feedforward control there is a coupling from the setpoint and/or from the disturbance directly to the control variable, that is, a coupling from an input signal to the control variable. The control variable adjustment is not error-based. Instead it is based on knowledge about the process in the form of a mathematical model of the process, and knowledge about or measurements of the process disturbances.

Perfect or ideal feedforward control gives zero control error for all types of signals (e.g. a sinusoid or a ramp) in the setpoint and in the disturbance. This sounds good, but feedforward control may be difficult to implement, since it assumes or is based on a mathematical process model, and all variables of the model must at any instant of time have known values through measurements or in some other way. These requirements are never completely satisfied, and therefore in practice the control error becomes different from zero. We can however assume that the control error becomes smaller with imperfect feedforward control than without feedforward

control. If feedforward control is used, it is typically used together with feedback control. Figure 13.1 shows the structure of a control system with both feedforward and feedback control. The purpose of feedback control is to reduce the control error due to the inevitable imperfect feedforward control. Practical feedforward control can never be perfect, because of model errors and imprecise measurements.

Figure 13.1: Control system with both feedforward and feedback control. The feedforward function F_f (with setpoint part F_fsp giving u_fsp, and disturbance part F_fd giving u_fd from the disturbance v) produces u_f, which is added to the PID feedback signal u_fb to form the total control signal u applied to the process.

One interpretation of feedforward control is that it introduces an artificial connection from the disturbance to the process output variable. The purpose of this artificial connection is to counteract the natural connection (from the disturbance to the process output variable).

The feedforward function F_f, which usually consists of a sum of partial functions F_fsp and F_fd as shown in Figure 13.1, can be developed in several ways:

From a differential equations process model
From a transfer functions process model
From experimental data. This method is model-free.

Methods one and three are described in the following sections, as they are assumed to be the most useful in practical cases.

Using feedforward together with feedback does not influence the stability of the feedback loop, because the feedforward does not introduce new dynamics in the loop.

13.2 Designing feedforward control from differential equation models

The feedforward function F_f can be derived quite easily from a differential equations model of the process to be controlled. The design method is to solve for the control variable in the process model, with the setpoint substituted for the process output variable (which is the variable to be controlled). The model may be linear or non-linear.

Example 25 Feedforward control of a liquid tank

Figure 13.2 shows a liquid tank with inflow and outflow. The level h is to be controlled using feedback with a PID controller in combination with feedforward control. The principle of mass balance of the liquid in the tank yields the following process model:

rho A dh(t)/dt = rho F_in(t) - rho F_out(t) (13.1)

where h [m] is the liquid level, F_in [m^3/s] is the inflow, F_out [m^3/s] is the outflow, A [m^2] is the cross-sectional area of the tank, and rho [kg/m^3] is the liquid density. F_in is assumed to be equal to the applied control signal u. By cancelling rho in (13.1) and using F_in = u, we get the following simpler model:

A dh(t)/dt = u(t) - F_out(t) (13.2)

Now, let us derive the feedforward control function from the process model (13.2). First, we substitute the level h by its setpoint h_SP:

A dh_SP(t)/dt = u(t) - F_out(t) (13.3)

Then we solve (13.3) for the control variable u to get the feedforward control variable u_f:

u_f(t) = A dh_SP(t)/dt + F_out(t) (13.4)
       = u_fsp(t) + u_fd(t) (13.5)

Figure 13.2: Example 25: Liquid tank where the level h is controlled with feedback control only; feedforward control is not applied here.

Here, u_fsp = A dh_SP(t)/dt is the feedforward from the setpoint, and u_fd = F_out(t) is the feedforward from the disturbance. We see that calculation of the feedforward control signal u_f requires measurement or knowledge of the following three quantities: A and F_out, in addition to the setpoint time-derivative dh_SP/dt. Figure 13.3 indicates that the flow F_out is measured. If the setpoint is assumed to be constant, which is typically the case, the time derivative of h_SP is zero, and u_f(t) becomes simply

u_f(t) = F_out(t) (13.6)

That is, at any instant of time the inflow is set equal to the outflow.1

1 Not a surprising control strategy!

Two cases are simulated: level control without and with feedforward control. In both cases there is feedback control with a PI controller with

parameters K_p = 10 and T_i = 100 s. The setpoint h_SP is constant, but there are variations in the disturbance F_out (the outflow).

Figure 13.3: Example 25: Liquid tank where the level h is controlled with feedback control combined with feedforward control.

Without feedforward control (only feedback control): Figure 13.2 shows the responses in the level h due to variations in the disturbance F_out. The level varies clearly.

With feedforward control (in addition to feedback control): Figure 13.3 shows the responses in the level due to variations in the disturbance F_out. We see that the control variable compensates immediately for the variations in the disturbance, which is due to the direct control action inherent in feedforward control. The control error is in principle zero at every instant of time with feedforward control.2

2 In this simulation the control error is a little different from zero due to inevitable

numerical inaccuracies in the simulator.

Note: Without feedforward control, the control signal range of the PID controller is [0, 4] (in units of m^3/s). With feedforward, the output signal range of the PID controller was set to [-4, +4] so that the contribution u_PID from the PID controller can be negative. If u_PID cannot become negative, the total control signal, which is

u = u_PID + u_f (13.7)

may not get a small enough value to give proper control when the outflow is small. (This was confirmed by a simulation, but the responses are not shown here.)

[End of Example 25]

13.3 Designing feedforward control from experimental data

Feedforward control can be designed from experimental data as follows:

Decide a proper set of N different values of the disturbance v on which the feedforward control will be based, for example N = 6 different values of the disturbance.

For each of these N distinct disturbance values, find (experimentally or by simulation) the value of the control signal u which corresponds to zero steady-state control error. This can be obtained with PI or PID feedback control (typically feedback control is used together with feedforward control, so no extra effort is needed to run the feedback control here).

The set of N corresponding values of v and u can be represented by a table, cf. Table 13.1, or in a coordinate system, cf. Figure 13.4. For any given (measured) value of the disturbance, the feedforward control signal u_f is calculated using interpolation, for example linear interpolation as shown in Figure 13.4. In practice, this linear interpolation can be implemented using a table lookup function.3

3 Both MATLAB/Simulink and LabVIEW have functions that implement linear interpolation between tabular data.
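The simulation of Example 25 can be sketched in code. The PI settings and the process model (13.2) follow the example, while the time step, the disturbance profile, and the initial values are assumptions made for this illustration:

```python
# Simulation sketch of Example 25: tank level control (13.2) with a PI
# controller, with and without the feedforward term u_f = F_out (13.6).
def simulate(use_feedforward, steps=40_000, dt=0.001):
    A = 0.1                     # tank cross-sectional area [m^2] (assumed)
    Kp, Ti = 10.0, 100.0        # PI settings from the example
    h = h_sp = 0.5              # level and constant setpoint [m] (assumed)
    integral = 0.0
    max_err = 0.0
    for k in range(steps):
        F_out = 0.002 if k < 20_000 else 0.004   # outflow disturbance step
        e = h_sp - h
        integral += e * dt
        u = Kp * e + (Kp / Ti) * integral        # PI feedback
        if use_feedforward:
            u += F_out                           # feedforward term (13.6)
        h += dt * (u - F_out) / A                # Euler step of (13.2)
        max_err = max(max_err, abs(e))
    return max_err

err_fb = simulate(False)     # feedback only
err_ff = simulate(True)      # feedback plus feedforward
```

In this noise-free sketch the feedforward term cancels the disturbance exactly, so the error stays at zero; with feedback only, a transient error of roughly dF_out/K_p appears after each disturbance change.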

u    | v
-----|-----
u_1  | v_1
u_2  | v_2
u_3  | v_3
u_4  | v_4
u_5  | v_5
u_6  | v_6

Table 13.1: N corresponding values of v and u

Figure 13.4: Calculation of the feedforward control signal from a known disturbance value. (The control signal values u_1, ..., u_6 are plotted against the disturbance values v_1, ..., v_6, and u_f is found by linear interpolation between the tabulated points.)

Note: This feedforward design method is based on steady-state data. Therefore, the feedforward control will not be ideal or perfect. However, it is easy to implement, and it may give substantially better control compared to feedback control alone.

Example 26 Temperature control with feedforward from flow

Figure 13.5 shows a lab process consisting of a heated air tube where the air temperature will be controlled. The control signal adjusts the power to the heater. The air temperature is measured by the primary Pt100 element. The feedback PID control is based on this temperature measurement. (The control system is implemented with a LabVIEW program running on a laptop PC.)

Figure 13.5: Example 26: A lab process consisting of a heated air tube where the air temperature will be controlled. (The rig comprises an air pipe with an electrical heater and a fan, two temperature sensors with a Pt100/milliampere transducer, a pulse width modulator (PWM) driving the heater via an AC/DC converter, fan speed adjustment with fan indication, and an NI USB-6008 device for analog I/O — 3 voltage AIs for Temp 1, Temp 2 and fan indication, and 1 voltage AO for heating — connected via USB to a PC with LabVIEW.)

Variations of the air flow act as disturbances to the process. The feedback controller tries to compensate for such variations using the temperature measurement. Can we obtain improved control by also basing the control signal on the measured air flow, which is here available as the fan speed indication?

First, ordinary PID control without feedforward is tried. The fan speed was changed from minimum to maximum, and then back again. The temperature setpoint was 40 %. Figure 13.6 shows the fan speed and the response in the temperature (which is represented in % with the range [0, 100 %] corresponding to [20, 70 °C]). The maximum control error was 1.0 %.

Will there be any improvement by using feedforward control from the fan speed (air flow)? A number of corresponding values of fan speed and control signal were found experimentally. The feedforward control signal u_f was calculated by linear interpolation with the Interpolate 1D Array function in LabVIEW, and was added to the PID control signal to make up the total control signal: u = u_PID + u_f. Figure 13.7 shows the fan speed and the response in the temperature. Also, the set of 6 corresponding

values of control signal and fan speed, on which the feedforward control signal is based, is shown. In this case the maximum control error was 0.27 %, which is a large improvement compared to using no feedforward control!

Figure 13.6: Example 26: The response in the temperature after a change in the fan speed. Only feedback control (no feedforward) is used.

Figure 13.7: Example 26: The response in the temperature after a change in the fan speed. Feedforward from fan speed (air flow) is used together with feedback control.

Chapter 14

Dead-time compensator (Smith predictor)

The dead-time compensator, also called the Smith predictor [14], is a control method particularly designed for processes with dead-time (time delay). Compared to ordinary feedback control with a PI(D) controller, dead-time compensation gives improved setpoint tracking in all cases, and it may under certain conditions give improved disturbance compensation.

It is assumed that the process to be controlled has a mathematical model on the following transfer function form:

H_p(s) = H_u(s) e^{-τs}    (14.1)

where H_u(s) is a partial transfer function without time delay, and e^{-τs} is the transfer function of the time delay.

Simply stated, with dead-time compensation the bandwidth (quickness) of the control system is independent of the dead-time, and a relatively high bandwidth can be achieved. However, the controller function is more complicated than the ordinary PID controller, since it contains a transfer function model of the process. A dead-time compensator is implemented in some controller products.

Figure 14.1 shows the structure of the control system based on dead-time compensation. In the figure, y_mp is a predicted value of y_m, therefore the name Smith predictor. y_m1p is a predicted value of the non time-delayed internal process variable y_m1. There is a feedback from the predicted or calculated value of y_m1. The PID controller is the controller for the non-delayed process H_u(s), and it is tuned for this process. The controller

tuning can be made using any standard method, e.g. Skogestad's method, which is reviewed in Appendix A. The bandwidth of this loop can be made (much) larger than the bandwidth obtained if the time delay were included in the loop. The latter corresponds to an ordinary feedback control structure.

Figure 14.1: Structure of a control system based on dead-time compensation

As long as the model predicts a correct value of y_m, the prediction error e_p is zero, and the signal in the outer feedback path is zero. But if e_p is different from zero (due to modeling errors), there will be a compensation for this error via the outer feedback.

What is the tracking transfer function T(s) of the control system? To make it simple, we will assume that there are no modeling errors. From the block diagram in Figure 14.1 the following can be found:

T(s) = y_m(s)/y_mSP(s) = [H_pid(s)H_u(s) / (1 + H_pid(s)H_u(s))] e^{-τs} = [L(s)/(1 + L(s))] e^{-τs}    (14.2)

where

L(s) = H_pid(s)H_u(s)    (14.3)

is the loop transfer function of the loop consisting of the PID controller

and the partial non time-delayed process H_u(s).

How is the setpoint tracking and the disturbance compensation performance of the control system?

Setpoint tracking is as if the feedback loop did not have a time delay, and therefore faster setpoint tracking can be achieved with a dead-time compensator than with ordinary feedback control (with a PID controller). However, the time delay of the response in the process measurement cannot be avoided with a dead-time compensator.

Disturbance compensation: In [11] the disturbance compensation of a dead-time compensating control system is investigated for a first order with time delay process. It was found that dead-time compensation gave better disturbance compensation (assuming a step in the disturbance) compared to ordinary feedback control only if the time delay (dead-time) τ is larger than the time constant T of the process.

Example 27 Dead-time compensator

Given a process with the following transfer functions (cf. Figure 14.1):

H_p(s) = [K_u/(T_u s + 1)] e^{-τs}    (14.4)

where K_u/(T_u s + 1) is the delay-free part H_u(s), and

H_v(s) = K_v/(T_v s + 1)    (14.5)

where

K_u = 1;  T_u = 0.5;  K_v = 1;  T_v = 0.5;  τ = 2    (14.6)

The following two control systems have been simulated:

Dead-time compensator for the process defined above. The internal controller H_pid(s) is a PI controller with the following parameter values:

K_p = 2.0;  T_i = 0.36    (14.7)

These PI parameters are calculated using Skogestad's method, cf. Table A.1, with τ = 0, T_C = 0.25 (= T/2) and k_1 = 1.44.

Ordinary feedback control with a PI controller for the process defined above. The PI controller H_pid(s) has the following parameter values:

K_p = 0.12;  T_i = 0.5    (14.8)

These PI parameters are calculated using Skogestad's method, cf. Table A.1, with T_C = 2 (= τ) and k_1 = 1.44.

Figure 14.2 shows the simulated responses for the two control systems due to a setpoint step and a disturbance step. The dead-time compensator gives better setpoint tracking and better disturbance compensation than ordinary feedback control does.

Figure 14.2: Example 27: Simulated responses for the two control systems due to a setpoint step and a disturbance step

[End of Example 27]

The dead-time compensator is model-based, since the controller includes a model of the process. Consequently, the stability and performance robustness of the control system depend on the accuracy of the model. Running a sequence of simulations with a varied process model (changed model parameters) in each run is one way to investigate the robustness.
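The Smith predictor structure of Figure 14.1 can be sketched as a simple Euler simulation in Python. The sketch below uses the process and PI parameters of Example 27; the discretization, the simulation length and the perfect-model assumption are my own choices, not from the book:

```python
# Process (Example 27): H_u(s) = K_u/(T_u s + 1), dead-time tau
K_u, T_u, tau = 1.0, 0.5, 2.0
# Internal PI controller tuned for the delay-free part H_u(s), cf. (14.7)
K_p, T_i = 2.0, 0.36

dt = 0.005
n_delay = int(round(tau / dt))    # dead-time as a sample buffer

r = 1.0      # setpoint step
y = 0.0      # process output (after the dead-time)
y1 = 0.0     # internal process variable y_m1 (before the dead-time)
y1p = 0.0    # model prediction y_m1p (here: perfect model)
yp = 0.0     # model prediction y_mp (after the model dead-time)
buf_proc = [0.0] * n_delay    # delay line of the real process
buf_model = [0.0] * n_delay   # delay line of the model
integral = 0.0

for _ in range(int(20.0 / dt)):
    # Smith predictor: feed back y1p plus the prediction error (y - yp).
    # With a perfect model, y - yp = 0 and the loop "sees" no dead-time.
    e = r - (y1p + (y - yp))
    integral += e * dt
    u = K_p * (e + integral / T_i)    # PI controller

    # Real process: first-order lag followed by the dead-time
    y1 += dt * (-y1 + K_u * u) / T_u
    buf_proc.append(y1)
    y = buf_proc.pop(0)

    # Internal model (same equations, i.e. no modeling error)
    y1p += dt * (-y1p + K_u * u) / T_u
    buf_model.append(y1p)
    yp = buf_model.pop(0)

# y has settled at the setpoint; the response is that of the fast inner
# loop, shifted by the dead-time tau.
```

Introducing a deliberate mismatch between the process and model parameters in this sketch is an easy way to carry out the robustness investigation mentioned above.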

Chapter 15

Feedback linearization

15.1 Introduction

Feedback linearization is a multivariable control method¹ that is based on a mathematical model of the process to be controlled. Hence, it is a model-based control method. The model is a state-space model. It is assumed that the values of all of the state variables and process disturbances are available at any instant of time, either from measurements or from an estimator (typically a Kalman Filter). Hence, this control method demands much information about the process, which may not be easy to obtain in practical applications. However, when this information is available, the control system may give faster control than PID controllers or other linear control functions. Since the control function is model-based, the performance may be ensured over a large operating range.

The control function consists of two parts:

A decoupler and linearizer, which is based on the process model and the instantaneous values of the states and the disturbances.

A multiloop PID controller, which is designed for the decoupled linear process.

¹ The method can be applied to monovariable processes, too.

15.2 Deriving the control function

The following two sections cover these cases, respectively:

All the state variables are controlled

Not all the state variables are controlled

Case 1: All state variables are controlled

It is assumed that the process model is a state space model on the following form:

ẋ = f(x, v) + B(x, v) u    (15.1)

or, simpler,

ẋ = f + Bu    (15.2)

x is the state vector, v is the disturbance vector, and u is the control vector. f is a vector of scalar functions, and B is a matrix of scalar functions. Note that the control vector u is assumed to appear linearly in the model. Assume that the output vector is

y = x    (15.3)

By taking the derivative of (15.3) and using (15.2) we obtain the following differential equation describing the process output vector:

ẏ = f + Bu    (15.4)

Assume that r_y is the reference (or setpoint) of y. With the above assumptions, we derive the control function as follows. We start by defining the transformed control vector as

z := f + Bu    (15.5)

Then (15.4) can be written as

ẏ = z    (15.6)

which represents n decoupled or independent integrators (n is the number of state variables), because y(t) = ∫₀ᵗ z dτ. The transfer function from z to y is

y(s)/z(s) = 1/s    (15.7)

We can denote (15.6) the transformed process. We will now derive the control function for this integrator process, and thereafter derive the final control function.

How can you control an integrator? With feedback and feedforward! A good choice for the feedback controller is a PI controller (proportional plus integral), because the controller should contain integral action to ensure zero steady-state control error in the presence of unmodelled disturbances (and there are such disturbances in a real system). The proportional action is necessary to get a stable control system (if a pure integral controller acts on an integrator process, the closed-loop system becomes marginally stable, i.e. purely oscillatory). The multiloop feedback PI controller is

z_fb = K_p e + K_i ∫₀ᵗ e dτ    (15.8)

where e is the control error:

e := r_y − y    (15.9)

In (15.8), K_p and K_i are diagonal matrices:

K_p = diag(K_p1, K_p2, ..., K_pn)    (15.10)

K_i = diag(K_i1, K_i2, ..., K_in)    (15.11)

where the scalar values are

K_ij = K_pj / T_ij    (15.12)

where K_pj is the proportional gain and T_ij is the integral time of control loop no. j. K_pj and T_ij can be calculated in several ways. Skogestad's method is one option; it is reviewed in Appendix A. From Table A.1 (the second row) we get, since τ = 0 and K = 1,

K_pj = 1/T_Cj    (15.13)

and

T_ij = k_j T_Cj    (15.14)

where T_Cj is the specified time constant of feedback loop no. j, and k_j is a coefficient that can be set to e.g. 4 according to the original work by Prof. Skogestad, or alternatively to 1.44 to get faster compensation of disturbance changes [2].

In addition to the PI feedback action, the controller should contain feedforward from the reference r_y to get fast reference tracking when needed (assuming the reference is varying). The feedforward control function can be derived by substituting the process output y in the process model (15.6) by r_y and then solving for z, giving

z_ff = ṙ_yf    (15.15)

where index f indicates a lowpass filter, which may be of first order. A pure time differentiation should not be implemented because of noise amplification by the differentiation. Therefore, the reference should be lowpass filtered before its time derivative is calculated.

The control function for the process (15.6), based on the sum of the feedback control function and the feedforward control function, is as follows:²

z = z_fb + z_ff    (15.16)
  = K_p e + K_i ∫₀ᵗ e dτ + ṙ_yf    (15.17)

where the first two terms constitute z_fb and the last term is z_ff.

² It is the sum because the process (the integrator) has a linear mathematical model.

Now it is time to derive the final control function, that is, the formula for the control vector u. From (15.5) we get

u = B⁻¹(z − f)    (15.18)

Here we use (15.17) to get the final control function:

u = B⁻¹ (K_p e + K_i ∫₀ᵗ e dτ + ṙ_yf − f)    (15.19)

If the reference is constant, as is the typical case in process control, the ṙ_yf term has the value zero, and it can therefore be left out of the control function.

Figure 15.1 shows a block diagram of the control system. Here are some characteristics of the control system:

Figure 15.1: Block diagram of the control system based on feedback linearization

The controller is model-based, since it contains f and B from the process model.

Since the process disturbance is an argument of f and/or B, the controller implements feedforward from the disturbance. (It also implements feedforward from the reference, due to the term ṙ_yf in the controller.)

The control system is linear even if the process is nonlinear.

The control system consists of n decoupled single-loop control systems. This is illustrated in Figure 15.2.

Example 28 Feedback Linearization applied to level control

In this example Feedback Linearization will be applied to a level control system. Figure 15.3 shows the control system. It is assumed that the outflow is proportional to the control signal u and to the square root of the pressure drop across the control valve. The process model based on mass balance is (ρ is density)

ρAḣ = ρq_in − ρK_v u √dp    (15.20)

or

ḣ = q_in/A − (K_v √dp / A) u    (15.21)

where the first term is f and the coefficient of u is B.

Figure 15.2: The control system consists of n decoupled single-loop control systems

The control function becomes, cf. (15.19),

u = B⁻¹ (K_p e + K_i ∫₀ᵗ e dτ + ṙ_yf − f)    (15.22)
  = (−K_v √dp / A)⁻¹ (K_p e + K_i ∫₀ᵗ e dτ + ṙ_yf − q_in/A)    (15.23)
  = −(A / (K_v √dp)) (K_p e + K_i ∫₀ᵗ e dτ + ṙ_yf − q_in/A)    (15.24)

This control function requires that the differential pressure dp is measured, and that the inflow q_in is measured.

[End of Example 28]
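A minimal Python sketch of this level controller follows. The numerical values of A, K_v, dp, q_in, the setpoint and the Skogestad tuning T_C are my own assumptions for illustration, not from the example:

```python
import math

# Assumed process parameters (for illustration only)
A = 1.0      # tank area [m^2]
K_v = 1.0    # valve constant
dp = 1.0     # differential pressure across the valve [N/m^2], measured
q_in = 0.5   # inflow [m^3/s], measured

# PI tuning for the transformed integrator process, cf. (15.13)-(15.14),
# with assumed T_C = 5 s and k = 1.44:
T_C = 5.0
K_p = 1.0 / T_C
K_i = K_p / (1.44 * T_C)

r = 2.0      # constant level setpoint [m], so the reference-derivative term is zero
h = 1.0      # initial level [m]
integral = 0.0
dt = 0.01

for _ in range(int(100.0 / dt)):   # simulate 100 s with Euler integration
    e = r - h                                             # control error, cf. (15.9)
    integral += e * dt
    z = K_p * e + K_i * integral                          # transformed control, cf. (15.17)
    u = -(A / (K_v * math.sqrt(dp))) * (z - q_in / A)     # control law, cf. (15.24)
    h += dt * (q_in / A - (K_v * math.sqrt(dp) / A) * u)  # tank model, cf. (15.21)

# h has converged to the setpoint r = 2.0: by construction the closed
# loop obeys dh/dt = z exactly, i.e. a PI-controlled integrator.
```

The point of the exercise is visible in the last two lines: whatever the (measured) values of q_in and dp, the level dynamics seen by the PI controller are always the same integrator.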

Figure 15.3: Example 28: Level control system. (Tank with inflow q_in [m³/s], level h [m], cross-sectional area A [m²] and outflow q_out [m³/s] through a control valve with differential pressure dp [N/m²]; a level transmitter LT and level controller LC produce the control signal u.)

Case 2: Not all state variables are controlled

In Case 1 it was assumed that all the state variables are to be controlled, i.e. there is a reference for each of the state variables. Feedback linearization can also be used in cases where not all the state variables are controlled. The typical case is positional control: the positions are one subset of the state variables, and the other subset is the velocities (rates of change of position). We will here focus on this typical case.

With position and velocity as state variables, the model of the process (e.g. a motor or a vessel) can be written on the following form, where x is position, u is the input (control variable), and y is the output:

ẍ = f(x, ẋ, v) + B(x, ẋ, v) u    (15.25)

or simply

ẍ = f + Bu    (15.26)

The output variable is the position:

y = x    (15.27)

By taking the second order time-derivative of (15.27) and using (15.26) we

obtain this differential equation describing the process output:

ÿ = f + Bu    (15.28)

Assume that r_y is the reference of y (the position). To derive the control function we define the transformed control vector as

z := f + Bu    (15.29)

(15.28) can then be written as

ÿ = z    (15.30)

which represents n decoupled (independent) double integrators. The transfer function from z to y is

y(s)/z(s) = 1/s²    (15.31)

These double integrators can be controlled with feedback with PID controllers plus feedforward:

z_fb = K_p e + K_i ∫₀ᵗ e dτ + K_d (de_f/dt)    (15.32)

where e is the control error:

e := r_y − y    (15.33)

In (15.32), e_f is the lowpass filtered control error (the derivative term should always contain a lowpass filter). K_p, K_i and K_d are diagonal matrices similar to (15.10). The scalar values on the diagonals of these matrices are

K_pj    (15.34)

K_ij = K_pj / T_ij    (15.35)

K_dj = K_pj T_dj    (15.36)

K_pj, T_ij and T_dj can be calculated with e.g. Skogestad's method, cf. Appendix A. According to Skogestad's formulas shown in Table A.1 (the bottom row, with τ = 0 and K = 1),

K_pj = 1/(4 T_Cj²)    (15.37)

and

T_ij = 4 T_Cj    (15.38)

T_dj = 4 T_Cj    (15.39)

where T_Cj is the specified time constant of feedback loop no. j. Note that Skogestad's formulas assume a serial PID function. If your controller actually implements a parallel PID controller (as in the PID controllers in the LabVIEW PID Control Toolkit and in the Matlab/Simulink PID controllers), you should transform from serial PID settings to parallel PID settings. If you do not implement these transformations, the control system may behave unnecessarily differently from the specified response. The serial-to-parallel transformations are given by (A.3)-(A.5).

In addition to the PID feedback action, the controller should contain feedforward from the reference r_y to get fast reference tracking when needed (assuming the reference is varying). The feedforward control function can be derived by substituting the process output y in the process model (15.30) by r_y and then solving for z, giving

z_ff = r̈_yf    (15.40)

where index f indicates a lowpass filter, which should be of second order. Now we have the following control function for the process (15.30), consisting of the sum of the feedback control function and the feedforward control function:

z = z_fb + z_ff    (15.41)
  = K_p e + K_i ∫₀ᵗ e dτ + K_d (de_f/dt) + r̈_yf    (15.42)

where the last term is z_ff and the remaining terms constitute z_fb. The final control function is found from (15.29):

u = B⁻¹(z − f)    (15.43)

Here we use (15.42) to get

u = B⁻¹ (K_p e + K_i ∫₀ᵗ e dτ + K_d (de_f/dt) + r̈_yf − f)    (15.44)

If the reference is constant, as is the typical case in process control, the r̈_yf term has the value zero, and it can therefore be left out of the control function.

Figure 15.4 shows a block diagram of the control system. The control system consists of n decoupled single-loop control systems. This is illustrated in Figure 15.5.

Figure 15.4: Block diagram of the control system based on feedback linearization

Example 29 Feedback Linearization applied to motion control

Given the following mathematical model of a body (e.g. a motor or a ship):

mÿ = F_c + F_d    (15.45)

where y [m] is the position, m = 10 kg is the mass, F_c [N] is the force generated by the controller, and F_d [N] is the net disturbance force (sum of damping, friction, gravitation, etc.). Assume that the position reference of y is r_y, and that the specified time constant of the control system is T_C = 1 s. Also, assume that the PID controller is a parallel PID controller.

To derive the control function we first write the process model on the standard form (15.26):

ÿ = (1/m) F_d + (1/m) F_c    (15.46)

where f = (1/m) F_d, B = 1/m, and u = F_c. The control function becomes

u = B⁻¹ (K_p e + K_i ∫₀ᵗ e dτ + K_d (de_f/dt) + r̈_yf − f)    (15.47)
  = m (K_p e + K_i ∫₀ᵗ e dτ + K_d (de_f/dt) + r̈_yf − (1/m) F_d)    (15.48)

where e is the control error:

e = r_y − y    (15.49)

Figure 15.5: The control system consists of n decoupled single-loop control systems

The PID parameters of the assumed serial PID controller become

K_ps = 1/(4 T_C²) = 1/(4 · 1²) = 0.25    (15.50)

T_is = 4 T_C = 4 · 1 = 4 s    (15.51)

T_ds = 4 T_C = 4 · 1 = 4 s    (15.52)

Finally, the PID parameters of the assumed parallel PID controller become, cf. (A.3)-(A.5),

K_pp = K_ps (1 + T_ds/T_is) = 0.25 (1 + 4 s / 4 s) = 0.5    (15.53)

T_ip = T_is (1 + T_ds/T_is) = 4 s (1 + 4 s / 4 s) = 8 s    (15.54)

T_dp = T_ds / (1 + T_ds/T_is) = 4 s / (1 + 4 s / 4 s) = 2 s    (15.55)
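The serial-to-parallel conversion used in this example is easy to check numerically. The Python sketch below restates the transformations as they are applied in (15.53)-(15.55); treat it as an illustration of the example's arithmetic, not as the book's own code:

```python
def serial_to_parallel(K_ps, T_is, T_ds):
    """Convert serial PID settings to parallel PID settings,
    as applied in (15.53)-(15.55)."""
    c = 1.0 + T_ds / T_is
    K_pp = K_ps * c    # parallel gain
    T_ip = T_is * c    # parallel integral time
    T_dp = T_ds / c    # parallel derivative time
    return K_pp, T_ip, T_dp

# Example 29: serial settings from Skogestad's method for a double
# integrator with T_C = 1 s, cf. (15.50)-(15.52):
K_pp, T_ip, T_dp = serial_to_parallel(0.25, 4.0, 4.0)
# gives K_pp = 0.5, T_ip = 8 s, T_dp = 2 s, cf. (15.53)-(15.55)
```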

[End of Example 29]

Chapter 16

LQ (Linear Quadratic) optimal control

16.1 Introduction

Optimal control of a process means that the control function is designed so that a given optimization criterion, or performance index, gets a minimal value. It is assumed that the process can be described by a linear model, and that the criterion contains a quadratic function of the state variables and the control variables.¹ This type of optimal control is therefore denoted Linear Quadratic control, or LQ control.

¹ The main reason why a quadratic criterion is used is that the control function is relatively easy to derive and easy to implement :-)

We will consider the case where the process is excited by stochastic (random), normally or Gaussian distributed process noise or disturbance. To emphasize this assumption about the process disturbance, the term Linear Quadratic Gaussian control, or LQG control, is also used. The reference (setpoint) is assumed to be zero in the basic LQ control problem, and hence the term LQ regulation, or LQR, is also used. (We will, however, consider a non-zero reference in this chapter, cf. Section 16.3.)

With LQ control the process can be controlled so that the state variable deviates from the setpoint with a smaller variance than with non-minimum variance control. This may improve the quality of the product. This is illustrated in Figure 16.1. It may also make it possible to move the reference (setpoint) closer to a more profitable limit.

LQ control can be applied to both monovariable and multivariable

processes. It turns out that the control function is based on feedback from all the states of the process. If not all the states can be measured, a Kalman Filter can be used to estimate the states, and the controller then uses the estimated states as if they were measured. This principle is denoted the certainty equivalence principle. It turns out that the control function and the state estimator can be designed independently as long as the process is linear. The principle of separate design of the controller and the estimator is denoted the separation principle.

Figure 16.1: With LQ control the process can be controlled so that the state variable deviates from the setpoint with minimal variance. (The figure compares non-minimum variance and minimum variance control of the process output y against max/min limits: smaller variance gives less control error e = y_SP − y, and allows the setpoint to be moved closer to a limit.)

LQ control is quite similar to Model-based Predictive Control (MPC), which has become an important control function during the last decade. MPC is also based on a quadratic criterion. However, MPC takes into account limitations in the control variables and the state variables, making it somewhat more useful than the LQ controller, but also much more computationally demanding to implement. MPC is described in a later chapter.

16.2 The basic LQ controller

In basic LQ control it is assumed that the process to be controlled is given by the following state space model:

x(k+1) = A x(k) + B u(k) + G_w w(k)    (16.1)

The process disturbance w is assumed to be a normally (Gaussian) distributed random ("white") signal with zero mean value and variance R_w. (w is a vector of scalar random signals.) The optimization criterion is

J = Σ_{k=0}^∞ [xᵀ(k) Q x(k) + uᵀ(k) R u(k) + 2 xᵀ(k) N u(k)]    (16.2)

It is very common that the weight matrix N is a zero matrix (of proper dimension), and therefore N is assumed to be zero in the following. Q and R are weight matrices of the states and the control signal, respectively. Q and R are selected by the user, and they are the tuning factors of the LQ controller. Q is a symmetric positive semidefinite matrix, and R is a symmetric positive definite matrix. The criterion (16.2) gives you (the user) the possibility to punish large variations in the states (by selecting a large Q) or to punish large variations in the control variable u (by selecting a large R). It is fair to say that the LQ controller is a user-friendly controller, because the tuning parameters (Q and R) are meaningful, at least in the qualitative sense.

As an example, assume that the system has two state variables, x_1 and x_2, hence

x = [x_1, x_2]ᵀ    (16.3)

and one (scalar) control variable, u, and that the weight matrices are

Q = diag(Q_11, Q_22)    (16.4)

R = R_11 (scalar)    (16.5)

The criterion J becomes

J = Σ_{k=0}^∞ [xᵀ(k) Q x(k) + uᵀ(k) R u(k)]    (16.6)
  = Σ_{k=0}^∞ { [x_1(k), x_2(k)] diag(Q_11, Q_22) [x_1(k), x_2(k)]ᵀ + u(k) R u(k) }    (16.7)
  = Σ_{k=0}^∞ { Q_11 x_1²(k) + Q_22 x_2²(k) + R_11 u²(k) }    (16.8)

Thus, J is a sum of quadratic terms of the state variables and the control variable. It can be shown that the control function that gives J a minimum value is the

Optimal (LQ) controller:  u(k) = −G(k) x(k)    (16.9)

In other words, the control signal is based on feedback from a linear combination of the state variables. The controller gain G(k) (a matrix) is

given by a so-called Riccati equation, which will not be shown here. Figure 16.2 shows a block diagram of the control system.

Figure 16.2: Control system with optimal LQ controller. (The disturbance w acts on the process; the state x is fed back through the gain −G to give the control signal u.)

In the above example the controller becomes

u(k) = −G x(k) = −[G_11, G_12] [x_1(k), x_2(k)]ᵀ = −(G_11 x_1 + G_12 x_2)    (16.10)

It is common to implement the steady-state value G_s of the controller gain:

Steady-state optimal (LQ) controller:  u(k) = −G_s x(k)    (16.11)

G_s can be calculated offline, and in advance (before the control system is started). Matlab and LabVIEW have functions to calculate G_s:

Matlab: The syntax of the Matlab function dlqr (discrete linear quadratic regulator) is

[Gs,S,E] = dlqr(A,B,Q,R,N)

where Gs is the steady-state controller gain, S is the solution of the Riccati equation mentioned (but not shown) above, and E is the vector containing the eigenvalues of the control system (see comment below). If N is actually a zero matrix, which is typically the case, it can be omitted from the argument list of dlqr.

MathScript (in LabVIEW): The name and the syntax of the MathScript function are the same as for dlqr in Matlab.

LabVIEW Control Design Toolkit palette: The function CD Linear Quadratic Regulator calculates the steady-state controller gain.
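If you work in Python instead, the steady-state gain G_s can be computed via SciPy's discrete algebraic Riccati equation solver. The sketch below uses a made-up second-order model; `scipy.linalg.solve_discrete_are` returns the steady-state Riccati solution S, from which G_s follows:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Made-up discrete-time model x(k+1) = A x(k) + B u(k)
# (a discretized double integrator, sampling time 0.1 s):
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)          # state weight matrix
R = np.array([[1.0]])  # control weight matrix

# Steady-state Riccati solution and LQ gain (same result as dlqr):
S = solve_discrete_are(A, B, Q, R)
G_s = np.linalg.solve(B.T @ S @ B + R, B.T @ S @ A)

# Closed-loop transition matrix A - B*G_s; its eigenvalues are the
# eigenvalues of the control system, all inside the unit circle:
eigs = np.linalg.eigvals(A - B @ G_s)
```

Increasing R makes the computed gain smaller, i.e. a less aggressive control signal, which is the qualitative tuning behaviour described above.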

Figure 16.3: On the Control Design Toolkit palette in LabVIEW you can use the function CD Linear Quadratic Regulator. (If N is zero, you can keep this input unwired.) The state-space model can be constructed using CD Construct State-Space Model.

To construct the state-space model you can use CD Construct State-Space Model. This is shown in Figure 16.3.

Here are comments about the LQ controller:

The reference is zero, and the disturbance has zero mean: These assumptions seem somewhat unrealistic, because in real control systems the reference is typically non-zero, and the disturbances have non-zero mean values. Therefore, for the controller to represent realistic control problems, the variables in (16.1) should actually be regarded as deviation variables about some operating point. Then, how do you bring the states to the operating point, so that the mean value of the control error is zero? By enforcing integrators into the controller. This is described in detail in Section 16.3.

Controllability: The process to be controlled has to be controllable. If it is not controllable, there exists no finite steady-state value of the gain G. Controllability means that there exists a control sequence {u_i(k)} so that any state can be reached from any initial state in finite time. It can be shown that a system is controllable if the rank of the controllability matrix

M_control = [B, AB, A²B, ..., A^{n-1}B]    (16.12)

is n (the order of A, which is the number of state variables).

Non-measured or noisy states: If not all the states are measured, or if the measurements are noisy, it is optimal to use estimated states instead of measured states in the controller:

u(k) = −G_s x_est(k)    (16.13)

where x_est is the estimated state vector from a Kalman Filter, cf. Chapter 11. Figure 16.4 shows a block diagram of the control system with a Kalman Filter.

Figure 16.4: Control system with Kalman Filter. (The disturbance w and the measurement noise v act on the process with sensor; the Kalman Filter estimates the state vector x_est from the measurement y, and the controller uses u = −G x_est.)

The eigenvalues of the control system: By combining the LQ controller (16.11) with the process model (16.1) we get the following model of the control system (the closed-loop system):

x(k+1) = A x(k) + B[−G_s x(k)] + G_w w(k)    (16.14)
       = [A − B G_s] x(k) + G_w w(k)    (16.15)

The eigenvalues {z_1, z_2, ..., z_n} of the control system are the eigenvalues of the transition matrix [A − B G_s]:

0 = det[zI − (A − B G_s)]    (16.16)
  = (z − z_1)(z − z_2) ··· (z − z_n)    (16.17)
  = zⁿ + a_{n-1} z^{n-1} + ··· + a_1 z + a_0    (16.18)

Stability of the control system: It can be shown that an LQ control system is asymptotically stable.² In other words, the eigenvalues of the (discrete-time) control system are inside the unit circle.

Tuning of the LQ controller: From the criterion (16.2) we can conclude that a larger value of the weight of one particular state variable causes the time response of that state variable to become smaller, and hence the control error (deviation from zero) becomes smaller. But what should be the initial values of the weight matrices, before they are tuned? One possibility is

Q = diag{1/x_i,max}    (16.19)

where x_i,max is the assumed maximum value of state variable x_i, and

R = diag{1/u_j,max}    (16.20)

where u_j,max is the assumed maximum value of control variable u_j.

Pole placement design of the control system: Above it was stated that the eigenvalues of the control system are the roots of the characteristic equation

0 = det[zI − (A − BG)]    (16.21)

For most systems, the poles {z_i} of the system are the same as the eigenvalues. In the following, the term poles is used instead of eigenvalues. In pole placement design of control systems the poles are specified by the user, and, assuming the controller has the linear feedback structure given by (16.11), it is usually possible to solve (16.21) for the controller gain matrix G. This can be done using Ackermann's formula [3] or by using numerical tools available in e.g. Matlab and LabVIEW. One serious drawback of pole placement design is that the user has no direct influence on the behaviour of the control signal u. In LQ control, on the other hand, the control signal behaviour is expressed explicitly in the control system specification through the weighting matrix R in the optimization criterion.

² Not a big surprise, since the controller minimizes the criterion J.

Example 30 LQ Optimal control of pendulum

This example is about stabilization of a pendulum on a cart using LQ optimal control. A reference for the system described here is the textbook Nonlinear Systems by H. K. Khalil (Pearson Education, 2000). The original reference is Linear Optimal Control Systems by H. Kwakernaak and R. Sivan (Wiley, 1972).

Mathematical model

Figure 16.5 shows the cart with the pendulum.

Figure 16.5: Cart with pendulum. (A pendulum of mass m [kg] and length 2L [m], with angle a [rad] from the vertical, is mounted on a cart of mass M [kg] at position y [m]. F [N] is the applied force, mg [N] is the weight of the pendulum, V [N] and H [N] are the vertical and horizontal pivot forces, and −dẏ [N] is the damping force on the cart.)

A motor (in the cart) acts on the cart with a force F. This force is manipulated by the controller to stabilize the pendulum in an upright position or in a downright position at a specified position of the cart. A mathematical model of the system is derived below. This model is used to design a stabilizing controller, namely an optimal controller. The mathematical model is based on the following principles:

1. Force balance (Newton's Second Law) applied to the horizontal movement of the center of gravity of the pendulum:

m (d²/dt²)(y + L sin a) = H    (16.22)

The differentiation must be carried out, but the result is not shown here.

2. Force balance applied to the vertical movement of the center of gravity of the pendulum:

m d²/dt² (L cos a) = V − mg  (16.23)

The differentiation must be carried out, but the result is not shown here.

3. Torque balance (the rotational version of Newton's Second Law) applied to the center of gravity of the pendulum:

I ä = V L sin a − H L cos a  (16.24)

4. Force balance applied to the cart:

M ÿ = F − H − d ẏ  (16.25)

In the above equations, I is the moment of inertia of the pendulum about its center of gravity. For the pendulum shown in Figure 16.5,

I = mL²/12  (16.26)

V and H are vertical and horizontal forces, respectively, in the pivot. d is a damping coefficient.

From Eqs. (16.22) − (16.25), the internal forces V and H can be eliminated, resulting in two differential equations (not containing V and H), which are not shown here. These two differential equations can be written as a state-space model:

ẋ1 = x2  (16.27)

ẋ2 = [−m²L²g cos(x3) sin(x3) + (I + mL²)(u + mL x4² sin(x3) − d x2)] / D1  (16.28)

ẋ3 = x4  (16.29)

ẋ4 = [(m + M) mgL sin(x3) − mL cos(x3) (u + mL x4² sin(x3) − d x2)] / D1  (16.30)

where

D1 = (I + mL²)(m + M) − m²L² cos²(x3)  (16.31)

In the above state-space model,

x1 = y (cart horizontal position)
x2 = ẏ (cart horizontal speed)
x3 = a (pendulum angular position)
x4 = ȧ (pendulum angular speed)

Controller

To stabilize the pendulum either vertically up or vertically down, while keeping the cart at a specified, possibly non-zero, position r, a steady-state LQ (Linear Quadratic Regulator) controller is used. The feedback control function is as follows:

u(t) = −G11 [x1(t) − r(t)] − G12 x2(t) − G13 x3(t) − G14 x4(t)  (16.32)

The controller output, u, is applied as the force F acting on the cart. Hence the force is calculated as a linear combination of the states of the system. The states are assumed to be available via measurements. (Thus, there is feedback from the measured states to the process via the controller.) The controller gains, G11, G12, G13, G14, are calculated by the LabVIEW MathScript function lqr (Linear Quadratic Regulator), which has the following syntax:

[G,X,E] = lqr(A,B,Q,R);

where G is the calculated gain. (X is the steady-state solution of the Riccati equation, and E is the eigenvalue vector of the closed-loop system.) A and B are the matrices of the linear state-space process model corresponding to the nonlinear model (16.27) − (16.30). A and B are presented below. Q is the state variable weight matrix and R is the control variable weight matrix used in the LQ(R) optimization criterion:

J = ∫₀^∞ [xᵀ(t) Q x(t) + uᵀ(t) R u(t)] dt  (16.33)

Q has the following form:

Q = diag(Q11, Q22, Q33, Q44)  (16.34)

where the Qii can be used as controller tuning parameters. R is

R = [R11]  (16.35)

and is also used as a tuning parameter.

The control system can be used to stabilize the cart and the pendulum in two different operating points, namely pendulum vertically up and vertically down. The process model is nonlinear. Therefore, the linear matrices A and B are derived for each of these operating points. The linearization of (16.27) − (16.30) can be carried out using the following assumptions:

- When the angular position x3 = a is close to zero, cos x3 is approximately equal to 1.
- When the angular position x3 = a is close to 180° = π rad, cos x3 is approximately equal to −1.
- When the angular position is close to either operating point, sin x3 is approximately equal to s·x3, where x3 now denotes the angular deviation from the operating point and s is defined in (16.38); hence sin x3 is substituted by s·x3.
- The angular speed x4 = ȧ is small, approximately zero, in both operating points.

Using these assumptions, it can be shown that the linearized model becomes

ẋ = A x + B u  (16.36)

with

A = [ 0        1                   0                  0 ]
    [ 0   −(I+mL²)d/D2       −m²L²g/D2               0 ]
    [ 0        0                   0                  1 ]
    [ 0    s·mLd/D2         s·(m+M)mgL/D2            0 ]

B = [ 0,  (I+mL²)/D2,  0,  −s·mL/D2 ]ᵀ

where

D2 = (I + mL²)(m + M) − m²L²  (16.37)

and

s = 1 for pendulum in vertical up position, s = −1 for pendulum in vertical down position  (16.38)

The total control signal is

u(t) = u_op + Δu(t)  (16.39)

where u_op is the control signal needed to keep the system at the operating point, nominally. However, u_op = 0 in both operating points. Because the parameters of the linear model are different in these two operating points, the controller gains will also be different.
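The nonlinear model (16.27)–(16.30) above is easy to simulate directly. The following is a minimal pure-Python sketch using explicit Euler integration; the parameter values are illustrative only, not taken from the text. A useful sanity check is that the hanging equilibrium (x3 = π, everything else zero, u = 0) is a fixed point of the model.

```python
import math

# Sketch of the nonlinear cart-pendulum model (16.27)-(16.30), simulated
# with explicit Euler. Parameter values are illustrative (not from the text).
m, M, L, d, g = 0.1, 1.0, 0.5, 0.1, 9.81
I = m * L**2 / 12.0  # moment of inertia of the pendulum, cf. (16.26)

def f(x, u):
    """Right-hand side of the state-space model; x = [x1, x2, x3, x4]."""
    x1, x2, x3, x4 = x
    D1 = (I + m * L**2) * (m + M) - (m * L * math.cos(x3)) ** 2  # (16.31)
    common = u + m * L * x4**2 * math.sin(x3) - d * x2
    dx2 = (-(m * L) ** 2 * g * math.cos(x3) * math.sin(x3)
           + (I + m * L**2) * common) / D1                        # (16.28)
    dx4 = ((m + M) * m * g * L * math.sin(x3)
           - m * L * math.cos(x3) * common) / D1                  # (16.30)
    return [x2, dx2, x4, dx4]

def euler_step(x, u, h):
    """One explicit Euler step of length h."""
    dx = f(x, u)
    return [xi + h * dxi for xi, dxi in zip(x, dx)]

# Starting at the hanging equilibrium (x3 = pi) with u = 0, the state stays put.
x = [0.0, 0.0, math.pi, 0.0]
for _ in range(100):
    x = euler_step(x, 0.0, 0.01)
```

For controller design, a proper simulator would use a smaller step or a higher-order integration method, but the structure is the same.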

Model for simulator and model for controller design

The model (16.22) − (16.25) is the basis of both the simulator (representing the real process) and the controller. However, to make it possible to check whether the control system is robust, two sets of model parameters are available in the front panel of the simulator:

- Model parameters M_real, m_real, L_real, d_real used in the simulator.
- Model parameters M_model, m_model, L_model, d_model used in the design of the controller.

By default, these two parameter sets have equal numerical values, but if you want to check the robustness of the controller against variations in, or inaccurate knowledge of, a certain parameter, you must set the corresponding parameters to different values, e.g. set d_model to a value different from d_real.

Figure 16.6 shows simulated responses for the control system. The position reference was changed. The cart converges to the reference value, and the pendulum is stabilized upright despite the changes of the cart position.

[End of Example 30]

16.3 LQ controller with integral action

The basic LQ controller described in Section 16.2 is based on assumptions that in many applications are unrealistic or idealized. The following is more realistic:

- The reference is non-zero.
- There are references for a number of the state variables (not for the whole state vector).
- The process disturbance has a non-zero mean value.

The controller given by (16.11) does not ensure zero steady-state control error under these assumptions. What is needed is integral action in the controller! Figure 16.7 shows a block diagram of the control system with

Figure 16.6: Simulated responses for the control system of the pendulum on the cart

integrators included in the controller. It is assumed that not all state variables are measured and that there is measurement noise; hence the system includes a Kalman Filter. The integrator block is actually a number of integrators, as many as there are reference variables. The outputs of the integrators are regarded as augmented state variables, x_int. These state variables are given by the following differential equation(s):

ẋ_int = r_x − x_r  (16.40)

which corresponds to this integral equation:

x_int(t) = ∫₀ᵗ (r_x − x_r) dτ  (16.41)

Here, r_x is the reference vector for x_r, which consists of those state variables among the process state vector x that are to track a reference. (In the above equations it is assumed that x_r is directly available from measurements. If x_r is taken from a Kalman Filter, x_r,est is used instead of x_r, of course.)

Since the LQ controller assumes that the process model is a discrete-time state space model, the continuous-time integrator model (16.40)

Figure 16.7: Optimal control system with integrators in the controller and with a Kalman Filter. e is the control error. (Signal flow in the diagram: the reference r_x and x_r,est form the error e, which is integrated to x_int; x_int is concatenated with x_est into x_tot, which is multiplied by the gain −G to give u; the process is affected by the disturbance w and the measurement noise v, and the Kalman Filter estimates x_est from u and y.)

must be converted to a corresponding discrete-time integrator model. This can be done using the Euler Forward method:

x_int(k+1) = x_int(k) + h [r_x(k) − x_r(k)]  (16.42)

where h is the time step. (16.42) is a difference equation that can be combined with the process model given by (16.1) to get the total state space model. The total state vector that is used to design the LQ controller consists of the original process state vector augmented with the integrator state vector:

x_tot = [x; x_int]  (16.43)

The control variable u is given by the control function

u(k) = −G x_tot(k) = −G [x_est(k); x_int(k)]  (16.44)

where it has been assumed that the state vector x is available as x_est from a Kalman Filter.
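One controller iteration, combining the discrete-time integrator update (16.42) with the control law (16.44), can be sketched as follows. This is a hypothetical minimal example with one controlled state and a scalar integrator; a real implementation would handle vectors of references and integrators.

```python
# Sketch of one sample of the LQ controller with integral action:
# integrator update (16.42) followed by the control law (16.44).
def controller_step(x_est, x_int, r_x, G, h):
    """x_est: estimated process states (list); x_int: integrator state (scalar);
    r_x: reference for the first state; G: gain row for the augmented state
    [x_est..., x_int]; h: sampling time. Returns (u, next integrator state)."""
    x_int_next = x_int + h * (r_x - x_est[0])   # Euler-forward integrator (16.42)
    x_tot = list(x_est) + [x_int]               # augmented state vector (16.43)
    u = -sum(g * x for g, x in zip(G, x_tot))   # u(k) = -G x_tot(k)  (16.44)
    return u, x_int_next

# One sample with invented numbers: two estimated states, reference r_x = 2.
u, x_int = controller_step([1.0, 0.0], 0.0, 2.0, [1.0, 0.5, 0.2], 0.1)
```

In a running control loop this function would be called once per sample, with x_est refreshed by the Kalman Filter each time.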

Note: When writing up the discrete-time model that is used for designing the LQ controller, you can disregard the reference r_x, i.e. you set it to zero, because it is not taken into account when calculating the controller gain G. But of course it must be included in the implemented controller, which is given by (16.44), with x_int given by (16.42).

Chapter 17 Model-based predictive control

Model-based predictive control, or MPC [9], has become an important control method, and it can be regarded as the second most important control method in industry, next to PID control. Commercial MPC products are available as separate products or as modules included in automation products. Matlab has support for MPC with the MPC Toolbox. LabVIEW has support for MPC with the Control Design Toolkit. MPC has found many applications, for example:

- Dynamic positioning (position control) of sea vessels
- Paper machine control
- Batch process control
- Control of oil/water/gas separator trains

MPC can be applied to multivariable and, depending on the MPC implementation, non-linear processes. The controller function is based on a continuous calculation of the optimal future sequence, or time series, of the control variable, u. This calculation is based on predicting the future behaviour of the process to be controlled. Naturally, a process model is used to predict this future behaviour. The optimization criterion which is to be minimized in MPC can be stated

as follows, or similarly:¹

J = Σ_{i=N_w}^{N_p} [ŷ(t_{k+i}|t_k) − r(t_{k+i}|t_k)]ᵀ Q [ŷ(t_{k+i}|t_k) − r(t_{k+i}|t_k)]  (17.1)
  + Σ_{i=1}^{N_c} [Δu(t_{k+i}|t_k)]ᵀ R [Δu(t_{k+i}|t_k)]  (17.2)
  + Σ_{i=N_w}^{N_p} [u(t_{k+i}|t_k) − s(t_{k+i}|t_k)]ᵀ N [u(t_{k+i}|t_k) − s(t_{k+i}|t_k)]  (17.3)

Here:

- k is the discrete time index.
- i is the index along the prediction horizon.
- ŷ(t_{k+i}|t_k) (a vector) is the predicted plant output at time k + i, given all measurements up to and including those at time k.
- r(t_{k+i}|t_k) (a vector) is the output setpoint profile at time k + i, given all measurements up to and including those at time k.
- [ŷ(t_{k+i}|t_k) − r(t_{k+i}|t_k)] is the control error, e. You may also say that it is the negative control error, because it is more common to define the control error as r − y. Independently of the error definition, since the error is squared, we can say that J contains a weighted quadratic error term.
- Δu(t_{k+i}|t_k) (a vector) is the predicted rate of change in control action at time k + i, given all measurements up to and including those at time k.
- u(t_{k+i}|t_k) (a vector) is the predicted optimal control action at time k + i, given all measurements up to and including those at time k.
- s(t_{k+i}|t_k) (a vector) is the input setpoint profile at time k + i, given all measurements up to and including those at time k.
- N_p (a scalar) is the prediction horizon, i.e. the number of samples in the future during which the MPC controller predicts the plant output. This is a tuning parameter. A long prediction horizon

¹ The information presented here is mainly drawn from Ch. 18, Creating and Implementing a Model Predictive Controller, in the User Manual of the LabVIEW 8.6 Control Design and Simulation Toolkit. The two figures below are copied from that manual with permission from National Instruments.

increases the predictive ability of the MPC controller. However, a long prediction horizon decreases the performance of the MPC controller by adding extra calculations to the control algorithm. In general, a short prediction horizon reduces the length of time during which the MPC controller predicts the plant outputs. Therefore, a short prediction horizon causes the MPC controller to operate more like a traditional feedback controller.

- N_w (a scalar) is the beginning of the prediction horizon. This is a tuning parameter.
- N_c (a scalar) is the control horizon, i.e. the number of samples within the prediction horizon during which the MPC controller can affect the control action. This is a tuning parameter. Because the control action cannot change after the control horizon ends, a short control horizon results in a few careful changes in control action. A long control horizon produces more aggressive changes in control action. These aggressive changes can result in oscillation and/or wasted energy.
- Q (a matrix) is the weight (or cost) matrix of the output error, or the control error. This is a tuning parameter. The larger the values of the elements in Q, the smaller the control error, typically at the cost of larger variations of the control signal.
- R (a matrix) is the weight (or cost) matrix of the rate of change in control action. This is a tuning parameter. The larger the values of the elements in R, the smaller the variations of the control signal, typically at the cost of a larger control error.
- N (a matrix) is the weight (or cost) matrix of the control action error. This is a tuning parameter. The larger the values of the elements in N, the smaller the control signal. If it does not matter what the control signal is (as long as it is sufficiently smooth, which is adjusted by the R matrix), N can be set to zero.

It is probably easier (more intuitive) to use the weight matrices instead of the prediction and control horizons as tuning parameters in the cost function.
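The receding-horizon idea behind (17.1)–(17.3) can be illustrated with a deliberately simplified toy example: a scalar plant, a constant control move over the horizon, and a brute-force search over a small candidate set instead of the constrained quadratic program a real MPC solves. Everything here (the plant, the candidate grid, the weights) is invented for illustration.

```python
# Toy receding-horizon ("MPC-style") loop for a scalar plant x(k+1) = a*x + b*u.
# At each sample, search a grid of constant control moves over the prediction
# horizon Np for the one minimizing a simplified quadratic cost, then apply
# only the first move and repeat. A real MPC solves a constrained QP instead.
def predict_cost(x, u, a, b, r, Np, Q, R):
    """Predicted cost of holding control u for Np steps from state x."""
    cost, xp = 0.0, x
    for _ in range(Np):
        xp = a * xp + b * u                      # model-based prediction
        cost += Q * (xp - r) ** 2 + R * u ** 2   # tracking + control effort
    return cost

def mpc_step(x, a, b, r, Np=10, Q=1.0, R=0.01):
    candidates = [i * 0.1 for i in range(-20, 21)]   # u constrained to [-2, 2]
    return min(candidates, key=lambda u: predict_cost(x, u, a, b, r, Np, Q, R))

# Closed loop: the state moves toward the output setpoint r = 1.
a, b, r = 0.9, 0.5, 1.0
x = 0.0
for _ in range(30):
    u = mpc_step(x, a, b, r)   # optimize over the horizon...
    x = a * x + b * u          # ...but apply only the first move
```

Note how the horizon "recedes": the optimization is redone from the new state at every sample, which is exactly the mechanism depicted in Figures 17.1 and 17.2 below.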
The optimization criterion J is minimized subject to the process model. In the LabVIEW MPC, the process model is a linear discrete-time state-space model. In other MPC implementations, other models may be used:

- Impulse response model (which can be derived from simple experiments on the process)
- Step response model (same comment as above)
- Transfer function model (which is a more general linear dynamic model form than the impulse response model and the step response model)
- Non-linear state-space model (which is the most general model form, since it may include nonlinearities and may be valid over a broad operating range)

Furthermore, the optimization is subject to constraints, i.e. specified maximum and minimum values of y, u, and Δu:

y_min ≤ y ≤ y_max  (17.4)

u_min ≤ u ≤ u_max  (17.5)

Δu_min ≤ Δu ≤ Δu_max  (17.6)

Figure 17.1 illustrates how predictive control works.

Figure 17.1: How MPC works

The MPC controller predicts the plant output up to time k + N_p. (The control action does not change after the control horizon ends.) At the next sample time, k + 1, the prediction and control horizons move forward in time, and the MPC controller predicts the plant output again. Figure 17.2 shows how the

prediction horizon moves at each sample time k. The control horizon moves forward along with the prediction horizon. Before moving forward, the controller sends the control action u(k) to the plant.

Figure 17.2: How the prediction horizon moves at each sample time k

The MPC controller needs information about the present state of the process. If not all the states are measured, the states are estimated with a Kalman Filter, cf. Chapter 11. Furthermore, disturbance measurements can be exploited by the MPC controller.

Comparison with LQ optimal control, cf. Chapter 16:

- MPC incorporates constraint specifications explicitly, which is beneficial.
- Prediction and control horizons can be adjusted, which is beneficial.
- Some implementations of MPC can handle nonlinear models directly, which is beneficial.
- The execution of the MPC is much more demanding computationally, which is a drawback. (The stationary LQ controller is very simple: just state measurements or estimates multiplied by precalculated gains.)

Today, for optimal multivariable control, MPC is used more frequently than LQ optimal control.

Appendix A Skogestad's method for PID tuning

The design principle of Skogestad's method is as follows. The control system tracking transfer function T(s), which is the transfer function from the reference or setpoint to the (filtered) process measurement, is specified as a first order transfer function with time delay:

T(s) = y_mf(s)/y_msp(s) = 1/(T_C s + 1) · e^(−τs)  (A.1)

where T_C is the time constant of the control system, which the user must specify, and τ is the process time delay, which is given by the process model (the method can, however, be used for processes without time delay, too). Figure A.1 shows the response in y_mf after a step in the setpoint y_msp for (A.1). (This response will be achieved only approximately, because Skogestad's method is based on some simplifying assumptions.)

Skogestad's tuning formulas for several processes are shown in Table A.1. Note that the process transfer function actually includes the sensor. Hence, H(s) is the transfer function from control signal to measurement signal. Unless you have reasons for a different specification, Skogestad suggests

T_C = τ  (A.2)

to be used for T_C in Table A.1. If the time delay τ is zero, you must yourself define a reasonable value of T_C.

Note that Skogestad's formulas assume a serial PID function. If your controller actually implements a parallel PID controller (as in the PID

controllers in the LabVIEW PID Control Toolkit and in the Matlab/Simulink PID controllers), you should transform from serial PID settings to parallel PID settings. If you do not implement these transformations, the control system may behave unnecessarily differently from the specified response.¹

Figure A.1: Step response of the specified tracking transfer function (A.1) in Skogestad's PID tuning method

The serial-to-parallel transformations are as follows:

K_pp = K_ps (1 + T_ds/T_is)  (A.3)

T_ip = T_is (1 + T_ds/T_is)  (A.4)

T_dp = T_ds / (1 + T_ds/T_is)  (A.5)

where K_pp, T_ip, and T_dp are parameters of the following parallel PID controller (given in transfer function form):

u(s) = [K_pp + K_pp/(T_ip s) + K_pp T_dp s] e(s)  (A.6)

¹ The transformations are important because the integral time and the derivative time are equal, cf. (15.38) and (15.39). If the integral time is substantially larger than the derivative time, e.g. four times larger, the transformations are not necessary.
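The transformations (A.3)–(A.5) are easy to implement as a small helper. This is a sketch; the names K_ps, T_is, T_ds denote the serial-form settings, e.g. obtained from Skogestad's rules in Table A.1.

```python
# Serial-to-parallel PID parameter conversion, implementing (A.3)-(A.5).
def serial_to_parallel(Kp_s, Ti_s, Td_s):
    """Convert serial PID settings (Kp_s, Ti_s, Td_s) to parallel settings."""
    factor = 1.0 + Td_s / Ti_s   # the common factor (1 + Td,s/Ti,s)
    Kp_p = Kp_s * factor         # (A.3)
    Ti_p = Ti_s * factor         # (A.4)
    Td_p = Td_s / factor         # (A.5)
    return Kp_p, Ti_p, Td_p

# Example with invented serial settings: Kp = 2, Ti = 4, Td = 1.
Kp_p, Ti_p, Td_p = serial_to_parallel(2.0, 4.0, 1.0)
```

Consistent with the footnote above, when T_ds is much smaller than T_is the factor is close to 1 and the parallel settings are nearly equal to the serial ones.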


Exam in Automatic Control II Reglerteknik II 5hp (1RT495)

Exam in Automatic Control II Reglerteknik II 5hp (1RT495) Exam in Automatic Control II Reglerteknik II 5hp (1RT495) Date: August 4, 018 Venue: Bergsbrunnagatan 15 sal Responsible teacher: Hans Rosth. Aiding material: Calculator, mathematical handbooks, textbooks

More information

Process Modelling, Identification, and Control

Process Modelling, Identification, and Control Jan Mikles Miroslav Fikar 2008 AGI-Information Management Consultants May be used for personal purporses only or by libraries associated to dandelon.com network. Process Modelling, Identification, and

More information

Representation of standard models: Transfer functions are often used to represent standard models of controllers and signal filters.

Representation of standard models: Transfer functions are often used to represent standard models of controllers and signal filters. Chapter 5 Transfer functions 5.1 Introduction Transfer functions is a model form based on the Laplace transform, cf. Chapter 4. Transfer functions are very useful in analysis and design of linear dynamic

More information

Discrete-time Controllers

Discrete-time Controllers Schweizerische Gesellschaft für Automatik Association Suisse pour l Automatique Associazione Svizzera di Controllo Automatico Swiss Society for Automatic Control Advanced Control Discrete-time Controllers

More information

Identification in closed-loop, MISO identification, practical issues of identification

Identification in closed-loop, MISO identification, practical issues of identification Identification in closed-loop, MISO identification, practical issues of identification CHEM-E7145 Advanced Process Control Methods Lecture 4 Contents Identification in practice Identification in closed-loop

More information

Chapter 3 Data Acquisition and Manipulation

Chapter 3 Data Acquisition and Manipulation 1 Chapter 3 Data Acquisition and Manipulation In this chapter we introduce z transf orm, or the discrete Laplace Transform, to solve linear recursions. Section 3.1 z-transform Given a data stream x {x

More information

Control for. Maarten Steinbuch Dept. Mechanical Engineering Control Systems Technology Group TU/e

Control for. Maarten Steinbuch Dept. Mechanical Engineering Control Systems Technology Group TU/e Control for Maarten Steinbuch Dept. Mechanical Engineering Control Systems Technology Group TU/e Motion Systems m F Introduction Timedomain tuning Frequency domain & stability Filters Feedforward Servo-oriented

More information

DESIGN USING TRANSFORMATION TECHNIQUE CLASSICAL METHOD

DESIGN USING TRANSFORMATION TECHNIQUE CLASSICAL METHOD 206 Spring Semester ELEC733 Digital Control System LECTURE 7: DESIGN USING TRANSFORMATION TECHNIQUE CLASSICAL METHOD For a unit ramp input Tz Ez ( ) 2 ( z ) D( z) G( z) Tz e( ) lim( z) z 2 ( z ) D( z)

More information

Controls Problems for Qualifying Exam - Spring 2014

Controls Problems for Qualifying Exam - Spring 2014 Controls Problems for Qualifying Exam - Spring 2014 Problem 1 Consider the system block diagram given in Figure 1. Find the overall transfer function T(s) = C(s)/R(s). Note that this transfer function

More information

FEEDBACK CONTROL SYSTEMS

FEEDBACK CONTROL SYSTEMS FEEDBAC CONTROL SYSTEMS. Control System Design. Open and Closed-Loop Control Systems 3. Why Closed-Loop Control? 4. Case Study --- Speed Control of a DC Motor 5. Steady-State Errors in Unity Feedback Control

More information

Learn2Control Laboratory

Learn2Control Laboratory Learn2Control Laboratory Version 3.2 Summer Term 2014 1 This Script is for use in the scope of the Process Control lab. It is in no way claimed to be in any scientific way complete or unique. Errors should

More information

YTÜ Mechanical Engineering Department

YTÜ Mechanical Engineering Department YTÜ Mechanical Engineering Department Lecture of Special Laboratory of Machine Theory, System Dynamics and Control Division Coupled Tank 1 Level Control with using Feedforward PI Controller Lab Date: Lab

More information

Discrete-time linear systems

Discrete-time linear systems Automatic Control Discrete-time linear systems Prof. Alberto Bemporad University of Trento Academic year 2-2 Prof. Alberto Bemporad (University of Trento) Automatic Control Academic year 2-2 / 34 Introduction

More information

Subject: Optimal Control Assignment-1 (Related to Lecture notes 1-10)

Subject: Optimal Control Assignment-1 (Related to Lecture notes 1-10) Subject: Optimal Control Assignment- (Related to Lecture notes -). Design a oil mug, shown in fig., to hold as much oil possible. The height and radius of the mug should not be more than 6cm. The mug must

More information

Control System Design

Control System Design ELEC ENG 4CL4: Control System Design Notes for Lecture #27 Wednesday, March 17, 2004 Dr. Ian C. Bruce Room: CRL-229 Phone ext.: 26984 Email: ibruce@mail.ece.mcmaster.ca Discrete Delta Domain Models The

More information

Damped Oscillators (revisited)

Damped Oscillators (revisited) Damped Oscillators (revisited) We saw that damped oscillators can be modeled using a recursive filter with two coefficients and no feedforward components: Y(k) = - a(1)*y(k-1) - a(2)*y(k-2) We derived

More information

Control Systems I. Lecture 4: Diagonalization, Modal Analysis, Intro to Feedback. Readings: Emilio Frazzoli

Control Systems I. Lecture 4: Diagonalization, Modal Analysis, Intro to Feedback. Readings: Emilio Frazzoli Control Systems I Lecture 4: Diagonalization, Modal Analysis, Intro to Feedback Readings: Emilio Frazzoli Institute for Dynamic Systems and Control D-MAVT ETH Zürich October 13, 2017 E. Frazzoli (ETH)

More information

Some solutions of the written exam of January 27th, 2014

Some solutions of the written exam of January 27th, 2014 TEORIA DEI SISTEMI Systems Theory) Prof. C. Manes, Prof. A. Germani Some solutions of the written exam of January 7th, 0 Problem. Consider a feedback control system with unit feedback gain, with the following

More information

Control System Design

Control System Design ELEC ENG 4CL4: Control System Design Notes for Lecture #36 Dr. Ian C. Bruce Room: CRL-229 Phone ext.: 26984 Email: ibruce@mail.ece.mcmaster.ca Friday, April 4, 2003 3. Cascade Control Next we turn to an

More information

Lecture 5: Linear Systems. Transfer functions. Frequency Domain Analysis. Basic Control Design.

Lecture 5: Linear Systems. Transfer functions. Frequency Domain Analysis. Basic Control Design. ISS0031 Modeling and Identification Lecture 5: Linear Systems. Transfer functions. Frequency Domain Analysis. Basic Control Design. Aleksei Tepljakov, Ph.D. September 30, 2015 Linear Dynamic Systems Definition

More information

Control Systems I. Lecture 6: Poles and Zeros. Readings: Emilio Frazzoli. Institute for Dynamic Systems and Control D-MAVT ETH Zürich

Control Systems I. Lecture 6: Poles and Zeros. Readings: Emilio Frazzoli. Institute for Dynamic Systems and Control D-MAVT ETH Zürich Control Systems I Lecture 6: Poles and Zeros Readings: Emilio Frazzoli Institute for Dynamic Systems and Control D-MAVT ETH Zürich October 27, 2017 E. Frazzoli (ETH) Lecture 6: Control Systems I 27/10/2017

More information

Control Systems Lab - SC4070 Control techniques

Control Systems Lab - SC4070 Control techniques Control Systems Lab - SC4070 Control techniques Dr. Manuel Mazo Jr. Delft Center for Systems and Control (TU Delft) m.mazo@tudelft.nl Tel.:015-2788131 TU Delft, February 16, 2015 (slides modified from

More information

EEE582 Homework Problems

EEE582 Homework Problems EEE582 Homework Problems HW. Write a state-space realization of the linearized model for the cruise control system around speeds v = 4 (Section.3, http://tsakalis.faculty.asu.edu/notes/models.pdf). Use

More information

ESC794: Special Topics: Model Predictive Control

ESC794: Special Topics: Model Predictive Control ESC794: Special Topics: Model Predictive Control Discrete-Time Systems Hanz Richter, Professor Mechanical Engineering Department Cleveland State University Discrete-Time vs. Sampled-Data Systems A continuous-time

More information

YTÜ Mechanical Engineering Department

YTÜ Mechanical Engineering Department YTÜ Mechanical Engineering Department Lecture of Special Laboratory of Machine Theory, System Dynamics and Control Division Coupled Tank 1 Level Control with using Feedforward PI Controller Lab Report

More information

Control System Design

Control System Design ELEC4410 Control System Design Lecture 19: Feedback from Estimated States and Discrete-Time Control Design Julio H. Braslavsky julio@ee.newcastle.edu.au School of Electrical Engineering and Computer Science

More information

Analysis of Discrete-Time Systems

Analysis of Discrete-Time Systems TU Berlin Discrete-Time Control Systems TU Berlin Discrete-Time Control Systems 2 Stability Definitions We define stability first with respect to changes in the initial conditions Analysis of Discrete-Time

More information

Review of Linear Time-Invariant Network Analysis

Review of Linear Time-Invariant Network Analysis D1 APPENDIX D Review of Linear Time-Invariant Network Analysis Consider a network with input x(t) and output y(t) as shown in Figure D-1. If an input x 1 (t) produces an output y 1 (t), and an input x

More information

Time Response of Systems

Time Response of Systems Chapter 0 Time Response of Systems 0. Some Standard Time Responses Let us try to get some impulse time responses just by inspection: Poles F (s) f(t) s-plane Time response p =0 s p =0,p 2 =0 s 2 t p =

More information

Frequency methods for the analysis of feedback systems. Lecture 6. Loop analysis of feedback systems. Nyquist approach to study stability

Frequency methods for the analysis of feedback systems. Lecture 6. Loop analysis of feedback systems. Nyquist approach to study stability Lecture 6. Loop analysis of feedback systems 1. Motivation 2. Graphical representation of frequency response: Bode and Nyquist curves 3. Nyquist stability theorem 4. Stability margins Frequency methods

More information

(b) A unity feedback system is characterized by the transfer function. Design a suitable compensator to meet the following specifications:

(b) A unity feedback system is characterized by the transfer function. Design a suitable compensator to meet the following specifications: 1. (a) The open loop transfer function of a unity feedback control system is given by G(S) = K/S(1+0.1S)(1+S) (i) Determine the value of K so that the resonance peak M r of the system is equal to 1.4.

More information

Lecture 11. Frequency Response in Discrete Time Control Systems

Lecture 11. Frequency Response in Discrete Time Control Systems EE42 - Discrete Time Systems Spring 28 Lecturer: Asst. Prof. M. Mert Ankarali Lecture.. Frequency Response in Discrete Time Control Systems Let s assume u[k], y[k], and G(z) represents the input, output,

More information

Übersetzungshilfe / Translation aid (English) To be returned at the end of the exam!

Übersetzungshilfe / Translation aid (English) To be returned at the end of the exam! Prüfung Regelungstechnik I (Control Systems I) Prof. Dr. Lino Guzzella 9. 8. 2 Übersetzungshilfe / Translation aid (English) To be returned at the end of the exam! Do not mark up this translation aid -

More information

10/8/2015. Control Design. Pole-placement by state-space methods. Process to be controlled. State controller

10/8/2015. Control Design. Pole-placement by state-space methods. Process to be controlled. State controller Pole-placement by state-space methods Control Design To be considered in controller design * Compensate the effect of load disturbances * Reduce the effect of measurement noise * Setpoint following (target

More information

Raktim Bhattacharya. . AERO 422: Active Controls for Aerospace Vehicles. Dynamic Response

Raktim Bhattacharya. . AERO 422: Active Controls for Aerospace Vehicles. Dynamic Response .. AERO 422: Active Controls for Aerospace Vehicles Dynamic Response Raktim Bhattacharya Laboratory For Uncertainty Quantification Aerospace Engineering, Texas A&M University. . Previous Class...........

More information

(a) Find the transfer function of the amplifier. Ans.: G(s) =

(a) Find the transfer function of the amplifier. Ans.: G(s) = 126 INTRDUCTIN T CNTR ENGINEERING 10( s 1) (a) Find the transfer function of the amplifier. Ans.: (. 02s 1)(. 001s 1) (b) Find the expected percent overshoot for a step input for the closed-loop system

More information

Theory of Linear Systems Exercises. Luigi Palopoli and Daniele Fontanelli

Theory of Linear Systems Exercises. Luigi Palopoli and Daniele Fontanelli Theory of Linear Systems Exercises Luigi Palopoli and Daniele Fontanelli Dipartimento di Ingegneria e Scienza dell Informazione Università di Trento Contents Chapter. Exercises on the Laplace Transform

More information

Exam. 135 minutes, 15 minutes reading time

Exam. 135 minutes, 15 minutes reading time Exam August 6, 208 Control Systems II (5-0590-00) Dr. Jacopo Tani Exam Exam Duration: 35 minutes, 5 minutes reading time Number of Problems: 35 Number of Points: 47 Permitted aids: 0 pages (5 sheets) A4.

More information

Trajectory planning and feedforward design for electromechanical motion systems version 2

Trajectory planning and feedforward design for electromechanical motion systems version 2 2 Trajectory planning and feedforward design for electromechanical motion systems version 2 Report nr. DCT 2003-8 Paul Lambrechts Email: P.F.Lambrechts@tue.nl April, 2003 Abstract This report considers

More information

Laboratory Exercise 1 DC servo

Laboratory Exercise 1 DC servo Laboratory Exercise DC servo Per-Olof Källén ø 0,8 POWER SAT. OVL.RESET POS.RESET Moment Reference ø 0,5 ø 0,5 ø 0,5 ø 0,65 ø 0,65 Int ø 0,8 ø 0,8 Σ k Js + d ø 0,8 s ø 0 8 Off Off ø 0,8 Ext. Int. + x0,

More information

Feedback Control of Linear SISO systems. Process Dynamics and Control

Feedback Control of Linear SISO systems. Process Dynamics and Control Feedback Control of Linear SISO systems Process Dynamics and Control 1 Open-Loop Process The study of dynamics was limited to open-loop systems Observe process behavior as a result of specific input signals

More information

Andrea Zanchettin Automatic Control AUTOMATIC CONTROL. Andrea M. Zanchettin, PhD Spring Semester, Linear systems (frequency domain)

Andrea Zanchettin Automatic Control AUTOMATIC CONTROL. Andrea M. Zanchettin, PhD Spring Semester, Linear systems (frequency domain) 1 AUTOMATIC CONTROL Andrea M. Zanchettin, PhD Spring Semester, 2018 Linear systems (frequency domain) 2 Motivations Consider an LTI system Thanks to the Lagrange s formula we can compute the motion of

More information

Models for Sampled Data Systems

Models for Sampled Data Systems Chapter 12 Models for Sampled Data Systems The Shift Operator Forward shift operator q(f[k]) f[k +1] In terms of this operator, the model given earlier becomes: q n y[k]+a n 1 q n 1 y[k]+ + a 0 y[k] =b

More information

Department of Electrical and Computer Engineering. EE461: Digital Control - Lab Manual

Department of Electrical and Computer Engineering. EE461: Digital Control - Lab Manual Department of Electrical and Computer Engineering EE461: Digital Control - Lab Manual Winter 2011 EE 461 Experiment #1 Digital Control of DC Servomotor 1 Objectives The objective of this lab is to introduce

More information

Control Systems. EC / EE / IN. For

Control Systems.   EC / EE / IN. For Control Systems For EC / EE / IN By www.thegateacademy.com Syllabus Syllabus for Control Systems Basic Control System Components; Block Diagrammatic Description, Reduction of Block Diagrams. Open Loop

More information

Models for Sampled Data Systems

Models for Sampled Data Systems Chapter 12 Models for Sampled Data Systems Motivation Up to this point in the book, we have assumed that the control systems we have studied operate in continuous time and that the control law is implemented

More information

CONTROL SYSTEMS ENGINEERING Sixth Edition International Student Version

CONTROL SYSTEMS ENGINEERING Sixth Edition International Student Version CONTROL SYSTEMS ENGINEERING Sixth Edition International Student Version Norman S. Nise California State Polytechnic University, Pomona John Wiley fir Sons, Inc. Contents PREFACE, vii 1. INTRODUCTION, 1

More information

Übersetzungshilfe / Translation aid (English) To be returned at the end of the exam!

Übersetzungshilfe / Translation aid (English) To be returned at the end of the exam! Prüfung Regelungstechnik I (Control Systems I) Prof. Dr. Lino Guzzella 3. 8. 24 Übersetzungshilfe / Translation aid (English) To be returned at the end of the exam! Do not mark up this translation aid

More information

Topic # Feedback Control Systems

Topic # Feedback Control Systems Topic #19 16.31 Feedback Control Systems Stengel Chapter 6 Question: how well do the large gain and phase margins discussed for LQR map over to DOFB using LQR and LQE (called LQG)? Fall 2010 16.30/31 19

More information

3.1 Overview 3.2 Process and control-loop interactions

3.1 Overview 3.2 Process and control-loop interactions 3. Multivariable 3.1 Overview 3.2 and control-loop interactions 3.2.1 Interaction analysis 3.2.2 Closed-loop stability 3.3 Decoupling control 3.3.1 Basic design principle 3.3.2 Complete decoupling 3.3.3

More information

Iterative Controller Tuning Using Bode s Integrals

Iterative Controller Tuning Using Bode s Integrals Iterative Controller Tuning Using Bode s Integrals A. Karimi, D. Garcia and R. Longchamp Laboratoire d automatique, École Polytechnique Fédérale de Lausanne (EPFL), 05 Lausanne, Switzerland. email: alireza.karimi@epfl.ch

More information

Goodwin, Graebe, Salgado, Prentice Hall Chapter 11. Chapter 11. Dealing with Constraints

Goodwin, Graebe, Salgado, Prentice Hall Chapter 11. Chapter 11. Dealing with Constraints Chapter 11 Dealing with Constraints Topics to be covered An ubiquitous problem in control is that all real actuators have limited authority. This implies that they are constrained in amplitude and/or rate

More information

Simulation Study on Pressure Control using Nonlinear Input/Output Linearization Method and Classical PID Approach

Simulation Study on Pressure Control using Nonlinear Input/Output Linearization Method and Classical PID Approach Simulation Study on Pressure Control using Nonlinear Input/Output Linearization Method and Classical PID Approach Ufuk Bakirdogen*, Matthias Liermann** *Institute for Fluid Power Drives and Controls (IFAS),

More information

1 x(k +1)=(Φ LH) x(k) = T 1 x 2 (k) x1 (0) 1 T x 2(0) T x 1 (0) x 2 (0) x(1) = x(2) = x(3) =

1 x(k +1)=(Φ LH) x(k) = T 1 x 2 (k) x1 (0) 1 T x 2(0) T x 1 (0) x 2 (0) x(1) = x(2) = x(3) = 567 This is often referred to as Þnite settling time or deadbeat design because the dynamics will settle in a Þnite number of sample periods. This estimator always drives the error to zero in time 2T or

More information

EL 625 Lecture 10. Pole Placement and Observer Design. ẋ = Ax (1)

EL 625 Lecture 10. Pole Placement and Observer Design. ẋ = Ax (1) EL 625 Lecture 0 EL 625 Lecture 0 Pole Placement and Observer Design Pole Placement Consider the system ẋ Ax () The solution to this system is x(t) e At x(0) (2) If the eigenvalues of A all lie in the

More information

IC6501 CONTROL SYSTEMS

IC6501 CONTROL SYSTEMS DHANALAKSHMI COLLEGE OF ENGINEERING CHENNAI DEPARTMENT OF ELECTRICAL AND ELECTRONICS ENGINEERING YEAR/SEMESTER: II/IV IC6501 CONTROL SYSTEMS UNIT I SYSTEMS AND THEIR REPRESENTATION 1. What is the mathematical

More information

Outline. Classical Control. Lecture 1

Outline. Classical Control. Lecture 1 Outline Outline Outline 1 Introduction 2 Prerequisites Block diagram for system modeling Modeling Mechanical Electrical Outline Introduction Background Basic Systems Models/Transfers functions 1 Introduction

More information

Contents lecture 5. Automatic Control III. Summary of lecture 4 (II/II) Summary of lecture 4 (I/II) u y F r. Lecture 5 H 2 and H loop shaping

Contents lecture 5. Automatic Control III. Summary of lecture 4 (II/II) Summary of lecture 4 (I/II) u y F r. Lecture 5 H 2 and H loop shaping Contents lecture 5 Automatic Control III Lecture 5 H 2 and H loop shaping Thomas Schön Division of Systems and Control Department of Information Technology Uppsala University. Email: thomas.schon@it.uu.se,

More information

DIGITAL CONTROLLER DESIGN

DIGITAL CONTROLLER DESIGN ECE4540/5540: Digital Control Systems 5 DIGITAL CONTROLLER DESIGN 5.: Direct digital design: Steady-state accuracy We have spent quite a bit of time discussing digital hybrid system analysis, and some

More information

1 Mathematics. 1.1 Determine the one-sided Laplace transform of the following signals. + 2y = σ(t) dt 2 + 3dy dt. , where A is a constant.

1 Mathematics. 1.1 Determine the one-sided Laplace transform of the following signals. + 2y = σ(t) dt 2 + 3dy dt. , where A is a constant. Mathematics. Determine the one-sided Laplace transform of the following signals. {, t < a) u(t) =, where A is a constant. A, t {, t < b) u(t) =, where A is a constant. At, t c) u(t) = e 2t for t. d) u(t)

More information

Process Modelling, Identification, and Control

Process Modelling, Identification, and Control Jan Mikles Miroslav Fikar Process Modelling, Identification, and Control With 187 Figures and 13 Tables 4u Springer Contents 1 Introduction 1 1.1 Topics in Process Control 1 1.2 An Example of Process Control

More information

Trajectory Planning, Setpoint Generation and Feedforward for Motion Systems

Trajectory Planning, Setpoint Generation and Feedforward for Motion Systems 2 Trajectory Planning, Setpoint Generation and Feedforward for Motion Systems Paul Lambrechts Digital Motion Control (4K4), 23 Faculty of Mechanical Engineering, Control Systems Technology Group /42 2

More information