Chapter 3: The H2-optimal control problem

In this chapter we present the solution of the H2-optimal control problem. We consider the control system in Figure 1.2. Introducing the partition of $G$ according to
$$\begin{bmatrix} z \\ y \end{bmatrix} = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix} \begin{bmatrix} v \\ u \end{bmatrix}, \tag{3.1}$$
the closed-loop system
$$z = F(G,K)\, v \tag{3.2}$$
has the transfer function $F(G,K)$ given by
$$F(G,K) = G_{11} + G_{12}\,(I - K G_{22})^{-1} K\, G_{21}. \tag{3.3}$$
The H2-optimal control problem consists of finding a causal controller $K$ which stabilizes the plant $G$ and which minimizes the cost function
$$J_2(K) = \| F(G,K) \|_2^2, \tag{3.4}$$
where $\| F(G,K) \|_2$ is the H2 norm, cf. Chapter 2.

The control problem is most conveniently solved in the time domain. We will assume that the plant $G$ has the state-space representation
$$\begin{aligned} \dot x(t) &= A x(t) + B_1 v(t) + B_2 u(t) \\ z(t) &= C_1 x(t) + D_{12} u(t) \\ y(t) &= C_2 x(t) + D_{21} v(t). \end{aligned} \tag{3.5}$$
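As a small numerical illustration of the closed-loop formula (3.3), $F(G,K)$ can be evaluated at a single frequency point directly from the partitioned blocks of $G$. The Python sketch below uses an arbitrarily chosen scalar example; the plant blocks and the controller are assumptions made only for the illustration.

```python
import numpy as np

def closed_loop_lft(G11, G12, G21, G22, K):
    """F(G,K) = G11 + G12 (I - K G22)^{-1} K G21 for fixed (e.g. frequency-
    response) matrices.  Shapes: G11 nz x nv, G12 nz x nu, G21 ny x nv,
    G22 ny x nu, K nu x ny."""
    nu = K.shape[0]
    return G11 + G12 @ np.linalg.solve(np.eye(nu) - K @ G22, K @ G21)

# Hypothetical scalar example, evaluated at s = 1j:
# G11 = 1/(s+1), G12 = 1/(s+2), G21 = 1, G22 = 1/(s+3), static controller K = 2.
s = 1j
G11 = np.array([[1.0 / (s + 1)]])
G12 = np.array([[1.0 / (s + 2)]])
G21 = np.array([[1.0 + 0j]])
G22 = np.array([[1.0 / (s + 3)]])
K = np.array([[2.0 + 0j]])
print(closed_loop_lft(G11, G12, G21, G22, K))
```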

Remark 3.1. In the optimal and robust control literature, the state-space representation $G(s) = C(sI - A)^{-1} B + D$ is commonly written using the compact notation
$$G(s) =: \begin{bmatrix} A & B \\ C & D \end{bmatrix}. \tag{3.6}$$
Hence the system in (3.5) can also be written as
$$G(s) =: \begin{bmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{bmatrix}, \qquad D_{11} = 0,\; D_{22} = 0. \tag{3.7}$$

In (3.5), the direct feedthrough from $v$ to $z$ has been assumed zero in order to obtain a finite H2 norm for the closed-loop system. The direct feedthrough from $u$ to $y$ has been assumed zero because physical systems always have zero gain at infinite frequency.

It is convenient to make the following assumptions on the system matrices.

Assumptions:
(A1) The pair $(A, B_2)$ is stabilizable.
(A2) $D_{12}^T D_{12}$ is invertible.
(A3) $D_{12}^T C_1 = 0$.
(A4) The pair $(C_1, A)$ has no unobservable modes on the imaginary axis.
(B1) The pair $(C_2, A)$ is detectable.
(B2) $D_{21} D_{21}^T$ is invertible.
(B3) $D_{21} B_1^T = 0$.
(B4) The pair $(A, B_1)$ has no uncontrollable modes on the imaginary axis.

The first set of assumptions (A1)-(A4) is related to the state-feedback control problem, while the second set (B1)-(B4) is related to the state-estimation problem. Assumptions (A1) and (B1) are necessary for a stabilizing controller $u = Ky$ to exist. Assumptions (A2), (A3) and (B2), (B3) can be relaxed, but they are not very restrictive, and as we will see they imply convenient simplifications in the solution. Assumptions (A4) and (B4) are required for the Riccati equations which characterize the optimal controller to have stabilizing solutions. These assumptions can be relaxed, but the solution of the optimal control problem must then be characterized in an alternative way, for example in terms of linear matrix inequalities (LMIs). (A numerical check of these assumptions is sketched below.)

It is appropriate to introduce the time-domain characterization in (2.45) of the cost (3.4), giving
$$J_2(K) = \sum_{k=1}^{m} \left\{ \int_0^\infty z(t)^T z(t)\, dt \; : \; v = e_k \delta(t) \right\}. \tag{3.8}$$
Notice, however, that the characterization (3.8) is introduced solely because the solution of the H2-optimal control problem can be derived in a convenient way using it. On the other hand, the motivation of the H2-optimal control problem is often more naturally stated in terms of the average frequency-domain characterization in (2.15) or the stochastic characterization in Chapter 2.
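The assumptions can be checked numerically before any Riccati equation is solved. The sketch below uses PBH-type rank tests with numpy only; the plant matrices are an arbitrary example chosen to exercise the tests.

```python
import numpy as np

def stabilizable(A, B, tol=1e-9):
    """PBH test: (A,B) is stabilizable iff rank [A - lam I, B] = n
    for every eigenvalue lam of A with Re(lam) >= 0."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if lam.real >= -tol:
            if np.linalg.matrix_rank(np.hstack([A - lam * np.eye(n), B]), tol) < n:
                return False
    return True

def detectable(C, A, tol=1e-9):
    """(C,A) is detectable iff (A^T, C^T) is stabilizable."""
    return stabilizable(A.T, C.T, tol)

def no_imaginary_axis_unobservable_modes(C, A, tol=1e-9):
    """PBH test at imaginary-axis eigenvalues: rank [A - lam I; C] = n."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if abs(lam.real) < tol:
            if np.linalg.matrix_rank(np.vstack([A - lam * np.eye(n), C]), tol) < n:
                return False
    return True

# Illustrative example data for (3.5)
A   = np.array([[0.0, 1.0], [-2.0, -3.0]])
B1  = np.array([[0.0, 0.0], [1.0, 0.0]])
B2  = np.array([[0.0], [1.0]])
C1  = np.array([[1.0, 0.0], [0.0, 0.0]])
D12 = np.array([[0.0], [1.0]])
C2  = np.array([[1.0, 0.0]])
D21 = np.array([[0.0, 1.0]])

print("A1:", stabilizable(A, B2))
print("A2:", np.linalg.matrix_rank(D12.T @ D12) == D12.shape[1])
print("A3:", np.allclose(D12.T @ C1, 0))
print("A4:", no_imaginary_axis_unobservable_modes(C1, A))
print("B1:", detectable(C2, A))
print("B2:", np.linalg.matrix_rank(D21 @ D21.T) == D21.shape[0])
print("B3:", np.allclose(D21 @ B1.T, 0))
print("B4:", no_imaginary_axis_unobservable_modes(B1.T, A.T))  # dual test
```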

Remark 3.2. In the linear quadratic optimal control literature, the cost function is usually defined as
$$J_{LQ}(K) = \sum_{k=1}^{m} \left\{ \int_0^\infty \left[ x(t)^T Q x(t) + u(t)^T R u(t) \right] dt \; : \; v = e_k \delta(t) \right\}, \tag{3.9}$$
where $Q$ is a symmetric positive semidefinite matrix and $R$ is a symmetric positive definite matrix. It is easy to see that the cost functions (3.8) and (3.9) are equivalent, because the positive (semi)definite matrices $Q$ and $R$ can always be factorized as
$$Q = (Q^{1/2})^T Q^{1/2}, \qquad R = (R^{1/2})^T R^{1/2}. \tag{3.10}$$
The factorizations in (3.10) are called Cholesky factorizations, cf. the MATLAB routine chol. Defining the matrices
$$C_1 = \begin{bmatrix} Q^{1/2} \\ 0 \end{bmatrix}, \qquad D_{12} = \begin{bmatrix} 0 \\ R^{1/2} \end{bmatrix}, \tag{3.11}$$
it follows that
$$z^T z = (C_1 x + D_{12} u)^T (C_1 x + D_{12} u) = x^T Q x + u^T R u, \tag{3.12}$$
and the costs (3.8) and (3.9) are thus equivalent. Notice also that with $C_1$ and $D_{12}$ defined by (3.11), assumption (A3) holds.
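The construction (3.10)-(3.11) is easy to carry out numerically. A small sketch using scipy's Cholesky routine follows; the weights $Q$ and $R$ are arbitrary illustrative choices (and $Q$ is taken positive definite here so that the Cholesky factorization exists; a semidefinite $Q$ would need, e.g., an eigenvalue-based square root).

```python
import numpy as np
from scipy.linalg import cholesky

Q = np.diag([4.0, 1.0])      # state weight (illustrative)
R = np.array([[0.25]])       # input weight (illustrative)

Q12 = cholesky(Q)            # upper triangular, Q = Q12.T @ Q12
R12 = cholesky(R)            # R = R12.T @ R12

n, m = Q.shape[0], R.shape[0]
C1  = np.vstack([Q12, np.zeros((m, n))])   # C1  = [Q^(1/2); 0]
D12 = np.vstack([np.zeros((n, m)), R12])   # D12 = [0; R^(1/2)]

# z = C1 x + D12 u then satisfies z^T z = x^T Q x + u^T R u, and D12^T C1 = 0 (A3).
x = np.array([1.0, -2.0]); u = np.array([0.5])
z = C1 @ x + D12 @ u
print(np.allclose(z @ z, x @ Q @ x + u @ R @ u))   # True
print(np.allclose(D12.T @ C1, 0))                  # True
```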

The solution of the H2-optimal control problem can be carried out in two stages. In the first stage, an optimal state-feedback law is constructed. The second stage consists of finding an optimal state estimator.

3.1 The optimal state-feedback problem

The following result gives the optimal state-feedback law $\hat u(s) = K_x(s)\hat x(s)$ which minimizes the quadratic cost (3.8) or (3.9). The result is classical in optimal linear quadratic (LQ) control theory.

Theorem 3.1 (H2-optimal state-feedback control). Consider the system (3.5). Suppose that the assumptions (A1)-(A4) hold, and assume that the control signal $u(t)$ has access to the present and past values of the state, $x(\tau)$, $\tau \le t$. Then the cost (3.8) is minimized by the static state-feedback controller
$$u(t) = K_{opt}\, x(t), \tag{3.13}$$
where
$$K_{opt} = -(D_{12}^T D_{12})^{-1} B_2^T S \tag{3.14}$$
and where $S$ is the unique symmetric positive (semi)definite solution to the algebraic Riccati equation (ARE)
$$A^T S + S A - S B_2 (D_{12}^T D_{12})^{-1} B_2^T S + C_1^T C_1 = 0 \tag{3.15}$$
such that the matrix
$$A + B_2 K_{opt} \tag{3.16}$$
is stable, i.e., all its eigenvalues have negative real parts. Moreover, the minimum value of the cost (3.8) achieved by the control law (3.13) is given by
$$\min_{K_x} J_2(K_x) = \operatorname{tr}(B_1^T S B_1). \tag{3.17}$$
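Theorem 3.1 translates directly into a few lines of scipy: solve the ARE (3.15), form $K_{opt}$ from (3.14), and read off the minimum cost (3.17). A sketch with the same illustrative plant data as above:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant data for (3.5)
A   = np.array([[0.0, 1.0], [-2.0, -3.0]])
B1  = np.array([[0.0], [1.0]])
B2  = np.array([[0.0], [1.0]])
C1  = np.array([[1.0, 0.0], [0.0, 0.0]])
D12 = np.array([[0.0], [1.0]])

Ru = D12.T @ D12                     # "control weight" D12^T D12
Q  = C1.T @ C1                       # "state weight"   C1^T C1

# Stabilizing solution of A^T S + S A - S B2 Ru^{-1} B2^T S + Q = 0   (3.15)
S = solve_continuous_are(A, B2, Q, Ru)

K_opt = -np.linalg.solve(Ru, B2.T @ S)          # (3.14)
J_min = np.trace(B1.T @ S @ B1)                 # (3.17)

# A + B2 K_opt should be stable (all eigenvalues in the open left half-plane)
print(np.linalg.eigvals(A + B2 @ K_opt))
print("minimum H2 cost:", J_min)
```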

Proof: By the theory of Riccati equations it follows from assumptions (A1) and (A4) that the algebraic Riccati equation (3.15) has a positive (semi)definite solution $S$ such that the matrix in (3.16) is stable. Consider the integral in (3.8). It can be decomposed as
$$\int_0^\infty z(t)^T z(t)\, dt = \int_0^{0^+} z(t)^T z(t)\, dt + \int_{0^+}^\infty z(t)^T z(t)\, dt, \tag{3.18}$$
where $0^+$ denotes the time immediately after the impulse input at time $t = 0$. With $x(0) = 0$ and $v(t) = e_k \delta(t)$, we have formally
$$x(0^+) = \int_0^{0^+} e^{A(0^+ - \tau)} \left[ B_1 e_k \delta(\tau) + B_2 u(\tau) \right] d\tau = B_1 e_k. \tag{3.19}$$
It follows that the first integral in (3.18) is zero. Assuming that the algebraic Riccati equation (3.15) has a positive (semi)definite solution, the second integral in (3.18) can be expanded as
$$\begin{aligned}
\int_{0^+}^\infty z(t)^T z(t)\, dt &= \int_{0^+}^\infty [C_1 x(t) + D_{12} u(t)]^T [C_1 x(t) + D_{12} u(t)]\, dt \\
&= \int_{0^+}^\infty \Big[ [C_1 x(t) + D_{12} u(t)]^T [C_1 x(t) + D_{12} u(t)] + \frac{d\,(x(t)^T S x(t))}{dt} \Big] dt - x(\infty)^T S x(\infty) + x(0^+)^T S x(0^+) \\
&= \int_{0^+}^\infty [u(t) - u^0(t)]^T D_{12}^T D_{12} [u(t) - u^0(t)]\, dt + x(0^+)^T S x(0^+), \end{aligned} \tag{3.20}$$
where
$$u^0(t) = K_{opt}\, x(t), \tag{3.21}$$
and $x(0^+)$ denotes the state immediately after the impulse input at time $t = 0$. We have also assumed that $x(\infty) = 0$ due to stability, and used the fact that $S$ satisfies (3.15). Introducing (3.20) and (3.19), the cost (3.8) can be expressed as
$$\begin{aligned}
J_2(K) &= \sum_{k=1}^m \left\{ \int_0^\infty z(t)^T z(t)\, dt : v = e_k \delta(t) \right\} \\
&= \sum_{k=1}^m \left\{ \int_{0^+}^\infty [u(t) - u^0(t)]^T D_{12}^T D_{12} [u(t) - u^0(t)]\, dt + x(0^+)^T S x(0^+) : v = e_k \delta(t) \right\} \\
&= \sum_{k=1}^m \left\{ \int_{0^+}^\infty [u(t) - u^0(t)]^T D_{12}^T D_{12} [u(t) - u^0(t)]\, dt : v = e_k \delta(t) \right\} + \sum_{k=1}^m e_k^T B_1^T S B_1 e_k. \end{aligned} \tag{3.22}$$
Here
$$\sum_{k=1}^m e_k^T B_1^T S B_1 e_k = \sum_{k=1}^m \operatorname{tr}(B_1^T S B_1 e_k e_k^T) = \operatorname{tr}\Big( B_1^T S B_1 \sum_{k=1}^m e_k e_k^T \Big) = \operatorname{tr}(B_1^T S B_1). \tag{3.23}$$
Hence,
$$J_2(K) = \sum_{k=1}^m \left\{ \int_0^\infty [u(t) - u^0(t)]^T D_{12}^T D_{12} [u(t) - u^0(t)]\, dt : v = e_k \delta(t) \right\} + \operatorname{tr}(B_1^T S B_1). \tag{3.24}$$
Since the integrals in (3.24) are non-negative, and are equal to zero when $u(t) = u^0(t)$, it follows that the cost $J_2(K_x)$ is minimized by the static state feedback (3.13), or (3.21), and the minimum cost is given by (3.17). □

Problem 3.1. Verify the last step in (3.20).

Notice that the optimal state-feedback controller consists of a static state feedback, although no restrictions on the controller structure were imposed. Another point to notice about the optimal controller is that it does not depend on the disturbance $v$, since the controller is independent of the matrix $B_1$. This matrix enters only in the expression (3.17) for the minimum cost.

Remark 3.3. By equation (3.20), the control law (3.21) is optimal for all initial states $x(0^+)$. This implies that the solution of the H2 control problem can be interpreted in a worst-case setting as the controller which minimizes the worst-case cost
$$J_{2,worst}(K) := \max_{x(0)} \left\{ \| z \|_2^2 : x^T(0) x(0) \le 1 \right\} = \max_{x(0)} \left\{ \int_0^\infty z^T(t) z(t)\, dt : x^T(0) x(0) \le 1 \right\}. \tag{3.25}$$

The optimal state-feedback control result suggests how an optimal output-feedback controller $\hat u(s) = K(s)\hat y(s)$ can be obtained. In particular, the integral in the expansion (3.22) cannot be made equal to zero if the whole state vector $x(t)$ is not available to the controller. In the output-feedback problem, one should instead make this integral as small as possible by using an optimal state estimate $\hat x(t)$ instead. Therefore, we give next the solution to an H2-optimal state estimation problem.
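Before turning to the estimation problem, the minimum cost (3.17) can be cross-checked numerically (a useful companion to Problem 3.1): with $u = K_{opt} x$ the closed-loop map from $v$ to $z$ is $(A + B_2 K_{opt},\, B_1,\, C_1 + D_{12} K_{opt})$, and its squared H2 norm, computed from an observability Gramian, should reproduce $\operatorname{tr}(B_1^T S B_1)$. A sketch with the same illustrative data as above:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

A   = np.array([[0.0, 1.0], [-2.0, -3.0]])
B1  = np.array([[0.0], [1.0]]); B2 = np.array([[0.0], [1.0]])
C1  = np.array([[1.0, 0.0], [0.0, 0.0]]); D12 = np.array([[0.0], [1.0]])

Ru = D12.T @ D12
S  = solve_continuous_are(A, B2, C1.T @ C1, Ru)
K  = -np.linalg.solve(Ru, B2.T @ S)

Acl = A + B2 @ K                 # closed-loop dynamics under u = K_opt x
Ccl = C1 + D12 @ K               # closed-loop output matrix (z = Ccl x)

# Observability Gramian: Acl^T W + W Acl + Ccl^T Ccl = 0
W = solve_continuous_lyapunov(Acl.T, -Ccl.T @ Ccl)

print(np.trace(B1.T @ W @ B1))   # closed-loop H2 norm squared
print(np.trace(B1.T @ S @ B1))   # predicted minimum (3.17); the two agree
```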

3.2 The optimal state-estimation problem

In this section we consider the system
$$\begin{aligned} \dot x(t) &= A x(t) + B_1 v(t) \\ z(t) &= C_1 x(t) \\ y(t) &= C_2 x(t) + D_{21} v(t). \end{aligned} \tag{3.26}$$

Notice that in the estimation problem, we need not consider the input $u$. As $u$ is a known input, its effect on the state is exactly known. We consider stable causal state estimators $F$ such that the state estimate $\hat x(t)$ is determined from the measured output $y$ according to (in the Laplace domain) $\hat x(s) = F(s) y(s)$. In the H2-optimal estimation problem, the problem is to construct an estimator such that the quadratic H2-type cost
$$J_e(F) = \sum_{k=1}^{m} \left\{ \int_0^\infty [x(t) - \hat x(t)]^T C_1^T C_1 [x(t) - \hat x(t)]\, dt : v = e_k \delta(t) \right\} \tag{3.27}$$
is minimized. The solution of the optimal state-estimation problem is given as follows.

Theorem 3.2 (H2-optimal state estimation). Consider the system (3.26). Suppose that the assumptions (B1)-(B4) hold. The stable causal state estimator which minimizes the cost (3.27) is given by
$$\dot{\hat x}(t) = A \hat x(t) + L \left[ y(t) - C_2 \hat x(t) \right], \tag{3.28}$$
where
$$L = P C_2^T (D_{21} D_{21}^T)^{-1} \tag{3.29}$$
and where $P$ is the unique symmetric positive (semi)definite solution to the algebraic Riccati equation
$$A P + P A^T - P C_2^T (D_{21} D_{21}^T)^{-1} C_2 P + B_1 B_1^T = 0 \tag{3.30}$$
such that the matrix
$$A - L C_2 \tag{3.31}$$
is stable, i.e., all its eigenvalues have negative real parts. Moreover, the minimum value of the cost (3.27) achieved by the estimator (3.28) is given by
$$\min_{F} J_e(F) = \operatorname{tr}(C_1 P C_1^T). \tag{3.32}$$

Remark 3.4. Notice that the optimal state estimator is independent of the matrix $C_1$ in the cost (3.27). The matrix enters only in the expression (3.32) for the minimum cost.
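Numerically, Theorem 3.2 is the mirror image of Theorem 3.1: the same ARE solver gives $P$ when called with the dual data, and $L$ follows from (3.29). A sketch with illustrative data:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A   = np.array([[0.0, 1.0], [-2.0, -3.0]])
B1  = np.array([[0.0, 0.0], [1.0, 0.0]])     # process noise enters the 2nd state
C1  = np.array([[1.0, 0.0]])
C2  = np.array([[1.0, 0.0]])
D21 = np.array([[0.0, 1.0]])                 # measurement noise channel

Ry = D21 @ D21.T
Q  = B1 @ B1.T

# Stabilizing solution of A P + P A^T - P C2^T Ry^{-1} C2 P + B1 B1^T = 0   (3.30):
# this is the ARE (3.15) written for the dual data (A^T, C2^T, B1 B1^T, Ry).
P = solve_continuous_are(A.T, C2.T, Q, Ry)

L = P @ C2.T @ np.linalg.inv(Ry)             # (3.29)

print(np.linalg.eigvals(A - L @ C2))         # stable, cf. (3.31)
print("minimum estimation cost:", np.trace(C1 @ P @ C1.T))   # (3.32)
```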

Remark 3.5. The solution defined by (3.29) and (3.30) is the dual of the solution (3.14) and (3.15) of the optimal state-feedback problem, in the sense that (3.14) and (3.15) reduce to (3.29), (3.30) by making the substitutions
$$A \to A^T, \quad C_1 \to B_1^T, \quad B_2 \to C_2^T, \quad D_{12} \to D_{21}^T, \tag{3.33}$$
and the costs (3.17) and (3.32) are dual under the substitution $B_1 \to C_1^T$.

The estimator (3.28) is the celebrated Kalman filter due to Rudolf Kalman and R. S. Bucy (1960, 1961). Originally, it was introduced in a stochastic framework for recursive state estimation in stochastic systems. Notice that the cost (3.27) equals the H2 norm of the transfer function from the disturbance $v$ to the estimation error $C_1(x - \hat x)$. From what was mentioned previously in Chapter 2, it follows that if $v$ is stochastic white noise, the cost (3.27) can be characterized as
$$J_e(F) = \lim_{t_f \to \infty} E \left[ \frac{1}{t_f} \int_0^{t_f} [x(t) - \hat x(t)]^T C_1^T C_1 [x(t) - \hat x(t)]\, dt \right]. \tag{3.34}$$

Remark 3.6. In the standard formulation of the stochastic estimation problem the system is defined as
$$\begin{aligned} \dot x(t) &= A x(t) + v_1(t) \\ z(t) &= C_1 x(t) \\ y(t) &= C_2 x(t) + v_2(t), \end{aligned} \tag{3.35}$$
where $v_1(t)$ and $v_2(t)$ are mutually independent white noise processes with covariance matrices $R_1$ and $R_2$, respectively. This formulation reduces to the one above by the identifications
$$\begin{bmatrix} B_1 \\ D_{21} \end{bmatrix} \begin{bmatrix} B_1^T & D_{21}^T \end{bmatrix} = \begin{bmatrix} R_1 & 0 \\ 0 & R_2 \end{bmatrix} \tag{3.36}$$
and
$$\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} B_1 \\ D_{21} \end{bmatrix} v. \tag{3.37}$$

The optimal estimator result is more difficult to prove than the optimal state-feedback result. We shall only provide a proof based on duality between the optimal state-feedback and estimation problems.
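The identification (3.36)-(3.37) in Remark 3.6 is often how problem data arrive in practice: noise intensities $R_1$, $R_2$ rather than $B_1$, $D_{21}$. A sketch of one possible conversion, assuming $R_1$ and $R_2$ are symmetric positive definite so that Cholesky square roots exist (the intensity values are arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import cholesky

# Given noise intensities (illustrative values)
R1 = np.diag([0.5, 1.0])     # process noise intensity, n x n
R2 = np.array([[0.1]])       # measurement noise intensity, p x p

n, p = R1.shape[0], R2.shape[0]

# Choose B1 and D21 so that [B1; D21] [B1^T D21^T] = blockdiag(R1, R2)  (3.36),
# stacking a single disturbance vector v of dimension n + p, cf. (3.37).
B1  = np.hstack([cholesky(R1, lower=True), np.zeros((n, p))])
D21 = np.hstack([np.zeros((p, n)), cholesky(R2, lower=True)])

print(np.allclose(B1 @ B1.T, R1))      # True
print(np.allclose(D21 @ D21.T, R2))    # True
print(np.allclose(B1 @ D21.T, 0))      # True, so assumption (B3) holds
```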

Proof of Theorem 3.2: Consider a state estimator $\hat x(s) = F(s) y(s)$ with transfer function $F(s)$. Then we have, in terms of operators,
$$C_1 (x - \hat x) = \left[ C_1 G_1 - C_1 F G_2 \right] v, \tag{3.38}$$
where (cf. (3.26))
$$G_1(s) = (sI - A)^{-1} B_1, \qquad G_2(s) = C_2 (sI - A)^{-1} B_1 + D_{21}. \tag{3.39}$$
The cost (3.27) can then be characterized as
$$J_e(F) = \| C_1 G_1 - C_1 F G_2 \|_2^2. \tag{3.40}$$
But by the definition (2.15), the H2 norm of a transfer function matrix is equal to the H2 norm of its transpose. Hence
$$J_e(F) = \| G_1^T C_1^T - G_2^T F^T C_1^T \|_2^2. \tag{3.41}$$

Notice that
$$\begin{bmatrix} C_1 G_1(s) \\ G_2(s) \end{bmatrix} = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix} (sI - A)^{-1} B_1 + \begin{bmatrix} 0 \\ D_{21} \end{bmatrix}. \tag{3.42}$$
Hence
$$\begin{bmatrix} G_1(s)^T C_1^T & G_2(s)^T \end{bmatrix} = B_1^T (sI - A^T)^{-1} \begin{bmatrix} C_1^T & C_2^T \end{bmatrix} + \begin{bmatrix} 0 & D_{21}^T \end{bmatrix}, \tag{3.43}$$
and the transposed system
$$r = \begin{bmatrix} G_1^T C_1^T & G_2^T \end{bmatrix} \begin{bmatrix} \eta \\ \xi \end{bmatrix} \tag{3.44}$$
thus has the state-space representation
$$\begin{aligned} \dot p(t) &= A^T p(t) + C_1^T \eta(t) + C_2^T \xi(t) \\ r(t) &= B_1^T p(t) + D_{21}^T \xi(t). \end{aligned} \tag{3.45}$$
Using the control law
$$\xi = -F^T C_1^T \eta, \tag{3.46}$$
the closed-loop system (3.44), (3.46) is described by
$$r = \left( G_1^T C_1^T - G_2^T F^T C_1^T \right) \eta. \tag{3.47}$$
Hence the H2-optimal estimation problem is equal to the dual control problem which consists of finding a controller (3.46) such that the H2 norm of the closed-loop system is minimized. The dual control problem is, however, equivalent to a state-feedback control problem, because the fact that $C_1^T \eta$ is available to the controller means that the state $p(t)$ in (3.45) is also available. More specifically, the problem of finding a control law (3.46) which minimizes the H2 norm of the closed-loop transfer function, equation (3.41), is equivalent to the problem of finding a state-feedback law
$$\xi(t) = -F_p^T p(t), \tag{3.48}$$
where $p(t)$ is available through equation (3.45) and knowledge of $C_1^T \eta(t)$ and $\xi(t)$. But the problem of finding an optimal state-feedback controller for the dual system (3.45) is known, and from our previous result it is given by (3.14) and (3.15) under the substitutions (3.33). This gives the optimal stationary state feedback
$$\xi(t) = -F_{opt}^T p(t), \qquad F_{opt}^T = (D_{21} D_{21}^T)^{-1} C_2 P, \tag{3.49}$$
where $P$ is given by the algebraic Riccati equation (3.30). In order to derive the optimal state estimator, notice that combining (3.49) and (3.45), the dual state-feedback controller (3.49) corresponds to a controller $-F^T C_1^T$, cf. (3.46), with state-space representation
$$\begin{aligned} \dot p(t) &= A^T p(t) + C_1^T \eta(t) - C_2^T F_{opt}^T p(t) = (A - F_{opt} C_2)^T p(t) + C_1^T \eta(t) \\ \xi(t) &= -F_{opt}^T p(t). \end{aligned} \tag{3.50}$$
Taking the negative of the transpose of (3.50) gives the state-space representation of $(F^T C_1^T)^T = C_1 F$, and thus the optimal estimator $\hat x = F y$,
$$\begin{aligned} \dot{\hat x}(t) &= (A - F_{opt} C_2)\hat x(t) + F_{opt}\, y(t) \\ \hat z(t) &= C_1 \hat x(t), \end{aligned} \tag{3.51}$$
which is equal to (3.28). □
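The duality argument can be checked numerically: applying the state-feedback formulas (3.14)-(3.15) to the transposed data (3.33) must reproduce the filter gain (3.29). A sketch reusing the illustrative data from the estimation example:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A   = np.array([[0.0, 1.0], [-2.0, -3.0]])
B1  = np.array([[0.0, 0.0], [1.0, 0.0]])
C2  = np.array([[1.0, 0.0]])
D21 = np.array([[0.0, 1.0]])
Ry  = D21 @ D21.T

# The ARE (3.15) under the substitutions (3.33) is exactly (3.30), so its
# stabilizing solution is P.
P = solve_continuous_are(A.T, C2.T, B1 @ B1.T, Ry)

# State-feedback gain (3.14) for the dual data (B2 -> C2^T, D12 -> D21^T)
K_dual = -np.linalg.solve(Ry, C2 @ P)

# Direct filter gain (3.29)
L = P @ C2.T @ np.linalg.inv(Ry)

# Duality: the dual state-feedback gain is -F_opt^T, i.e. L = -K_dual^T
print(np.allclose(L, -K_dual.T))       # True
```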

Remark 3.7. As noted above, the H2-optimal estimator is dual to an optimal state-feedback problem. It follows that for all the properties of the latter there is a corresponding dual property of the former. In particular, it follows from Remark 3.3 that the optimal estimator can be interpreted as a worst-case estimator as follows. By Remark 3.3, the dual state-feedback problem gives the optimal controller (3.49) which minimizes the worst-case performance from $p(0) \in \mathbb{R}^n$ to $r \in L_2[0,\infty)$. By duality of the optimal estimation and optimal state-feedback problems, it follows that the optimal estimator can be interpreted in a worst-case setting as the estimator which minimizes the worst-case estimation error
$$J_{e,worst}(K) := \max_{v} \left\{ \| x(t) - \hat x(t) \|^2 : \| v \|_2 \le 1 \right\} = \max_{v} \left\{ [x(t) - \hat x(t)]^T [x(t) - \hat x(t)] : \int_{-\infty}^{t} v^T(\tau) v(\tau)\, d\tau \le 1 \right\}. \tag{3.52}$$
This observation gives a nice deterministic interpretation of the Kalman filter as the filter which minimizes the pointwise worst-case estimation error. This property can be compared with the H∞ estimator, which will be studied later.

3.3 The optimal output-feedback problem

We are now ready to present the solution to the H2-optimal control problem for the system shown in Figure 1.2. The solution will be based on the cost-function expansion in equation (3.24). Recall that (3.24) can be written as
$$J_2(K) = \sum_{k=1}^m \left\{ \int_0^\infty [u(t) - K_{opt} x(t)]^T D_{12}^T D_{12} [u(t) - K_{opt} x(t)]\, dt : v = e_k \delta(t) \right\} + \operatorname{tr}(B_1^T S B_1). \tag{3.53}$$
In contrast to the state-feedback case, when only the output $y$ is available to the controller the integrals cannot be made equal to zero. Instead, the cost (3.53) is equivalent to an H2-optimal state-estimation problem, because the best one can do is to compute an estimate $\hat x$ of the state, and to determine the control signal according to
$$u(t) = K_{opt}\, \hat x(t), \tag{3.54}$$
giving the cost
$$J_2(K) = \sum_{k=1}^m \left\{ \int_0^\infty [\hat x(t) - x(t)]^T K_{opt}^T D_{12}^T D_{12} K_{opt} [\hat x(t) - x(t)]\, dt : v = e_k \delta(t) \right\} + \operatorname{tr}(B_1^T S B_1). \tag{3.55}$$
Hence, it follows that the minimum of the quadratic cost (3.8) is achieved by the controller (3.54), where $\hat x$ is an H2-optimal state estimate for the system (3.5). We summarize the result as follows.

Theorem 3.3 (H2-optimal control). Consider the system (3.5). Suppose that the assumptions (A1)-(A4) and (B1)-(B4) hold. Then the controller $u = Ky$ which minimizes the H2 cost (3.8) is given by the equations
$$\begin{aligned} \dot{\hat x}(t) &= (A + B_2 K_{opt}) \hat x(t) + L \left[ y(t) - C_2 \hat x(t) \right] \\ u(t) &= K_{opt}\, \hat x(t), \end{aligned} \tag{3.56}$$
where
$$K_{opt} = -(D_{12}^T D_{12})^{-1} B_2^T S \tag{3.57}$$
and
$$L = P C_2^T (D_{21} D_{21}^T)^{-1}, \tag{3.58}$$
and $S$ and $P$ are the symmetric positive (semi)definite solutions of the algebraic Riccati equations (3.15) and (3.30), respectively. Moreover, the minimum value of the cost achieved by the controller (3.56) is
$$\min_{K} J_2(K) = \operatorname{tr}(D_{12} K_{opt} P K_{opt}^T D_{12}^T) + \operatorname{tr}(B_1^T S B_1). \tag{3.59}$$

Proof: First we show that the optimal controller (3.56) gives a stable closed loop. Introduce the estimation error $\tilde x(t) := x(t) - \hat x(t)$. Then the closed loop consisting of the system (3.5) and the controller (3.56) is given by
$$\begin{bmatrix} \dot x \\ \dot{\tilde x} \end{bmatrix} = \begin{bmatrix} A + B_2 K_{opt} & -B_2 K_{opt} \\ 0 & A - L C_2 \end{bmatrix} \begin{bmatrix} x \\ \tilde x \end{bmatrix} + \begin{bmatrix} B_1 \\ B_1 - L D_{21} \end{bmatrix} v. \tag{3.60}$$
Stability of the closed loop follows from stability of the matrices $A + B_2 K_{opt}$ and $A - L C_2$. The fact that the controller (3.56) minimizes the H2 cost (3.8) follows from the expansion (3.53) of Theorem 3.1 and from Theorem 3.2. □

The optimal controller (3.56) consists of an H2-optimal state estimator and an H2-optimal state feedback from the estimated state. A particular feature of the solution is that the optimal estimator and state feedback can be calculated independently of each other. This feature of H2-optimal controllers is called the separation principle.

Remark 3.8. The optimal controller (3.56) is a classical result in linear optimal control, where one usually assumes that the disturbances are stochastic white noise processes with a Gaussian distribution (cf. Remark 3.6), and the cost is defined as
$$J_{LQG}(K) = \lim_{t_f \to \infty} E \left[ \frac{1}{t_f} \int_0^{t_f} \left[ x(t)^T Q x(t) + u(t)^T R u(t) \right] dt \right], \tag{3.61}$$
cf. Remark 3.2. This problem is known as the Linear Quadratic Gaussian (LQG) control problem.
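Putting Theorems 3.1-3.3 together, the controller (3.56) is itself a linear system with state $\hat x$, input $y$ and output $u$. A sketch assembling its state-space matrices and evaluating the minimum cost (3.59) for the running illustrative example:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant (3.5)
A   = np.array([[0.0, 1.0], [-2.0, -3.0]])
B1  = np.array([[0.0, 0.0], [1.0, 0.0]])
B2  = np.array([[0.0], [1.0]])
C1  = np.array([[1.0, 0.0], [0.0, 0.0]])
D12 = np.array([[0.0], [1.0]])
C2  = np.array([[1.0, 0.0]])
D21 = np.array([[0.0, 1.0]])

Ru = D12.T @ D12
Ry = D21 @ D21.T

S = solve_continuous_are(A, B2, C1.T @ C1, Ru)        # (3.15)
P = solve_continuous_are(A.T, C2.T, B1 @ B1.T, Ry)    # (3.30)
K = -np.linalg.solve(Ru, B2.T @ S)                    # (3.57)
L = P @ C2.T @ np.linalg.inv(Ry)                      # (3.58)

# Controller (3.56): d/dt xhat = (A + B2 K - L C2) xhat + L y,  u = K xhat
Ak, Bk, Ck = A + B2 @ K - L @ C2, L, K

J_min = np.trace(D12 @ K @ P @ K.T @ D12.T) + np.trace(B1.T @ S @ B1)  # (3.59)
print("controller poles:", np.linalg.eigvals(Ak))
print("closed-loop poles:", np.linalg.eigvals(A + B2 @ K), np.linalg.eigvals(A - L @ C2))
print("minimum H2 cost:", J_min)
```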

3.4 Some comments on the application of H2-optimal control

The H2-optimal (LQG) control problem solves a well-defined optimal control problem defined by the quadratic cost in (3.4), (3.8), or (3.61). Practical applications of the method require, however, that the cost be defined in a manner which corresponds to the control objectives. This is not always very straightforward to achieve, and some observations are therefore in order. Here, we will only discuss two particular problems in H2 controller design. The first one concerns the problem of applying the approach to the case with general deterministic inputs, not necessarily impulses. The second problem is concerned with the selection of weights in the quadratic cost (3.61).

H2-optimal control against general deterministic inputs

In the deterministic interpretation of the H2 cost, equation (3.8), it is in many cases not very realistic to assume an impulse disturbance. For example, a step disturbance would often be a more realistic disturbance form. It is therefore important to generalize the control problem to other types of deterministic disturbance signals.

Assume that the system (3.5) is subject to an input $v$ with a rational Laplace transform. Such an input can be characterized in the time domain by a state-space equation
$$\dot x_v(t) = A_v x_v(t), \qquad v(t) = C_v x_v(t), \qquad x_v(0) = b, \tag{3.62}$$
or
$$\dot x_v(t) = A_v x_v(t) + b\, w(t), \qquad v(t) = C_v x_v(t), \qquad w(t) = \delta(t), \tag{3.63}$$
where the input $w$ is an impulse function. By augmenting the system (3.5) with the disturbance model (3.63), the problem has thus been taken to the standard form with an impulse input.
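The augmentation of (3.5) with the disturbance model (3.63) is a mechanical block construction; a sketch for a stable first-order disturbance model (all matrices are arbitrary illustrative choices):

```python
import numpy as np

def augment_with_disturbance_model(A, B1, B2, C1, C2, D21, Av, bv, Cv):
    """Augment the plant (3.5) with the disturbance model (3.63): the new state
    is (x, x_v), the new disturbance is the impulse input w, and v = Cv x_v."""
    n, nd = A.shape[0], Av.shape[0]
    m, q = B2.shape[1], C1.shape[0]
    Aa  = np.block([[A,                 B1 @ Cv],
                    [np.zeros((nd, n)), Av     ]])
    B1a = np.vstack([np.zeros((n, bv.shape[1])), bv])    # impulse input w
    B2a = np.vstack([B2, np.zeros((nd, m))])
    C1a = np.hstack([C1, np.zeros((q, nd))])              # D12 is unchanged
    C2a = np.hstack([C2, D21 @ Cv])
    # Note: there is no direct feedthrough from w to y, so a measurement-noise
    # channel generally has to be appended for assumption (B2) to hold.
    return Aa, B1a, B2a, C1a, C2a

# Illustrative data with a stable first-order disturbance model
A   = np.array([[0.0, 1.0], [-2.0, -3.0]])
B1  = np.array([[0.0], [1.0]]);  B2 = np.array([[0.0], [1.0]])
C1  = np.array([[1.0, 0.0], [0.0, 0.0]])
C2  = np.array([[1.0, 0.0]]);    D21 = np.array([[0.5]])
Av  = np.array([[-0.1]]); bv = np.array([[1.0]]); Cv = np.array([[1.0]])

Aa, B1a, B2a, C1a, C2a = augment_with_disturbance_model(A, B1, B2, C1, C2, D21, Av, bv, Cv)
print(Aa.shape, B1a.shape, B2a.shape)
```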

One problem with the above approach is that for some important disturbance types, such as step, ramp, and sinusoidal disturbances, the disturbance model (3.63) is unstable. As the disturbance dynamics are not affected by the controller, the augmented plant consisting of (3.5) and (3.63) will not be stabilizable. It follows that the Riccati equation (3.15) does not have a stabilizing solution in these cases. This problem can be handled in the following way, which we will here demonstrate for the case of a step disturbance. Assume that the disturbance $v$ is a step disturbance, described by
$$v(t) = 0,\; t < 0; \qquad v(t) = v_{step},\; t \ge 0, \tag{3.64}$$
where $v_{step}$ is a constant vector. The disturbance can be characterized by the model
$$\dot v(t) = v_{step}\, \delta(t). \tag{3.65}$$

Notice that this model is not stable. One approach would be to make the model stable by using $\dot v(t) = -a v(t) + v_{step} \delta(t)$ for some $a > 0$. A more direct approach is, however, to eliminate the unstabilizable mode. This is achieved by differentiating the system equations (3.5), giving
$$\begin{aligned} \ddot x(t) &= A \dot x(t) + B_1 \dot v(t) + B_2 \dot u(t) \\ \dot y(t) &= C_2 \dot x(t) + D_{21} \dot v(t). \end{aligned} \tag{3.66}$$
Notice that if the input $u$ is weighted in the cost, a steady-state offset after the step disturbance will result. Therefore, it is more realistic to weight the input variation $\dot u(t)$ instead. The controlled output $z$ is therefore redefined according to
$$z(t) = C_1 x(t) + D_{12} \dot u(t). \tag{3.67}$$
Introducing $z_x := C_1 x$, the system can be written as
$$\begin{aligned}
\frac{d}{dt} \begin{bmatrix} \dot x(t) \\ z_x(t) \\ y(t) \end{bmatrix} &= \begin{bmatrix} A & 0 & 0 \\ C_1 & 0 & 0 \\ C_2 & 0 & 0 \end{bmatrix} \begin{bmatrix} \dot x(t) \\ z_x(t) \\ y(t) \end{bmatrix} + \begin{bmatrix} B_1 \\ 0 \\ D_{21} \end{bmatrix} \dot v(t) + \begin{bmatrix} B_2 \\ 0 \\ 0 \end{bmatrix} \dot u(t) \\
z(t) &= \begin{bmatrix} 0 & I & 0 \end{bmatrix} \begin{bmatrix} \dot x(t) \\ z_x(t) \\ y(t) \end{bmatrix} + D_{12} \dot u(t) \\
y(t) &= \begin{bmatrix} 0 & 0 & I \end{bmatrix} \begin{bmatrix} \dot x(t) \\ z_x(t) \\ y(t) \end{bmatrix}.
\end{aligned} \tag{3.68}$$
This characterization is in standard form with an impulse disturbance input $\dot v$ given by (3.65). Notice, however, that assumption (B2) associated with the measurement noise does not hold. In order to satisfy this assumption as well, we can add a noise term $v_{meas}$ on the measured output,
$$y(t) = \begin{bmatrix} 0 & 0 & I \end{bmatrix} \begin{bmatrix} \dot x(t) \\ z_x(t) \\ y(t) \end{bmatrix} + v_{meas}(t). \tag{3.69}$$
As there is often measurement noise present in practical situations, this modification seems to be a realistic one.
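The construction (3.66)-(3.69) can likewise be written as a block assembly. A sketch building the augmented matrices, with $(\dot x, z_x, y)$ as the state, $(\dot v, v_{meas})$ as the disturbances, and $\dot u$ as the control input; the plant data are illustrative only.

```python
import numpy as np

def step_disturbance_form(A, B1, B2, C1, D12, C2, D21):
    """Differentiated formulation (3.66)-(3.69): state (xdot, z_x, y),
    control input udot, disturbances (vdot, v_meas)."""
    n  = A.shape[0]
    q  = C1.shape[0]          # dimension of z_x
    p  = C2.shape[0]          # dimension of y
    nv = B1.shape[1]          # dimension of v
    m  = B2.shape[1]

    Aa = np.block([[A,  np.zeros((n, q)), np.zeros((n, p))],
                   [C1, np.zeros((q, q)), np.zeros((q, p))],
                   [C2, np.zeros((p, q)), np.zeros((p, p))]])
    # disturbance d = (vdot, v_meas)
    B1a = np.block([[B1,                np.zeros((n, p))],
                    [np.zeros((q, nv)), np.zeros((q, p))],
                    [D21,               np.zeros((p, p))]])
    B2a = np.vstack([B2, np.zeros((q + p, m))])
    C1a = np.hstack([np.zeros((q, n)), np.eye(q), np.zeros((q, p))])   # z = z_x + D12 udot
    D12a = D12
    C2a = np.hstack([np.zeros((p, n)), np.zeros((p, q)), np.eye(p)])   # measured output
    D21a = np.hstack([np.zeros((p, nv)), np.eye(p)])                   # + v_meas, cf. (3.69)
    return Aa, B1a, B2a, C1a, D12a, C2a, D21a

# Illustrative data
A   = np.array([[0.0, 1.0], [-2.0, -3.0]])
B1  = np.array([[0.0], [1.0]]); B2 = np.array([[0.0], [1.0]])
C1  = np.array([[1.0, 0.0]]);   D12 = np.array([[0.1]])
C2  = np.array([[1.0, 0.0]]);   D21 = np.array([[0.5]])

Aa, B1a, B2a, C1a, D12a, C2a, D21a = step_disturbance_form(A, B1, B2, C1, D12, C2, D21)
print(np.allclose(D21a @ B1a.T, 0))   # assumption (B3) holds for the augmented plant
```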

Selection of weighting matrices in H2-optimal control

The second design topic which we will discuss is the specification of the controlled output $z$, or equivalently, the choice of weighting matrices $Q$ and $R$ in the quadratic cost (3.9) or (3.61). For convenience, we will focus the discussion in this section on the stochastic problem formulation and the associated cost (3.61).

It has been stated that the LQG problem is rather artificial, because the quadratic cost (3.61) seldom, if ever, corresponds to a physically relevant cost in practical applications. This may hold for the quadratic cost (3.61). However, it should be noted that it is often meaningful to consider the variances of the individual variables. Therefore, we introduce the individual costs
$$\begin{aligned}
J_i(K) &= \lim_{t_f \to \infty} E \left[ \frac{1}{t_f} \int_0^{t_f} x_i(t)^2\, dt \right], \qquad i = 1,\ldots,n, \\
J_{n+i}(K) &= \lim_{t_f \to \infty} E \left[ \frac{1}{t_f} \int_0^{t_f} u_i(t)^2\, dt \right], \qquad i = 1,\ldots,m.
\end{aligned} \tag{3.70}$$
Thus, the costs $J_i(K)$, $i = 1,\ldots,n$, denote the variances of the state variables, and $J_{n+i}(K)$ denote the variances of the inputs. In contrast to the quadratic cost (3.61), the individual costs $J_i(K)$ are often physically relevant. If the states $x_i$ represent quality variables (such as concentrations, basis weight, etc.), then the variance is a measure of the quality variations. The input variances, again, show how much control effort is required. It is therefore well motivated to control the system in such a way that all the individual costs (3.70) are as small as possible. As all the costs cannot be made arbitrarily small simultaneously, the problem defined in this way is a multiobjective optimization problem, and it can be solved efficiently using techniques developed in multiobjective optimization. The set of optimal points in multiobjective optimization is characterized by the set of Pareto-optimal, or noninferior, points.

Definition 3.1 (Pareto-optimality). Consider the multiobjective control problem associated with the individual costs $J_i(K)$, $i = 1,\ldots,n+m$, in equation (3.70). A stabilizing controller $K$ is a Pareto-optimal, or noninferior, controller if there exists no other stabilizing controller $\bar K$ such that $J_i(\bar K) \le J_i(K)$ for all $i = 1,\ldots,n+m$ and $J_j(\bar K) < J_j(K)$ for some $j \in \{1,\ldots,n+m\}$.

Thus, a Pareto-optimal controller is simply any controller such that no individual cost can be improved without making at least one other cost worse. It is clear that the search for a suitable controller should be made in the set of Pareto-optimal controllers. The multiobjective optimal control problem is connected to the H2 or LQG optimal control problem as follows. It turns out that the set of Pareto-optimal controllers consists of precisely those controllers which minimize linear combinations of the individual costs, i.e.,
$$J_{LQG}(K) = \sum_{i=1}^{n} q_i J_i(K) + \sum_{i=1}^{m} r_i J_{n+i}(K) = \lim_{t_f \to \infty} E \left[ \frac{1}{t_f} \int_0^{t_f} \left[ x(t)^T Q x(t) + u(t)^T R u(t) \right] dt \right], \tag{3.71}$$
where
$$Q = \operatorname{diag}(q_1,\ldots,q_n), \qquad R = \operatorname{diag}(r_1,\ldots,r_m), \tag{3.72}$$
and $q_i$, $r_i$ are positive.
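For a given LQG design, the individual costs (3.70) are the diagonal entries of the stationary state and input covariance matrices, which follow from a Lyapunov equation for the closed loop (3.60) driven by white noise. A sketch evaluating them for the running illustrative example (the weights below are arbitrary choices made only to show the effect of reweighting):

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Illustrative plant and noise data
A   = np.array([[0.0, 1.0], [-2.0, -3.0]])
B1  = np.array([[0.0, 0.0], [1.0, 0.0]])
B2  = np.array([[0.0], [1.0]])
C2  = np.array([[1.0, 0.0]])
D21 = np.array([[0.0, 0.1]])

def individual_costs(q, r):
    """Return (state variances, input variances) for the LQG design with
    Q = diag(q), R = diag(r), cf. (3.70)-(3.72)."""
    Q, R = np.diag(q), np.diag(r)
    S = solve_continuous_are(A, B2, Q, R)
    P = solve_continuous_are(A.T, C2.T, B1 @ B1.T, D21 @ D21.T)
    K = -np.linalg.solve(R, B2.T @ S)
    L = P @ C2.T @ np.linalg.inv(D21 @ D21.T)

    # Closed loop (3.60) in the coordinates (x, xtilde), driven by unit-intensity v
    Acl = np.block([[A + B2 @ K, -B2 @ K], [np.zeros_like(A), A - L @ C2]])
    Bcl = np.vstack([B1, B1 - L @ D21])
    X = solve_continuous_lyapunov(Acl, -Bcl @ Bcl.T)   # stationary covariance
    n = A.shape[0]
    Ku = np.hstack([K, -K])                            # u = K xhat = K (x - xtilde)
    return np.diag(X[:n, :n]), np.diag(Ku @ X @ Ku.T)

print(individual_costs(q=[1.0, 1.0], r=[0.1]))
print(individual_costs(q=[10.0, 1.0], r=[0.1]))  # a heavier weight on x1 typically lowers its variance
```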

The multiobjective control problem is thus reduced to a selection of the weights in the LQ cost in such a way that all the individual costs $J_i(K)$ have satisfactory values. Thus, although the LQ cost may not be physically well motivated as such, it is still relevant via the individual costs $J_i(K)$, which often can be motivated physically. There are systematic methods to find suitable weights (3.72) in such a way that the individual costs have satisfactory values. A particularly simple case is when one cost, $J_j(K)$ say, is the primary cost to be minimized, while the other costs should be restricted. In this case the problem can be formulated as a constrained minimization problem,
$$\text{Minimize } J_j(K) \tag{3.73}$$
subject to
$$J_i(K) \le c_i^2, \qquad i = 1,\ldots,n+m, \quad i \ne j. \tag{3.74}$$

3.5 Literature

In addition to the books referenced in Chapter 1, there are some excellent classic texts on the optimal LQ and LQG problems. Standard texts on the subject are for example Anderson and Moore (1971, 1979), Åström (1970) and Kwakernaak and Sivan (1972). A newer text with a very extensive treatment of the theory is Grimble and Johnson (1988). The minimax property of the Kalman filter, which was mentioned in Remark 3.7, has been observed by many people; see for example Krener (1980). The multiobjective approach to LQG design described in Section 3.4 has been discussed in Toivonen and Mäkilä (1989).

References

Anderson, B. D. O. and J. B. Moore (1971). Linear Optimal Control. Prentice-Hall, Englewood Cliffs, N.J.

Anderson, B. D. O. and J. B. Moore (1979). Optimal Filtering. Prentice-Hall, Englewood Cliffs, N.J.

Åström, K. J. (1970). Introduction to Stochastic Control Theory. Academic Press, New York.

Grimble, M. J. and M. A. Johnson (1988). Optimal Control and Stochastic Estimation, Volumes 1 and 2. Wiley, Chichester.

Krener, A. J. (1980). Kalman-Bucy and minimax filtering. IEEE Transactions on Automatic Control AC-25, 291-292.

Kwakernaak, H. and R. Sivan (1972). Linear Optimal Control Systems. Wiley, New York.

Toivonen, H. T. and P. M. Mäkilä (1989). Computer-aided design procedure for multiobjective LQG control problems. Int. J. Control 49.


Introduction Principally, behaviours focus our attention on the primary level of system dynamics, that is, on empirical observations. In a sense the b Identiæcation of System Behaviours by Approximation of Time Series Data Wolfgang Scherrer a Christiaan Heij bæ version 9-03-97 a Institut fíur íokonometrie, Operations Research und Systemtheorie, Technische

More information

Floor Control (kn) Time (sec) Floor 5. Displacement (mm) Time (sec) Floor 5.

Floor Control (kn) Time (sec) Floor 5. Displacement (mm) Time (sec) Floor 5. DECENTRALIZED ROBUST H CONTROL OF MECHANICAL STRUCTURES. Introduction L. Bakule and J. Böhm Institute of Information Theory and Automation Academy of Sciences of the Czech Republic The results contributed

More information

Chapter 2 Wiener Filtering

Chapter 2 Wiener Filtering Chapter 2 Wiener Filtering Abstract Before moving to the actual adaptive filtering problem, we need to solve the optimum linear filtering problem (particularly, in the mean-square-error sense). We start

More information

A Method to Teach the Parameterization of All Stabilizing Controllers

A Method to Teach the Parameterization of All Stabilizing Controllers Preprints of the 8th FAC World Congress Milano (taly) August 8 - September, A Method to Teach the Parameterization of All Stabilizing Controllers Vladimír Kučera* *Czech Technical University in Prague,

More information

LMI based output-feedback controllers: γ-optimal versus linear quadratic.

LMI based output-feedback controllers: γ-optimal versus linear quadratic. Proceedings of the 17th World Congress he International Federation of Automatic Control Seoul Korea July 6-11 28 LMI based output-feedback controllers: γ-optimal versus linear quadratic. Dmitry V. Balandin

More information

Self-Tuning Control for Synchronous Machine Stabilization

Self-Tuning Control for Synchronous Machine Stabilization http://dx.doi.org/.5755/j.eee.2.4.2773 ELEKTRONIKA IR ELEKTROTECHNIKA, ISSN 392-25, VOL. 2, NO. 4, 25 Self-Tuning Control for Synchronous Machine Stabilization Jozef Ritonja Faculty of Electrical Engineering

More information

CRC for Robust and Adaptive Systems. Peter J Kootsookos. Abstract. This paper develops an adaptive controller for active vibration control.

CRC for Robust and Adaptive Systems. Peter J Kootsookos. Abstract. This paper develops an adaptive controller for active vibration control. Closed-loop Frequency Tracking and Rejection Allan J Connolly CRC for Robust and Adaptive Systems Department of Systems Engineering The Australian National University, Canberra ACT 0200 Australia Barbara

More information

Intermediate Process Control CHE576 Lecture Notes # 2

Intermediate Process Control CHE576 Lecture Notes # 2 Intermediate Process Control CHE576 Lecture Notes # 2 B. Huang Department of Chemical & Materials Engineering University of Alberta, Edmonton, Alberta, Canada February 4, 2008 2 Chapter 2 Introduction

More information

Information Structures Preserved Under Nonlinear Time-Varying Feedback

Information Structures Preserved Under Nonlinear Time-Varying Feedback Information Structures Preserved Under Nonlinear Time-Varying Feedback Michael Rotkowitz Electrical Engineering Royal Institute of Technology (KTH) SE-100 44 Stockholm, Sweden Email: michael.rotkowitz@ee.kth.se

More information

International Journal of PharmTech Research CODEN (USA): IJPRIF, ISSN: Vol.8, No.7, pp , 2015

International Journal of PharmTech Research CODEN (USA): IJPRIF, ISSN: Vol.8, No.7, pp , 2015 International Journal of PharmTech Research CODEN (USA): IJPRIF, ISSN: 0974-4304 Vol.8, No.7, pp 99-, 05 Lotka-Volterra Two-Species Mutualistic Biology Models and Their Ecological Monitoring Sundarapandian

More information

EL 625 Lecture 10. Pole Placement and Observer Design. ẋ = Ax (1)

EL 625 Lecture 10. Pole Placement and Observer Design. ẋ = Ax (1) EL 625 Lecture 0 EL 625 Lecture 0 Pole Placement and Observer Design Pole Placement Consider the system ẋ Ax () The solution to this system is x(t) e At x(0) (2) If the eigenvalues of A all lie in the

More information