The Open-Loop Linear Quadratic Differential Game Revisited

The Open-Loop Linear Quadratic Differential Game Revisited

Jacob Engwerda
Tilburg University, Dept. of Econometrics and O.R.
P.O. Box 90153, 5000 LE Tilburg, The Netherlands
engwerda@uvt.nl

February, 2004

Abstract: In this paper we consider the indefinite open-loop Nash linear quadratic differential game with an infinite planning horizon. We derive both necessary and sufficient conditions for the existence of equilibria. Moreover, we present a sufficient condition under which the game will have a unique equilibrium. (Some) results are elaborated for the zero-sum game and the scalar case.

Keywords: linear-quadratic games, open-loop Nash equilibrium, solvability conditions, Riccati equations.

1 Introduction

In the last decades there has been an increased interest in studying diverse problems in economics using dynamic game theory. In particular in environmental economics and macroeconomic policy coordination, dynamic games are a natural framework to model policy coordination problems (see e.g. the books and references in Dockner et al. [5] and Engwerda [10]). In these problems, the open-loop Nash strategy is often used as one of the benchmarks to evaluate outcomes of the game. Other frequently used benchmarks are the feedback Nash and Stackelberg strategy.

Inspired by the analysis and results obtained by Abou-Kandil et al. [1, 2], in Engwerda [8, 9] several aspects of open-loop Nash equilibria of the standard linear-quadratic differential game are studied, as considered by Starr and Ho in [23] (see, e.g., also Lukes et al. [17], Meyer [19], Eisele [7], Haurie and Leitmann [12], Feucht [11], Weeren [24] and Başar and Olsder [4]). In particular, a detailed study of the infinite-planning-horizon case is given in these papers and it is shown that, in general, the infinite-planning-horizon problem does not have a unique equilibrium. In Kremer [13] this problem was reconsidered using a functional analysis approach. One of the results he shows is that whenever a stable system has more than one equilibrium, there will exist an infinite number of equilibria. In this paper we will reconsider the infinite-planning-horizon linear quadratic differential game.
In section 2 we generalize the results obtained by Engwerda [8, 9] and Kremer [13]. Moreover, we discuss some implications for the zero-sum game in section 3 and elaborate the scalar case extensively in section 4.

In this paper we assume that the performance criterion player i = 1, 2 likes to minimize is:

    J_i = lim_{T→∞} J_i(x_0, u_1, u_2, T),   (1)

where

    J_i(x_0, u_1, u_2, T) = ∫_0^T {x^T(t) Q_i x(t) + u_i^T(t) R_i u_i(t)} dt,

subject to the linear dynamic state equation

    ẋ(t) = A x(t) + B_1 u_1(t) + B_2 u_2(t),  x(0) = x_0.   (2)

Here, the matrices Q_i and R_i are symmetric and R_i are, moreover, assumed to be positive definite, i = 1, 2. Notice that we do not make any definiteness assumptions w.r.t. the matrices Q_i. We assume that the matrix pairs (A, B_i), i = 1, 2, are stabilizable. So, in principle, each player is capable of stabilizing the system on his own.
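As a quick numerical illustration of the cost criterion (1)-(2), the following sketch (with hypothetical scalar data and arbitrary, non-equilibrium, state-feedback gains `f1`, `f2`; none of these numbers come from the paper) evaluates each player's cost along the resulting closed-loop trajectory:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical scalar data for (1)-(2): a stable, all weights equal to one.
a, b1, b2 = -1.0, 1.0, 1.0
q1, q2, r1, r2 = 1.0, 1.0, 1.0, 1.0
f1 = f2 = -0.5                  # illustrative gains, not equilibrium gains
acl = a + b1*f1 + b2*f2         # closed loop: x(t) = exp(acl*t) * x0
x0 = 1.0

def J(q, r, f):
    # J_i of (1): integral of q*x^2 + r*u_i^2 along x(t), with u_i = f*x
    integrand = lambda t: (q + r*f**2) * (np.exp(acl*t)*x0)**2
    val, _ = quad(integrand, 0, np.inf)
    return val

print(J(q1, r1, f1))            # (q1 + r1*f1**2) * x0**2 / (-2*acl) = 0.3125
```

Since `acl = -2`, the integral can be checked by hand: (1 + 0.25)/(2·2) = 0.3125.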

The inclusion of player j's control efforts into player i's cost function is not present here because this term drops out in the analysis, which is due to the open-loop information structure we consider in this paper. So, for convenience, we drop this term from the outset.

The open-loop information structure of the game means that we assume that both players only know the initial state of the system and that the set of admissible control actions are functions of time, where time runs from zero to infinity. Since we only like to consider those outcomes of the game that yield a finite cost to both players, we restrict ourselves to functions belonging to the set

    U_s(x_0) = { u ∈ L_{2,loc} | J_i(x_0, u) exists in ℝ ∪ {−∞, ∞}, lim_{t→∞} x(t) = 0 },

where L_{2,loc} is the set of locally square-integrable functions, i.e.,

    L_{2,loc} = { u[0, ∞) | ∀ T > 0, ∫_0^T u^T(s) u(s) ds < ∞ }.

Notice that U_s(x_0) depends on the initial state of the system. For simplicity of notation we omit, however, this dependency. Another set of functions we consider is the class of locally square-integrable functions which converge exponentially to zero when t → ∞, L^e_{2,loc}. That is, for every c(.) ∈ L^e_{2,loc} there exist strictly positive constants M and α such that |c(t)| ≤ M e^{−αt}. Finally, notice that the restriction to this set of control functions requires some form of communication between the players.

2 General Case

We start our analysis with some results on the regular linear quadratic optimal control problem. The next theorem generalizes a property that can be found in Willems [?] under a controllability assumption.

Theorem 2.1 Assume (A, B) stabilizable, Q symmetric and R > 0. Consider the minimization of the linear quadratic cost function

    ∫_0^∞ x^T(s) Q x(s) + u^T(s) R u(s) ds   (3)

subject to the state dynamics

    ẋ(t) = A x(t) + B u(t),  x(0) = x_0,   (4)

and u ∈ U_s(x_0).
Then, the linear quadratic problem (3,4) has a solution for all x_0 ∈ ℝ^n if and only if the algebraic Riccati equation

    0 = A^T K + K A − K S K + Q,  with S := B R^{−1} B^T,   (5)

has a symmetric solution K_s such that A − S K_s is stable^1. Moreover, if the linear quadratic control problem has a solution, then the optimal control in feedback form is

    u*(t) = −R^{−1} B^T K_s x(t).

^1 K_s is then called a stabilizing solution of the Riccati equation (5).
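Theorem 2.1 can be illustrated numerically: scipy's `solve_continuous_are` computes the stabilizing solution of (5) when it exists, also for an indefinite Q. A minimal scalar sketch with hypothetical data (a = b = r = 1, q = −0.5, so that a² + sq = 0.5 > 0 and the stabilizing solution is k = 1 + √0.5):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical scalar data; the weight Q is indefinite, which Theorem 2.1 allows.
A = np.array([[1.0]])
B = np.array([[1.0]])
Q = np.array([[-0.5]])
R = np.array([[1.0]])

# solve_continuous_are returns the stabilizing solution of
# A^T K + K A - K B R^{-1} B^T K + Q = 0, i.e. equation (5) with S = B R^{-1} B^T.
K = solve_continuous_are(A, B, Q, R)
S = B @ np.linalg.inv(R) @ B.T

print(K)                                   # [[1 + sqrt(0.5)]] ~ [[1.7071]]
print(np.linalg.eigvals(A - S @ K).real)   # negative: A - S K is stable
# Optimal feedback of Theorem 2.1: u*(t) = -R^{-1} B^T K x(t)
```

The scalar Riccati equation here is 2k − k² − 0.5 = 0 with roots 1 ± √0.5; only k = 1 + √0.5 gives a stable closed loop 1 − k < 0, and that is the root the solver returns.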

Proof.
'⇐ part' This part follows from a standard completion of squares argument.
'⇒ part' Since the problem has a minimum for all x_0 it can be shown (similar to e.g. Molinari [20]) that

    J*(t_0, x_0) := min_{u ∈ U_s} ∫_{t_0}^∞ x^T(s) Q x(s) + u^T(s) R u(s) ds

subject to the state dynamics (4) exists for all x_0 and that this is a quadratic form, i.e. J*(t_0, x_0) = x_0^T K(t_0) x_0. Moreover, it can be shown that ∂J*/∂x (t, x) exists and is continuous and also ∂J*/∂t (t, x) exists. So, we can use the theorem of dynamic programming to conclude that J*(t, x) is in fact independent of time t and satisfies the Hamilton-Jacobi-Bellman differential equation (see e.g. Weeren [24], Lemma A.1)

    0 = min_{u ∈ U_s} { x^T Q x + u^T R u + (∂V/∂x)(x)(A x + B u) },   (6)

where

    u*(t, x) = argmin_{u ∈ U_s} { x^T Q x + u^T R u + (∂V/∂x)(x)(A x + B u) }.   (7)

From (7) it follows that u*(t) = −R^{−1} B^T K x(t), where (see (6)) K satisfies the equation 0 = x^T (A^T K + K A − K S K + Q) x. However, since this equation holds for an arbitrary state x, it follows that K satisfies the algebraic Riccati equation (5). Moreover, we know that x(t) → 0 if t → ∞ for an arbitrary initial state. So, we conclude that K is a stabilizing solution of (5).

A proof of the next preliminary theorem is provided in the Appendix.

Theorem 2.2 Let c(.) ∈ L^e_{2,loc}, (A, B) stabilizable, Q symmetric and R > 0. Consider the minimization of the linear quadratic cost function (3) subject to the state dynamics

    ẋ(t) = A x(t) + B u(t) + c(t),  x(0) = x_0,   (8)

and u ∈ U_s(x_0). Then, the linear quadratic problem (3,8) has a solution for all x_0 ∈ ℝ^n if and only if the algebraic Riccati equation (5) has a symmetric stabilizing solution K. Moreover, if the linear quadratic control problem has a solution, then the optimal control in feedback form is

    u*(t) = −R^{−1} B^T (K x(t) + m(t)).

Here m(t) is given by

    m(t) = ∫_t^∞ e^{(A − SK)^T (s − t)} K c(s) ds,   (9)

and x(t) is the solution, implied through this optimal control, of the differential equation

    ẋ(t) = (A − SK) x(t) − S m(t) + c(t),  x(0) = x_0.
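The tracking term (9) is easy to evaluate for concrete data. A scalar sketch, with hypothetical values a = 0, s = 1 (so the stabilizing ARE solution of (5) with q = 1 is K = 1 and a − sK = −1) and c(t) = e^{−t}, for which m(t) = e^{−t}/2 in closed form:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical scalar data: a = 0, s = q = r = 1 gives stabilizing K = 1.
a, s, K = 0.0, 1.0, 1.0

def c(t):
    return np.exp(-t)          # exponentially decaying disturbance, c in L^e_2,loc

def m(t):
    # m(t) = int_t^inf exp((a - s*K)*(sig - t)) * K * c(sig) dsig, cf. (9)
    val, _ = quad(lambda sig: np.exp((a - s*K)*(sig - t)) * K * c(sig), t, np.inf)
    return val

print(m(0.0))                  # ~0.5        (closed form: exp(-0)/2)
print(m(1.0))                  # ~0.5/e      (closed form: exp(-1)/2)
```

For this data m(t) = e^{t}∫_t^∞ e^{−2σ} dσ = e^{−t}/2, so the quadrature result can be checked exactly.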

With the help of this theorem we prove in the Appendix the next result.

Theorem 2.3 Consider the matrix

    M = [  A    −S_1  −S_2
          −Q_1  −A^T   0
          −Q_2   0    −A^T ],   (10)

where S_i := B_i R_i^{−1} B_i^T. If the linear quadratic differential game (1,2) has an open-loop Nash equilibrium for every initial state, then

1. M has at least n stable eigenvalues (counted with algebraic multiplicities). More in particular, there exists a p-dimensional stable M-invariant subspace S, with p ≥ n, such that

    Im [ I ; V_1 ; V_2 ] ⊆ S,

for some V_i ∈ ℝ^{n×n}.

2. the two algebraic Riccati equations

    Q_i + A^T K_i + K_i A − K_i S_i K_i = 0   (11)

have a symmetric stabilizing solution K_i, i = 1, 2.

Conversely, if the two algebraic Riccati equations (11) have a stabilizing solution and v^T(t) =: [x^T(t), ψ_1^T(t), ψ_2^T(t)] is an asymptotically stable solution of

    v̇(t) = M v(t),  x(0) = x_0,

then

    u_i := −R_i^{−1} B_i^T ψ_i(t),  i = 1, 2,

provides an open-loop Nash equilibrium for the linear quadratic differential game (1,2).

Remark 2.4 From this theorem we can draw a number of conclusions concerning the existence of open-loop Nash equilibria. A general conclusion is that this number depends critically on the eigenstructure of matrix M. We will distinguish some cases. To that end, let s denote the number (counting algebraic multiplicities) of stable eigenvalues of M and assume that the Riccati equations (11) have a stabilizing solution.

1. If s < n, still for some initial state there may exist an open-loop Nash equilibrium. Consider, e.g., the case that s = 1. Then, for every x_0 ∈ Span [I, 0, 0] v, where v is an eigenvector corresponding with the stable eigenvalue, the game has a Nash equilibrium.

2. In case s ≥ 2, the situation might arise that for some initial states there exists an infinite number of equilibria. A situation in which there are an infinite number of Nash equilibrium actions occurs if, e.g., v_1 and v_2 are two independent eigenvectors in the stable subspace of M for which [I, 0, 0] v_1 = µ [I, 0, 0] v_2 for some scalar µ. In such a situation, x_0 = λ [I, 0, 0] v_1 + (1 − λ) µ [I, 0, 0] v_2 for an arbitrary scalar λ ∈ ℝ. The resulting equilibrium control actions, however, differ for each λ (apart from some exceptional cases).

3. In case matrix M has a stable graph subspace^2 S of dimension s > n, for every initial state x_0 there exists, generically, an infinite number of open-loop Nash equilibria. For, let {b_1, …, b_s} be a basis for S. Denote d_i := [I, 0, 0] b_i, and assume (without loss of generality) that Span[d_1, …, d_n] = ℝ^n. Then, d_{n+1} = λ_1 d_1 + … + λ_n d_n for some λ_i, i = 1, …, n. Elementary calculations show then that every x_0 can be written both as x_{0,1} = α_1 d_1 + … + α_n d_n and as x_{0,2} = β_1 d_1 + … + β_{n+1} d_{n+1}, where β_{n+1} ≠ 0. So, x_0 = λ x_{0,1} + (1 − λ) x_{0,2} for an arbitrary choice of the scalar λ. Since the vectors {b_1, …, b_{n+1}} are linearly independent, it follows that for every λ the vector b(λ) := λ(α_1 b_1 + … + α_n b_n) + (1 − λ)(β_1 b_1 + … + β_{n+1} b_{n+1}) differs. So, generically, the corresponding equilibrium control actions induced by this vector b(λ) (via v̇(t) = M v(t), v(0) = b(λ)) will differ.

It is tempting to believe that, under the conditions posed in the above Theorem 2.3, matrix M will always have an n-dimensional M-invariant graph subspace S. In case the equilibrium control actions allow for a feedback synthesis^3, this claim is correct. The proof, basically, can be given along the lines of the proof of Theorem 7 in Engwerda [9] and is therefore omitted.

Theorem 2.5 Assume the linear quadratic differential game (1,2) has an open-loop Nash equilibrium for every initial state, and the equilibrium control actions allow for a feedback synthesis. Then,

1.
M has at least n stable eigenvalues (counted with algebraic multiplicities). More in particular, for each Nash equilibrium there exists a uniquely determined n-dimensional stable M-invariant subspace

    Im [ I ; V_1 ; V_2 ],

for some V_i ∈ ℝ^{n×n}.

2. the two algebraic Riccati equations

    Q_i + A^T K_i + K_i A − K_i S_i K_i = 0   (12)

have a symmetric solution K_i such that A − S_i K_i is stable, i = 1, 2.

^2 A subspace S is called a graph subspace if S = Im [ X_1 ; X_2 ] with X_1 ∈ ℝ^{n×n} invertible.
^3 That is, the closed-loop dynamics of the game can be described by ẋ(t) = F x(t), x(0) = x_0, for some constant matrix F.
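The eigenstructure analysis of Theorems 2.3 and 2.5 is simple to carry out numerically. A scalar (n = 1) sketch with hypothetical data, counting the stable eigenvalues of M and checking that the stable subspace is a graph subspace:

```python
import numpy as np

# Hypothetical scalar two-player data (n = 1).
a, s1, s2, q1, q2 = -1.0, 1.0, 1.0, 1.0, 1.0

# Matrix M of (10).
M = np.array([[a,   -s1, -s2],
              [-q1, -a,  0.0],
              [-q2, 0.0, -a]])

w, V = np.linalg.eig(M)
stable = w.real < 0
print(int(stable.sum()))        # 1 stable eigenvalue (= n), here -sqrt(3)

# The stable eigenvector [x; psi1; psi2] has nonzero first entry, so the stable
# subspace is a graph subspace; the ratios psi_i/x determine the equilibrium
# actions u_i = -(b_i/r_i) psi_i of Theorem 2.3.
v = V[:, stable][:, 0].real
assert abs(v[0]) > 1e-12
psi1_over_x, psi2_over_x = v[1]/v[0], v[2]/v[0]
print(psi1_over_x)              # ~0.366 = (sqrt(3) - 1)/2 for this data
```

Here the characteristic polynomial is (−a − λ)(µ − λ)(−µ − λ) with µ = √3 (cf. Lemma 4.1 below), so exactly one eigenvalue, −√3, is stable, matching n = 1.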

From Engwerda [8], Theorem 6, we know that Theorem 2.5, point 1., implies that the set of coupled algebraic Riccati equations

    0 = A^T P_1 + P_1 A + Q_1 − P_1 S_1 P_1 − P_1 S_2 P_2;   (13)
    0 = A^T P_2 + P_2 A + Q_2 − P_2 S_2 P_2 − P_2 S_1 P_1   (14)

has a set of solutions P_i, i = 1, 2, such that A − S_1 P_1 − S_2 P_2 is stable^4. The next theorem is well-known in case the matrices Q_i are assumed to be positive definite and shows that the converse statement of this result always holds. It is, however, not difficult to see that its proof as stated e.g. in Başar and Olsder [4], Theorem 6.22, can be straightforwardly generalized (see Engwerda [10] for a detailed exposition). That is,

Theorem 2.6 Assume that

1. the set of coupled algebraic Riccati equations (13,14) has a set of stabilizing solutions P_i, i = 1, 2; and

2. the two algebraic Riccati equations

    0 = A^T K_i + K_i A − K_i S_i K_i + Q_i   (15)

have a symmetric solution K_i such that A − S_i K_i is stable, i = 1, 2.

Then the linear quadratic differential game (1,2) has an open-loop Nash equilibrium for every initial state. Moreover, a set of equilibrium actions is provided by:

    u_i(t) = −R_i^{−1} B_i^T P_i Φ(t, 0) x_0,  i = 1, 2.   (16)

Here Φ(t, 0) satisfies the transition equation

    Φ̇(t, 0) = (A − S_1 P_1 − S_2 P_2) Φ(t, 0);  Φ(0, 0) = I.

As an immediate consequence we have

Corollary 2.7 If matrix M has a stable invariant graph subspace and the two algebraic Riccati equations (15) have a stabilizing solution, then the game will have an open-loop Nash equilibrium that permits a feedback synthesis for every initial state.

Combining the results of both previous theorems we have the following generalization of the result as stated in Engwerda [9], Theorem 7.
The correctness of the statement below concerning the involved cost for the players can be found, e.g., in a remark in Engwerda [8].

Corollary 2.8 The infinite-planning horizon two-player linear quadratic differential game (1,2) has for every initial state an open-loop Nash set of equilibrium actions (u_1*, u_2*) which permit a feedback synthesis if and only if

^4 (P_1, P_2) are called a set of stabilizing solutions of (13,14).

1. there exist P_1 and P_2 that are solutions of the set of coupled algebraic Riccati equations (13,14) satisfying the additional constraint that the eigenvalues of A_cl := A − S_1 P_1 − S_2 P_2 are all situated in the left half complex plane, and

2. the two algebraic Riccati equations (15) have a symmetric solution K_i such that A − S_i K_i is stable, i = 1, 2.

If (P_1, P_2) is a set of stabilizing solutions of the coupled algebraic Riccati equations (13,14), the actions

    u_i(t) = −R_i^{−1} B_i^T P_i Φ(t, 0) x_0,  i = 1, 2,

where Φ(t, 0) satisfies the transition equation

    Φ̇(t, 0) = A_cl Φ(t, 0);  Φ(0, 0) = I,

yield an open-loop Nash equilibrium. The costs, by using these actions, for the players are

    x_0^T M_i x_0,  i = 1, 2,   (17)

where M_i is the unique solution of the Lyapunov equation

    A_cl^T M_i + M_i A_cl + Q_i + P_i^T S_i P_i = 0.   (18)

We stress here once again the point that if for every initial state there exists more than one open-loop Nash equilibrium that permits a feedback synthesis, then the stable subspace has a dimension larger than n. Consequently (see Remark 2.4, item 3) for every initial state there will exist, generically, an infinite number of open-loop Nash equilibria, even though there exist only a finite number of open-loop Nash equilibria permitting a feedback synthesis. This point was first noted by Kremer in [13] in case matrix A is stable.

The above reflections raise the question what happens in case the dimension of the stable subspace is n. The next Theorem 2.9 provides a sufficient condition to conclude that the game has a unique open-loop Nash equilibrium. Moreover, it shows that in that case the unique equilibrium actions can be synthesized as a state feedback. The proof of this theorem is provided in the Appendix.

Theorem 2.9 Assume that

1. matrix M has n stable eigenvalues and 2n eigenvalues that are not stable (counting algebraic multiplicities);

2. the stable subspace is a graph subspace;

3.
the two algebraic Riccati equations (15) have a stabilizing solution.

Then, for every initial state the game has a unique open-loop Nash equilibrium. Moreover, the unique equilibrium actions are given by (16).
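The construction behind Corollary 2.8 and Theorem 2.9 can be sketched numerically: take a basis [X; Y_1; Y_2] of the n-dimensional stable subspace of M, set P_i = Y_i X^{−1}, and obtain the costs (17) from the Lyapunov equations (18). A scalar example with hypothetical data (a = −1, s_i = q_i = 1, so the unique stabilizing solutions are p_1 = p_2 = (√3 − 1)/2):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 1  # hypothetical scalar data
A = np.array([[-1.0]])
S1 = S2 = Q1 = Q2 = np.array([[1.0]])

M = np.block([[A,   -S1,  -S2],
              [-Q1, -A.T, np.zeros((n, n))],
              [-Q2, np.zeros((n, n)), -A.T]])

w, V = np.linalg.eig(M)
Vs = V[:, w.real < 0].real            # n-dimensional stable subspace [X; Y1; Y2]
X, Y1, Y2 = Vs[:n], Vs[n:2*n], Vs[2*n:]
P1, P2 = Y1 @ np.linalg.inv(X), Y2 @ np.linalg.inv(X)

Acl = A - S1 @ P1 - S2 @ P2           # stable closed-loop matrix
# Lyapunov equations (18): Acl^T M_i + M_i Acl + Q_i + P_i^T S_i P_i = 0
M1 = solve_continuous_lyapunov(Acl.T, -(Q1 + P1.T @ S1 @ P1))
M2 = solve_continuous_lyapunov(Acl.T, -(Q2 + P2.T @ S2 @ P2))
x0 = np.array([1.0])
print(x0 @ M1 @ x0)                   # cost J_1 = x0^T M_1 x0 of (17)
```

For this data A_cl = −√3 and M_1 = (1 + p_1²)/(2√3), which the Lyapunov solver reproduces.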

3 Zero-Sum Case

In this section we consider the special case of the zero-sum game. By specializing the above results to this case, analogues are obtained for the open-loop zero-sum linear quadratic game. As an example we will elaborate the analogue of Corollary 2.8. The next proposition follows directly from this corollary.

Proposition 3.1 (Open-loop zero-sum differential game) Consider the differential game described by

    ẋ(t) = A x(t) + B_1 u_1(t) + B_2 u_2(t),  x(0) = x_0,   (19)

with, for player one, the quadratic cost functional

    J_1(u_1, u_2) = ∫_0^∞ x^T(t) Q x(t) + u_1^T(t) R_1 u_1(t) − u_2^T(t) R_2 u_2(t) dt,   (20)

and, for player two, the opposite objective function J_2(u_1, u_2) = −J_1(u_1, u_2); here the matrices Q and R_i, i = 1, 2, are symmetric. Moreover, assume that R_i, i = 1, 2, are positive definite. Then, this infinite-planning horizon zero-sum linear quadratic differential game has for every initial state an open-loop Nash equilibrium which permits a feedback synthesis if and only if the next two conditions hold.

1. The coupled algebraic Riccati equations

    A^T P_1 + P_1 A + Q − P_1 S_1 P_1 − P_1 S_2 P_2 = 0,   (21)
    A^T P_2 + P_2 A − Q − P_2 S_2 P_2 − P_2 S_1 P_1 = 0,   (22)

have a set of solutions P_i, i = 1, 2, such that A − S_1 P_1 − S_2 P_2 is stable; and

2. The two algebraic Riccati equations

    A^T K_1 + K_1 A − K_1 S_1 K_1 + Q = 0,   (23)
    A^T K_2 + K_2 A − K_2 S_2 K_2 − Q = 0,   (24)

have a symmetric solution K_i such that A − S_i K_i is stable, i = 1, 2.

Moreover, the corresponding equilibrium actions are

    u_1*(t) = −R_1^{−1} B_1^T P_1 x(t)  and  u_2*(t) = −R_2^{−1} B_2^T P_2 x(t).

Here x(t) satisfies the differential equation

    ẋ(t) = (A − S_1 P_1 − S_2 P_2) x(t);  x(0) = x_0.
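The two conditions of Proposition 3.1 are straightforward to verify for concrete data. A scalar sketch with hypothetical values a = −1, s_1 = 1.5, s_2 = 0.5, q = 1 (for which P_1 = −P_2 = √2 − 1 solves (21),(22), anticipating the structure derived in Proposition 3.2 below):

```python
import numpy as np

# Hypothetical scalar zero-sum data with A stable.
a, s1, s2, q = -1.0, 1.5, 0.5, 1.0

# P1 = -P2 = sqrt(2) - 1 solves the coupled equations (21),(22).
p = np.sqrt(2.0) - 1.0
P1, P2 = p, -p
assert abs(2*a*P1 + q - s1*P1**2 - s2*P1*P2) < 1e-12         # (21)
assert abs(2*a*P2 - q - s2*P2**2 - s1*P2*P1) < 1e-12         # (22)
assert a - s1*P1 - s2*P2 < 0                                 # stable closed loop

# Stabilizing solutions of (23),(24).
K1 = (-2 + np.sqrt(10.0)) / 3.0   # root of 1.5 k^2 + 2 k - 1 = 0 with a - s1 K1 < 0
K2 = -2.0 + np.sqrt(2.0)          # root of 0.5 k^2 + 2 k + 1 = 0 with a - s2 K2 < 0
assert abs(2*a*K1 - s1*K1**2 + q) < 1e-12 and a - s1*K1 < 0  # (23)
assert abs(2*a*K2 - s2*K2**2 - q) < 1e-12 and a - s2*K2 < 0  # (24)

print(a - s1*P1 - s2*P2)          # -sqrt(2): the equilibrium closed-loop pole
```

Both conditions hold, so this game has an open-loop Nash equilibrium permitting a feedback synthesis for every initial state.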

Next, we consider a special case of this game: the case that matrix A is stable. Then, the conditions under which the game has an equilibrium can be simplified somewhat further.

Proposition 3.2 Consider the open-loop zero-sum differential game as described in Proposition 3.1. Assume, additionally, that matrix A is stable. Then, this game has for every initial state an open-loop Nash equilibrium which permits a feedback synthesis if and only if the next two conditions hold.

1. The algebraic Riccati equation

    A^T P + P A + Q − P (S_1 − S_2) P = 0   (25)

has a (symmetric) solution P such that A − (S_1 − S_2) P is stable; and

2. The two algebraic Riccati equations (23,24) have a symmetric solution K_i such that A − S_i K_i is stable, i = 1, 2.

Moreover, the corresponding unique equilibrium actions are

    u_1(t) = −R_1^{−1} B_1^T P x(t)  and  u_2(t) = R_2^{−1} B_2^T P x(t).

Here x(t) satisfies the differential equation

    ẋ(t) = (A − (S_1 − S_2) P) x(t);  x(0) = x_0.

The cost for player 1 is J_1 := x_0^T P x_0 and for player 2, −J_1.

Proof. According to Proposition 3.1 this game has an open-loop Nash equilibrium which permits a feedback synthesis for every initial state if and only if the next two coupled algebraic Riccati equations have a set of stabilizing solutions P_i, i = 1, 2,

    A^T P_1 + P_1 A + Q − P_1 S_1 P_1 − P_1 S_2 P_2 = 0,   (26)
    A^T P_2 + P_2 A − Q − P_2 S_2 P_2 − P_2 S_1 P_1 = 0,   (27)

and the two algebraic Riccati equations (23,24) have a stabilizing solution K_i. Adding (26) to (27) yields the next equation in (P_1 + P_2):

    A^T (P_1 + P_2) + (P_1 + P_2)(A − S_1 P_1 − S_2 P_2) = 0.

This is a Sylvester equation of the form A^T X + X B = 0, with X := P_1 + P_2 and B := A − S_1 P_1 − S_2 P_2. Now, independent of the specification of (P_1, P_2), B is always stable. So, whatever P_i, i = 1, 2, are, this equation has a unique solution X(P_1, P_2). Obviously, P_1 + P_2 = 0 satisfies it. So, necessarily, we conclude that P_2 = −P_1.
Substitution of P_1 = P, P_2 = −P into (26) shows then that (26,27) have a stabilizing solution P_i if and only if (25) has a stabilizing solution P. The corresponding equilibrium strategies follow then directly from Proposition 3.1. The symmetry and uniqueness properties of P follow immediately from the fact that P is a stabilizing solution of an ordinary Riccati equation (see, e.g., [26]).

Finally, the cost for player 1 can be rewritten, using (25), as

    J_1 = ∫_0^∞ x^T(t){Q + P S_1 P − P S_2 P} x(t) dt
        = ∫_0^∞ x^T(t){2 P S_1 P − 2 P S_2 P − A^T P − P A} x(t) dt
        = −∫_0^∞ x^T(t){(A − (S_1 − S_2) P)^T P + P (A − (S_1 − S_2) P)} x(t) dt
        = −∫_0^∞ (d[x^T(t) P x(t)]/dt) dt
        = x_0^T P x_0.

Notice that if we assume additionally in the above zero-sum game that Q is positive definite, (23) always has a stabilizing solution (see, e.g., [26]). Moreover, by considering K := −K_2, the fact that (24) should have a stabilizing solution can be rephrased as that the next Riccati equation should have a solution K such that A + S_2 K is stable:

    A^T K + K A + K S_2 K + Q = 0.

In this form one usually encounters one of the solvability conditions for existence of a controller in H_∞ robust control design (see, e.g., [3]).

A natural question that arises is whether the equilibria we presented in the above propositions will be unique. The next lemma provides us with the answer.

Lemma 3.3 Consider the zero-sum differential game. Then

    σ(M) = σ(−A) ∪ σ( [ A  −S_1 + S_2 ; −Q  −A^T ] ).

Proof: The result follows straightforwardly from the fact that

    T M T^{−1} = [ A  −S_1 + S_2  −S_2 ; −Q  −A^T  0 ; 0  0  −A^T ],  where  T = [ I  0  0 ; 0  I  0 ; 0  I  I ],

is block upper triangular.

An immediate consequence of this result is

Corollary 3.4 1. If the conditions of Proposition 3.2 are satisfied, the game has a unique equilibrium. This follows from the fact that matrix M has an n-dimensional stable graph subspace. For, since all eigenvalues of −A are unstable, it follows from the above lemma that necessarily the Hamiltonian matrix

    [ A  −S_1 + S_2 ; −Q  −A^T ]

must have n stable eigenvalues. However, since the spectrum of a Hamiltonian matrix is symmetric w.r.t. the imaginary axis, we infer that this matrix must have n unstable eigenvalues as well. So, we conclude that matrix M has n stable eigenvalues and 2n unstable eigenvalues. Theorem 2.9 yields then that, under the stated conditions, there is a unique open-loop Nash equilibrium.

2. If A is not stable, it may happen that the zero-sum game as described in Proposition 3.1 has an infinite number of open-loop Nash equilibria. See, e.g., Theorem 4.4.

4 Scalar Case

In this section we study the scalar case. To avoid the consideration of various peculiar cases, we will assume throughout that b_i ≠ 0, i = 1, 2. To stress the fact that we are dealing with the scalar case, we will put the system parameters in lower case; so, e.g., a instead of A. First, we calculate the exponential of matrix M.

Lemma 4.1 Consider matrix M in (10). Let µ := √(a² + s_1 q_1 + s_2 q_2)^5. Then, the characteristic polynomial of M is (−a − λ)(µ − λ)(−µ − λ). Moreover, an eigenvector of M corresponding with λ = −a is [0, s_2, −s_1]^T. In case not both q_1 = q_2 = 0 and µ ≠ 0, the exponential of matrix M, e^{Ms}, is given by

    V diag(e^{−µs}, e^{−as}, e^{µs}) V^{−1}.   (28)

Here

    V = [ a − µ   0    a + µ
          −q_1   s_2   −q_1
          −q_2  −s_1   −q_2 ]

and its inverse

    V^{−1} = (1 / det V) [ −(s_1 q_1 + s_2 q_2)  −s_1(a + µ)  −s_2(a + µ)
                            0                     2 q_2 µ      −2 q_1 µ
                            s_1 q_1 + s_2 q_2     s_1(a − µ)    s_2(a − µ) ],

with the determinant of V, det V = 2µ(s_1 q_1 + s_2 q_2).

Proof. Straightforward multiplication shows that we can factorize M as M = V diag(−µ, −a, µ) V^{−1}. So, the exponential of matrix M, e^{Ms}, is as stated above.

Theorem 4.2 Assume that s_1 q_1 + s_2 q_2 ≠ 0. Then, the infinite-planning horizon game has for every initial state a Nash equilibrium if and only if the next conditions hold.

^5 Notice that µ can be a pure imaginary complex number.

1. a² + s_1 q_1 + s_2 q_2 > 0,

2. a² + s_i q_i > 0, i = 1, 2.

Moreover, if there exists an equilibrium for every initial state, then there is exactly one equilibrium that permits a feedback synthesis. The corresponding set of equilibrium actions is:

    u_i(t) = −(b_i / r_i) p_i x(t),  i = 1, 2,

where p_1 = (a + µ) q_1 / (s_1 q_1 + s_2 q_2), p_2 = (a + µ) q_2 / (s_1 q_1 + s_2 q_2), µ = √(a² + s_1 q_1 + s_2 q_2) and ẋ(t) = (a − s_1 p_1 − s_2 p_2) x(t), with x(0) = x_0. The costs corresponding with these actions for player i are:

    J_i = x_0² (q_i + s_i p_i²) / (2µ),  i = 1, 2.

Proof. First assume that 1. and 2. hold. Let p_i be as mentioned above. From point 2. it follows that the algebraic Riccati equations

    2 a k_i − s_i k_i² + q_i = 0,  i = 1, 2,   (29)

have a stabilizing solution k_i = (a + √(a² + s_i q_i)) / s_i, i = 1, 2. Moreover it is straightforwardly verified that, if a² + s_1 q_1 + s_2 q_2 > 0, p_1 and p_2 satisfy the set of coupled algebraic Riccati equations

    2 a p_1 + q_1 − s_1 p_1² − s_2 p_1 p_2 = 0  and  2 a p_2 + q_2 − s_2 p_2² − s_1 p_1 p_2 = 0,

whereas a − s_1 p_1 − s_2 p_2 = −√(a² + s_1 q_1 + s_2 q_2) < 0. So, according to Theorem 2.6, u_i(.) provides a Nash equilibrium.

Next consider the converse statement. From Theorem 2.3, point 2., we know that the algebraic Riccati equations (29) have a stabilizing solution k_i, i = 1, 2. From this it follows immediately that condition 2. must hold. On the other hand, we have from Lemma 4.1 that if a² + s_1 q_1 + s_2 q_2 ≤ 0, matrix M will not have a stable M-invariant subspace S such that

    Im [ 1 ; v_1 ; v_2 ] ⊆ S.

So, according to Theorem 2.3, part 1., in this case the game does not have an open-loop Nash equilibrium for all initial states.

Finally, since µ differs from a, it follows straightforwardly from Lemma 4.1 that M has only one stable invariant graph subspace. From this (see Corollary 2.8) it is easily verified that (p_1, p_2) yields the only Nash equilibrium of the game that permits a feedback synthesis.

Corollary 4.3 Assume that q_i > 0, i = 1, 2.
Then, the infinite-planning horizon game has for every initial state an open-loop Nash equilibrium. Furthermore, there is exactly one of them that permits a feedback synthesis.
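The closed-form expressions of Theorem 4.2 can be checked directly. A sketch with hypothetical scalar data satisfying conditions 1. and 2.:

```python
import numpy as np

# Hypothetical scalar two-player data for Theorem 4.2.
a, s1, s2, q1, q2 = 1.0, 2.0, 1.0, 0.5, 1.0
sigma = s1*q1 + s2*q2                          # = 2.0, nonzero
assert a**2 + sigma > 0 and a**2 + s1*q1 > 0 and a**2 + s2*q2 > 0

mu = np.sqrt(a**2 + sigma)                     # = sqrt(3)
p1 = (a + mu)*q1/sigma
p2 = (a + mu)*q2/sigma

# (p1, p2) solves the coupled Riccati equations and a - s1 p1 - s2 p2 = -mu.
assert abs(2*a*p1 + q1 - s1*p1**2 - s2*p1*p2) < 1e-12
assert abs(2*a*p2 + q2 - s2*p2**2 - s1*p1*p2) < 1e-12
assert abs((a - s1*p1 - s2*p2) + mu) < 1e-12

# Costs of the feedback-synthesis equilibrium: J_i = x0^2 (q_i + s_i p_i^2)/(2 mu).
x0 = 1.0
J1 = x0**2*(q1 + s1*p1**2)/(2*mu)
J2 = x0**2*(q2 + s2*p2**2)/(2*mu)
print(round(a - s1*p1 - s2*p2, 6))             # -1.732051 (= -sqrt(3))
```

Note that here a > 0, so by Theorem 4.4 below this game has infinitely many open-loop Nash equilibria, of which (p_1, p_2) induces the unique one permitting a feedback synthesis.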

Using Theorem 2.9 we can precisely indicate the number of equilibria.

Theorem 4.4 Assume that

1. s_1 q_1 + s_2 q_2 ≠ 0;
2. a² + s_1 q_1 + s_2 q_2 > 0;
3. a² + s_i q_i > 0, i = 1, 2.

Then, the infinite-planning horizon game has for every initial state
- a unique equilibrium if a ≤ 0;
- an infinite number of equilibria if a > 0.

Proof. In case a ≤ 0 matrix M has a one-dimensional stable graph subspace (see again Lemma 4.1). The claim follows then directly from Theorem 2.9. If a > 0, we conclude from Theorem 2.3, part 2., that every set of control actions

    u_i(t) = −(b_i / r_i) ψ_i(t),  i = 1, 2,

yields an open-loop Nash equilibrium. Here ψ_i(t), i = 1, 2, are obtained as the solutions of the differential equation ż(t) = M z(t), where z := [x, ψ_1, ψ_2]^T and

    z(0) = (x_0 / (a − µ)) [ a − µ ; −q_1 ; −q_2 ] + λ [ 0 ; −s_2 ; s_1 ],  λ ∈ ℝ.

That is,

    ψ_1(t) = −(q_1 x_0 / (a − µ)) e^{−µt} − λ s_2 e^{−at},  ψ_2(t) = −(q_2 x_0 / (a − µ)) e^{−µt} + λ s_1 e^{−at}.

Remark 4.5 It is not difficult to show, using in the above proof the characterization of the equilibria, that in case a > 0 all equilibrium actions yield the same closed-loop system

    ẋ = −µ x(t),  x(0) = x_0.

However, the cost associated with every equilibrium differs for each player.

Example 4.6 Consider a = 3; q_i = 2, i = 1, 2, and s_i = 4. According to Theorem 4.2 and Theorem 4.4 this game will have an infinite number of equilibrium actions. Furthermore, p_1 = p_2 = 1 induce the unique equilibrium actions of the infinite-planning horizon game that permit a feedback synthesis. Notice that (p_1, p_2) is not a strongly stabilizing^6 solution of the set of coupled algebraic Riccati equations (13,14), since the eigenvalues of

    [ p_1 s_1 − a   p_1 s_2
      p_2 s_1      p_2 s_2 − a ]

^6 A solution (P_1, P_2) of the set of algebraic Riccati equations (13,14) is called strongly stabilizing if it is a stabilizing solution, and the eigenvalues of [ −A^T + P_1 S_1  P_1 S_2 ; P_2 S_1  −A^T + P_2 S_2 ] are all situated in the closed right half of the complex plane.

are −3 and 5. So, this example demonstrates that if the infinite-planning horizon game has a unique solution permitting a feedback synthesis for every initial state, this does not imply that the set of coupled algebraic Riccati equations (13,14) corresponding with this game has a strongly stabilizing solution.

In Theorem 4.2 we excluded the case s_1 q_1 + s_2 q_2 = 0. This, since the analysis of this case requires a different approach. The results for this case are summarized below.

Theorem 4.7 Assume s_1 q_1 + s_2 q_2 = 0. Then, the infinite-planning horizon game has for every initial state

1. a unique set of Nash equilibrium actions

    u_i := (b_i q_i / (2 a r_i)) x(t),  i = 1, 2,  if a < 0.

The corresponding closed-loop system and cost are

    ẋ(t) = a x(t)  and  J_i = −(1/(4a)) (2 q_i + q_i² s_i / (2a²)) x_0²,  i = 1, 2,

2. infinitely many Nash equilibrium actions, parameterized by

    (u_1, u_2) := ( −(1/b_1)(a + y) x(t), −(1/b_2)(a − y) x(t) ),

where y ∈ ℝ is arbitrary, if a > 0 and q_1 = q_2 = 0. The corresponding closed-loop system and cost are

    ẋ(t) = −a x(t),  J_1 = (1/(2 s_1 a)) (q_1 s_1 + (a + y)²) x_0²,  and  J_2 = (1/(2 s_2 a)) (q_2 s_2 + (a − y)²) x_0²,

3. no equilibrium in all other cases.

Proof. Case 1. can be shown along the lines of the proofs of Theorems 4.2 and 4.4. To prove Case 2. we first notice that (15) reduces, under the assumption that q_1 = q_2 = 0, to

    k_i (2a − s_i k_i) = 0.

Since a > 0 it is clear that k_i = 2a / s_i, i = 1, 2, yields a stabilizing solution to this equation. Next consider (13,14). These equations reduce in this case to

    p_1 (2a − s_1 p_1 − s_2 p_2) = 0,
    p_2 (2a − s_1 p_1 − s_2 p_2) = 0.

Obviously, with p_1 = p_2 = 0, the closed-loop system a − s_1 p_1 − s_2 p_2 = a > 0. On the other hand, we see that for every choice of (p_1, p_2) such that s_1 p_1 + s_2 p_2 = 2a, the closed-loop system becomes stable. Theorem 2.6 yields then the stated conclusion.

Next, consider the case a > 0 and either q_1 or q_2 differs from zero. From Lemma 4.1 it follows then immediately that the first entry of all eigenvectors corresponding with the eigenvalue −a is zero.

According to Theorem 2.5, item 1., the game has therefore no equilibrium for every initial state in this case. Finally, consider the only case that has not been handled yet: the case a = 0. Equation (15) reduces now to:

    s_i k_i² = q_i,  i = 1, 2.   (30)

Since s_1 q_1 + s_2 q_2 = 0, it follows that (30) only has a solution if q_1 = q_2 = 0. But then we have k_i = 0, i = 1, 2, yielding a closed-loop system which is not stable. So, according to Theorem 2.5, item 2., the game has no solution for all initial states in this case either. Which completes the proof.

For completeness, we present also the generalization of Theorems 4.2 and 4.4 for the N-player situation. The proof can be given along the lines of those theorems. To that purpose notice that the characteristic polynomial of matrix M is (−a − λ)^{N−1} (λ² − (a² + s_1 q_1 + … + s_N q_N)). Moreover, it is easily verified that every eigenvector corresponding with the eigenvalue −a has a zero as its first entry. From this the results summarized below follow analogously.

Theorem 4.8 Assume that σ := s_1 q_1 + … + s_N q_N ≠ 0. Then, the infinite-planning horizon game has for every initial state a Nash equilibrium if and only if the next conditions hold.

1. a² + σ > 0,
2. a² + s_i q_i > 0, i = 1, …, N.

In case there exists an equilibrium for every initial state, then there is exactly one equilibrium which permits a feedback synthesis. The corresponding set of equilibrium actions is:

    u_i(t) = −(b_i / r_i) p_i x(t),  i = 1, …, N,

where p_i = (a + µ) q_i / σ, µ = √(a² + σ) and ẋ(t) = (a − s_1 p_1 − … − s_N p_N) x(t), with x(0) = x_0. The corresponding cost for player i for this equilibrium is:

    J_i = x_0² (q_i + s_i p_i²) / (2µ),  i = 1, …, N.

Moreover, in case there is an equilibrium for every initial state, this equilibrium is unique if a ≤ 0. If a > 0 there exist infinitely many equilibria.

5 Concluding Remarks

In this paper we considered the indefinite regular infinite-horizon linear quadratic differential game.
The results obtained in this paper can be straightforwardly generalized to the N-player case. In all results concerning matrix M, this matrix should then be replaced by

    M = [ A  −S ; −Q  −A_2 ],

where S := [S_1, …, S_N], Q := [Q_1^T, …, Q_N^T]^T and A_2 := diag{A^T, …, A^T}.

A different mathematical approach to tackle the present class of open-loop dynamic games that has been successfully exploited in the literature is the Hilbert space method. Lukes and Russell [17], Simaan

and Cruz [22], Eisele [7] and, more recently, Kremer [13] took, e.g., this approach. A disadvantage of this approach is that there are some technical conditions under which this theory can be used. The analysis in this paper shows, amongst others, that results obtained using that approach continue to hold in a more general context.

Our analysis shows that the stable eigenspace of the extended Hamiltonian matrix M plays a crucial role in determining the open-loop Nash equilibria. From an application point of view an important conclusion is that the game will have a unique Nash equilibrium for every initial state in case the stable eigenspace of matrix M is an n-dimensional graph subspace and the individual regulator problems have a solution. Under these conditions the equilibrium actions allow for a feedback synthesis. A numerical algorithm to calculate these actions has, e.g., been outlined in Engwerda [9]. However, notice that various suggestions have been made in the literature to improve the numerical stability of this algorithm (see e.g. Laub [15] and [16], Paige et al. [21], Van Dooren [6] and Mehrmann [18]). Particularly if one considers the implementation of large scale models one should consult this literature.

Appendix

Proof of Theorem 2.2. Before we prove this theorem we first recall the next lemma from, e.g., Kun [14].

Lemma 5.1 Let H be an arbitrary Hilbert space with some inner product ⟨., .⟩; u, v ∈ H; T : H → H a self-adjoint operator and α ∈ ℝ. Consider the functional

    ⟨u, T(u)⟩ + 2⟨u, v⟩ + α.   (31)

Then,
(i) a necessary condition for (31) to have a minimum is that the operator T is positive semi-definite;
(ii) (31) has a unique minimum if and only if the operator T is positive definite. In this case the minimum is achieved at u* = −T^{−1} v.

Using this result the proof of the theorem reads as follows.

'⇐ part' Let K be the stabilizing solution of the algebraic Riccati equation (5) and m(t) as defined in (9).
Next consider the (value) function

$$V(t) := x^T(t)Kx(t) + 2m^T(t)x(t) + n(t), \quad \text{where} \quad n(t) = -\int_t^\infty \{ m^T(s)Sm(s) - 2m^T(s)c(s)\}\, ds.$$

Next, we consider $\dot V$. Note that $\dot n(t) = m^T(t)Sm(t) - 2m^T(t)c(t)$. Substitution of $\dot n$, $\dot x$ and $\dot m$ (see (8) and (9)) into $\dot V$, using the fact that $A^TK + KA = -Q + KSK$ (see (5)), yields

$$\dot V(t) = \dot x^T(t)Kx(t) + x^T(t)K\dot x(t) + 2\dot m^T(t)x(t) + 2m^T(t)\dot x(t) + \dot n(t)$$
$$= -x^T(t)Qx(t) - u^T(t)Ru(t) + [u(t) + R^{-1}B^T(Kx(t)+m(t))]^T R\, [u(t) + R^{-1}B^T(Kx(t)+m(t))].$$

Since m(t) converges exponentially to zero, $\lim_{t\to\infty} n(t) = 0$ too. Since $\lim_{t\to\infty} x(t) = 0$ as well, we have $\int_0^\infty \dot V(s)\, ds = -V(0)$. Consequently, substitution of $\dot V$ into this expression and rearranging terms gives

$$-V(0) + \int_0^\infty \{x^T(t)Qx(t) + u^T(t)Ru(t)\}\, dt = \int_0^\infty [u + R^{-1}B^T(Kx(t)+m(t))]^T R\, [u + R^{-1}B^T(Kx(t)+m(t))]\, dt.$$

Since V(0) does not depend on $u(\cdot)$ and R is positive definite, the advertised result follows.

"$\Rightarrow$" part. Define the linear mappings (operators)

$$P : x_0 \mapsto e^{At}x_0, \quad L : u \mapsto \int_{t_0}^t e^{A(t-s)}Bu(s)\, ds \quad \text{and} \quad D : c \mapsto \int_{t_0}^t e^{A(t-s)}c(s)\, ds.$$

Then the state trajectory x becomes

$$x = P(x_0) + L(u) + D(c). \qquad (32)$$

Consider the inner product $\langle f,g\rangle := \int f^T(s)g(s)\, ds$ on the set $L_{2,loc}^e$ of all square integrable (state) functions into $\mathbb{R}^n$, and the corresponding inner product $\langle u,v\rangle := \int u^T(s)v(s)\, ds$ on the set $L_2^m$ of all square integrable control functions. Then the cost function J can be rewritten as

$$J(t_0,x_0,u,c) = \langle P(x_0)+L(u)+D(c),\, Q(P(x_0)+L(u)+D(c))\rangle_{L_{2,loc}^e} + \langle u, Ru\rangle_{L_2^m}$$
$$= \langle u, (L^*QL + R)(u)\rangle_{L_2^m} + 2\langle u, L^*Q(P(x_0)+D(c))\rangle_{L_2^m} + \langle P(x_0)+D(c),\, Q(P(x_0)+D(c))\rangle_{L_{2,loc}^e}.$$

Here $L^*$ denotes the adjoint operator of L. From part (ii) of Lemma 5.1 it now follows immediately that $J(t_0,x_0,u,c)$ has a unique minimum for an arbitrary choice of $x_0$ if and only if the operator $T := L^*QL + R$ is positive definite. But this implies that $J(t_0,x_0,u,0)$ has a unique minimum for an arbitrary choice of $x_0$ too. According to Theorem 2.1 this implies that the Riccati equation (5) has a stabilizing solution. $\Box$

Proof of Theorem 2.3. In the proof of this theorem we need some preliminary results, which we state in separate lemmas. Furthermore, we use the notation $E^s$ and $E^u$ to denote the stable subspace and unstable subspace associated with a given square matrix, respectively.
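In the scalar case, the connection invoked above between solvability of the regulator problem and the stabilizing Riccati solution can be made fully explicit: the equation $q + 2ak - sk^2 = 0$ with $s = b^2/r$ has a closed-form stabilizing root. A small sketch with hypothetical parameter values:

```python
import math

# Scalar algebraic Riccati equation q + 2*a*k - s*k**2 = 0, with s = b**2 / r.
# The stabilizing root is the one that makes the closed loop a - s*k negative.
def stabilizing_k(a, b, q, r):
    s = b * b / r
    # Requires a**2 + q*s > 0 (e.g. q > 0) so that a real root exists.
    return (a + math.sqrt(a * a + q * s)) / s

a, b, q, r = 1.0, 1.0, 2.0, 1.0   # illustrative values, open-loop unstable system
k = stabilizing_k(a, b, q, r)
s = b * b / r
residual = q + 2 * a * k - s * k * k   # ARE residual, should be ~0
closed_loop = a - s * k                # must be < 0 for the stabilizing root
print(k, residual, closed_loop)
```

Here $k = 1 + \sqrt 3$ and the closed loop is $a - sk = -\sqrt 3 < 0$, so the regulator problem is solvable, exactly as Theorem 2.2 requires.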

Lemma 5.2 Assume that $A \in \mathbb{R}^{n\times n}$ and $\sigma(A) \subset \mathbb{C}_0^+$. Then $\lim_{t\to\infty} Me^{At}N = 0$ if and only if $MA^iN = 0$, $i = 0, \ldots, n-1$.

Using this lemma we can then prove the next result.

Lemma 5.3 Let $x_0 \in \mathbb{R}^p$, $y_0 \in \mathbb{R}^{n-p}$ and $Y \in \mathbb{R}^{(n-p)\times p}$. Consider the differential equation

$$\frac{d}{dt}\begin{pmatrix} x(t) \\ y(t)\end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22}\end{pmatrix}\begin{pmatrix} x(t) \\ y(t)\end{pmatrix}, \quad \begin{pmatrix} x(0) \\ y(0)\end{pmatrix} = \begin{pmatrix} x_0 \\ y_0\end{pmatrix}.$$

If $\lim_{t\to\infty} x(t) = 0$ for all $\begin{pmatrix} x_0 \\ y_0\end{pmatrix} \in \mathrm{Span}\begin{pmatrix} I \\ Y\end{pmatrix}$, then

1. $\dim E^s \geq p$, and
2. there exists a matrix $\bar Y \in \mathbb{R}^{(n-p)\times p}$ such that $\mathrm{Span}\begin{pmatrix} I \\ \bar Y\end{pmatrix} \subset E^s$.

Proof. 1. Using the Jordan canonical form, we rewrite matrix A as

$$A = S\begin{pmatrix} \Lambda_s & \\ & \Lambda_u\end{pmatrix}S^{-1},$$

where $\Lambda_s \in \mathbb{R}^{q\times q}$ contains all stable eigenvalues of A and $\Lambda_u \in \mathbb{R}^{(n-q)\times(n-q)}$ contains the remaining eigenvalues. Now, assume that $q < p$. By assumption

$$\lim_{t\to\infty}\, [I_p\ 0]\, S\begin{pmatrix} e^{\Lambda_s t} & \\ & e^{\Lambda_u t}\end{pmatrix}S^{-1}\begin{pmatrix} I_q & \\ & I_{p-q} \\ Y_0 & Y_1\end{pmatrix} = 0,$$

where $I_k$ denotes the $k\times k$ identity matrix and $Y = [Y_0\ Y_1]$. Denoting the corresponding blocks of S by $S_{ij}$ and the blocks of

$$S^{-1}\begin{pmatrix} I_q & \\ & I_{p-q} \\ Y_0 & Y_1\end{pmatrix} \quad \text{by} \quad \begin{pmatrix} P_1 & P_2 \\ Q_1 & Q_2 \\ R_1 & R_2\end{pmatrix},$$

we can rewrite the above equation as

$$\lim_{t\to\infty}\left\{\begin{pmatrix} S_{11} \\ S_{21}\end{pmatrix}e^{\Lambda_s t}[P_1\ P_2] + \begin{pmatrix} S_{12} & S_{13} \\ S_{22} & S_{23}\end{pmatrix}e^{\Lambda_u t}\begin{pmatrix} Q_1 & Q_2 \\ R_1 & R_2\end{pmatrix}\right\} = 0.$$

Since $\lim_{t\to\infty} e^{\Lambda_s t} = 0$, we conclude that

$$\lim_{t\to\infty}\begin{pmatrix} S_{12} & S_{13} \\ S_{22} & S_{23}\end{pmatrix}e^{\Lambda_u t}\begin{pmatrix} Q_1 & Q_2 \\ R_1 & R_2\end{pmatrix} = 0.$$

So, from Lemma 5.2, we have in particular that

$$\begin{pmatrix} S_{12} & S_{13} \\ S_{22} & S_{23}\end{pmatrix}\begin{pmatrix} Q_1 & Q_2 \\ R_1 & R_2\end{pmatrix} = 0. \qquad (33)$$

Next observe that, on the one hand,

$$H := \begin{pmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23}\end{pmatrix}\begin{pmatrix} P_1 & P_2 \\ Q_1 & Q_2 \\ R_1 & R_2\end{pmatrix} = [I_p\ 0]\begin{pmatrix} I_p \\ Y\end{pmatrix} = I_p.$$

On the other hand we have, using (33), that

$$H = \begin{pmatrix} S_{11} \\ S_{21}\end{pmatrix}[P_1\ P_2].$$

So, combining both results, we obtain $\begin{pmatrix} S_{11} \\ S_{21}\end{pmatrix}[P_1\ P_2] = I_p$. However, the left-hand side is the product of a $p\times q$ and a $q\times p$ matrix with $q < p$, so its rank is at most $q < p$ and it cannot equal $I_p$. So we have a contradiction. Therefore, our assumption that $q < p$ must have been wrong, which completes the first part of the proof.

2. From the first part of the lemma we conclude that $q \geq p$. Using the same notation, assume that the columns of

$$\begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \\ S_{31} & S_{32}\end{pmatrix} \quad \text{and} \quad \begin{pmatrix} S_{13} \\ S_{23} \\ S_{33}\end{pmatrix}$$

constitute a basis for $E^s$ and $E^u$, respectively. Denote

$$S^{-1}\begin{pmatrix} I_p \\ Y\end{pmatrix} =: \begin{pmatrix} P \\ Q \\ R\end{pmatrix}.$$

Then, by assumption,

$$\lim_{t\to\infty}\, [I_p\ 0]\, S\begin{pmatrix} e^{\Lambda_s t} & \\ & e^{\Lambda_u t}\end{pmatrix}S^{-1}\begin{pmatrix} I_p \\ Y\end{pmatrix} = \lim_{t\to\infty}\left\{[S_{11}\ S_{12}]\, e^{\Lambda_s t}\begin{pmatrix} P \\ Q\end{pmatrix} + S_{13}\, e^{\Lambda_u t}R\right\} = 0.$$

Since the first term converges to zero, $\lim_{t\to\infty} S_{13}e^{\Lambda_u t}R = 0$. So, from Lemma 5.2 again, we infer that $S_{13}R = 0$. But, since

$$H := [S_{11}\ S_{12}\ S_{13}]\begin{pmatrix} P \\ Q \\ R\end{pmatrix} = I_p,$$

we then conclude that $H = [S_{11}\ S_{12}]\begin{pmatrix} P \\ Q\end{pmatrix} = I_p$. So, necessarily, matrix $[S_{11}\ S_{12}]$ must be a full row rank matrix. From this observation the conclusion follows straightforwardly. $\Box$
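Lemma 5.2 can be illustrated numerically: for an antistable A, the function $Me^{At}N$ stays bounded (indeed identically zero) precisely when the moments $MA^iN$ all vanish. A small sketch with illustrative matrices, using a diagonal A so that $e^{At}$ is available in closed form:

```python
import numpy as np

# Lemma 5.2 in a small example: for A with spectrum in the closed right
# half plane, M exp(At) N -> 0 iff M A^i N = 0 for i = 0, ..., n-1.
A = np.diag([1.0, 2.0])              # antistable, n = 2 (illustrative)
M = np.array([[1.0, 0.0]])
N_zero = np.array([[0.0], [1.0]])    # satisfies M A^i N = 0 for all i
N_grow = np.array([[1.0], [0.0]])    # M A^0 N = 1 != 0

def moments(A, M, N, n=2):
    """The moments M A^i N, i = 0, ..., n-1, from the lemma."""
    return [(M @ np.linalg.matrix_power(A, i) @ N).item() for i in range(n)]

def M_expAt_N(A, M, N, t):
    expAt = np.diag(np.exp(np.diag(A) * t))   # valid because A is diagonal here
    return (M @ expAt @ N).item()

print(moments(A, M, N_zero))   # all moments vanish -> M exp(At) N stays 0
print(M_expAt_N(A, M, N_zero, 10.0), M_expAt_N(A, M, N_grow, 10.0))
```

For `N_zero` both moments are zero and $Me^{At}N$ remains zero for all t, while for `N_grow` the zeroth moment is nonzero and $Me^{At}N = e^t$ diverges, matching the two directions of the lemma.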

Next we proceed with the proof of Theorem 2.3. "$\Rightarrow$" part. Suppose that $(u_1^*, u_2^*)$ is a Nash equilibrium. That is, $J_1(u_1^*,u_2^*) \leq J_1(u_1,u_2^*)$ and $J_2(u_1^*,u_2^*) \leq J_2(u_1^*,u_2)$. From the first inequality we see that for every $x_0 \in \mathbb{R}^n$ the (nonhomogeneous) linear quadratic control problem to minimize

$$J_1 = \int_0^\infty \{x^T(t)Q_1x(t) + u_1^T(t)R_1u_1(t)\}\, dt,$$

subject to the (nonhomogeneous) state equation

$$\dot x(t) = Ax(t) + B_1u_1(t) + B_2u_2^*(t), \quad x(0) = x_0,$$

has a solution. This implies, see Theorem 2.2, that the algebraic Riccati equation (15) has a stabilizing solution (with i = 1). In a similar way it follows that the second algebraic Riccati equation must also have a stabilizing solution, which completes the proof of point 2.

To prove point 1 we consider Theorem 2.2 in some more detail. According to Theorem 2.2 the minimization problem

$$\min_{u_1} J_1(x_0,u_1,u_2^*) = \int_0^\infty \{x_1^T(t)Q_1x_1(t) + u_1^T(t)R_1u_1(t)\}\, dt, \quad \text{where} \quad \dot x_1 = Ax_1 + B_1u_1 + B_2u_2^*, \ x_1(0) = x_0,$$

has a unique solution. This solution is given by

$$\tilde u_1(t) = -R_1^{-1}B_1^T(K_1x_1(t) + m_1(t)). \qquad (34)$$

Here $m_1(t)$ is given by

$$m_1(t) = \int_t^\infty e^{-(A-S_1K_1)^T(t-s)}K_1B_2u_2^*(s)\, ds, \qquad (35)$$

whereas $K_1$ is the stabilizing solution of the algebraic Riccati equation

$$Q_1 + A^TX + XA - XS_1X = 0. \qquad (36)$$

Notice that, since the optimal control $\tilde u_1$ is uniquely determined, and by definition the equilibrium control $u_1^*$ solves the optimization problem, $u_1^*(t) = \tilde u_1(t)$. Consequently,

$$\frac{d(x(t)-x_1(t))}{dt} = Ax(t) + B_1u_1^*(t) + B_2u_2^*(t) - \bigl((A-S_1K_1)x_1(t) - S_1m_1(t) + B_2u_2^*(t)\bigr)$$
$$= Ax(t) - S_1\bigl(K_1x_1(t) + m_1(t)\bigr) - Ax_1(t) + S_1K_1x_1(t) + S_1m_1(t) = A(x(t)-x_1(t)).$$

Since $x(0) - x_1(0) = x_0 - x_0 = 0$ it follows that $x_1(t) = x(t)$. In a similar way we obtain from the minimization of $J_2$, with $u_1^*$ now entering the system as an external signal, that

$$u_2^*(t) = -R_2^{-1}B_2^T(K_2x(t) + m_2(t)),$$

where

$$m_2(t) = \int_t^\infty e^{-(A-S_2K_2)^T(t-s)}K_2B_1u_1^*(s)\, ds, \qquad (37)$$

and $K_2$ is the stabilizing solution of the algebraic Riccati equation

$$Q_2 + A^TX + XA - XS_2X = 0. \qquad (38)$$

By straightforward differentiation of (35) and (37), respectively, we obtain

$$\dot m_1(t) = -(A-S_1K_1)^Tm_1(t) - K_1B_2u_2^*(t) \qquad (39)$$

and

$$\dot m_2(t) = -(A-S_2K_2)^Tm_2(t) - K_2B_1u_1^*(t). \qquad (40)$$

Next, introduce

$$\psi_i(t) := K_ix(t) + m_i(t), \quad i = 1,2. \qquad (41)$$

Using (39) and (36) we get

$$\dot\psi_1(t) = K_1\dot x(t) + \dot m_1(t)$$
$$= K_1(A-S_1K_1)x(t) - K_1S_1m_1(t) + K_1B_2u_2^*(t) - K_1B_2u_2^*(t) - (A-S_1K_1)^Tm_1(t)$$
$$= (-Q_1 - A^TK_1)x(t) - K_1S_1m_1(t) - (A-S_1K_1)^Tm_1(t)$$
$$= -Q_1x(t) - A^T\bigl(K_1x(t) + m_1(t)\bigr) = -Q_1x(t) - A^T\psi_1(t). \qquad (42)$$

In a similar way we have that $\dot\psi_2(t) = -Q_2x(t) - A^T\psi_2(t)$. Consequently, with the vector $v^T(t) := [x^T(t), \psi_1^T(t), \psi_2^T(t)]$, $v(t)$ satisfies

$$\dot v(t) = \begin{pmatrix} A & -S_1 & -S_2 \\ -Q_1 & -A^T & 0 \\ -Q_2 & 0 & -A^T\end{pmatrix}v(t) = Mv(t), \quad \text{with } v_1(0) = x_0.$$

Since, by assumption, $v_1(t)$ converges to zero for arbitrary $x_0$, it is clear from Lemma 5.3, by choosing consecutively $x_0 = e_i$, $i = 1, \ldots, n$, that matrix M must have at least n stable eigenvalues (counting algebraic multiplicities). Moreover, the other statement follows from the second part of this lemma, which completes this part of the proof.

"$\Leftarrow$" part. Let $u_2^*$ be as claimed in the theorem, that is, $u_2^*(t) = -R_2^{-1}B_2^T\psi_2(t)$.

We next show that then necessarily $u_1^*$ solves the optimization problem

$$\min_{u_1} \int_0^\infty \{\bar x^T(t)Q_1\bar x(t) + u_1^T(t)R_1u_1(t)\}\, dt,$$

subject to

$$\dot{\bar x}(t) = A\bar x(t) + B_1u_1(t) + B_2u_2^*(t), \quad \bar x(0) = x_0.$$

Since, by assumption, the algebraic Riccati equation

$$Q_1 + A^TK_1 + K_1A - K_1S_1K_1 = 0 \qquad (43)$$

has a stabilizing solution, according to Theorem 2.2 the above minimization problem has a solution. This solution is given by

$$\bar u_1(t) = -R_1^{-1}B_1^T(K_1\bar x(t) + \bar m_1(t)), \quad \text{where} \quad \bar m_1(t) = \int_t^\infty e^{-(A-S_1K_1)^T(t-s)}K_1B_2u_2^*(s)\, ds.$$

Next, introduce $\bar\psi_1(t) := K_1\bar x(t) + \bar m_1(t)$. Then, similar to (42), we obtain $\dot{\bar\psi}_1 = -Q_1\bar x - A^T\bar\psi_1$. Consequently, $x_d(t) := x(t) - \bar x(t)$ and $\psi_d(t) := \psi_1(t) - \bar\psi_1(t)$ satisfy

$$\begin{pmatrix} \dot x_d(t) \\ \dot\psi_d(t)\end{pmatrix} = \begin{pmatrix} A & -S_1 \\ -Q_1 & -A^T\end{pmatrix}\begin{pmatrix} x_d(t) \\ \psi_d(t)\end{pmatrix}, \quad \begin{pmatrix} x_d(0) \\ \psi_d(0)\end{pmatrix} = \begin{pmatrix} 0 \\ p_0\end{pmatrix}.$$

Notice that $\begin{pmatrix} A & -S_1 \\ -Q_1 & -A^T\end{pmatrix}$ is the Hamiltonian matrix associated with the algebraic Riccati equation (43). Recall that the spectrum of this matrix is symmetric w.r.t. the imaginary axis. Since by assumption the Riccati equation (43) has a stabilizing solution, we know that its stable invariant subspace is given by $\mathrm{Span}\,[I\ \ K_1]^T$. Therefore, with $E^u$ representing a basis for the unstable subspace, we can write

$$\begin{pmatrix} 0 \\ p_0\end{pmatrix} = \begin{pmatrix} I \\ K_1\end{pmatrix}v_1 + E^uv_2,$$

for some vectors $v_i$, $i = 1,2$. However, it is easily verified that, due to our asymptotic stability assumption, both $x_d(t)$ and $\psi_d(t)$ converge to zero if $t\to\infty$. So $v_2$ must be zero. From this it follows now directly that $p_0 = 0$. Since the solution of the differential equation is uniquely determined, and $[x_d(t)\ \psi_d(t)] = [0\ 0]$ solves it, we conclude that $\bar x(t) = x(t)$ and $\bar\psi_1(t) = \psi_1(t)$.

Or, stated differently, $u_1^*$ solves the minimization problem. In a similar way it is shown that, for $u_1$ given by $u_1^*$, player two's optimal control is given by $u_2^*$, which proves the claim. $\Box$

Proof of Theorem 2.9. In the proof of this theorem we need a result about observable systems, which we first state in a separate lemma.

Lemma 5.4 Consider the system $\dot x(t) = Ax(t)$, $y(t) = Cx(t)$. Assume that $(C,A)$ is detectable. Then whenever $y(t) = Cx(t) \to 0$ if $t\to\infty$, it follows that also $x(t) \to 0$ if $t\to\infty$.

Next, we proceed with the proof of the theorem. Since by assumption the stable subspace $E^s$ is a graph subspace, we know that every initial state $x_0$ can be written uniquely as a combination of the first n entries of the basis vectors in $E^s$. Consequently, with every $x_0$ there corresponds a unique $\psi_1(0)$ and $\psi_2(0)$ for which the solution of the differential equation $\dot z(t) = Mz(t)$, with $z^T = [x^T\ \psi_1^T\ \psi_2^T]$, converges to zero. So, according to Theorem 2.3, for every $x_0$ there is a Nash equilibrium. On the other hand, we have from the proof of Theorem 2.3 that all Nash equilibrium actions $(u_1^*, u_2^*)$ satisfy

$$u_i^*(t) = -R_i^{-1}B_i^T\psi_i(t), \quad i = 1,2,$$

where $\psi_i(t)$ satisfy the differential equation

$$\begin{pmatrix} \dot x(t) \\ \dot\psi_1(t) \\ \dot\psi_2(t)\end{pmatrix} = M\begin{pmatrix} x(t) \\ \psi_1(t) \\ \psi_2(t)\end{pmatrix}, \quad \text{with } x(0) = x_0.$$

Now, consider the system $\dot z(t) = Mz(t)$; $y(t) = Cz(t)$, where

$$C := \begin{pmatrix} I & & \\ & -R_1^{-1}B_1^T & \\ & & -R_2^{-1}B_2^T\end{pmatrix}.$$

Since a standing assumption in this paper is that $(A, B_i)$, $i = 1,2$, is stabilizable, it is easily verified that the pair $(C,M)$ is detectable. Consequently, due to our assumption that $x(t)$ and $u_i(t)$, $i = 1,2$, converge to zero, we have from Lemma 5.4 that $[x^T(t), \psi_1^T(t), \psi_2^T(t)]$ converges to zero. Therefore, $[x^T(0), \psi_1^T(0), \psi_2^T(0)]$ has to belong to the stable subspace of M. However, as we argued above, for every $x_0$ there is exactly one vector $\psi_1(0)$ and one vector $\psi_2(0)$ such that $[x^T(0), \psi_1^T(0), \psi_2^T(0)] \in E^s$. So we conclude that for every $x_0$ there exists exactly one Nash equilibrium.
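The best-response property established in these proofs can be checked by brute force in a scalar example: fixing player two's equilibrium action and perturbing player one's action away from $u_1^*$ should increase $J_1$. A crude forward-Euler sketch with illustrative parameter values; the gain $p = 1/(2+\sqrt 6)$ used below is the scalar costate gain obtained from the stable eigenvector of M for $a = -2$, $b_i = q_i = r_i = 1$:

```python
import math

# Scalar game (illustrative): dx/dt = a x + u1 + u2, J_i = int q x^2 + r u_i^2 dt.
a, q, r = -2.0, 1.0, 1.0
lam = math.sqrt(6.0)             # modulus of the stable eigenvalue of M
p = 1.0 / (2.0 + lam)            # costate gain: psi_i = p * x on the equilibrium path
x0, dt, T = 1.0, 1e-3, 12.0

def J1(delta):
    """Player one's cost when u1 = u1* + delta(t), with u2 fixed at u2*."""
    x, cost, t = x0, 0.0, 0.0
    while t < T:
        u2 = -p * x0 * math.exp(-lam * t)             # equilibrium open-loop action
        u1 = -p * x0 * math.exp(-lam * t) + delta(t)  # perturbed action of player one
        cost += (q * x * x + r * u1 * u1) * dt
        x += (a * x + u1 + u2) * dt                   # forward Euler step
        t += dt
    return cost

base = J1(lambda t: 0.0)
perturbed = J1(lambda t: 0.5 * math.exp(-t))  # any nonzero square-integrable perturbation
print(base, perturbed)
```

Since $J_1$ is strictly convex in $u_1$ and the linear term vanishes at the optimum, any nonzero perturbation raises the cost by at least $r\int_0^\infty \delta^2\,dt$; the simulation confirms `perturbed > base` up to discretization error.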

From the assumption that the stable subspace is a graph subspace we conclude furthermore, see e.g. Engwerda [8, Theorem 6], that the set of algebraic Riccati equations (13), (14) has a stabilizing solution. Consequently, we obtain from Theorem 2.6 that the game has an equilibrium for every initial state, given by (16). Since for every initial state the equilibrium actions are uniquely determined, it follows that the equilibrium actions $u_i^*$, $i = 1,2$, have to coincide with (16). $\Box$

References

[1] Abou-Kandil H. and Bertrand P., 1986, Analytic solution for a class of linear quadratic open-loop Nash games, International Journal of Control, Vol. 43.
[2] Abou-Kandil H., Freiling G. and Jank G., 1993, Necessary and sufficient conditions for constant solutions of coupled Riccati equations in Nash games, Systems & Control Letters, Vol. 21.
[3] Başar T. and Bernhard P., 1995, H∞-Optimal Control and Related Minimax Design Problems, Birkhäuser, Boston.
[4] Başar T. and Olsder G.J., 1999, Dynamic Noncooperative Game Theory, SIAM, Philadelphia.
[5] Dockner E., Jørgensen S., Long N. van and Sorger G., 2000, Differential Games in Economics and Management Science, Cambridge University Press, Cambridge.
[6] Dooren P. van, 1981, A generalized eigenvalue approach for solving Riccati equations, SIAM Journal on Scientific and Statistical Computing, Vol. 2.
[7] Eisele T., 1982, Nonexistence and nonuniqueness of open-loop equilibria in linear-quadratic differential games, Journal of Optimization Theory and Applications, Vol. 37, No. 4.
[8] Engwerda J.C., 1998, On the open-loop Nash equilibrium in LQ-games, Journal of Economic Dynamics and Control, Vol. 22.
[9] Engwerda J.C., 1998, Computational aspects of the open-loop Nash equilibrium in LQ-games, Journal of Economic Dynamics and Control, Vol. 22.
[10] Engwerda J.C., 2004, Linear-Quadratic Differential Games, to appear.
[11] Feucht M., 1994, Linear-quadratische Differentialspiele und gekoppelte Riccatische Matrixdifferentialgleichungen, Ph.D. Thesis, Universität Ulm, Germany.
[12] Haurie A. and Leitmann G., 1984, On the global asymptotic stability of equilibrium solutions for open-loop differential games, Large Scale Systems, Vol. 6.
[13] Kremer D., 2002, Non-Symmetric Riccati Theory and Noncooperative Games, Ph.D. Thesis, RWTH-Aachen, Germany.
[14] Kun G., 2000, Stabilizability, Controllability and Optimal Strategies of Linear and Nonlinear Dynamical Games, Ph.D. Thesis, RWTH-Aachen, Germany.
[15] Laub A.J., 1979, A Schur method for solving algebraic Riccati equations, IEEE Transactions on Automatic Control, Vol. 24.
[16] Laub A.J., 1991, Invariant subspace methods for the numerical solution of Riccati equations, in: Bittanti S., Laub A.J. and Willems J.C. (eds.), The Riccati Equation, Springer-Verlag, Berlin.
[17] Lukes D.L. and Russell D.L., 1971, Linear-quadratic games, Journal of Mathematical Analysis and Applications, Vol. 33.
[18] Mehrmann V.L., 1991, The Autonomous Linear Quadratic Control Problem: Theory and Numerical Solution, Lecture Notes in Control and Information Sciences 163, Springer-Verlag, Berlin.
[19] Meyer H.-B., 1976, The matrix equation $AZ + B - ZCZ - ZD = 0$, SIAM Journal on Applied Mathematics, Vol. 30.
[20] Molinari B.P., 1977, The time-invariant linear-quadratic optimal control problem, Automatica, Vol. 13.
[21] Paige C. and Van Loan C., 1981, A Schur decomposition for Hamiltonian matrices, Linear Algebra and its Applications, Vol. 41.
[22] Simaan M. and Cruz J.B., Jr., 1973, On the solution of the open-loop Nash Riccati equations in linear quadratic differential games, International Journal of Control, Vol. 18.
[23] Starr A.W. and Ho Y.C., 1969, Nonzero-sum differential games, Journal of Optimization Theory and Applications, Vol. 3.
[24] Weeren A.J.T.M., 1995, Coordination in Hierarchical Control, Ph.D. Thesis, Tilburg University, The Netherlands.
[25] Willems J.C., 1971, Least squares stationary optimal control and the algebraic Riccati equation, IEEE Transactions on Automatic Control, Vol. 16.
[26] Zhou K., Doyle J.C. and Glover K., 1996, Robust and Optimal Control, Prentice Hall, New Jersey.


More information

Criterions on periodic feedback stabilization for some evolution equations

Criterions on periodic feedback stabilization for some evolution equations Criterions on periodic feedback stabilization for some evolution equations School of Mathematics and Statistics, Wuhan University, P. R. China (Joint work with Yashan Xu, Fudan University) Toulouse, June,

More information

EE363 homework 2 solutions

EE363 homework 2 solutions EE363 Prof. S. Boyd EE363 homework 2 solutions. Derivative of matrix inverse. Suppose that X : R R n n, and that X(t is invertible. Show that ( d d dt X(t = X(t dt X(t X(t. Hint: differentiate X(tX(t =

More information

L 2 -induced Gains of Switched Systems and Classes of Switching Signals

L 2 -induced Gains of Switched Systems and Classes of Switching Signals L 2 -induced Gains of Switched Systems and Classes of Switching Signals Kenji Hirata and João P. Hespanha Abstract This paper addresses the L 2-induced gain analysis for switched linear systems. We exploit

More information

Linear Quadratic Zero-Sum Two-Person Differential Games

Linear Quadratic Zero-Sum Two-Person Differential Games Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard To cite this version: Pierre Bernhard. Linear Quadratic Zero-Sum Two-Person Differential Games. Encyclopaedia of Systems and Control,

More information

Deterministic Dynamic Programming

Deterministic Dynamic Programming Deterministic Dynamic Programming 1 Value Function Consider the following optimal control problem in Mayer s form: V (t 0, x 0 ) = inf u U J(t 1, x(t 1 )) (1) subject to ẋ(t) = f(t, x(t), u(t)), x(t 0

More information

USE OF SEMIDEFINITE PROGRAMMING FOR SOLVING THE LQR PROBLEM SUBJECT TO RECTANGULAR DESCRIPTOR SYSTEMS

USE OF SEMIDEFINITE PROGRAMMING FOR SOLVING THE LQR PROBLEM SUBJECT TO RECTANGULAR DESCRIPTOR SYSTEMS Int. J. Appl. Math. Comput. Sci. 21 Vol. 2 No. 4 655 664 DOI: 1.2478/v16-1-48-9 USE OF SEMIDEFINITE PROGRAMMING FOR SOLVING THE LQR PROBLEM SUBJECT TO RECTANGULAR DESCRIPTOR SYSTEMS MUHAFZAN Department

More information

Perturbation theory for eigenvalues of Hermitian pencils. Christian Mehl Institut für Mathematik TU Berlin, Germany. 9th Elgersburg Workshop

Perturbation theory for eigenvalues of Hermitian pencils. Christian Mehl Institut für Mathematik TU Berlin, Germany. 9th Elgersburg Workshop Perturbation theory for eigenvalues of Hermitian pencils Christian Mehl Institut für Mathematik TU Berlin, Germany 9th Elgersburg Workshop Elgersburg, March 3, 2014 joint work with Shreemayee Bora, Michael

More information

ME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ

ME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ ME 234, Lyapunov and Riccati Problems. This problem is to recall some facts and formulae you already know. (a) Let A and B be matrices of appropriate dimension. Show that (A, B) is controllable if and

More information

Linear-Quadratic Stochastic Differential Games with General Noise Processes

Linear-Quadratic Stochastic Differential Games with General Noise Processes Linear-Quadratic Stochastic Differential Games with General Noise Processes Tyrone E. Duncan Abstract In this paper a noncooperative, two person, zero sum, stochastic differential game is formulated and

More information

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T 1 1 Linear Systems The goal of this chapter is to study linear systems of ordinary differential equations: ẋ = Ax, x(0) = x 0, (1) where x R n, A is an n n matrix and ẋ = dx ( dt = dx1 dt,..., dx ) T n.

More information

Examples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling

Examples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling 1 Introduction Many natural processes can be viewed as dynamical systems, where the system is represented by a set of state variables and its evolution governed by a set of differential equations. Examples

More information

Linear ODEs. Existence of solutions to linear IVPs. Resolvent matrix. Autonomous linear systems

Linear ODEs. Existence of solutions to linear IVPs. Resolvent matrix. Autonomous linear systems Linear ODEs p. 1 Linear ODEs Existence of solutions to linear IVPs Resolvent matrix Autonomous linear systems Linear ODEs Definition (Linear ODE) A linear ODE is a differential equation taking the form

More information

Approximate Bisimulations for Constrained Linear Systems

Approximate Bisimulations for Constrained Linear Systems Approximate Bisimulations for Constrained Linear Systems Antoine Girard and George J Pappas Abstract In this paper, inspired by exact notions of bisimulation equivalence for discrete-event and continuous-time

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

Theorem 1. ẋ = Ax is globally exponentially stable (GES) iff A is Hurwitz (i.e., max(re(σ(a))) < 0).

Theorem 1. ẋ = Ax is globally exponentially stable (GES) iff A is Hurwitz (i.e., max(re(σ(a))) < 0). Linear Systems Notes Lecture Proposition. A M n (R) is positive definite iff all nested minors are greater than or equal to zero. n Proof. ( ): Positive definite iff λ i >. Let det(a) = λj and H = {x D

More information

Some solutions of the written exam of January 27th, 2014

Some solutions of the written exam of January 27th, 2014 TEORIA DEI SISTEMI Systems Theory) Prof. C. Manes, Prof. A. Germani Some solutions of the written exam of January 7th, 0 Problem. Consider a feedback control system with unit feedback gain, with the following

More information

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications

Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications L. Faybusovich T. Mouktonglang Department of Mathematics, University of Notre Dame, Notre Dame, IN

More information

Zero controllability in discrete-time structured systems

Zero controllability in discrete-time structured systems 1 Zero controllability in discrete-time structured systems Jacob van der Woude arxiv:173.8394v1 [math.oc] 24 Mar 217 Abstract In this paper we consider complex dynamical networks modeled by means of state

More information

Differential Equations and Modeling

Differential Equations and Modeling Differential Equations and Modeling Preliminary Lecture Notes Adolfo J. Rumbos c Draft date: March 22, 2018 March 22, 2018 2 Contents 1 Preface 5 2 Introduction to Modeling 7 2.1 Constructing Models.........................

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Chapter III. Stability of Linear Systems

Chapter III. Stability of Linear Systems 1 Chapter III Stability of Linear Systems 1. Stability and state transition matrix 2. Time-varying (non-autonomous) systems 3. Time-invariant systems 1 STABILITY AND STATE TRANSITION MATRIX 2 In this chapter,

More information

Partial Eigenvalue Assignment in Linear Systems: Existence, Uniqueness and Numerical Solution

Partial Eigenvalue Assignment in Linear Systems: Existence, Uniqueness and Numerical Solution Partial Eigenvalue Assignment in Linear Systems: Existence, Uniqueness and Numerical Solution Biswa N. Datta, IEEE Fellow Department of Mathematics Northern Illinois University DeKalb, IL, 60115 USA e-mail:

More information

Control, Stabilization and Numerics for Partial Differential Equations

Control, Stabilization and Numerics for Partial Differential Equations Paris-Sud, Orsay, December 06 Control, Stabilization and Numerics for Partial Differential Equations Enrique Zuazua Universidad Autónoma 28049 Madrid, Spain enrique.zuazua@uam.es http://www.uam.es/enrique.zuazua

More information

On the linear quadratic optimization problems and associated Riccati equations for systems modeled by Ito linear differential equations

On the linear quadratic optimization problems and associated Riccati equations for systems modeled by Ito linear differential equations Innovativity in Modeling and Analytics Journal of Research vol. 1, 216, pp.13-33, http://imajor.info/ c imajor ISSN XXXX-XXXX On the linear quadratic optimization problems and associated Riccati equations

More information

Control for Coordination of Linear Systems

Control for Coordination of Linear Systems Control for Coordination of Linear Systems André C.M. Ran Department of Mathematics, Vrije Universiteit De Boelelaan 181a, 181 HV Amsterdam, The Netherlands Jan H. van Schuppen CWI, P.O. Box 9479, 19 GB

More information

NOTES ON LINEAR ODES

NOTES ON LINEAR ODES NOTES ON LINEAR ODES JONATHAN LUK We can now use all the discussions we had on linear algebra to study linear ODEs Most of this material appears in the textbook in 21, 22, 23, 26 As always, this is a preliminary

More information

Grammians. Matthew M. Peet. Lecture 20: Grammians. Illinois Institute of Technology

Grammians. Matthew M. Peet. Lecture 20: Grammians. Illinois Institute of Technology Grammians Matthew M. Peet Illinois Institute of Technology Lecture 2: Grammians Lyapunov Equations Proposition 1. Suppose A is Hurwitz and Q is a square matrix. Then X = e AT s Qe As ds is the unique solution

More information

On the Stabilization of Neutrally Stable Linear Discrete Time Systems

On the Stabilization of Neutrally Stable Linear Discrete Time Systems TWCCC Texas Wisconsin California Control Consortium Technical report number 2017 01 On the Stabilization of Neutrally Stable Linear Discrete Time Systems Travis J. Arnold and James B. Rawlings Department

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

Adaptive State Feedback Nash Strategies for Linear Quadratic Discrete-Time Games

Adaptive State Feedback Nash Strategies for Linear Quadratic Discrete-Time Games Adaptive State Feedbac Nash Strategies for Linear Quadratic Discrete-Time Games Dan Shen and Jose B. Cruz, Jr. Intelligent Automation Inc., Rocville, MD 2858 USA (email: dshen@i-a-i.com). The Ohio State

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

Stochastic Nonlinear Stabilization Part II: Inverse Optimality Hua Deng and Miroslav Krstic Department of Mechanical Engineering h

Stochastic Nonlinear Stabilization Part II: Inverse Optimality Hua Deng and Miroslav Krstic Department of Mechanical Engineering h Stochastic Nonlinear Stabilization Part II: Inverse Optimality Hua Deng and Miroslav Krstic Department of Mechanical Engineering denghua@eng.umd.edu http://www.engr.umd.edu/~denghua/ University of Maryland

More information

On Linear-Quadratic Control Theory of Implicit Difference Equations

On Linear-Quadratic Control Theory of Implicit Difference Equations On Linear-Quadratic Control Theory of Implicit Difference Equations Daniel Bankmann Technische Universität Berlin 10. Elgersburg Workshop 2016 February 9, 2016 D. Bankmann (TU Berlin) Control of IDEs February

More information

8 Periodic Linear Di erential Equations - Floquet Theory

8 Periodic Linear Di erential Equations - Floquet Theory 8 Periodic Linear Di erential Equations - Floquet Theory The general theory of time varying linear di erential equations _x(t) = A(t)x(t) is still amazingly incomplete. Only for certain classes of functions

More information

A note on linear differential equations with periodic coefficients.

A note on linear differential equations with periodic coefficients. A note on linear differential equations with periodic coefficients. Maite Grau (1) and Daniel Peralta-Salas (2) (1) Departament de Matemàtica. Universitat de Lleida. Avda. Jaume II, 69. 251 Lleida, Spain.

More information

ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION

ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION Harald K. Wimmer 1 The set of all negative-semidefinite solutions of the CARE A X + XA + XBB X C C = 0 is homeomorphic

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Module 07 Controllability and Controller Design of Dynamical LTI Systems

Module 07 Controllability and Controller Design of Dynamical LTI Systems Module 07 Controllability and Controller Design of Dynamical LTI Systems Ahmad F. Taha EE 5143: Linear Systems and Control Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/ataha October

More information

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra Definite versus Indefinite Linear Algebra Christian Mehl Institut für Mathematik TU Berlin Germany 10th SIAM Conference on Applied Linear Algebra Monterey Bay Seaside, October 26-29, 2009 Indefinite Linear

More information

Discussion on: Measurable signal decoupling with dynamic feedforward compensation and unknown-input observation for systems with direct feedthrough

Discussion on: Measurable signal decoupling with dynamic feedforward compensation and unknown-input observation for systems with direct feedthrough Discussion on: Measurable signal decoupling with dynamic feedforward compensation and unknown-input observation for systems with direct feedthrough H.L. Trentelman 1 The geometric approach In the last

More information

= A(λ, t)x. In this chapter we will focus on the case that the matrix A does not depend on time (so that the ODE is autonomous):

= A(λ, t)x. In this chapter we will focus on the case that the matrix A does not depend on time (so that the ODE is autonomous): Chapter 2 Linear autonomous ODEs 2 Linearity Linear ODEs form an important class of ODEs They are characterized by the fact that the vector field f : R m R p R R m is linear at constant value of the parameters

More information

Non-zero Sum Stochastic Differential Games of Fully Coupled Forward-Backward Stochastic Systems

Non-zero Sum Stochastic Differential Games of Fully Coupled Forward-Backward Stochastic Systems Non-zero Sum Stochastic Differential Games of Fully Coupled Forward-Backward Stochastic Systems Maoning ang 1 Qingxin Meng 1 Yongzheng Sun 2 arxiv:11.236v1 math.oc 12 Oct 21 Abstract In this paper, an

More information

ECEN 605 LINEAR SYSTEMS. Lecture 7 Solution of State Equations 1/77

ECEN 605 LINEAR SYSTEMS. Lecture 7 Solution of State Equations 1/77 1/77 ECEN 605 LINEAR SYSTEMS Lecture 7 Solution of State Equations Solution of State Space Equations Recall from the previous Lecture note, for a system: ẋ(t) = A x(t) + B u(t) y(t) = C x(t) + D u(t),

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

An Introduction to Noncooperative Games

An Introduction to Noncooperative Games An Introduction to Noncooperative Games Alberto Bressan Department of Mathematics, Penn State University Alberto Bressan (Penn State) Noncooperative Games 1 / 29 introduction to non-cooperative games:

More information

Reduced-order Model Based on H -Balancing for Infinite-Dimensional Systems

Reduced-order Model Based on H -Balancing for Infinite-Dimensional Systems Applied Mathematical Sciences, Vol. 7, 2013, no. 9, 405-418 Reduced-order Model Based on H -Balancing for Infinite-Dimensional Systems Fatmawati Department of Mathematics Faculty of Science and Technology,

More information

Optimal Control. Quadratic Functions. Single variable quadratic function: Multi-variable quadratic function:

Optimal Control. Quadratic Functions. Single variable quadratic function: Multi-variable quadratic function: Optimal Control Control design based on pole-placement has non unique solutions Best locations for eigenvalues are sometimes difficult to determine Linear Quadratic LQ) Optimal control minimizes a quadratic

More information