Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games


Alberto Bressan (*) and Khai T. Nguyen (**)

(*) Department of Mathematics, Penn State University
(**) Department of Mathematics, North Carolina State University, USA

September 7, 2016

Abstract

We consider a non-cooperative game in infinite time horizon, with linear dynamics and exponentially discounted quadratic costs. Assuming that the state space is one-dimensional, we prove that the Nash equilibrium solution in feedback form is stable under nonlinear perturbations. The analysis shows that, in a generic setting, the linear-quadratic game can have either one or infinitely many feedback equilibrium solutions. For each of these, a nearby solution of the perturbed nonlinear game can be constructed.

1 Introduction

Noncooperative differential games with linear dynamics and quadratic costs have been widely studied in recent literature [1, 2, 3, 4, 6, 9]. The existence of explicit solutions, determined by a finite set of algebraic equations, makes them a very attractive modeling tool. They have been used in a variety of applications, particularly in economics and management science.

In most of these applications, the underlying dynamical equations are nonlinear and the cost functions are not exactly quadratic polynomials. Prices, interest rates, inventory levels, etc., assume only positive values, while in a linear-quadratic game all functions must be defined on the whole real line. In spite of these discrepancies, linear-quadratic games can be regarded as a natural approximation to more realistic models. In an attempt to justify this approximation, one is led to the following

Question: Starting from a linear-quadratic differential game for two players, suppose that some small, nonlinear perturbations are added to the dynamics of the system and to the cost

2 functions. Does the perturbed game still have a Nash equilibrium solution in feedback form, close to the original one? For noncooperative differential games on a finite time interval [, T ], the analysis in [9] and [5] shows that the answer is largely negative. Indeed, the value functions for the two players are determined by a system of two Hamilton-Jacobi equations, with given terminal conditions at t = T. When the state space has dimension n =, in some cases this leads to a hyperbolic system, which has unique solutions [8]. However, in dimension n the system is not hyperbolic and the backward Cauchy problem is generically ill posed. A small nonlinear perturbation of the terminal costs may produce a large change in the solution. Aim of the present paper is to prove some positive results, in the case of differential games for two players in infinite time horizon, with exponentially discounted costs. Let u, u be the controls implemented by the two players, and assume that the state of the system x IR n evolves according to Moreover, let ẋ = fx, u, u ), u t) U, u t) U..) J i = e γt φ i xt), u t), u t) ) dt i =,,.) be the exponentially discounted costs. Here and in the sequel, an upper dot denotes a derivative w.r.t. time, while the constant γ > is a discount rate. Since the dynamics and the payoffs do not depend explicitly on time, it is natural to seek a Nash equilibrium solution consisting of time-independent feedbacks. Definition. A pair of functions x u x) U, x u x) U will be called a Nash equilibrium solution to the non cooperative game.)-.) provided that: i) The map u ) is an optimal feedback, in connection with the optimal control problem for the first player: subject to minimize e γt φ xt), u t), u xt)) ) dt ẋ = fx, u, u x)) u t) U. ii) The map u ) is an optimal feedback, in connection with the optimal control problem for the second player: subject to minimize e γt φ xt), u xt)), u ) dt ẋ = fx, u x), u ) u t) U.
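Before turning to the PDE characterization of such equilibria, it may help to note that the payoffs in (1.2) associated with any fixed pair of feedback strategies can be approximated by direct simulation: integrate the closed-loop dynamics together with the discounted running costs, truncating the infinite horizon at a time T where e^{-γT} is negligible. The sketch below does this for a one-dimensional state (the case studied in this paper); the particular drift, running costs and feedbacks passed in are illustrative placeholders, not data from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def discounted_costs(f, phi1, phi2, u1, u2, x0, gamma, T=40.0):
    """Approximate the discounted costs J_1, J_2 of (1.2) when both players
    use the given feedbacks: integrate the closed-loop dynamics together with
    the two discounted running costs, truncating the horizon at time T."""
    def rhs(t, y):
        x = y[0]
        a1, a2 = u1(x), u2(x)
        disc = np.exp(-gamma * t)
        return [f(x, a1, a2),
                disc * phi1(x, a1, a2),
                disc * phi2(x, a1, a2)]
    sol = solve_ivp(rhs, (0.0, T), [x0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1], sol.y[2, -1]   # approximations of J_1, J_2

# Placeholder game (not data from the paper): linear drift, quadratic running
# costs, affine feedbacks.
f    = lambda x, a1, a2: -x + a1 + a2
phi1 = lambda x, a1, a2: x**2 + 0.5 * a1**2
phi2 = lambda x, a1, a2: x**2 + 0.5 * a2**2
u1   = lambda x: -0.5 * x
u2   = lambda x: -0.5 * x

print(discounted_costs(f, phi1, phi2, u1, u2, x0=1.0, gamma=1.0))
```

Replacing u1 by a perturbed feedback while keeping u2 fixed, and checking that the computed J_1 does not decrease, gives a rough numerical test of requirement (i) in the definition above.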

3 This equilibrium solution can be found by solving the corresponding PDEs for the value functions. Assuming that the feedbacks u, u implemented the two players are sufficiently regular, let t xt, x ) be the trajectory of the system starting from an initial state x. This is obtained by solving the Cauchy problem ẋ = fx, u x), u x)), x) = x..3) The value functions for the two players are then computed by V i x ). = In the following we assume: ) e γt φ i xt, x ), u xt, x )), u xt, x )) dt, i =,..4) H) For any x IR n and every pair of vectors ξ, ξ ) IR n IR n, there exist a unique pair u, u ) U U such that { } u x, ξ, ξ ) = arg min ξ fx, ω, u ) + φ x, ω, u ),.5) ω U { } u x, ξ, ξ ) = arg min ξ fx, u, ω) + φ x, u, ω)..6) ω U On an open region where V, V are continuously differentiable, a dynamic programming argument see for example [, 6, 5]) shows that these value functions satisfy a system of Hamilton- Jacobi equations: { γv = H ) x, V, V ), γv = H ).7) x, V, V ), where, for i =,, ) ) H i). x, ξ, ξ ) = ξ i f x, u x, ξ, ξ ), u x, ξ, ξ ) + φ i x, u x, ξ, ξ ), u x, ξ, ξ ). If a solution to the system.7) can be found, then the Nash equilibrium feedback controls are computed by u i x) = u i x, V x), V x)), i =,..8) In general, the system.7) is highly nonlinear and difficult to solve globally on the entire space IR n ). Even in one space dimension, examples are known where this system admits infinitely many solutions, or no solution at all [7, 7]. In the case of linear dynamics ẋ = Ax + B u + B u,.9) and quadratic cost functionals ) J i = e γt a T xt) + x T t)r i xt) + u T i t)s i u i t) dt, i =,,.) a particular solution to the system of PDEs.7) can be found in terms of quadratic polynomials. Namely V i x) = xt M i x + n T i x + e i, i =,, 3

4 where the superscript T denotes transposition. To determine this particular solution, it suffices to solve a system of algebraic equations for the coefficients of the matrices M, M, the vectors n, n, and the scalars e, e. Our present goal is to understand whether this solution is stable w.r.t. small nonlinear perturbations of the dynamics and of the cost functionals. This issue can be studied in two separate settings: either on a bounded domain, or on the entire space IR n. I) Let u, u be the affine feedback controls in a Nash equilibrium for the linear quadratic game.9)-.), and consider any bounded domain Ω which is positively invariant for the corresponding dynamical system ẋ = Ax + B u x) + B u x)..) We recall that Ω is positively invariant if, for every solution x ) of.), one has the implication x) Ω = xt) Ω for all t. In this setting a natural question is whether, for a suitably small perturbation of the dynamics and of the cost functions, the system of H-J equations.7) admits a solution defined on Ω, close to the original one. The uniqueness of this solution is also an important issue, both for the linear-quadratic game and for the nonlinear perturbation. II) A further question is whether a solution to the perturbed game can be constructed on the entire space IR n, under suitable assumptions on how the perturbation grows as x. We remark that, for many applications, the models are meaningful only on a proper subset of IR n. For example, state variables should often satisfy non-negativity constraints. It is thus natural to construct solutions to the system of H-J equations.7) on a bounded domain Ω, as long as this domain is positively invariant for the corresponding dynamics. In this case, the feedback controls u i : Ω IRm will still provide a Nash equilibrium for a game where the state is required to remain inside Ω, assuming that the cost functions φ i in.) satisfy φ i x) = + if x / Ω. In this paper we confine the analysis to the one-dimensional case, under generic assumptions i.e., valid as the coefficients range on an open dense set). Denoting by ξ i = V i IR, i =,, the gradients of the value functions, the Hamilton-Jacobi equations.7) can be reduced to a system of two implicit ODEs, of the form ) ) ξ Λx, ξ, ξ ) ψ x, ξ ξ =, ξ ),.) ψ x, ξ, ξ ) where Λ is a suitable matrix. In the linear-quadratic case, one can find a solution ξ, ξ ) of.), where each ξ i x) = α ix+ β i is an affine function. Moreover, the determinant det Λx, ξ, ξ ) is a homogeneous quadratic polynomial of the three variables x, ξ, ξ. It vanishes on the surface of a double cone Γ, shown in Fig.. Looking at the relative position of the line ξ = { x, ξ x), ξ x)) ; x IR} w.r.t. the cone Γ, under generic conditions two main cases can occur. 4

Figure 1: The position of the line ξ̄ = ξ̄(x), relative to the double cone Γ where det Λ(x, ξ_1, ξ_2) ≤ 0. Left: the line is always outside Γ. Center: the line is inside Γ for x in some bounded interval [a, b]. Right: the line is inside Γ for x outside a bounded interval ]a, b[. Notice that trajectories of the implicit ODE (1.12) typically become singular as they approach the boundary of Γ, and cannot be prolonged any further (left figure).

Figure 2: If det Λ(x, ξ̄_1(x), ξ̄_2(x)) ≠ 0 for all x ∈ IR, then the linear-quadratic game has infinitely many solutions. In one of these solutions the feedbacks u_1*, u_2* are affine, while in all the others they are fully nonlinear functions. All these solutions are stable under small nonlinear perturbations of the dynamics and of the cost functions.

Figure 3: A case where det Λ(x, ξ̄_1(x), ξ̄_2(x)) vanishes at two points x_1 < x_2 and the linear-quadratic game has a unique solution, which is stable under small nonlinear perturbations of the dynamics and of the cost functions. The graph of ξ̄(·) contains a heteroclinic orbit of a related dynamical system.

6 CASE : The line ξ lies entirely outside the cone Fig., left). As a consequence, the matrix Λx, ξ, ξ ) is invertible in a neighborhood of ξ. The system of ODEs.) can thus be written in the standard form ξ ) ) = Λ ψ x, ξ x, ξ, ξ ), ξ )..3) ψ x, ξ, ξ ) ξ In this case, by solving.3) with slightly different initial data at x =, one can construct infinitely many Nash equilibrium solutions in feedback form for the same linear-quadratic game Fig. ). These solutions vary continuously under small nonlinear perturbations of the data. CASE : the line ξ is partly inside, partly outside the cone Γ. Fig., center and right), crossing the surface of the cone at two points Pi = x i, ξ x i), ξ x i)), i =,, with x < x. In this case, the equation detλx, ξ, ξ ) = locally defines two surfaces Σ, Σ, near the points P, P, respectively see Fig. 3). Always for the linear-quadratic game, a detailed analysis shows that a smooth solution of.) can cross the surface Σ only along a special curve γ Σ. By the same argument, it can cross the surface Σ only along a curve γ Σ. In general, the graph of a solution to.) can be constructed as a concatenation of orbits of a related dynamical system on IR 3, where all point on the curves γ and γ are steady states. Smooth solutions correspond to heteroclinic orbits, connecting a point on γ to a point on γ. Depending on the properties of the two equilibrium points P, P source, saddle, sink), there can be a single orbit, or else a - or a -parameter family of heteroclinic orbits. In a typical case, shown in Fig. 3, the points P γ and P γ are both saddle points. The -dimensional center-stable manifold M through P and the -dimensional center-unstable manifold M through P are transversal. Their intersection is the unique heteroclinic orbit joining γ with γ. Under generic assumptions, all of the above configurations are structurally stable. Namely, the curves γ i and the manifolds M i are still well defined in the presence of a small nonlinear perturbation. The family of heteroclinic orbits is preserved as well. The remainder of the paper is organized as follows. In Section we recall the system of Hamilton-Jacobi equations for a noncooperative -player game in infinite time horizon, while in Section 3 we review the construction of affine feedback solutions for the linear-quadratic case. Solutions of.) on a bounded interval are studied in Section 4. The main tool in the analysis is a representation of their graph as a concatenation of orbits, for a related dynamical system. The case of a heteroclinic orbit joining two saddle points, illustrated in Fig. 3, is worked out in detail in Section 5. This is the most pleasing case, where the linear-quadratic game has a unique feedback solution, which is stable under small nonlinear perturbations. In Section 6 we prove that, under suitable growth conditions, feedback solutions to the perturbed nonlinear game can be extended to the entire real line. Up to this section, all of our analysis is concerned with the system of ODEs.) for the derivatives of the value functions. Finally, in Section 7 we show that, by constructing a solution to.), one can recover the value functions V, V in.7) and provide a Nash equilibrium solution to the non-cooperative differential game. Remark. All of our results hold under generic assumptions on the coefficients a, b i, R i, S i of the linear-quadratic game 3.)-3.3). In other words, they are valid as these coefficients range in an open dense set. In some cases, this set can be precisely determined. In general, 6

this set is described by a finite set of inequalities:

    Φ_j(a, b_1, b_2, R_1, R_2, S_1, S_2) ≠ 0,    j = 1, . . . , N,

where the Φ_j are analytic functions. Having constructed examples where these inequalities do hold, by analyticity we conclude that they must hold on an open dense set.

In earlier literature, a few examples of feedback equilibria for nonlinear infinite horizon differential games have been studied in [4, 7, 17].

2 The basic setting

Throughout the following, we consider a system whose dynamics is linear w.r.t. the controls u_1, u_2. The state x ∈ IR thus evolves according to

    ẋ = f(x) + g_1(x)u_1 + g_2(x)u_2,    u_1(t), u_2(t) ∈ IR.    (2.1)

Here f, g_1, g_2 : IR → IR are smooth maps with sublinear growth:

    |f(x)| + |g_1(x)| + |g_2(x)| ≤ C(1 + |x|),    (2.2)

for some constant C. The cost functionals for the two players are

    J_i = ∫_0^∞ e^{−γt} [ϕ_i(x(t)) + (1/2)u_i(t)²] dt.    (2.3)

Call V_1, V_2 the value functions in a Nash equilibrium in feedback form. By the previous analysis, these functions satisfy the system of Hamilton-Jacobi equations (1.7), where

    H^{(i)}(x, ξ_1, ξ_2) = ξ_i [f(x) + g_1(x)u_1* + g_2(x)u_2*] + ϕ_i(x) + (1/2)(u_i*)²,    i = 1, 2.    (2.4)

The optimal feedback controls are given by the formulas

    u_1*(x, ξ_1, ξ_2) = argmin_ω {ξ_1 g_1(x)ω + ω²/2} = −ξ_1 g_1(x),
    u_2*(x, ξ_1, ξ_2) = argmin_ω {ξ_2 g_2(x)ω + ω²/2} = −ξ_2 g_2(x).    (2.5)

Inserting (2.5) in (2.4), from (1.7) one obtains the implicit system of ODEs

    γV_1 = [f(x) − (1/2)g_1²(x)V_1′ − g_2²(x)V_2′] V_1′ + ϕ_1,
    γV_2 = [f(x) − g_1²(x)V_1′ − (1/2)g_2²(x)V_2′] V_2′ + ϕ_2.    (2.6)

Differentiating (2.6) w.r.t. x, we obtain a system of two ODEs for the derivatives of the value functions ξ_1 = V_1′, ξ_2 = V_2′. Namely

    Λ(x, ξ_1, ξ_2) (ξ_1′, ξ_2′)^T = (ψ_1(x, ξ_1, ξ_2), ψ_2(x, ξ_1, ξ_2))^T,    (2.7)

where

    Λ(x, ξ_1, ξ_2) = (Λ_11  Λ_12)   (f − g_1²ξ_1 − g_2²ξ_2          −g_2²ξ_1        )
                     (Λ_21  Λ_22) = (       −g_1²ξ_2          f − g_1²ξ_1 − g_2²ξ_2),    (2.8)

    ψ_1(x, ξ_1, ξ_2) = (γ − f′)ξ_1 + g_1g_1′ ξ_1² + 2g_2g_2′ ξ_1ξ_2 − ϕ_1′,
    ψ_2(x, ξ_1, ξ_2) = (γ − f′)ξ_2 + g_2g_2′ ξ_2² + 2g_1g_1′ ξ_1ξ_2 − ϕ_2′.    (2.9)

When both players adopt the equilibrium feedback strategies (2.5), by (2.1) the system evolves according to

    ẋ = f − g_1²ξ_1 − g_2²ξ_2.    (2.10)

3 Review of the linear-quadratic case

In this section we review the construction of feedback equilibrium solutions for the linear-quadratic case. Assume that in (2.1) and (2.3) one has

    f(x) = a x,    g_i(x) = b_i,    ϕ_i(x) = R_i x + (1/2)S_i x²,    i = 1, 2,    (3.1)

for some constants a, b_i, R_i, S_i. The dynamics and cost functionals thus become

    ẋ = a x + b_1u_1 + b_2u_2,    (3.2)

    J_i = ∫_0^∞ e^{−γt} [R_i x + (1/2)S_i x² + (1/2)u_i²] dt.    (3.3)

Notice that, if a ≠ 0, the more general case where f(x) = a x + b can be reduced to (3.2) by a translation of coordinates. The following assumptions will be used throughout.

(A1) The coefficients S_1, S_2, γ are strictly positive. Moreover, one of the following alternatives holds.

    Case 1: 2a > γ > 0 and S_1b_1² + S_2b_2² > (2a − γ)²/2.
    Case 2: a < 0.

In the linear-quadratic case, the matrix Λ and the right hand side ψ in the system of ODEs (2.7) take the special form

    Λ(x, ξ_1, ξ_2) = (Λ_11  Λ_12)   (a x − b_1²ξ_1 − b_2²ξ_2          −b_2²ξ_1         )
                     (Λ_21  Λ_22) = (        −b_1²ξ_2          a x − b_1²ξ_1 − b_2²ξ_2),    (3.4)

    ψ_1(x, ξ_1, ξ_2) = (γ − a)ξ_1 − R_1 − S_1 x,
    ψ_2(x, ξ_1, ξ_2) = (γ − a)ξ_2 − R_2 − S_2 x.    (3.5)
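As carried out next, inserting the affine ansatz ξ_i(x) = α_i x + β_i into (2.7) with Λ, ψ as in (3.4)-(3.5) and matching the coefficients of x and the constant terms yields the algebraic system (3.7) for (α_1, α_2, β_1, β_2). A minimal numerical sketch for solving that system is given below; the four equations are written here as reconstructed from (3.4)-(3.5) under the normalization ϕ_i(x) = R_i x + (1/2)S_i x², and the sample coefficients are only illustrative (chosen to be consistent with Example 1 in Section 4).

```python
import numpy as np
from scipy.optimize import fsolve

def lq_affine_solution(a, b1, b2, gamma, R1, R2, S1, S2,
                       guess=(1.0, 1.0, 0.0, 0.0)):
    """Find xi_i(x) = alpha_i*x + beta_i solving Lambda xi' = psi for the
    linear-quadratic data (3.4)-(3.5), by matching the x-coefficients and the
    constant terms (a reconstruction of the algebraic system (3.7))."""
    def equations(v):
        a1, a2, be1, be2 = v
        return [b1**2*a1**2 + 2*b2**2*a1*a2 + (gamma - 2*a)*a1 - S1,
                b2**2*a2**2 + 2*b1**2*a1*a2 + (gamma - 2*a)*a2 - S2,
                (b1**2*a1 + b2**2*a2 + gamma - a)*be1 + b2**2*a1*be2 - R1,
                (b1**2*a1 + b2**2*a2 + gamma - a)*be2 + b1**2*a2*be1 - R2]
    return fsolve(equations, guess)

# Illustrative data (consistent with Example 1 in Section 4): a = 3, b_i = 1,
# gamma = 1, S_i = 50, R_1 = 3, R_2 = -3.  For this initial guess the solver
# returns approximately alpha_i = 5, beta_1 = 1, beta_2 = -1, i.e. the affine
# solution xi_1 = 5x + 1, xi_2 = 5x - 1.
print(lq_affine_solution(a=3, b1=1, b2=1, gamma=1, R1=3, R2=-3, S1=50, S2=50,
                         guess=(4.0, 4.0, 0.5, -0.5)))
```

Since the first two equations do not involve β_1, β_2, one can also solve the quadratic subsystem for (α_1, α_2) first and then the linear system for (β_1, β_2); this is what the reduction (3.8)-(3.15) and the solvability condition (3.16) accomplish analytically.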

9 Under the assumptions A), one can construct an explicit solution where ξ, ξ are affine functions, namely ξ x) = α x + β, ξ x) = α x + β. 3.6) Indeed, inserting 3.4) 3.6) in.7) one obtains ) a x b α x + β ) b α x + β ) α b α x + β )α = γ a )α x + β ) R + S x), ) a x b α x + β ) b α x + β ) α b α x + β )α = γ a )α x + β ) R + S x). Since these equalities must hold for every x, we obtain a system of four quadratic equations for the unknowns α, α, β, β. b α + b α α + γ a )α = S, b α + b α α + γ a )α = S, b α + b α a + γ)β + b α 3.7) β = R, b α + b α a + γ)β + b α β = R. To solve 3.7), it is convenient to introduce the variables A =.. b a γ, X i = i A α. S i b i, K i = i, i =,. 3.8) A The first two equations in 3.7) can then be written as { X + X X X = K, X + X X X = K. 3.9) Equivalently, { X + X )X X ) = K K, 3X + X ) X X ) X + X ) = K + K ). 3.) Introducing the further variables we obtain { Y. = X + X, Y. = X X, Y Y = K K, 3Y + 4Y Y = K + K ). 3.) By the first equation, Y = K K )/Y. Inserting this expression in the second equation, we find. GY ) = 3Y 4 + 4Y 3 + K + K ) ) Y = K K ). 3.) By the assumption A) and the definitions 3.8), two cases can arise. 9

10 Case : a > γ > and K + K > /. This implies G) = G ) =, G ) = 4K + K ) <, lim Gs) = ) s + Hence the equation 3.) has a solution Y >. Case : a <. This implies G ) = K + K ), lim Gs) = ) s Hence the equation 3.) has a solution Y <. In both cases, reverting to the original variables we find α = A [ b Y + + K ] K Y, α = A [ b Y + K ] K Y. 3.5) As soon as α, α have been determined, the last two equations in 3.7) provide a linear system for the variables β, β. This can be uniquely solved under the condition b α + b α a + γ) b α b α. 3.6) We observe that, under the assumption A), this inequality always holds. Indeed, dividing by A, 3.6) can be rewritten as Equivalently, Two cases are considered: X + X + a A ) X X. Y + a ) A 4 Y + ) + 4 Y ). 3.7) If a > γ > and K + K > / then Y > and a A = a a γ >. Thus If a < then Y + a A > Y + ) >. Y and < a A = a a γ <. Hence Y + a A < Y + ) <.

11 Therefore, Y + a ) A 4 Y + ) > 3.8) in Case as well as in Case. Hence 3.7) holds. Having determined the constants α i, β i, the affine functions ξ, ξ in 3.6) yield a solution to the system.7). According to.5), the corresponding Nash equilibrium feedback controls are u x) = b α x + β ), u x) = b α x + β ). 3.9) In the following, given a solution α, α, β, β ) of the algebraic system 3.7), recalling 3.8) we shall write X = b A α, X = b A α, Y = X + X, Y = X X. 3.) Remark. When the Nash equilibrium feedbacks 3.9) are implemented, the system 3.) evolves according to ẋ = a x b α x + β ) b α x + β ) = a b α b α ) x + b β b β ). 3.) Under the assumption A), if either a < or < γ a, then the stationary point x = b β + b β a b α b α 3.) is an asymptotically stable equilibrium for the ODE 3.). However, this equilibrium is unstable when a = γ/. To prove the above claims, we examine various cases. If a <, then A = a γ <, while X + X = Y + <. Therefore a b α b α = a AX + X ) a <. If < γ a and K + K > /, recalling that X + X a γ >, we obtain = Y + > and A = a b α b α = a AX + X ) < a A = γ a. On the other hand, if < γ = a, then A = and a b α b α = a AX + X ) = a >. Remark 3. Under the assumption A) one has α, α >. Therefore, the value functions of both players approach + as x ±. Indeed, from 3.9) it follows X X + X ) > and X X + X ) >. 3.3) Without loss of generality we can assume that X X. There are two cases:

12 If a > γ > and K + K > / then A = a γ > and X + X = Y + >. This implies that X > and X + X = X + Y >. Recalling 3.3) we have X >. Thus α i = AX i b i > i =,. 3.4) If a < then A = a γ < and X + X = Y + <. This implies that X < and X + X = X + Y <. Recalling 3.3) we have X <. Hence 3.4) again holds. 4 Perturbed solutions on a bounded domain Together with the linear-quadratic game.9)-.), we now consider a perturbed game, where the dynamics and the cost functions have the form ẋ = a x + f x)) + b + h x))u + b + h x))u, 4.) J i = + ) e R γt i x + S i x + η i x) + u i dt, 4.) for some perturbations f, h, h, η, η. The gradients of the value functions ξ i x) = V i x) will again satisfy the implicit system of ODEs ξ ψ x, ξ, ξ ) Λx, ξ, ξ ) =, 4.3) ψ x, ξ, ξ ) ξ where now Λ ) ) Λ Λ x, ξ, ξ = Λ Λ a x + f b + h ) ξ b + h ) ξ b + h ) ξ. =, b + h ) ξ a x + f b + h ) ξ b + h ) ξ ) 4.4) ψ x, ξ, ξ γ a f )ξ R S x η ) =. 4.5) ψ x, ξ, ξ γ a f )ξ R S x η Throughout the following, we assume that the conditions A) hold, and let ξ i x) = V i x) = α i x + β i, i =,, 4.6)

13 be the gradients of the value functions in the affine solution of the linear-quadratic game. We denote by a x b Λ. x) = Λx, ξx), ξx)) ξ b ξ b ξ = 4.7) b ξ a x b ξ b ξ the corresponding matrix at 3.4). Its determinant is det Λ x) = [ a x b α x + β ) b α x + β ) ] b b α x + β )α x + β ). 4.8) Under generic assumptions on the coefficients, det Λ x) is a polynomial of degree two, either with no real root, or with two distinct roots. These two cases are qualitatively very different. 4. Det Λ has no real roots. In this case, the matrix Λ x) is invertible for every x IR. In a neighborhood of the line ξ the system 4.3) can be written in explicit form as ξ ) = Λ ) ψ x, ξ, ξ x, ξ, ξ ). 4.9) ψ x, ξ, ξ ξ In this setting, the standard theory of uniqueness and continuous dependence for solutions to ODEs yields Theorem. Assume that the quadratic polynomial det Λ x) in 4.8) has no real roots, and let Ω IR be any bounded interval. Given ε >, there exist δ, δ > such that the following holds. Consider any point x Ω, any perturbations f, η, η C Ω), h, h C Ω), and any data ξ, ξ IR, satisfying ξ ξ x ) δ, ξ ξ x ) δ, 4.) f C + η C + η C + h C + h C δ. 4.) Then the ODE 4.3) 4.5) has a unique solution on Ω, with initial data Moreover, this solution satisfies ξ x ) = ξ, ξ x ) = ξ. ξ x) ξ x) + ξ x) ξ x) ε for all x Ω. 4.) Notice that in this case, setting f = h = h = η = η =, one can already find infinitely many solutions to the implicit system of ODEs 4.3) on the interval Ω. One of these solutions is affine, given by 3.6), while the others are fully nonlinear. In Section 6 we shall analyze whether these nonlinear solutions can be extended beyond Ω, to the entire real line. 3
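When det Λ̄(x) has no real roots, Theorem 1 reduces the construction of feedback equilibria to a standard Cauchy problem for (4.9), and slightly different data at a point x_0 ∈ Ω produce the infinitely many nearby solutions just mentioned. The sketch below displays this numerically, assembling Λ and ψ as reconstructed from (2.8)-(2.9) for general smooth f, g_1, g_2, ϕ_i; the coefficients are chosen to be consistent with Example 1 on the next page, and the small perturbation of the drift is an arbitrary placeholder.

```python
import numpy as np
from scipy.integrate import solve_ivp

def make_rhs(f, g1, g2, dphi1, dphi2, gamma, eps=1e-6):
    """Right-hand side of xi' = Lambda^{-1} psi, with Lambda and psi assembled
    as reconstructed from (2.8)-(2.9); derivatives of f, g_i are taken by
    central differences."""
    d = lambda F, x: (F(x + eps) - F(x - eps)) / (2 * eps)
    def rhs(x, xi):
        x1, x2 = xi
        F, G1, G2 = f(x), g1(x), g2(x)
        Lam = np.array([[F - G1**2*x1 - G2**2*x2, -G2**2*x1],
                        [-G1**2*x2, F - G1**2*x1 - G2**2*x2]])
        psi = np.array([(gamma - d(f, x))*x1 + G1*d(g1, x)*x1**2
                        + 2*G2*d(g2, x)*x1*x2 - dphi1(x),
                        (gamma - d(f, x))*x2 + G2*d(g2, x)*x2**2
                        + 2*G1*d(g1, x)*x1*x2 - dphi2(x)])
        return np.linalg.solve(Lam, psi)
    return rhs

# Coefficients consistent with Example 1 (a = 3, b_i = 1, gamma = 1), plus a
# small nonlinear perturbation of the drift; both are placeholders.
a, gamma = 3.0, 1.0
f  = lambda x: a*x + 0.05*np.sin(x)      # a x + f_0(x)
g1 = lambda x: 1.0                       # b_1 + h_1(x)
g2 = lambda x: 1.0                       # b_2 + h_2(x)
dphi1 = lambda x: 3.0 + 50.0*x           # phi_1'(x) for phi_1 = 3x + 25x^2
dphi2 = lambda x: -3.0 + 50.0*x
rhs = make_rhs(f, g1, g2, dphi1, dphi2, gamma)

# Integrate over Omega = [-1, 1] from several data near the affine solution
# xi_1 = 5x+1, xi_2 = 5x-1 of the unperturbed game: each choice of data gives
# a different feedback equilibrium, and all remain close to the affine one.
for delta in (0.0, 0.02, -0.02):
    left  = solve_ivp(rhs, (0.0, -1.0), [1.0 + delta, -1.0 + delta], rtol=1e-8)
    right = solve_ivp(rhs, (0.0,  1.0), [1.0 + delta, -1.0 + delta], rtol=1e-8)
    print(delta, left.y[:, -1], right.y[:, -1])
```

Since det Λ stays bounded away from zero on Ω in this case, the matrix inversion in the right hand side is harmless; near the cone Γ this explicit form breaks down, which is the situation addressed in Section 4.2 and Section 5.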

14 Remark 4. Assume that the stationary point x in 3.) is an asymptotically stable equilibrium for the linear ODE 3.), contained in the interior of Ω. Then, choosing δ, δ > sufficiently small in 4.)-4.), the interval Ω will be positively invariant for the corresponding feedback dynamics ẋ = a x + f x) b + h x) ) ξ x) b + h x) ) ξ x). 4.3) Example. Consider the linear system ẋ = 3x + u + u. 4.4) and the cost functions J = e t 5x + 3x + ) u, J = The corresponding constants in 3.8) are e t 5x 3x + ) u. 4.5) Solving 3.) and 3.) we obtain A = 5 and K = K =. Y =, Y =, X = X =. After a few computations, we find that 4.3) admits a linear solution The corresponding dynamic is ξ x) = 5x +, ξ x) = 5x. ẋ = 3x ξ x) ξ x) = 7x, having x = as asymptotically stable equilibrium point. By 4.8), the determinant of the corresponding matrix Λ is which is always positive. det Λ x) = 7x) 5x + ) 5x ) = 4x +, 4. Det Λ has two real roots. We now assume that the determinant of the matrix Λ x) at 4.7) vanishes at two distinct points x < x and so that x, x x and px). = det Λ x) = C Λ x x )x x ) 4.6) where x is the stationary point in 3.) and C Λ is a nonzero constant. Notice that, if this assumption holds, the coefficients R i in 3.3) and β i in 4.6) must satisfy R, R ), ), β, β ), ). 4.7) Indeed, if R, R ) =, ), then the last two equations of 3.7) together with 3.6) imply β, β ) =, ). By 3.6), for i =, we thus have ξ i x) = α ix. This yields det Λ x) = [ a b α b α ) b b α α ] x. 4.8) 4

15 Hence x = x =, against the assumption. We rewrite 4.3) as a Pfaffian system { Λ dξ + Λ dξ ψ dx =, Λ dξ + Λ dξ ψ dx =. 4.9) Consider the vectors v. = ψ Λ, w =. Λ ψ Λ, 4.) Λ and denote by v w their wedge product. To construct trajectories of 4.3), we seek continuously differentiable functions x ξ x), ξ x)) whose graph is obtained by concatenating trajectories of the system ẋ Λ Λ Λ Λ ξ = v w = Λ ψ Λ ψ. 4.) ξ Λ ψ Λ ψ Let us first consider the unperturbed linear-quadratic case, where the matrix Λ and the vector ψ are given by 3.4) and 3.5), respectively. By 4.6), the equation det Λ x) = has two solutions. These correspond to the two points P = x, ξ x ), ξ x )), P = x, ξ x ), ξ x )). 4.) Moreover the derivative d dx det Λ x) does not vanish at x = x and at x = x. By the implicit function theorem, there exist two surfaces Σ, Σ IR 3, containing the above two points, where the determinant of Λx, ξ, ξ ) vanishes see Fig. 3). The following analysis will show that, under generic conditions, there exist Two curves γ Σ and γ Σ, containing the points P and P the right hand side of 4.) vanishes. respectively, where Two -dimensional invariant manifolds M, M, containing γ and γ respectively, which intersect transversally. As shown in Fig. 3, the transversal intersection M M defines a heteroclinic orbit connecting P with P. Since this configuration is structurally stable, it is preserved by small C perturbations of the vector fields v, w in 4.). The next lemma provides generic conditions on the coefficients R i, S i, b i which guarantee the existence of two smooth curves γ, γ where the vector fields v, w are parallel. Lemma. Together with A) and 4.6), assume that at least one of the following equalities does not hold: R = S = b R S b. 4.3) Then the Jacobian matrix of the vector field v w has rank at both points P and P. As a consequence, the equation v w = IR 3 4.4) 5

16 defines two smooth curves γ, γ : [ s, s ] IR 3, which we can parameterize by arc length, such that γ i [ s, s ] ) Σ i and γ i ) = P i, i =,. 4.5) Proof.. For i =,, since we are assuming x i x, at the point P i Λ = Λ = a b ξ b ξ. we have Indeed, by 3.) the above quantity coincides with the time derivative ẋ, which vanishes only at the equilibrium point x. In turn, detλ x i ) = implies Λ Λ as well. By continuity, all four coefficients Λ jk in 4.9) 4.) are nonzero on a neighborhood of Pi. As a consequence, if the first and second components or the first and third) of the wedge product in 4.) vanish, then the remaining component vanishes as well. Next, by 4.6) it follows Λ Λ Λ Λ )P i ), i =,. By the implicit function theorem, it thus suffices to show that the Jacobian matrix of the map x Λ Λ Λ Λ ξ Λ ψ Λ ψ ξ Λ ψ Λ ψ has rank at the points P and P. In the following we denote by J J = J =. Λ Λ Λ Λ ) Λ ψ Λ ψ ) 4.6) J 3 Λ ψ Λ ψ ) this 3 3 Jacobian matrix. Moreover, for a fixed index i {, }, we denote by J i this particular matrix computed at the point P i in 4.).. We claim that the range of the linear map u J i u has dimension. Indeed, since the function ξ = ξ, ξ ) in 3.6) is an affine solution of 4.3) through P i, the system 4.) has a corresponding solution of form α xt) + β. α β This solution satisfies ẋ = Therefore, as xt) x i, we have the convergence det Λx, ξ x), ξ x)). ẍt) ẋt) p x i ) where p is defined in 4.6). This implies J i α = p x i ) α, 4.7) α α 6

17 Next, we observe that all functions Λ = Λ, Λ, Λ, ψ + R, ψ + R, are linear homogeneous w.r.t. the variables x, ξ, ξ. In general, if a function φ : IR 3 IR is homogeneous of degree β, so that φtz) = t β φz), its directional derivative satisfies φz), z = d dt φtz) t= = βφz). 4.8) Moreover, since all Λ ij are linear, for every z IR 3 we trivially have Λij z), z = Λ ij z). 4.9) For convenience, for a fixed i {, } we shall write We now claim that J i Λ jk = Λ jkp i ) and ψ j = ψ j P i ). x i ξ x i) =. ξ x i) κ κ = κ 3 R Λ R Λ. 4.3) R Λ R Λ Indeed, applying 4.8) to the function φ = Λ Λ Λ Λ at the point Pi we obtain x i κ = Λ Λ Λ Λ ) ξ x i) = Λ Λ Λ Λ ) =. ξ x i) Using 4.8)-4.9), we then compute x i κ = Λ ψ Λ ψ ) ξ x i) ξ x i) [ = Λ ψ + R ) Λ ψ + R )) R Λ R Λ )] x i ξ x i) ξ x i) = R Λ R Λ ) R Λ R Λ ). 4.3) After an entirely similar computation yields the value of κ 3 given in 4.3). If κ, κ 3 ), ), comparing 4.7) with 4.3) we conclude that the Jacobian matrix J i has rank. Otherwise, if κ, κ 3 ) =, ), recalling that Λ = Λ, by 4.3) we obtain ) ) ) Λ Λ R R = Λ Λ Λ. 4.3) R R By 4.3) it follows ) ) Λ Λ ψ ) ψ ) ) Λ Λ α ) ψ Λ Λ ψ = Λ ψ and Λ Λ α = ψ. 7

18 Since R, R ), ), we have ) ) ψ R ) ) Λ Λ α R ) ψ = C R and Λ Λ α = CΛ R 4.33) for some constant C. Two cases are considered: If C then 4.3) and 4.33) imply R R = ψ ψ = Λ Λ = α α >. 4.34) To achieve a contradiction, let Jk i 4.6). Using the relations be the k-th row of the Jacobian matrix J i, as in Λ ψ = Λ ψ, Λ ψ = Λ ψ, one obtains Λ J i + Λ J i 3 =, ψ b Λ, ψ b Λ ), J i 3 =, b Λ + b Λ, b Λ + b Λ ). If the Jacobian matrix J i has rank, then the above two vectors must be parallel. Hence b 4 Λ Λ = b 4 Λ Λ. This yields Recalling 4.34), we obtain Λ Λ = ξ x i) ξ x i) = b b. R = α = β = b R α β b. Thus, from the first two equations of 3.7), we deduce contradicting the assumption 4.3). If C = then 4.33) implies the identities S = b α + b α α + γ a )α S b α + b α = b α + γ a )α b, ψ = ψ =, Λ α = Λ α, Λ α = Λ α. Therefore ) ) Λ Λ α Recalling 4.3), we obtain Λ Λ α = Λ R α = R α. 8 α α ).

19 Since α, α >, this implies R, R. Recalling 4.3) and 4.7), we thus have Λ. On the other hand, using the identities ψ = ψ =, a direct computation yields γ a )Λ γ a )Λ J i =. b Λ + b Λ b Λ + b Λ We claim that this matrix J at P i has rank. If not, then Λ b Λ + b Λ ) + Λ b Λ + b Λ ) =. This implies b Λ + b Λ =, b Λ + b Λ =. 4.35) Recalling 4.3) we obtain α α = R R = Λ Λ = b b. 4.36) Hence Moreover, 4.35) also implies b α = b α. 4.37) which yields b ξ x i ) = b ξ x i ), b β = b β. 4.38) Inserting 4.37) and 4.38) in the last two equations of the system 3.7), we obtain b R = b R, reaching a contradiction with 4.36). In all cases, we conclude that the Jacobian matrix J has rank at the points P and P. Hence the curves γ, γ are well defined. The same conclusion remains true also in the presence of sufficiently small nonlinear perturbations. Indeed, a straightforward application of the implicit function theorem yields: Lemma. Under the same assumptions as in Lemma, there exists δ > sufficiently small such that the following holds. Let f, η, η, h, h, be perturbations such that f C + η C + η C + h C + h C. = δ δ. 4.39) Then there exists equilibrium points P, P for the ODE 4.), with Λ, ψ given at 4.4)-4.5), such that P i P i C δ. 9

20 Moreover, there exist two surfaces Σ, Σ IR 3, containing the above two points, where the determinant of Λx, ξ, ξ ) vanishes. The equation 4.4) defines two curves γ i : [ s, s ] Σ i, parameterized by arc-length, such that γ i ) = P i and for some constant C >. γ i ) γ i ) Cδ 4.4) We observe that Since the vector field v w vanishes on the curve γ i and P i γ i, the curve γ i describes the unique local center manifold for P i. At every point Q Σ i \ γ i, the vector field v w is vertical i.e., its first component is zero while the last two components do not both vanish). Hence, a trajectory through Q cannot be the graph of a C solution of 4.3). To construct a global, continuously differentiable solution to 4.3), we need to concatenate orbits of 4.) which cross the surfaces Σ i through some point along the curves γ i, i =,. This can be achieved by a trajectory which lies in the intersection of two invariant manifolds M, M, as shown in Fig Constructing a solution as a concatenation of orbits. Let us first consider the unperturbed, linear-quadratic case. As before, for i =,, we denote by J i the Jacobian matrix J at the point Pi = x i, ξ x i), ξ x i) ). Two of the eigenvalues of J i are known, namely λ i, =, λ, = p x ) = C Λ x x ) and λ, = p x ) = C Λ x x ). 4.4) On the other hand, at an arbitrary point x, ξ, ξ ), by 3.4) the Jacobian matrix in 4.6) has the form a Λ J = b ψ + b ψ + γ a )Λ. b ψ + b ψ + γ a )Λ At the point P i, the trace of the matrix J i = J P i ) is TraceJ i ) = γλ i. The third eigenvalue of J i is thus computed as λ i,3 = TraceJ i ) λ i, = γλ i p x i ). 4.4) To proceed, we now impose generic conditions implying that, at Pi, the three eigenvalues of the Jacobian matrix J are distinct. By 4.4) and 4.4), this will be the case if { γλ i p x i ), i =,. 4.43) γλ i p x i ),

21 Let x = ˆx i be the solution to the equation The first condition in 4.43) is equivalent to γλ x) = p x i ) = λ i, for i =,. 4.44) ˆx i x i for i =,. We observe that, since x i and ˆx i depend on the coefficients of the dynamics and the cost functions 3.)-3.3), the inequalities 4.43) will be satisfied for generic values of these coefficients. In order to construct a solution ξ ) as a concatenation of orbits of 4.), one needs to study the flow in a neighborhood of the equilibrium points P and P. We recall that, since all points on the curve γ i, i =,, are steady states, the 3 3 Jacobian matrix J in 4.6) always has a zero eigenvalue at Pi. Different cases arise, depending on the sign of the two remaining eigenvalues λ i,, λ i,3. In turn, these signs depend on the relative position of ˆx i with respect to x i. CASE : If C Λ >, then 4.4) implies Recalling that λ, < < λ,. Λ x) = a b α b α ) x b β + b β ), 4.45) with a b α b α ) <, from 4.44) we obtain ˆx > ˆx. Moreover, for i =,, Three sub-cases can occur: λ i,3 > if x i < ˆx i, λ i,3 < if x i > ˆx i. 4.46) If x < ˆx and x > ˆx, then λ, < < λ,3 and λ,3 < < λ,, hence both P and P are saddle points. If x > ˆx, then x > ˆx and λ,, λ,3 < and λ,3 < < λ,. In this case P is a stable point while P is a saddle point. if x < ˆx then x < ˆx and λ, < < λ,3 and λ,, λ,3 >. In this case P is a a saddle point and P is an unstable source.

22 CASE : If C Λ < then 4.4) implies λ, < < λ, and ˆx < ˆx. Recalling 4.45), with a b α b α ) <, from 4.44) we obtain ˆx < ˆx, together with 4.46). Four sub-cases can now occur: If [ x, x ] [ˆx, ˆx ], then λ,3 < < λ, and λ, < < λ,3, In this case, both P and P are saddle points. If [ˆx, ˆx ] [ x, x ], then λ,3, λ, > and λ,3, λ, <. In this case, P is an unstable source while P is stable. If ˆx < x ˆx < x, then λ,3 < < λ, and λ,3, λ, <. Hence P is a saddle point while P is stable. If x < ˆx x < ˆx, then λ,3, λ, > and λ, < < λ,3. Hence P is an unstable source while P is a saddle point. Consider again the affine solution ξ = ξ, ξ ) of the system 4.3) in the linear-quadratic case, constructed in 3.6). Depending on the signs of the eigenvalues λ i,, λ i,3, by looking at the dynamics generated by 4.) we see that the following cases can occur, under generic assumptions. I) Assume that λ, λ,3 < and λ, λ,3 <, so that both P and P are saddle points. Then in the linear-quadratic case ξ is the unique globally smooth solution to the system 4.3). If a small nonlinear perturbation is present, so that Λ, ψ are given by 4.4)-4.5), then 4.3) has a unique globally smooth solution close to ξ see Fig. 3). II) Assume that λ, < < λ,3 and λ,, λ,3 <, so that P is a saddle and P is a sink. Then in the linear-quadratic case the system 4.3) has a -parameter family of globally smooth solutions close to ξ. The same holds in the presence of a small nonlinear perturbation. III) Assume that λ,, λ,3 > and λ,, λ,3 <, so that P is a source and P is a sink. Then in the linear-quadratic case the system 4.3) has a -parameter family of globally smooth solutions close to ξ. The same holds in the presence of a small nonlinear perturbation.
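Which of the cases I)-III) occurs for a given linear-quadratic game can be decided numerically from the signs of λ_{i,2}, λ_{i,3}, once the affine solution is known. The sketch below uses λ_{i,2} = p′(x_i) and λ_{i,3} = 2γ Λ̄_11(x_i) − λ_{i,2}, as reconstructed from (4.41)-(4.42), with p = det Λ̄; the sample data are meant to match Example 2 below.

```python
import numpy as np

def classify_equilibria(a, b1, b2, gamma, alpha, beta):
    """Classify the singular points P_1*, P_2* of the field (4.21) for the
    linear-quadratic game with affine solution xi_i = alpha[i]*x + beta[i].
    Uses lambda_{i,2} = p'(x_i) and lambda_{i,3} = 2*gamma*Lambda_11(x_i)
    - lambda_{i,2}, a reconstruction of (4.41)-(4.42).  Assumes that
    p(x) = det Lambda_0(x) has two distinct real roots."""
    a1, a2 = alpha
    be1, be2 = beta
    # Closed-loop drift Lambda_11(x) = a x - b1^2 xi_1(x) - b2^2 xi_2(x)
    L11 = np.poly1d([a - b1**2*a1 - b2**2*a2, -(b1**2*be1 + b2**2*be2)])
    # p(x) = det Lambda_0(x) = Lambda_11(x)^2 - b1^2 b2^2 xi_1(x) xi_2(x)
    p = L11*L11 - b1**2*b2**2*np.poly1d([a1, be1])*np.poly1d([a2, be2])
    x1, x2 = np.sort(p.roots.real)
    out = []
    for xi in (x1, x2):
        lam2 = p.deriv()(xi)
        lam3 = 2*gamma*L11(xi) - lam2
        kind = ('saddle' if lam2*lam3 < 0
                else 'sink' if max(lam2, lam3) < 0 else 'source')
        out.append((xi, lam2, lam3, kind))
    return out

# Data meant to match Example 2 below: gamma = 1 and affine solution
# xi_1 = xi_2 = 5x + 1.  Expected: saddle points at x_1 = -1/2 and x_2 = -1/4,
# with lambda_{1,2} = -6, lambda_{2,2} = 6, lambda_{1,3} = 9.
for row in classify_equilibria(a=3, b1=1, b2=1, gamma=1,
                               alpha=(5, 5), beta=(1, 1)):
    print(row)
```

With the data of Example 3 below (γ = 4 and, as reconstructed there, affine solution ξ_1 = ξ_2 = 2x + 1), the same routine returns a saddle at x_1 = −1 and a sink at x_2 = 1, matching the conclusion of that example.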

23 To conclude this section we give three examples corresponding to three cases I, II and III. Example two saddle points). Consider the linear system 4.4), but with cost functions J = e t 5x + 3x + ) u, J = In this case, 4.3) admits the affine solution ξ x) = ξ x) = 5x +. e t 5x + 3x + ) u. 4.47) This yields equilibrium point x = 7, which is asymptotically stable for the dynamics ẋ = 3x ξ x) ξ x) = 7x. The determinant of the corresponding matrix Λ is computed by det Λ x) = 7x ) 5x + ) = 4 x + ) x + ). 4 It vanishes at the two points x =, x = 4. We thus have x ] x, x [. Moreover, the equilibrium points P = ), 3, 3 and P = ) 4, 4,. 4 On the other hand, the constant in 4.6) is C Λ = 4 >. Hence, 4.4) implies that λ, = 6 and λ, = 6. By 4.4), the remaining eigenvalues are then computed as λ,3 = 9 and λ,3 = 3. Therefore, both P and P are saddle points. Example 3 one saddle point). Consider the linear system 4.4), but with cost functions J = e 4t 4x + 7x + ) u, J = In this case, 4.3) admits the affine solution ξ x) = ξ x) = x +. e 4t 4x + 7x + ) u. 4.48) This yields equilibrium point x =, which is asymptotically stable for the dynamics ẋ = 3x ξ x) ξ x) = x. The determinant of the corresponding matrix Λ is computed by det Λ x) = x ) x + ) = 3x )x + ). It vanishes at the two points x =, x =. Notice that now x / [ x, x ]. At the equilibrium points P =,, ), P =, 3, 3), the second eigenvalues of the Jacobian matrix J are found to be λ, = 6 and λ, = 6. Moreover, recalling 4.4), the third eigenvalues are computed as λ,3 = 4 and λ,3 = 8. We thus conclude that P is a saddle and P is a sink. 3

24 Example 4 no saddle points). Consider the linear system 4.4), but with cost functions J = e 4t 4x 3x + ) u, J = In this case, we find that 4.3) admits the affine solution ξ x) = x, ξ x) = x +. e 4t 4x + 3x + ) u. 4.49) This yields equilibrium point x =, which is asymptotically stable for the dynamics ẋ = 3x ξ x) ξ x) = x. The determinant of the corresponding matrix Λ is computed by det Λ x) = x) x )x + ) = 3x. It vanishes at the two points x = / 3, x = / 3. Notice that x [ x, x ]. Moreover, P =,, ) +, P = , 3, ) +. 3 The eigenvalues of J at P and at P are found to be λ, = 3, λ,3 = 3 3, λ, = 3, λ,3 = 3 3. Therefore, P is a source and P is a sink. 5 The case with unique solutions In this section we study in greater detail the case where both P and P are saddle points, for the auxiliary dynamical system 4.). We consider first the linear-quadratic case. To fix the ideas, let λ, = λ, = and assume that λ, < < λ,3 and λ,3 < < λ,. 5.) Notice that this will be the case if the constant C Λ in 4.6) is positive and the equilibrium point x is asymptotically stable and lies in the interior of the interval [ x, x ]. Indeed, in this case λ, = p x ) = C Λ x x ) < and λ, = p x ) = C Λ x x ) >. Moreover, the matrix Λ in 4.7) satisfies Therefore det Λ x ) >, det Λ x ) <. λ,3 = γλ x ) λ, >, λ,3 = γλ x ) λ, <. 4

25 By continuity, we can then choose s ], s ] sufficiently small such that, for every s [ s, s] the Jacobian matrix J at the equilibrium point γ i s) defined at 4.5) has three real distinct eigenvalues λ i,j s), j =,, 3, which satisfy λ i,j ) = λ i,j, together with λ i, s) =, λ, s) < < λ,3 s) and λ,3 s) < < λ, s). 5.) Let M be the center-stable manifold for the equilibrium point P. In other words, M is an invariant -dimensional manifold, whose tangent space at P is spanned by the eigenvectors v = γ s) s, v = P P, s= corresponding to the eigenvalues λ, = and λ, <, respectively. We observe that, for general systems, a center manifold is not uniquely defined [3,, 8]. However, in the present setting it is clear that the local center manifold trough P is unique, because it must coincide with the curve of steady states γ. Indeed, the -dimensional manifold M can be uniquely determined as the union of the -dimensional stable manifolds of all equilibrium points γ s). Similarly, let M be the center-unstable manifold at P. Notice that M is uniquely determined as the union of the -dimensional unstable manifolds of all equilibrium points γ s). Its tangent space at P is spanned by the two eigenvectors v = γ s) s, v = P P. s= The segment joining P with P with endpoints removed) is a heteroclinic orbit of the system 4.), contained in the intersection M M. To proceed, we now make the key assumption that this intersection is transversal: A) At any point P along the segment joining P with P, the manifolds M and M intersect transversally. Namely, the union of the tangent spaces to M and M spans the whole space IR 3. Clearly, if the intersection is transversal at one such point P, then it is transversal at every other point. For reader s convenience, we collect all the assumptions that will be used in our main theorem. i) The assumption A) holds, providing the existence of the affine solution ξ, ξ ) in 3.6). ii) The determinant of the matrix Λx, ξ, ξ ) at 3.4) vanishes at the two points P i = x i, ξ x i), ξ x i)), i =,. iii) At least one of the equalities in 4.3) does NOT hold. steady states γ, γ are thus well defined. By Lemma, the curves of iv) At the points P and P, the eigenvalues of the Jacobian matrix J in 4.6) satisfy the sign conditions 5.). 5

26 v) As in A), the manifolds M, M intersect transversally. We are now ready to state our main uniqueness and stability theorem, for feedback solutions to the noncooperative differential game. Theorem. Let the above assumptions i) v) hold, and let Ω IR be any bounded interval containing x, x in its interior. Then, for any ε > there exists δ > such that the following holds. If the perturbations f, h, h, η, η are small enough, i.e. if they satisfy 4.39), then the system 4.3) 4.5) has a unique C solution ξ, ξ ) defined on the whole interval Ω, such that ξ x) ξ x) + ξ x) ξ x) ε for all x Ω. 5.3) Remark 5. In the above theorem, the interval Ω can be taken large enough so that it contains the stable equilibrium point x in 3.). Choosing δ > sufficiently small, the set Ω will then be positively invariant for the corresponding feedback dynamics 4.3). Proof of Theorem. Consider the vectors v, w introduced at 4.). To prove the result, it suffices to check that all steps in the construction of the heteroclinic orbit connecting P with P remain valid also in the presence of a small C perturbation of the vector fields v, w.. Thanks to the assumption iii), by the implicit function theorem the two curves γ, γ where v w = i.e., where v and w are parallel) are still well defined also in the presence of a small C perturbation. By continuity, at every point along these curves, the eigenvalues of the Jacobian matrix J in 4.6) still satisfy the strict inequalities 5.).. Given the dynamical system ẋ = vx) wx), 5.4) let M be the -dimensional manifold obtained as the union of all the -dimensional stable manifolds of the points along γ. Similarly, let M be the -dimensional manifold obtained as the union of all the -dimensional unstable manifolds of the points along γ. By the regularity theory of invariant manifolds see Chapter 4 in []), the tangent space to these manifolds varies continuously with the vector field v w, w.r.t. the C norm. In particular, if a small C perturbation is added to v and w, then the intersection M M is still transversal. This -dimensional intersection provides the unique heteroclinic orbit connecting a point P γ with a point P γ. 3. The graph of the the desired solution x ξ, ξ )x) of 4.3) 4.5) is now obtained concatenating the -dimensional stable manifold for P with the -dimensional unstable manifold for P. We conclude this section by providing an example where the crucial transversality condition A) can be directly checked. Example continued). Consider again the linear-quadratic system with dynamics 4.4) 6

27 and cost functionals 4.47). In this case, we have and S = S = 5, R = R = 3, b = b =, a = 3, ξ x) = ξ x) = 5x +, ψ x) = ψ x) = 6x ) Λ x) = 7x, Λ x) = Λ x) = 5x. At the point P x) = x, ξ x), ξ x)), the Jacobian matrix takes the form 4x 9x + 3 9x + 3 J x) = 8x + 5 4x + 4 5x + 3. The eigenvalues are 8x + 5 5x + 3 4x + 4 λ x) = 94x + ), λ x) = 48x + 8, λ 3 x) = 3x + ) with corresponding eigenvectors v x) =, v x) = 5 v 3 x) = 5 8x+6 6x. Notice that v and v are independent of x. As in Fig. 3, let M be the center-stable manifold for the point P, and let M be the center-unstable manifold for the point P. Then the line {P x) = x, ξ x), ξ x)) ; x IR} lies in the intersection M M. To check that this intersection is transversal we observe that, at every point P x) = x, ξ x), ξ x)), the tangent space to the manifold M is spanned by the vectors v and v, independent of x. On the other hand, at P x ), the tangent space to M is spanned by v and v 3 x ). Hence M and M have transversal intersection. 6 Perturbed solutions on the entire real line In the previous sections we studied the existence of solutions to the perturbed system 4.3) on a bounded interval Ω IR, possibly containing the two points where det Λ x) vanishes. We now examine whether these perturbed solutions can be extended to the whole real line. The heart of the matter is illustrated in Fig.. In the linear-quadratic case, the determinant of the matrix Λx, ξ, ξ ) in 3.4) is negative inside the double cone Γ. = { x, ξ, ξ ) ; a x b ξ b ξ ) b b ξ ξ < }. 6.) To fix the ideas, assume that detλ x) for all x [x, + [. Notice that trajectories of the implicit ODE 4.3) can become singular near the boundary of Γ. In this case, they cannot be prolonged any further Fig., left). To make sure that a small perturbation of the trajectory ξ ) is well defined for all x [x, + [ we must check that it never approaches the surface where det Λx, ξ, ξ ) =. 7
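Whether the affine solution eventually lies inside or outside the cone Γ, and how fast nearby trajectories may separate from it as x → ∞, is governed by the matrix Λ_∞ of leading coefficients introduced in (6.2) below and by the constant λ_max of (6.5). A small sketch for computing these quantities from the data of the game follows; the definition of λ_max used here (largest real part among the eigenvalues of the inverse of Λ_∞) is our reading of (6.5), and any normalizing factor in (6.5) or in the smallness condition (6.12) should be checked against the statements of Lemmas 3-4 and Theorem 3.

```python
import numpy as np

def lambda_max(a, b1, b2, alpha):
    """Leading-coefficient matrix Lambda_infty = lim_{x->infty} Lambda_0(x)/x
    from (6.2), for the affine solution xi_i = alpha_i x + beta_i, together
    with the eigenvalues of its inverse and the largest real part among them
    (our reading of the constant lambda_max in (6.5))."""
    a1, a2 = alpha
    L_inf = np.array([[a - b1**2*a1 - b2**2*a2, -b2**2*a1],
                      [-b1**2*a2, a - b1**2*a1 - b2**2*a2]], dtype=float)
    p_inf = np.linalg.det(L_inf)
    if abs(p_inf) < 1e-12:
        raise ValueError("Lambda_infty is singular, so (6.3) fails")
    eigs = np.linalg.eigvals(np.linalg.inv(L_inf))
    return p_inf, eigs, eigs.real.max()

# Data consistent with Example 1: a = 3, b_i = 1, alpha_1 = alpha_2 = 5.
p_inf, eigs, lmax = lambda_max(a=3, b1=1, b2=1, alpha=(5, 5))
print(p_inf, eigs, lmax)
# Here p_inf = 24 > 0 and both eigenvalues are negative, the situation of
# Lemma 3: the affine solution stays outside the cone Gamma for |x| large.
```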

28 . Recalling 3.8), set ā = a A. By 3.6) and 4.7) all entries of the matrix Λ x) are polynomials of degree one. Recalling 3.), the matrix of leading coefficients is computed as Λ. Λ a x) x b α b α b ξ = lim = x x b α a x b α b ξ X = A ā X b b X b b X ā X X. 6.) Throughout the following, we assume that Λ is invertible, i.e.,. = detλ = [ ] A ā X X) X X p. 6.3) The inverse matrix of Λ is Λ ) = ā X A [ā b X ) X ] X X b X b b X ā X X Let λ, λ be the eigenvalues of Λ ) and denote by r, r the corresponding normalized eigenvectors, so that r = r =. We have ā X λ + λ = X ) A [ā X ) X ] X, X 6.4) λ λ = A [ā X ) X ] X. X We introduce the constant. λ max. = max { Reλ ), Reλ )) }. 6.5) Lemma 3. Under the assumption A), if p = [ ] A ā X X) X X >. 6.6) then there exists x > such that the affine solution 3.6) satisfies x, ξ x)), ξ x)) Γ +. = { x, ξ, ξ ) ; det Λx, ξ, ξ ) > } for all x ], x ] [x, [. Moreover, one has λ max <. 6.7) Proof. For x large, the determinant of Λ x) has the same sign as the determinant of the matrix of leading order coefficients Λ. Hence the first assertion of the lemma is clear. To prove 6.7), we consider two cases. 8

29 If a <, then A <, ā > and X + X <. Recalling 6.6) and 6.4), we obtain λ λ > and λ + λ <. 6.8) If < γ < a and K + K > /, then A >, ā >, X + X > > ā and X i >. By 6.6) and 6.4) one again obtains 6.8). In both cases, 6.8) implies 6.7). Lemma 4. Under the assumptions A), if p = [ ] A ā X X) X X <, 6.9) then there exists x > such that the affine solution 3.6) satisfies x, ξ x), ξ x)) Γ. = { x, ξ, ξ ) ; det Λx, ξ, ξ ) < } for all x ], x ] [x, [. Moreover, the eigenvalues λ and λ of Λ ) are real, with opposite signs. In particular, λ max = max{λ, λ } >. 6.) In the case a > there exists a constant < γ < a, depending on S, S, α, α, such that { γ λmax < if < γ < γ, γ λ max > if γ < γ < a. 6.) Proof. As in Lemma 3, the first statement is clear, because for x large detλ x) and detλ have the same sign. By 6.4) and 6.9) we have λ λ = A [ā X ) X ] X <. X Hence the eigenvalues are real and 6.) holds. To prove the last statement, we observe that γ λ max = γ a γ ā X X + X X ). This yields 6.), for some γ ], a [. 9

30 The following theorem provides the existence of solutions to the perturbed ODE 4.3) on an unbounded interval [x, + [. An entirely similar result holds on a domain of the form ], x ]. We recall that λ max is the constant introduced at 6.5). Theorem 3. Let the assumption A) hold together with 6.3). If γ λ max <, 6.) then there exist δ, δ > sufficiently small and x > such that the following holds. For any perturbations such that and any initial data f C + η C + η C + h C + h C δ 6.3) ξ i x ) = ξ i with ξ i ξ i x ) δ x, i =,, 6.4) the implicit ODE in 4.3)-4.5) admits a unique solution ξ, ξ ) which is defined for all x [x, + [. Remark 6. There are two main cases where Theorem 3 applies. i) If 6.6) holds, then Lemma 3 implies that γλ max < <. In this case, for x sufficiently large, we have detλ x) >, and the solution ξ lies outside the cone Γ. This is illustrated in Fig., left and center. ii) If 6.9) holds, then Lemma 4 yields γλ max <, provided that γ < a. In this case, for x sufficiently large, we have detλ x) <, and the solution ξ lies inside the cone Γ. This is illustrated in Fig., right. Proof of Theorem 3. By a rescaling of coordinates, without loss of generality we can assume that x =. The proof will be given in several steps.. We first consider the unperturbed case, where f = g = g = h = h =. From 4.3) it follows Λ ) x, ξ, ξ ξ ξ Λ x) ξ ) ξ ) = γ a )ξ ξ ). γ a )ξ ξ ) Notice that ξ is determined by the implicit ODE 4.3), while the derivative ξ ) = α α ) is constant, according to 3.6). Introducing the variable ζ = ξ ξ, by a direct computation we obtain Λ ) ζ b ζ + b ζ b ζ α γ a )ζ x, ξ, ξ =. b ζ b ζ + b ζ α γ a )ζ ζ 3

31 This can be written as Λ ) x, ξ, ξ ζ = γi Λ ) ζ ζ ζ, where Λ is the matrix of leading order terms 6.) and I is the identity matrix. We thus have ζ = [ Λ ) x, ξ, ξ Λ x) ][ Λ x)) γi Λ ) ] ζ. 6.5) ζ To bring out the leading order terms in the right hand side of 6.5), we estimate ζ and Λ x)) γ I Λ ) = x [γλ ) I ] + O) Λ x, ξ, ξ ) Λ x) = b I Λ x)) ζ + b ζ b ζ b ζ b ζ + b ζ x 6.6), where the Landau symbol O) denotes a uniformly bounded matrix-valued function. Observing that Λ x)) C/x, for some constant C and all x sufficiently large, we can thus write 6.5) as ζ = x [γλ ) I ] ζ + Ψx, ζ, ζ ) ζ. 6.7) ζ Here the remainder term satisfies the bound ζ ζ as long as Ψx, ζ, ζ ) C x + ζ + ζ ), 6.8) ζ + ζ x δ, for some large constant C > and for δ > sufficiently small, depending on all coefficients a i, b i.. Given any solution ξ ) to the implicit ODE.7), we introduce the auxiliary variables. t = ) [ ) )] x, zt) = t ζ = t ξ ξ. 6.9) t t t Here x [, + [, t [, [, while ζ = ζ, ζ )x) is a solution to 6.5). Denoting the derivative w.r.t. t by an upper dot, we obtain ż = t zt) ) t ζ. 6.) t 3

32 To bring out the leading order terms in the above equation, we use 6.7) and obtain γλ żt) = ) ) I + Kt, z) zt) t [, [, 6.) t where Kt, z) = t Ψ t, z t, z ). t By 6.8), the matrix Kt, z) remains uniformly bounded, as long as z = ζ / x remains sufficiently small. Notice that γλ ) I is a matrix with eigenvalues γλ i, i =,. By the assumption 6.) the real parts of these eigenvalues satisfy max { Reγλ ), Reγλ ) } < λ. 6.) for some constant λ >. By a classical result on the stability of linear systems see for example Theorem.6 in []), there exists an equivalent norm on IR such that every solution to the homogeneous equation ż = [ γλ ) I ] z 6.3) satisfies d zt) < λ zt). 6.4) dt Let C 3 be an upper bound on the corresponding operator norm of the matrix K, as long as z δ 3, for some constant δ 3 >. Then the solution of 6.) satsfies d dt zt) λ ) t + C 3 zt), t [, [. 6.5) as long as zt) δ 3. We now choose t < such that For t [t, [, 6.5) implies hence d dt zt) zt) C 3 λ t. t t λ zt), 6.6) t λ/ zt ). If now z ) is sufficiently small, then zt ) δ 3. In turn, 6.6) implies that zt) is well defined for all t [t, [ and zt) as t. Returning to the original variables x, ξ, we conclude that if ζ) = ξ) ξ ) is sufficiently small, then the solution x ξx) of 4.3) is well defined for all x [, + [. This proves the theorem in the unperturbed case, where f = h = h = η = η =. 3. In the remainder of the proof, we check that the same conclusion holds for the perturbed problem, where the dynamics is determined by 4.3) 4.5). A minor modification of the argument in step yields 3