The Convergence of the Minimum Energy Estimator


Arthur J. Krener

Department of Mathematics, University of California, Davis, CA, USA

Research supported in part by NSF DMS-439 and AFOSR F

Summary. We show that, under suitable hypotheses, the minimum energy estimate of the state of a partially observed dynamical system converges to the true state. The main assumption is that the system is uniformly observable for any input.

Key words: Nonlinear Observer, State Estimation, Nonlinear Filtering, Minimum Energy Estimation, High Gain Observers, Extended Kalman Filter, Uniformly Observable for Any Input.

1 Introduction

We consider the problem of estimating the current state $x(t) \in \mathbb{R}^n$ of a nonlinear system
\[
\dot x = f(x,u), \qquad y = h(x,u), \qquad x(0) = x^0 \tag{1}
\]
from the past controls $u(s) \in U \subset \mathbb{R}^m$, $0 \le s \le t$, past observations $y(s) \in \mathbb{R}^p$, $0 \le s \le t$, and some information about the initial condition $x^0$. The functions $f, h$ are assumed to be known. We assume that $f, h$ are Lipschitz continuous on $\mathbb{R}^n$ and satisfy linear growth conditions
\[
|f(x,u) - f(z,u)| \le L|x-z|, \quad |h(x,u) - h(z,u)| \le L|x-z|, \quad |f(x,u)| \le L(1+|x|), \quad |h(x,u)| \le L(1+|x|) \tag{2}
\]
for some $L > 0$ and all $x, z \in \mathbb{R}^n$ and $u \in U$. We also assume that $u(s)$, $0 \le s \le t$, is piecewise continuous. Piecewise continuous means continuous from the left with limits from the right and with a finite number of discontinuities

in any bounded interval. The symbol $|\cdot|$ denotes the Euclidean norm. The equations (1) model a real system which probably operates over some compact subset of $\mathbb{R}^n$. Therefore we may only need (2) to hold on this compact set as we may be able to extend $f, h$ so that (2) holds on all of $\mathbb{R}^n$.

To construct an estimator, we follow an approach introduced by Mortensen [9] and refined by Hijab [5], [6]. To account for possible inaccuracies in the model (1), we add deterministic but unknown noises,
\[
\dot x = f(x,u) + g(x)w, \qquad y = h(x,u) + k(x)v \tag{3}
\]
where $w(t) \in \mathbb{R}^l$, $v(t) \in \mathbb{R}^p$ are $L^2[0,\infty)$ functions. The driving noise, $w(t)$, represents modeling errors in $f$ and other possible errors in the dynamics. The observation noise, $v(t)$, represents modeling errors in $h$ and other possible errors in the observations. We assume that
\[
|g(x) - g(z)| \le L|x-z|, \quad |k(x) - k(z)| \le L|x-z|, \quad |g(x)| \le L, \quad |k(x)| \le L. \tag{4}
\]
Note that $g(x), k(x)$ are matrices so $|g(x)|, |k(x)|$ denote the induced Euclidean matrix norms. Define
\[
\Gamma(x) = g(x)g'(x), \qquad R(x) = k(x)k'(x)
\]
and assume that there exist positive constants $m_1, m_2$ such that for all $x \in \mathbb{R}^n$,
\[
m_1 I_{p\times p} \le R(x) \le m_2 I_{p\times p}. \tag{5}
\]
In particular this implies that $k(x)$ and $R(x)$ are invertible for all $x$.

The initial condition $x^0$ of (1) is also unknown and viewed as another noise. We are given a function $Q_0(x^0) \ge 0$ which is a measure of the minimal amount of energy in the past that it would take to put the system in state $x^0$ at time $0$. We shall assume that $Q_0$ is Lipschitz continuous on every compact subset of $\mathbb{R}^n$. Given the output $y(s)$, $0 \le s \le t$, we define the minimum discounted energy necessary to reach the state $x$ at time $t$ as
\[
Q(x,t) = \inf\left\{ e^{-\alpha t} Q_0(z(0)) + \frac12\int_0^t e^{-\alpha(t-s)}\left( |w(s)|^2 + |v(s)|^2 \right) ds \right\} \tag{6}
\]
where the infimum is over all triples $w(\cdot), v(\cdot), z(\cdot)$ satisfying
\[
\dot z(s) = f(z(s),u(s)) + g(z(s))w(s), \qquad y(s) = h(z(s),u(s)) + k(z(s))v(s), \qquad z(t) = x. \tag{7}
\]
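To make the objects in (6)–(7) concrete, the following Python sketch evaluates the discounted energy of one admissible triple $(w,v,z)$ for a scalar example and minimizes over a crude family of constant disturbances to obtain an upper bound on $Q(x,t)$. The example system ($f(x,u) = -x+u$, $h = x$, $g = k = 1$), the choice of $Q_0$, the recorded output and all grid sizes are illustrative assumptions of this sketch, not taken from the paper.

```python
import numpy as np

# Illustrative scalar example (my own choice): f(x,u) = -x + u, h(x,u) = x,
# g(x) = k(x) = 1, so Gamma = R = 1.
def f(x, u): return -x + u
def h(x, u): return x

def candidate_cost(x_t, t, w, u, y, Q0, alpha=0.5):
    """Cost of one admissible triple (w, v, z) appearing in the infimum (6).

    w, u, y are arrays sampled on a uniform grid over [0, t]; z is obtained by
    integrating (7) backward from the terminal constraint z(t) = x_t, and v is
    recovered from the output equation (here k = 1, so v = y - h(z, u)).
    """
    N = len(w) - 1
    dt = t / N
    s = np.linspace(0.0, t, N + 1)
    z = np.empty(N + 1)
    z[N] = x_t
    for k in range(N, 0, -1):                 # backward Euler sweep enforcing z(t) = x_t
        z[k - 1] = z[k] - dt * (f(z[k], u[k]) + w[k])
    v = y - h(z, u)
    disc = np.exp(-alpha * (t - s))
    running = 0.5 * np.sum(disc * (w**2 + v**2)) * dt
    return np.exp(-alpha * t) * Q0(z[0]) + running

# Crude upper bound on Q(x, t): minimize over a small family of constant w's.
t, N = 1.0, 200
s = np.linspace(0.0, t, N + 1)
u = np.zeros(N + 1)
y = np.exp(-s)                                # a made-up recorded output
Q0 = lambda x0: 0.5 * x0**2
best = min(candidate_cost(0.3, t, c * np.ones(N + 1), u, y, Q0)
           for c in np.linspace(-2.0, 2.0, 41))
print("upper bound on Q(0.3, 1.0) from constant w's:", best)
```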

The discount rate is $\alpha \ge 0$. Notice that $Q(x,t)$ depends on the past control $u(s)$, $0 \le s \le t$, and past output $y(s)$, $0 \le s \le t$. A minimum energy estimate $\hat x(t)$ of $x(t)$ is a state of minimum discounted energy given the system (3), the initial energy $Q_0$ and the observations $y(s)$, $0 \le s \le t$,
\[
\hat x(t) \in \arg\min_x Q(x,t). \tag{8}
\]
Of course the minimum need not be unique but we assume that there is a piecewise continuous selection $\hat x(t)$. Clearly $Q$ satisfies
\[
Q(x,0) = Q_0(x). \tag{9}
\]
In the next section we shall show that $Q(x,t)$ is locally Lipschitz continuous and that it satisfies, in the viscosity sense, the Hamilton–Jacobi PDE
\[
0 = \alpha Q(x,t) + Q_t(x,t) + Q_x(x,t)f(x,u(t)) + \frac12|Q_x(x,t)|^2_\Gamma - \frac12|y(t) - h(x,u(t))|^2_{R^{-1}} \tag{10}
\]
where the subscripts $x, t, x_i$, etc. denote partial derivatives and
\[
|Q_x(x,t)|^2_\Gamma = Q_x(x,t)\,\Gamma(x)\,Q_x'(x,t), \qquad
|y(t) - h(x,u(t))|^2_{R^{-1}} = \left( y(t) - h(x,u(t)) \right)' R^{-1}(x) \left( y(t) - h(x,u(t)) \right).
\]
To simplify the notation we have suppressed the arguments of $\Gamma, R^{-1}$ on the left but they should be clear from context.

In the next section we introduce the concept of a viscosity solution to the Hamilton–Jacobi PDE (10) and show that $Q(x,t)$ defined by (6) is one. Section 3 is devoted to the properties of smooth solutions to (10) and their relationship with the extended Kalman filter [4]. The principal result of this paper is presented in Section 4: under suitable hypotheses, any piecewise continuous selection of (8) globally converges to the corresponding trajectory of the noise free system (1), and this convergence is exponential if $\alpha > 0$. We close with some remarks.

2 Viscosity Solutions

The following is a slight modification of the standard definition [2].

Definition 1. A viscosity solution of the partial differential equation (10) is a continuous function $Q(x,t)$ which is Lipschitz continuous with respect to $x$ on every compact subset of $\mathbb{R}^{n+1}$ and such that for each $x \in \mathbb{R}^n$, $t > 0$ the following conditions hold.

1. If $\Phi(\xi,\tau)$ is any $C^1$ function such that for $\xi, \tau$ near $x, t$
\[
\Phi(x,t) - Q(x,t) \le e^{-\alpha(t-\tau)}\left( \Phi(\xi,\tau) - Q(\xi,\tau) \right)
\]
then
\[
0 \ge \alpha\Phi(x,t) + \Phi_t(x,t) + \Phi_x(x,t)f(x,u(t)) + \frac12|\Phi_x(x,t)|^2_\Gamma - \frac12|y(t) - h(x,u(t))|^2_{R^{-1}}.
\]
2. If $\Phi(\xi,\tau)$ is any $C^1$ function such that for $\xi, \tau$ near $x, t$
\[
\Phi(x,t) - Q(x,t) \ge e^{-\alpha(t-\tau)}\left( \Phi(\xi,\tau) - Q(\xi,\tau) \right)
\]
then
\[
0 \le \alpha\Phi(x,t) + \Phi_t(x,t) + \Phi_x(x,t)f(x,u(t)) + \frac12|\Phi_x(x,t)|^2_\Gamma - \frac12|y(t) - h(x,u(t))|^2_{R^{-1}}.
\]

Theorem 1. The function $Q(x,t)$ defined by (6) is a viscosity solution of the Hamilton–Jacobi PDE (10) and it satisfies the initial condition (9).

Proof. Clearly the initial condition is satisfied and $Q(\cdot,0)$ is Lipschitz continuous with respect to $x$ on compact subsets of $\mathbb{R}^n$. We start by showing that $Q(\cdot,t)$ is Lipschitz continuous with respect to $x$ on compacta. Let $K$ be a compact subset of $\mathbb{R}^n$, $x \in K$, $T > 0$ and $0 \le t \le T$. Now
\[
Q(x,t) \le e^{-\alpha t} Q_0(z(0)) + \frac12\int_0^t e^{-\alpha(t-s)} |y(s) - h(z(s),u(s))|^2_{R^{-1}}\, ds \tag{11}
\]
where
\[
\dot z = f(z,u), \qquad z(t) = x.
\]
By standard arguments, $z(s)$, $0 \le s \le t$, is a continuous function of $x \in K$ and the right side of (11) is a continuous functional of $z(s)$, $0 \le s \le t$. Hence the composition is bounded on the compact set $K$ and there exists $c$ large enough so that
\[
K \subset \{x : Q(x,t) \le c \text{ for all } 0 \le t \le T\}.
\]
Fix $x \in K$ and $t \in [0,T]$; given $\epsilon > 0$ we know that there exists $w(s)$ such that
\[
Q(x,t) + \epsilon \ge e^{-\alpha t} Q_0(z(0)) + \frac12\int_0^t e^{-\alpha(t-s)}\left( |w(s)|^2 + |y(s) - h(z(s),u(s))|^2_{R^{-1}} \right) ds \tag{12}
\]
where

\[
\dot z = f(z,u) + g(z)w, \qquad z(t) = x. \tag{13}
\]
Now
\[
\int_0^t |w(s)|^2\, ds \le \int_0^t e^{\alpha s}|w(s)|^2\, ds = e^{\alpha t}\int_0^t e^{-\alpha(t-s)}|w(s)|^2\, ds \le 2e^{\alpha t}(c + \epsilon).
\]
Using the Cauchy–Schwarz inequality we also have
\[
\int_0^t |w(s)|\, ds \le \left( t\int_0^t |w(s)|^2\, ds \right)^{1/2} \le M
\]
where
\[
M = \left( 2Te^{\alpha T}(c + \epsilon) \right)^{1/2}.
\]
Notice that this bound does not depend on the particular $x \in K$ and $t \le T$, only that $w(\cdot)$ has been chosen so that (13) holds. Let $\xi \in K$ and define $\zeta(s)$, $0 \le s \le t$, by
\[
\dot\zeta = f(\zeta,u) + g(\zeta)w, \qquad \zeta(t) = \xi,
\]
where $w(\cdot)$ is the above. Now for $0 \le s \le t$ we have
\[
|\zeta(s)| \le |\zeta(t)| + \int_s^t \left( |f(\zeta(r),u(r))| + |g(\zeta(r))|\,|w(r)| \right) dr \le |\zeta(t)| + \int_s^t L\left( 1 + |\zeta(r)| + |w(r)| \right) dr
\]
so using Gronwall's inequality
\[
|\zeta(s)| \le e^{LT}\left( |\xi| + LT + LM \right).
\]
Since $\xi$ lies in a compact set we conclude that there is a compact set containing $\zeta(s)$ for $0 \le s \le t \le T$ for all $\xi \in K$. Now
\[
Q(\xi,t) \le e^{-\alpha t} Q_0(\zeta(0)) + \frac12\int_0^t e^{-\alpha(t-s)}\left( |w(s)|^2 + |y(s) - h(\zeta(s),u(s))|^2_{R^{-1}} \right) ds
\]
so

\[
Q(\xi,t) - Q(x,t) \le \epsilon + e^{-\alpha t}\left( Q_0(\zeta(0)) - Q_0(z(0)) \right)
+ \frac12\int_0^t e^{-\alpha(t-s)}\left( |y(s)-h(\zeta(s),u(s))|^2_{R^{-1}} - |y(s)-h(z(s),u(s))|^2_{R^{-1}} \right) ds.
\]
Again by Gronwall, for $0 \le s \le t$,
\[
|z(s) - \zeta(s)| \le e^{(LT+LM)}|x - \xi|.
\]
The trajectories $z(s), \zeta(s)$, $0 \le s \le t$, lie in a compact set where $Q_0$ and the integrands are Lipschitz continuous so there exists $L_1$ such that
\[
Q(\xi,t) - Q(x,t) \le \epsilon + L_1|x - \xi|.
\]
But $\epsilon$ was arbitrary so
\[
Q(\xi,t) - Q(x,t) \le L_1|x - \xi|.
\]
Reversing the roles of $(x,t)$ and $(\xi,t)$ yields the other inequality. We have shown that $Q(\cdot,t)$ is Lipschitz continuous on $K$ for $0 \le t \le T$.

Next we show that $Q(x,t)$ is continuous with respect to $t > 0$ for fixed $x \in K$. Suppose $x \in K$ and $0 \le \tau, t \le T$. If $\tau < t$, let $w(\cdot)$ satisfy (13) and define
\[
\zeta(s) = z(s + t - \tau) \quad\text{and}\quad \tilde w(s) = w(s + t - \tau).
\]
Then
\[
\dot\zeta = f(\zeta,u) + g(\zeta)\tilde w, \qquad \zeta(\tau) = x
\]
so
\[
Q(x,\tau) \le e^{-\alpha\tau} Q_0(\zeta(0)) + \frac12\int_0^\tau e^{-\alpha(\tau-s)}\left( |\tilde w(s)|^2 + |y(s) - h(\zeta(s),u(s))|^2_{R^{-1}} \right) ds
\]
and hence
\[
\begin{aligned}
Q(x,\tau) - Q(x,t) \le{}& \epsilon + e^{-\alpha\tau}Q_0(z(t-\tau)) - e^{-\alpha t}Q_0(z(0))
- \frac12\int_0^{t-\tau} e^{-\alpha(t-s)}|w(s)|^2\, ds\\
&- \frac12\int_0^{t-\tau} e^{-\alpha(t-s)}|y(s)-h(z(s),u(s))|^2_{R^{-1}}\, ds\\
&+ \frac12\int_{t-\tau}^{t} e^{-\alpha(t-s)}\left( |y(s+\tau-t)-h(z(s),u(s))|^2_{R^{-1}} - |y(s)-h(z(s),u(s))|^2_{R^{-1}} \right) ds.
\end{aligned}
\]

Clearly the quantities
\[
e^{-\alpha\tau}Q_0(z(t-\tau)) - e^{-\alpha t}Q_0(z(0)), \qquad
\frac12\int_0^{t-\tau} e^{-\alpha(t-s)}|y(s)-h(z(s),u(s))|^2_{R^{-1}}\, ds,
\]
\[
\frac12\int_{t-\tau}^{t} e^{-\alpha(t-s)}\left( |y(s+\tau-t)-h(z(s),u(s))|^2_{R^{-1}} - |y(s)-h(z(s),u(s))|^2_{R^{-1}} \right) ds
\]
all go to zero as $\tau \to t^-$. Let $\chi_\tau(s)$ be the characteristic function of $[0, t-\tau]$ and $T > 0$. For $0 \le \tau \le t \le T$
\[
\frac12\int_0^{t-\tau} e^{-\alpha(t-s)}|w(s)|^2\, ds \le \frac12\int_0^{t-\tau} |w(s)|^2\, ds \le \frac12\int_0^{T} \chi_\tau(s)|w(s)|^2\, ds
\]
which goes to zero as $\tau \to t^-$ by the Lebesgue dominated convergence theorem, so
\[
\lim_{\tau\to t^-}\, Q(x,\tau) - Q(x,t) < \epsilon.
\]
If $\tau > t$, let $w(\cdot)$ satisfy (13) and define
\[
\tilde w(s) = \begin{cases} 0 & \text{if } 0 \le s < \tau - t\\ w(s+t-\tau) & \text{if } \tau - t \le s \le \tau \end{cases}
\qquad
\dot\zeta = f(\zeta,u) + g(\zeta)\tilde w, \quad \zeta(\tau) = x.
\]
Then $\zeta(s) = z(s+t-\tau)$ for $\tau - t \le s \le \tau$ and
\[
Q(x,\tau) \le e^{-\alpha\tau} Q_0(\zeta(0)) + \frac12\int_0^\tau e^{-\alpha(\tau-s)}\left( |\tilde w(s)|^2 + |y(s)-h(\zeta(s),u(s))|^2_{R^{-1}} \right) ds
\]
so
\[
\begin{aligned}
Q(x,\tau) - Q(x,t) \le{}& \epsilon + e^{-\alpha\tau}Q_0(\zeta(0)) - e^{-\alpha t}Q_0(\zeta(\tau-t))
+ \frac12\int_0^{\tau-t} e^{-\alpha(\tau-s)}|y(s)-h(\zeta(s),u(s))|^2_{R^{-1}}\, ds\\
&+ \frac12\int_{\tau-t}^{\tau} e^{-\alpha(\tau-s)}\left( |y(s)-h(\zeta(s),u(s))|^2_{R^{-1}} - |y(s+t-\tau)-h(\zeta(s),u(s))|^2_{R^{-1}} \right) ds.
\end{aligned}
\]
This clearly goes to $\epsilon$ as $\tau \to t^+$ so we conclude that
\[
\lim_{\tau\to t^+}\, Q(x,\tau) - Q(x,t) < \epsilon.
\]

But $\epsilon$ was arbitrary and we can reverse $t$ and $\tau$ so
\[
\lim_{\tau\to t}\, Q(x,\tau) = Q(x,t).
\]
Now by the Lipschitz continuity with respect to $x$, for all $x, \xi \in K$ and $0 \le \tau, t \le T$,
\[
|Q(\xi,\tau) - Q(x,t)| \le |Q(\xi,\tau) - Q(x,\tau)| + |Q(x,\tau) - Q(x,t)| \le L_1|\xi - x| + |Q(x,\tau) - Q(x,t)|
\]
and this goes to zero as $(\xi,\tau) \to (x,t)$. We conclude that $Q(x,t)$ is continuous.

Next we show that 1 and 2 of Definition 1 hold. Let $0 \le \tau < t$; then the principle of optimality implies that
\[
Q(x,t) = \inf\left\{ e^{-\alpha(t-\tau)} Q(z(\tau),\tau) + \frac12\int_\tau^t e^{-\alpha(t-s)}\left( |w(s)|^2 + |v(s)|^2 \right) ds \right\}
\]
where the infimum is over all $w(\cdot), v(\cdot), z(\cdot)$ satisfying on $[\tau,t]$
\[
\dot z = f(z,u) + g(z)w, \qquad y = h(z,u) + k(z)v, \qquad z(t) = x. \tag{14}
\]
Let $\Phi(\xi,\tau)$ be any $C^1$ function such that near $x, t$
\[
\Phi(x,t) - Q(x,t) \le e^{-\alpha(t-\tau)}\left( \Phi(\xi,\tau) - Q(\xi,\tau) \right). \tag{15}
\]
Suppose $w(s) = w$, a constant, on $[\tau,t]$ and let $\xi = z(\tau)$ where $v(\cdot), z(\cdot)$ satisfy (14). For any constant $w$ we have
\[
Q(x,t) \le e^{-\alpha(t-\tau)} Q(\xi,\tau) + \frac12\int_\tau^t e^{-\alpha(t-s)}\left( |w|^2 + |v(s)|^2 \right) ds \tag{16}
\]
so adding (15, 16) together yields
\[
\Phi(x,t) \le e^{-\alpha(t-\tau)}\Phi(\xi,\tau) + \frac12\int_\tau^t e^{-\alpha(t-s)}\left( |w|^2 + |v(s)|^2 \right) ds.
\]
Recall that $u(t)$ is continuous from the left. Assume $t - \tau$ is small; then for any constant $w$
\[
\Phi(x,t) \le \left( 1 - \alpha(t-\tau) \right)\Phi\big( x - (f(x,u(t)) + g(x)w)(t-\tau),\, \tau \big) + \frac12\left( |w|^2 + |y(t)-h(x,u(t))|^2_{R^{-1}} \right)(t-\tau) + o(t-\tau),
\]
\[
\Phi(x,t) \le \Phi(x,t) - \alpha\Phi(x,t)(t-\tau) - \Phi_t(x,t)(t-\tau) - \Phi_x(x,t)\big( f(x,u(t)) + g(x)w \big)(t-\tau) + \frac12\left( |w|^2 + |y(t)-h(x,u(t))|^2_{R^{-1}} \right)(t-\tau) + o(t-\tau),
\]
\[
0 \ge \alpha\Phi(x,t) + \Phi_t(x,t) + \Phi_x(x,t)\big( f(x,u(t)) + g(x)w \big) - \frac12\left( |w|^2 + |y(t)-h(x,u(t))|^2_{R^{-1}} \right).
\]

We let
\[
w = g'(x)\Phi_x'(x,t)
\]
to obtain
\[
0 \ge \alpha\Phi(x,t) + \Phi_t(x,t) + \Phi_x(x,t)f(x,u(t)) + \frac12|\Phi_x(x,t)|^2_\Gamma - \frac12|y(t)-h(x,u(t))|^2_{R^{-1}}. \tag{17}
\]
On the other hand, suppose
\[
\Phi(x,t) - Q(x,t) \ge e^{-\alpha(t-\tau)}\left( \Phi(\xi,\tau) - Q(\xi,\tau) \right) \tag{18}
\]
in some neighborhood of $x, t$. Given any $\epsilon > 0$ and $\tau < t$ there is a $w(s)$ such that
\[
Q(x,t) + \epsilon(t-\tau) \ge e^{-\alpha(t-\tau)} Q(\xi,\tau) + \frac12\int_\tau^t e^{-\alpha(t-s)}\left( |w(s)|^2 + |v(s)|^2 \right) ds \tag{19}
\]
where $\xi = z(\tau)$ from (14). Adding (18, 19) together yields, for some $w(s)$,
\[
\Phi(x,t) + \epsilon(t-\tau) \ge e^{-\alpha(t-\tau)}\Phi(\xi,\tau) + \frac12\int_\tau^t e^{-\alpha(t-s)}\left( |w(s)|^2 + |v(s)|^2 \right) ds,
\]
\[
\begin{aligned}
\Phi(x,t) + \epsilon(t-\tau) \ge{}& \Phi(x,t) - \alpha\Phi(x,t)(t-\tau) - \Phi_t(x,t)(t-\tau) - \Phi_x(x,t)f(x,u(t))(t-\tau)\\
&- \int_\tau^t \Phi_x(x(s),s)g(x(s))w(s)\, ds + \frac12\int_\tau^t |w(s)|^2\, ds + \frac12|y(t)-h(x,u(t))|^2_{R^{-1}}(t-\tau) + o(t-\tau).
\end{aligned}
\]
At each $s \in [\tau,t]$, the minimum of the right side with respect to $w(s)$ occurs at
\[
w(s) = g'(x(s))\Phi_x'(x(s),s)
\]
so we obtain
\[
0 \le \alpha\Phi(x,t) + \Phi_t(x,t) + \Phi_x(x,t)f(x,u(t)) + \frac12|\Phi_x(x,t)|^2_\Gamma - \frac12|y(t)-h(x,u(t))|^2_{R^{-1}}. \tag{20}
\]
Note that we have an initial value problem (9) for the Hamilton–Jacobi PDE (10) and this determines the directions of the inequalities (17, 20).
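Although the paper treats (10) analytically, the principle of optimality (14) used above also suggests a direct numerical approximation of $Q$ on a grid. The following is a minimal sketch, for a scalar example with $f(x,u) = -x+u$, $h = x$, $g = k = 1$; the grid, step size and finite set of candidate $w$ values are arbitrary choices of this sketch, not the paper's algorithm.

```python
import numpy as np

# Minimal dynamic-programming sketch of (6)/(14) for a scalar example.
def f(x, u): return -x + u
def h(x, u): return x

def propagate_Q(Q_old, xgrid, y_t, u_t, dt, alpha=0.5,
                w_cands=np.linspace(-3.0, 3.0, 61)):
    """One step of the principle of optimality:
    Q(x, t+dt) ~ min_w [ e^{-alpha dt} Q(x - (f(x,u)+w) dt, t)
                         + dt/2 (w^2 + (y - h(x,u))^2) ]."""
    Q_new = np.empty_like(Q_old)
    for i, x in enumerate(xgrid):
        z_prev = x - (f(x, u_t) + w_cands) * dt          # candidate predecessor states
        Q_back = np.interp(z_prev, xgrid, Q_old)          # linear interpolation of Q(., t)
        stage = 0.5 * dt * (w_cands**2 + (y_t - h(x, u_t))**2)
        Q_new[i] = np.min(np.exp(-alpha * dt) * Q_back + stage)
    return Q_new

# Run forward in time and read off the minimum energy estimate (8) as the argmin.
xgrid = np.linspace(-3.0, 3.0, 301)
Q = 0.5 * xgrid**2                                        # Q_0(x)
dt, T = 0.01, 2.0
x_true = 1.0
for k in range(int(T / dt)):
    x_true += dt * f(x_true, 0.0)                         # noise-free truth generating y
    Q = propagate_Q(Q, xgrid, y_t=h(x_true, 0.0), u_t=0.0, dt=dt)
x_hat = xgrid[np.argmin(Q)]
print("estimate", x_hat, "truth", x_true)
```

As noted in the Conclusion, such grid schemes are only practical in very low state dimensions.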

3 Smooth Solutions

In this section we review some known facts about viscosity solutions in general and $Q(x,t)$ in particular. If $Q$ is differentiable at $x, t$ then it satisfies the Hamilton–Jacobi PDE (10) in the classical sense [2]. There is at most one viscosity solution to the Hamilton–Jacobi PDE (10) [2]. Furthermore [9], [5], if $Q$ is differentiable at $(\hat x(t), t)$ then
\[
0 = Q_x(\hat x(t), t). \tag{21}
\]
If, in addition, $\hat x$ is differentiable at $t$ then
\[
\frac{d}{dt} Q(\hat x(t),t) = Q_t(\hat x(t),t) + Q_x(\hat x(t),t)\,\dot{\hat x}(t) = Q_t(\hat x(t),t)
\]
so this and (10) imply that
\[
\frac{d}{dt} Q(\hat x(t),t) = -\alpha Q(\hat x,t) + \frac12|y(t) - h(\hat x(t),u(t))|^2_{R^{-1}}. \tag{22}
\]
Suppose that $Q$ is $C^2$ in a neighborhood of $(\hat x(t),t)$ and $\hat x$ is differentiable in a neighborhood of $t$. We differentiate (21) with respect to $t$ to obtain
\[
0 = Q_{x_i t}(\hat x(t),t) + Q_{x_i x_j}(\hat x(t),t)\,\dot{\hat x}_j(t).
\]
We are using the convention of summing on repeated indices. We differentiate the Hamilton–Jacobi PDE (10) with respect to $x_i$ at $\hat x(t)$ to obtain
\[
0 = Q_{t x_i}(\hat x(t),t) + Q_{x_j x_i}(\hat x(t),t)f_j(\hat x(t),u(t)) + h_{r x_i}(\hat x(t),u(t))\,R^{-1}_{rs}(\hat x(t))\left( y_s(t) - h_s(\hat x(t),u(t)) \right)
\]
so by the commutativity of mixed partials
\[
Q_{x_i x_j}(\hat x(t),t)\,\dot{\hat x}_j(t) = Q_{x_i x_j}(\hat x(t),t)f_j(\hat x(t),u(t)) + h_{r x_i}(\hat x(t),u(t))\,R^{-1}_{rs}(\hat x(t))\left( y_s(t) - h_s(\hat x(t),u(t)) \right).
\]
If $Q_{xx}(\hat x(t),t)$ is invertible, we define $P(t) = Q^{-1}_{xx}(\hat x(t),t)$ and obtain an ODE for $\hat x(t)$,
\[
\dot{\hat x}(t) = f(\hat x(t),u(t)) + P(t)\,h_x'(\hat x(t),u(t))\,R^{-1}(\hat x(t))\left( y(t) - h(\hat x(t),u(t)) \right). \tag{23}
\]
Suppose that $\Gamma(x), R(x)$ are constant, $f, h$ are $C^2$ in a neighborhood of $\hat x(t)$ and $Q$ is $C^3$ in a neighborhood of $(\hat x(t),t)$; then we differentiate the PDE (10) twice with respect to $x_i$ and $x_j$ at $\hat x(t)$ to obtain

\[
\begin{aligned}
0 ={}& \alpha Q_{x_i x_j}(\hat x(t),t) + Q_{x_i x_j t}(\hat x(t),t)
+ Q_{x_i x_k}(\hat x(t),t)f_{k x_j}(\hat x(t),u(t)) + Q_{x_j x_k}(\hat x(t),t)f_{k x_i}(\hat x(t),u(t))\\
&+ Q_{x_i x_j x_k}(\hat x(t),t)f_k(\hat x(t),u(t)) + Q_{x_i x_k}(\hat x(t),t)\Gamma_{kl}Q_{x_l x_j}(\hat x(t),t)
- h_{r x_i}(\hat x(t),u(t))\,R^{-1}_{rs}\,h_{s x_j}(\hat x(t),u(t))\\
&+ h_{r x_i x_j}(\hat x(t),u(t))\,R^{-1}_{rs}\left( y_s(t) - h_s(\hat x(t),u(t)) \right).
\end{aligned}
\]
If we set to zero $\alpha$, the second partials of $f, h$ and the third partials of $Q$ then we obtain
\[
0 = Q_{x_i x_j t}(\hat x(t),t) + Q_{x_i x_k}(\hat x(t),t)f_{k x_j}(\hat x(t),u(t)) + Q_{x_j x_k}(\hat x(t),t)f_{k x_i}(\hat x(t),u(t)) + Q_{x_i x_k}(\hat x(t),t)\Gamma_{kl}Q_{x_l x_j}(\hat x(t),t) - h_{r x_i}(\hat x(t),u(t))\,R^{-1}_{rs}\,h_{s x_j}(\hat x(t),u(t))
\]
and so, if it exists, $P(t)$ satisfies
\[
\dot P(t) = f_x(\hat x(t),u(t))P(t) + P(t)f_x'(\hat x(t),u(t)) + \Gamma - P(t)h_x'(\hat x(t),u(t))R^{-1}h_x(\hat x(t),u(t))P(t). \tag{24}
\]
We recognize (23, 24) as the equations of the extended Kalman filter [4]. Baras, Bensoussan and James [1] have shown that, under suitable assumptions, the extended Kalman filter converges to the true state provided that the initial error is not too large. Their conditions are quite restrictive and hard to verify. Recently Krener [8] proved the extended Kalman filter is locally convergent under broad and verifiable conditions. (There is a typographical error in that proof; the corrected version is available from the web.)

4 Convergence

In this section we shall prove the main result of this paper: under certain conditions, the minimum energy estimate converges to the true state.

Lemma 1. Suppose $Q(x,t)$ is defined by (6) and $\hat x(t)$ is a piecewise continuous selection of (8). Then for any $0 \le \tau \le t$
\[
Q(\hat x(t),t) = e^{-\alpha(t-\tau)} Q(\hat x(\tau),\tau) + \frac12\int_\tau^t e^{-\alpha(t-s)}|y(s) - h(\hat x(s),u(s))|^2_{R^{-1}}\, ds.
\]

Proof. With sufficient smoothness the lemma follows from (22). If $Q, \hat x$ are not smooth we proceed as follows. Let $0 \le s_{i-1} < s_i \le t$; then
\[
Q(\hat x(s_i),s_i) = \inf\left\{ e^{-\alpha(s_i-s_{i-1})} Q(z(s_{i-1}),s_{i-1}) + \frac12\int_{s_{i-1}}^{s_i} e^{-\alpha(s_i-s)}\left( |w(s)|^2 + |v(s)|^2 \right) ds \right\}
\]
where the infimum is over all $w(\cdot), v(\cdot), z(\cdot)$ satisfying

\[
\dot z = f(z,u) + g(z)w, \qquad y = h(z,u) + k(z)v, \qquad z(s_i) = \hat x(s_i).
\]
If $\hat x(s), u(s)$ are continuous on $[s_{i-1},s_i]$ then
\[
\begin{aligned}
Q(\hat x(s_i),s_i) &\ge \inf\left\{ e^{-\alpha(s_i-s_{i-1})} Q(z(s_{i-1}),s_{i-1}) \right\} + \inf\left\{ \frac12\int_{s_{i-1}}^{s_i} e^{-\alpha(s_i-s)}\left( |w(s)|^2 + |v(s)|^2 \right) ds \right\}\\
&\ge e^{-\alpha(s_i-s_{i-1})} Q(\hat x(s_{i-1}),s_{i-1}) + \inf\left\{ \frac12\int_{s_{i-1}}^{s_i} e^{-\alpha(s_i-s)}|v(s)|^2\, ds \right\}\\
&\ge e^{-\alpha(s_i-s_{i-1})} Q(\hat x(s_{i-1}),s_{i-1}) + \frac12|y(s_i) - h(\hat x(s_i),u(s_i))|^2_{R^{-1}}(s_i-s_{i-1}) + o(s_i-s_{i-1}).
\end{aligned}
\]
Since $\hat x(s), u(s)$ are piecewise continuous on $[0,t]$, they have only a finite number of discontinuities. Let $\tau = s_0 < s_1 < \ldots < s_k = t$; then for most $i$ the above holds, so
\[
Q(\hat x(t),t) \ge e^{-\alpha(t-\tau)} Q(\hat x(\tau),\tau) + \frac12\int_\tau^t e^{-\alpha(t-s)}|y(s) - h(\hat x(s),u(s))|^2_{R^{-1}}\, ds.
\]
On the other hand
\[
Q(\hat x(s_i),s_i) \le e^{-\alpha(s_i-s_{i-1})} Q(z(s_{i-1}),s_{i-1}) + \frac12\int_{s_{i-1}}^{s_i} e^{-\alpha(s_i-s)}\left( |w(s)|^2 + |v(s)|^2 \right) ds
\]
for any $w(\cdot), v(\cdot), z(\cdot)$ satisfying
\[
\dot z = f(z,u) + g(z)w, \qquad y = h(z,u) + k(z)v, \qquad z(s_{i-1}) = \hat x(s_{i-1}).
\]
In particular if we set $w = 0$ and assume $\hat x(s)$ is continuous on $[s_{i-1},s_i]$ then
\[
\begin{aligned}
Q(\hat x(s_i),s_i) &\le e^{-\alpha(s_i-s_{i-1})} Q(\hat x(s_{i-1}),s_{i-1}) + \frac12\int_{s_{i-1}}^{s_i} e^{-\alpha(s_i-s)}|v(s)|^2\, ds\\
&\le e^{-\alpha(s_i-s_{i-1})} Q(\hat x(s_{i-1}),s_{i-1}) + \frac12|y(s_i) - h(\hat x(s_{i-1}),u(s_{i-1}))|^2_{R^{-1}}(s_i-s_{i-1}) + o(s_i-s_{i-1}).
\end{aligned}
\]

Therefore, since $\hat x(s), u(s)$ are piecewise continuous on $[0,t]$ with only a finite number of discontinuities,
\[
Q(\hat x(t),t) \le e^{-\alpha(t-\tau)} Q(\hat x(\tau),\tau) + \frac12\int_\tau^t e^{-\alpha(t-s)}|y(s) - h(\hat x(s),u(s))|^2_{R^{-1}}\, ds.
\]

Definition 2. [3] The system
\[
\dot z = f(z,u) + g(z)w, \qquad y = h(z,u) \tag{25}
\]
is uniformly observable for any input if there exist coordinates
\[
\{x_{ij} : i = 1,\ldots,p,\ j = 1,\ldots,l_i\}
\]
where $1 \le l_1 \le \ldots \le l_p$ and $\sum_i l_i = n$ such that in these coordinates the system takes the form
\[
\begin{aligned}
y_i &= x_{i1} + h_i(u)\\
\dot x_{i1} &= x_{i2} + f_{i1}(x^1,u) + g_{i1}(x^1)w\\
&\ \vdots\\
\dot x_{ij} &= x_{i,j+1} + f_{ij}(x^j,u) + g_{ij}(x^j)w\\
&\ \vdots\\
\dot x_{i,l_i-1} &= x_{i,l_i} + f_{i,l_i-1}(x^{l_i-1},u) + g_{i,l_i-1}(x^{l_i-1})w\\
\dot x_{i,l_i} &= f_{i,l_i}(x^{l_i},u) + g_{i,l_i}(x^{l_i})w
\end{aligned} \tag{26}
\]
for $i = 1,\ldots,p$, where $x^j$ is defined by
\[
x^j = \left( x_{11},\ldots,x_{1,\,j\wedge l_1},\, x_{21},\ldots,\, x_{p,\,j\wedge l_p} \right). \tag{27}
\]
Notice that in $x^j$ the indices range over $i = 1,\ldots,p$; $k = 1,\ldots, j\wedge l_i = \min\{j,l_i\}$, and the coordinates are ordered so that the right index moves faster than the left. We also require that each $f_{ij}, g_{ij}$ be Lipschitz continuous and satisfy growth conditions: there exists an $L$ such that for all $x, z \in \mathbb{R}^n$, $u \in U$,
\[
|f_{ij}(x^j,u) - f_{ij}(z^j,u)| \le L|x^j - z^j|, \quad |g_{ij}(x^j) - g_{ij}(z^j)| \le L|x^j - z^j|, \quad |f_{ij}(x^j,u)| \le (L+1)|x^j|, \quad |g_{ij}(x^j)| \le L. \tag{28}
\]
A system as above but without inputs is said to be uniformly observable [3].
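As an aside, the block matrices $A$ and $C$ associated with this canonical form (displayed in (29) below) can be assembled mechanically from the observability indices $l_1 \le \ldots \le l_p$. The following sketch is my own construction, with the index choice $l = (2,3)$ as an arbitrary example; it also checks that the resulting pair $(C,A)$ is observable.

```python
import numpy as np

# Sketch: assemble the block matrices A and C of the canonical form (26)/(29)
# from observability indices l_1 <= ... <= l_p.
def canonical_AC(l):
    n, p = sum(l), len(l)
    A = np.zeros((n, n))
    C = np.zeros((p, n))
    off = 0
    for i, li in enumerate(l):
        A[off:off + li - 1, off + 1:off + li] += np.eye(li - 1)  # shift block: x_{ij}' = x_{i,j+1} + ...
        C[i, off] = 1.0                                          # y_i = x_{i1} + h_i(u)
        off += li
    return A, C

A, C = canonical_AC([2, 3])           # example indices l = (2, 3), so n = 5, p = 2
n = A.shape[0]
# Sanity check: (C, A) should be an observable pair.
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print("rank of observability matrix:", np.linalg.matrix_rank(O), "of", n)
```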

Let
\[
A_i = \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ & & & 0 \end{bmatrix} \in \mathbb{R}^{l_i\times l_i},
\qquad
A = \begin{bmatrix} A_1 & & \\ & \ddots & \\ & & A_p \end{bmatrix} \in \mathbb{R}^{n\times n},
\]
\[
C_i = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix} \in \mathbb{R}^{1\times l_i},
\qquad
C = \begin{bmatrix} C_1 & & \\ & \ddots & \\ & & C_p \end{bmatrix} \in \mathbb{R}^{p\times n},
\]
\[
\bar f_i(x,u) = \begin{bmatrix} f_{i1}(x^1,u) \\ \vdots \\ f_{i,l_i}(x^{l_i},u) \end{bmatrix} \in \mathbb{R}^{l_i},
\quad
\bar f(x,u) = \begin{bmatrix} \bar f_1(x,u) \\ \vdots \\ \bar f_p(x,u) \end{bmatrix} \in \mathbb{R}^{n},
\quad
\bar g_i(x) = \begin{bmatrix} g_{i1}(x^1) \\ \vdots \\ g_{i,l_i}(x^{l_i}) \end{bmatrix} \in \mathbb{R}^{l_i\times l},
\quad
\bar g(x) = \begin{bmatrix} \bar g_1(x) \\ \vdots \\ \bar g_p(x) \end{bmatrix} \in \mathbb{R}^{n\times l}, \tag{29}
\]
\[
\bar h(u) = \begin{bmatrix} h_1(u), \ldots, h_p(u) \end{bmatrix}'; \tag{30}
\]
then (26) becomes
\[
\dot x = Ax + \bar f(x,u) + \bar g(x)w, \qquad y = Cx + \bar h(u). \tag{31}
\]

We recall the high gain observer of Gauthier, Hammouri and Othman [3]. Their estimate $\bar x(t)$ of $x(t)$ given $\bar x^0$ and $y(s)$, $0 \le s \le t$, is given by the observer
\[
\dot{\bar x} = A\bar x + \bar f(\bar x,u) + \bar g(\bar x)w + S^{-1}(\theta)C'\left( y - C\bar x - \bar h(u) \right), \qquad \bar x(0) = \bar x^0 \tag{32}
\]
where $\theta > 0$ and $S(\theta)$ is the solution of
\[
A'S(\theta) + S(\theta)A - C'C = -\theta S(\theta). \tag{33}
\]
It is not hard to see that $S(\theta)$ is positive definite for $\theta > 0$, for it satisfies the Lyapunov equation
\[
\left( A + \tfrac{\theta}{2}I \right)' S(\theta) + S(\theta)\left( A + \tfrac{\theta}{2}I \right) = C'C
\]
where $C, \left( A + \tfrac{\theta}{2}I \right)$ is an observable pair and $\left( A + \tfrac{\theta}{2}I \right)$ has all eigenvalues equal to $\tfrac{\theta}{2}$. Gauthier, Hammouri and Othman [3] showed that when $\theta$ is sufficiently large, $p = 1$, $u = 0$ and $w(\cdot)$ is $L^2[0,\infty)$, then $\bar x(t) \to x(t)$ exponentially as $t\to\infty$. We shall modify their proof to show that when $\theta$ is sufficiently large, $p$ is

arbitrary, $u(\cdot)$ is piecewise continuous and $w(\cdot)$ is $L^2[0,\infty)$, then $\bar x(t) \to x(t)$ exponentially as $t\to\infty$. The key to both results is the following lemma.

We define
\[
|x|^2_\theta = x'S(\theta)x.
\]
Since $S(\theta)$ is positive definite, for each $\theta > 0$ there exist constants $M_1(\theta), M_2(\theta) > 0$ so that
\[
M_1(\theta)|x| \le |x|_\theta \le M_2(\theta)|x|. \tag{34}
\]

Lemma 2. [3] Suppose $\bar g$ is of the form (29) and satisfies the Lipschitz conditions (28). Then there exists a Lipschitz constant $L$ which is independent of $\theta$ such that for all $x, z \in \mathbb{R}^n$,
\[
|\bar g(x) - \bar g(z)|_\theta \le L|x - z|_\theta. \tag{35}
\]
Note that $\bar g(x)$ is an $n\times l$ matrix so $|\bar g(x) - \bar g(z)|_\theta$ denotes the induced operator norm.

Proof. It follows from (33) that
\[
S_{ij,rs}(\theta) = \frac{S_{ij,rs}(1)}{\theta^{j+s-1}} = \frac{(-1)^{j+s}}{\theta^{j+s-1}}\binom{j+s-2}{j-1}. \tag{36}
\]
Let $\bar C = 1/M_1(1)$; then $|x| \le \bar C|x|_1$. Let $\sigma = \max\{|S_{ij,rs}(1)|\}$; then for each constant $w \in \mathbb{R}^l$
\[
|\bar g(x)w - \bar g(z)w|^2_\theta = \sum_{i,j,r,s}\left( \bar g_{ij}(x)w - \bar g_{ij}(z)w \right)\frac{S_{ij,rs}(1)}{\theta^{j+s-1}}\left( \bar g_{rs}(x)w - \bar g_{rs}(z)w \right) \le \sigma L^2 \sum_{j,s}\frac{1}{\theta^{j+s-1}}\,|x^j - z^j|\,|x^s - z^s|\,|w|^2.
\]
Define
\[
\xi_{ij} = \frac{x_{ij}}{\theta^j}, \qquad \zeta_{ij} = \frac{z_{ij}}{\theta^j}
\]
and $\xi^j, \zeta^j$ as with $x^j, z^j$. Then
\[
\frac{1}{\theta^j}|x^j - z^j| \le |\xi^j - \zeta^j|
\]
and so
\[
|\bar g(x)w - \bar g(z)w|^2_\theta \le \sigma L^2\theta\sum_{j,s}|\xi^j - \zeta^j|\,|\xi^s - \zeta^s|\,|w|^2 \le \sigma L^2\theta n^2|\xi - \zeta|^2|w|^2 \le \sigma L^2\theta \bar C^2 n^2|\xi - \zeta|^2_1|w|^2 \le \sigma L^2\bar C^2 n^2|x - z|^2_\theta|w|^2,
\]
\[
|\bar g(x) - \bar g(z)|_\theta \le \sqrt{\sigma}\,L\bar C n\,|x - z|_\theta.
\]
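A quick numerical illustration of (33) and of the scaling (36) used in this proof: the sketch below solves the Lyapunov form of (33) with scipy for a single block ($p = 1$), checks the $\theta$-scaling and the binomial closed form numerically, and assembles the observer gain $S^{-1}(\theta)C'$ of (32). The block size and the value of $\theta$ are arbitrary choices of this sketch.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from math import comb

# Single block (p = 1) example; n and theta are arbitrary choices.
n, theta = 4, 3.0
A = np.diag(np.ones(n - 1), k=1)              # shift matrix of the canonical form
C = np.zeros((1, n)); C[0, 0] = 1.0

def S_of(theta):
    # (33) is equivalent to (A + theta/2 I)' S + S (A + theta/2 I) = C'C.
    M = A + 0.5 * theta * np.eye(n)
    return solve_continuous_lyapunov(M.T, C.T @ C)

S1, St = S_of(1.0), S_of(theta)

# Check the scaling S_{ij}(theta) = S_{ij}(1) / theta^{i+j-1} (1-based indices).
i, j = np.indices((n, n)) + 1
print("scaling error:", np.max(np.abs(St - S1 / theta**(i + j - 1))))

# Check the closed form S_{ij}(1) = (-1)^{i+j} binom(i+j-2, j-1) from (36).
S1_formula = np.array([[(-1)**(r + c) * comb(r + c - 2, c - 1)
                        for c in range(1, n + 1)] for r in range(1, n + 1)])
print("closed-form error:", np.max(np.abs(S1 - S1_formula)))

# Observer gain used in (32).
gain = np.linalg.solve(St, C.T)
print("S^{-1}(theta) C' =", gain.ravel())
```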

Notice that for each $u \in U$, $\bar f(\cdot,u)$ also satisfies the hypothesis of the above lemma so
\[
|\bar f(x,u) - \bar f(z,u)|_\theta \le L|x - z|_\theta.
\]

Theorem 2. Suppose
- the system (25) is uniformly observable for any input so that it can be transformed to (31), which satisfies the Lipschitz and growth conditions (28),
- $u(\cdot)$ is piecewise continuous,
- $w(\cdot)$ is $L^2[0,\infty)$, i.e., $\int_0^\infty |w(s)|^2\, ds < \infty$,
- $x(t), y(t)$ are any state and output trajectories generated by system (31) with inputs $u(\cdot)$ and $w(\cdot)$,
- $\theta$ is sufficiently large,
- $\bar x(t)$ is the solution of (32).

Then $\bar x(t) \to x(t)$ exponentially as $t\to\infty$.

Proof. Let $\tilde x(t) = x(t) - \bar x(t)$; then
\[
\begin{aligned}
\frac{d}{dt}|\tilde x|^2_\theta &= 2\tilde x'S(\theta)\dot{\tilde x}
= 2\tilde x'S(\theta)\left( A\tilde x + \bar f(x,u) - \bar f(\bar x,u) + \left( \bar g(x) - \bar g(\bar x) \right)w - S^{-1}(\theta)C'C\tilde x \right)\\
&\le -\theta|\tilde x|^2_\theta + 2|\tilde x|_\theta\,\big| \bar f(x,u) - \bar f(\bar x,u) + \left( \bar g(x) - \bar g(\bar x) \right)w \big|_\theta
\le \left( -\theta + 2L(1 + |w|) \right)|\tilde x|^2_\theta.
\end{aligned}
\]
Define
\[
\beta(t,\tau) = \int_\tau^t \left( -\theta + 2L(1 + |w(s)|) \right) ds.
\]
We choose $\theta \ge 5L$ and $\tau$ large enough so that
\[
\int_\tau^\infty |w(s)|^2\, ds \le 1.
\]
Then for $t \ge \tau + 1$
\[
\beta(t,\tau) = (-\theta + 2L)(t-\tau) + 2L\int_\tau^t |w(s)|\, ds
\le (-\theta + 2L)(t-\tau) + 2L\left( (t-\tau)\int_\tau^t |w(s)|^2\, ds \right)^{1/2}
\le (-\theta + 2L)(t-\tau) + 2L(t-\tau)
\le -L(t-\tau).
\]

By Gronwall's inequality, for $\tau \le \tau + 1 \le t$,
\[
|\tilde x(t)|^2_\theta \le e^{\beta(t,\tau)}|\tilde x(\tau)|^2_\theta \le e^{-L(t-\tau)}|\tilde x(\tau)|^2_\theta
\]
and we conclude that $\bar x(t) \to x(t)$ exponentially as $t\to\infty$.

Theorem 3. (Main Theorem) Suppose
- the system (1) is uniformly observable for any input, so without loss of generality we can assume that it is in the form
\[
\dot x = Ax + \bar f(x,u), \qquad y = Cx + \bar h(u) \tag{37}
\]
with $A, C, \bar f, \bar h$ as above,
- $g(x)$ has been chosen so that (25) is uniformly observable for any input and WLOG (25) is in the form (31) with $A, C, \bar f, \bar g, \bar h$ as above,
- $k(x)$ has been chosen to satisfy condition (5),
- $x(t), u(t), y(t)$ are any state, control and output trajectories generated by the noise free system (37),
- $Q(x,t)$ is defined by (6) with $\alpha \ge 0$ for the system
\[
\dot x = Ax + \bar f(x,u) + \bar g(x)w, \qquad y = Cx + \bar h(u) + k(x)v \tag{38}
\]
where $Q_0(x^0) \ge 0$ is Lipschitz continuous on compact subsets of $\mathbb{R}^n$,
- $\hat x(t)$ is a piecewise continuous minimizing selection of $Q(x,t)$, (8).

Then $x(t) - \hat x(t) \to 0$ as $t\to\infty$. If $\alpha > 0$ then the convergence is exponential.

Proof. Let $\bar x(t)$ be the solution of the high gain observer (32) with $\bar g = 0$, driven by $u(t), y(t)$, where $\bar x(0) = \hat x(0)$ and the gain is high enough to insure exponential convergence,
\[
\bar x(t) - x(t) \to 0 \text{ as } t\to\infty. \tag{39}
\]
We know that for any $T$ there exists $w_T(t)$ such that the solution $z_T(t)$ of
\[
\dot z_T = Az_T + \bar f(z_T,u) + \bar g(z_T)w_T, \qquad z_T(T) = \hat x(T)
\]
satisfies, for $0 \le \tau \le T$,
\[
e^{-\alpha(T-\tau)}Q(z_T(\tau),\tau) + \frac12\int_\tau^T e^{-\alpha(T-s)}\left( |w_T(s)|^2 + |y(s) - Cz_T(s) - \bar h(u(s))|^2_{R^{-1}} \right) ds \le \frac{e^{-\alpha T}}{T+1} + Q(\hat x(T),T).
\]

From Lemma 1 we have, for $0 \le \tau \le T$,
\[
Q(\hat x(T),T) = e^{-\alpha(T-\tau)}Q(\hat x(\tau),\tau) + \frac12\int_\tau^T e^{-\alpha(T-s)}|y(s) - C\hat x(s) - \bar h(u(s))|^2_{R^{-1}}\, ds.
\]
By the definition (6) of $Q$, since $x(s), y(s)$ satisfy the noise free system (37),
\[
Q(x(\tau),\tau) \le e^{-\alpha\tau}Q_0(x(0)),
\]
so
\[
Q(\hat x(\tau),\tau) \le Q(x(\tau),\tau) \le e^{-\alpha\tau}Q_0(x(0)).
\]
Hence $Q(\hat x(T),T)$ is bounded if $\alpha = 0$ and goes to zero exponentially as $T\to\infty$ if $\alpha > 0$. From the definition (8) of $\hat x(\tau)$ we have
\[
Q(\hat x(\tau),\tau) \le Q(z_T(\tau),\tau).
\]
From these we conclude that
\[
\frac12\int_0^T e^{\alpha s}|y(s) - C\hat x(s) - \bar h(u(s))|^2_{R^{-1}}\, ds \le Q_0(x(0)) - Q_0(\hat x(0)) \le Q_0(x^0)
\]
and it follows that
\[
\frac12\int_0^\infty e^{\alpha s}|y(s) - C\hat x(s) - \bar h(u(s))|^2_{R^{-1}}\, ds < \infty.
\]
Therefore given any $\epsilon > 0$ there is a $\tau$ large enough so that for all $T \ge \tau$
\[
\frac12\int_\tau^T e^{\alpha s}|y(s) - C\hat x(s) - \bar h(u(s))|^2_{R^{-1}}\, ds < \epsilon
\]
and
\[
\frac12\int_\tau^T e^{\alpha s}\left( |w_T(s)|^2 + |y(s) - Cz_T(s) - \bar h(u(s))|^2_{R^{-1}} \right) ds < \frac{1}{T+1} + \epsilon. \tag{40}
\]
Let $\bar z_T(t)$ be the solution of the following high gain observer for $z_T(t)$, driven by $u(t), w_T(t)$ and $Cz_T(t)$,

\[
\dot{\bar z}_T = A\bar z_T + \bar f(\bar z_T,u) + \bar g(\bar z_T)w_T + S^{-1}(\theta)C'\left( Cz_T - C\bar z_T \right), \qquad \bar z_T(0) = \hat x(0); \tag{41}
\]
then the error $\tilde z_T(t) = z_T(t) - \bar z_T(t)$ satisfies
\[
\dot{\tilde z}_T = A\tilde z_T + \bar f(z_T,u) - \bar f(\bar z_T,u) + \left( \bar g(z_T) - \bar g(\bar z_T) \right)w_T - S^{-1}(\theta)C'C\tilde z_T.
\]
Proceeding as in the proof of Theorem 2 we obtain
\[
\frac{d}{dt}|\tilde z_T|^2_\theta \le \left( -\theta + 2L(1+|w_T|) \right)|\tilde z_T|^2_\theta
\]
where
\[
\beta_T(t,\tau) = \int_\tau^t \left( -\theta + 2L(1+|w_T(s)|) \right) ds.
\]
We choose $\theta \ge 5L$ and $\tau$ large enough so that for any $T$
\[
\int_\tau^T |w_T(s)|^2\, ds \le 1.
\]
Then for $t \ge \tau + 1$
\[
\beta_T(t,\tau) = (-\theta + 2L)(t-\tau) + 2L\int_\tau^t |w_T(s)|\, ds
\le (-\theta + 2L)(t-\tau) + 2L\left( (t-\tau)\int_\tau^T |w_T(s)|^2\, ds \right)^{1/2}
\le (-\theta + 2L)(t-\tau) + 2L(t-\tau)
\le -L(t-\tau).
\]
By Gronwall's inequality, for $\tau \le \tau + 1 \le T$,
\[
|\tilde z_T(T)|^2_\theta \le e^{\beta_T(T,\tau)}|\tilde z_T(\tau)|^2_\theta \le e^{-L(T-\tau)}|\tilde z_T(\tau)|^2_\theta
\]
and we conclude that $z_T(T) - \bar z_T(T) \to 0$ exponentially as $T\to\infty$, hence $\hat x(T) - \bar z_T(T) = z_T(T) - \bar z_T(T) \to 0$ exponentially.

The last step of the proof is to show $\bar x(T) - \bar z_T(T) \to 0$ (exponentially if $\alpha > 0$). Now
\[
\begin{aligned}
\frac{d}{dt}\left( \bar x - \bar z_T \right) &= A\bar x + \bar f(\bar x,u) + S^{-1}(\theta)C'\left( y - C\bar x - \bar h(u) \right) - \left( A\bar z_T + \bar f(\bar z_T,u) + \bar g(\bar z_T)w_T + S^{-1}(\theta)C'\left( Cz_T - C\bar z_T \right) \right)\\
&= \left( A - S^{-1}(\theta)C'C \right)\left( \bar x - \bar z_T \right) + \bar f(\bar x,u) - \bar f(\bar z_T,u) - \bar g(\bar z_T)w_T + S^{-1}(\theta)C'\left( y - Cz_T - \bar h(u) \right)
\end{aligned}
\]

so
\[
\begin{aligned}
\frac{d}{dt}|\bar x - \bar z_T|^2_\theta &= 2\left( \bar x - \bar z_T \right)'S(\theta)\Big( \left( A - S^{-1}(\theta)C'C \right)\left( \bar x - \bar z_T \right) + \bar f(\bar x,u) - \bar f(\bar z_T,u) - \bar g(\bar z_T)w_T + S^{-1}(\theta)C'\left( y - Cz_T - \bar h(u) \right) \Big)\\
&\le -\theta|\bar x - \bar z_T|^2_\theta + 2|\bar x - \bar z_T|_\theta\,\big| \bar f(\bar x,u) - \bar f(\bar z_T,u) - \bar g(\bar z_T)w_T \big|_\theta + 2\left( \bar x - \bar z_T \right)'C'\left( y - Cz_T - \bar h(u) \right)\\
&\le \left( -\theta + 2L \right)|\bar x - \bar z_T|^2_\theta + 2LM_2(\theta)|\bar x - \bar z_T|_\theta|w_T| + 2\left( \bar x - \bar z_T \right)'C'\left( y - Cz_T - \bar h(u) \right).
\end{aligned}
\]
We have chosen $\theta \ge 5L$. Using (28) and (34) we conclude that there is an $M_3(\theta) > 0$ such that
\[
2\left( \bar x - \bar z_T \right)'C'\left( y - Cz_T - \bar h(u) \right) \le M_3(\theta)\,|\bar x - \bar z_T|_\theta\,|y - Cz_T - \bar h(u)|_{R^{-1}}.
\]
Therefore
\[
\frac{d}{dt}|\bar x - \bar z_T|^2_\theta \le \left( -\theta + 2L \right)|\bar x - \bar z_T|^2_\theta + 2LM_2(\theta)|\bar x - \bar z_T|_\theta|w_T| + M_3(\theta)|\bar x - \bar z_T|_\theta|y - Cz_T - \bar h(u)|_{R^{-1}},
\]
\[
\frac{d}{dt}|\bar x - \bar z_T|_\theta \le -L|\bar x - \bar z_T|_\theta + LM_2(\theta)|w_T| + M_3(\theta)|y - Cz_T - \bar h(u)|_{R^{-1}}.
\]
By Gronwall's inequality, for $\tau \le t \le T$,
\[
\begin{aligned}
|\bar x(t) - \bar z_T(t)|_\theta &\le e^{-L(t-\tau)}|\bar x(\tau) - \bar z_T(\tau)|_\theta + \int_\tau^t e^{-L(t-s)}LM_2(\theta)|w_T(s)|\, ds + \int_\tau^t e^{-L(t-s)}M_3(\theta)|y(s) - Cz_T(s) - \bar h(u(s))|_{R^{-1}}\, ds\\
&\le e^{-L(t-\tau)}|\bar x(\tau) - \bar z_T(\tau)|_\theta + LM_2(\theta)\left( \frac1L\int_\tau^t |w_T(s)|^2\, ds \right)^{1/2} + M_3(\theta)\left( \frac1L\int_\tau^t |y(s) - Cz_T(s) - \bar h(u(s))|^2_{R^{-1}}\, ds \right)^{1/2}
\end{aligned}
\]
where the last step uses the Cauchy–Schwarz inequality and $\int_\tau^t e^{-L(t-s)}\, ds \le 1/L$.

As before, from (40) we see that given any $\delta > 0$ we can choose $\tau$ large enough so that for all $\alpha \ge 0$ and all $\tau \le t \le T$ we have
\[
|\bar x(t) - \bar z_T(t)|_\theta \le e^{-L(t-\tau)}|\bar x(\tau) - \bar z_T(\tau)|_\theta + \delta
\]
so we conclude that $\bar x(t) - \bar z_T(t) \to 0$ as $t\to\infty$. In particular, $\bar x(T) - \bar z_T(T) \to 0$ as $T\to\infty$.

If $\alpha > 0$ then (40) implies that for $\tau = T/2$
\[
|\bar x(T) - \bar z_T(T)|_\theta \le e^{-LT/2}|\bar x(T/2) - \bar z_T(T/2)|_\theta + LM_2(\theta)\left( \frac{2e^{-\alpha T/2}}{L}\left( \frac{1}{T+1} + \epsilon \right) \right)^{1/2} + M_3(\theta)\left( \frac{2e^{-\alpha T/2}}{L}\left( \frac{1}{T+1} + \epsilon \right) \right)^{1/2}.
\]
Since we have already shown that $\bar x(T/2) - \bar z_T(T/2) \to 0$ as $T\to\infty$, we conclude that $\bar x(T) - \bar z_T(T) \to 0$ exponentially as $T\to\infty$. Since $x(T) - \hat x(T) = \left( x(T) - \bar x(T) \right) + \left( \bar x(T) - \bar z_T(T) \right) + \left( \bar z_T(T) - \hat x(T) \right)$, combining this with (39) and the convergence of $\hat x(T) - \bar z_T(T)$ established above completes the proof.

5 Conclusion

We have shown the global convergence of the minimum energy estimate to the true state under suitable assumptions. The proof utilized a high gain observer, but it should be emphasized that the minimum energy estimator is not necessarily high gain. It is low gain if the discount rate $\alpha$ is small and the observation noise is substantial, i.e., $R(x)$ is not small relative to $\Gamma(x)$. It becomes higher gain as $\alpha$ is increased, $\Gamma(x)$ is increased or $R(x)$ is decreased. For any size gain, the minimum energy estimator can make instantaneous transitions in the estimate as the location of the minimum of $Q(x,t)$ jumps around.

The principal drawback of the minimum energy estimator is that it requires the solution, in the viscosity sense, of the Hamilton–Jacobi PDE (10) that is driven by the observations. This is very challenging numerically in all but the smallest state dimensions, and the accuracy of the estimate is limited by the fineness of the spatial and temporal mesh. Krener and Duarte [7] have offered a hybrid approach to this difficulty. The solution of (10) is computed on a very coarse grid and this is used to initiate multiple extended Kalman filters (23, 24) which track the local minima of $Q(\cdot,t)$. The one that best explains the observations is taken as the estimate.
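For completeness, here is a minimal sketch of the extended Kalman filter ODEs (23)–(24) that such a hybrid scheme would run from each coarse-grid minimum. The two-state example system, noise levels and Euler step size are my own choices, and this is not the implementation of [7].

```python
import numpy as np

# Minimal Euler integration of the EKF ODEs (23) and (24) for a 2-state example.
A0 = np.array([[0.0, 1.0], [-1.0, -0.5]])
def f(x, u): return A0 @ x                     # example drift, so f_x = A0
def h(x, u): return x[:1]                      # example output, so h_x = [1, 0]
H = np.array([[1.0, 0.0]])
Gamma = 0.1 * np.eye(2)                        # g g'
Rinv = np.array([[1.0 / 0.04]])                # (k k')^{-1}

def ekf_step(xhat, P, y, u, dt):
    """One Euler step of the estimate ODE (23) and the Riccati equation (24)."""
    innov = y - h(xhat, u)
    xhat = xhat + dt * (f(xhat, u) + P @ H.T @ Rinv @ innov)
    P = P + dt * (A0 @ P + P @ A0.T + Gamma - P @ H.T @ Rinv @ H @ P)
    return xhat, P

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0]); xhat = np.zeros(2); P = np.eye(2)
dt = 0.01
for k in range(2000):
    x = x + dt * f(x, None)                    # noise-free truth
    y = h(x, None) + 0.2 * rng.standard_normal(1)
    xhat, P = ekf_step(xhat, P, y, None, dt)
print("final error:", np.linalg.norm(x - xhat))
```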

References

1. Baras JS, Bensoussan A, James MR (1988) SIAM J on Applied Mathematics 48
2. Evans LC (1998) Partial Differential Equations. American Mathematical Society, Providence, RI
3. Gauthier JP, Hammouri H, Othman S (1992) IEEE Trans Auto Contr 37
4. Gelb A (1974) Applied Optimal Estimation. MIT Press, Cambridge, MA
5. Hijab O (1980) Minimum Energy Estimation. PhD Thesis, University of California, Berkeley, California
6. Hijab O (1984) Annals of Probability 12
7. Krener AJ, Duarte A (1996) A hybrid computational approach to nonlinear estimation. In: Proc. of 35th Conference on Decision and Control, Kobe, Japan
8. Krener AJ (2002) The convergence of the extended Kalman filter. In: Rantzer A, Byrnes CI (eds) Directions in Mathematical Systems Theory and Optimization, Springer, Berlin Heidelberg New York
9. Mortensen RE (1968) J. Optimization Theory and Applications 2
