An Overview of Risk-Sensitive Stochastic Optimal Control


1 An Overview of Risk-Sensitive Stochastic Optimal Control Matt James Australian National University Workshop: Stochastic control, communications and numerics perspective UniSA, Adelaide, Sept

2 Outline
- A brief history
- Approaches to controller design
- Relationships among approaches
- Some limit results (state feedback)
- Robust interpretation
- Partially observed case
- Quantum systems
- References

3 A brief history

The risk-sensitive criterion was introduced by Jacobson (1973).

Linear system with Gaussian noise:

    dx = (Ax + Bu)dt + dw

[Linear-Exponential-Quadratic-Gaussian (LEQG)]

Minimize the average of the exponential of a quadratic cost:

    J^µ = E_{x,t}[ exp( (µ/2)( ∫_t^T (x'Qx + |u|²) ds + x_T' R x_T ) ) ]

µ > 0 - risk-sensitive; µ < 0 - risk-seeking.

4 Solution

The optimal state feedback control is

    u*(x, t) = -B' P^µ_t x

where

    Ṗ^µ_t = -P^µ_t A - A' P^µ_t + P^µ_t (BB' - µI) P^µ_t - Q,   P^µ_T = R,   0 ≤ t ≤ T

[control-type Riccati equation]

5 Limit µ → 0: recover standard LQG control (risk-neutral)

Linear system with Gaussian noise:

    dx = (Ax + Bu)dt + dw

Minimize the average of a quadratic cost:

    E_{x,t}[ (1/2)( ∫_t^T (x'Qx + |u|²) ds + x_T' R x_T ) ]

Solution: the optimal state feedback control is

    u*(x, t) = -B' P_t x

[Linear-Quadratic-Gaussian (LQG)] where

    Ṗ_t = -P_t A - A' P_t + P_t BB' P_t - Q,   P_T = R
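As a numerical illustration of these two Riccati equations and the µ → 0 limit (a sketch in the scalar case, with illustrative values A = 0, B = Q = R = 1 that are not from the slides): the risk-sensitive solution P^µ dominates the LQG solution P and converges to it as µ → 0.

```python
def riccati(mu, A=0.0, B=1.0, Q=1.0, R=1.0, T=1.0, n=10000):
    """Integrate Pdot = -2*A*P + (B*B - mu)*P*P - Q backward from P(T) = R
    (scalar risk-sensitive control Riccati equation; mu = 0 gives LQG)."""
    dt = T / n
    P = R
    for _ in range(n):
        # Euler step backward in time: P(t - dt) = P(t) - dt * Pdot(t)
        Pdot = -2 * A * P + (B * B - mu) * P * P - Q
        P -= dt * Pdot
    return P  # P(0)

P_lqg = riccati(0.0)   # LQG solution
P_rs = riccati(0.5)    # risk-sensitive solution, mu = 0.5
print(P_rs, P_lqg)
```

For larger µ the quadratic term in the Riccati equation changes sign and P^µ can escape to infinity in finite time, which is the Riccati-level manifestation of the H∞-type constraint γ = 1/√µ discussed later in the slides.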

6 Partially observed case

Linear system with Gaussian noise:

    dx = (Ax + Bu)dt + dw
    dy = Cx dt + dv

Minimize the average of the exponential of a quadratic cost

    J^µ = E_{x,t}[ exp( (µ/2)( ∫_t^T (x'Qx + |u|²) ds + x_T' R x_T ) ) ]

over causal maps y(·) ↦ u(·).

7 Solution: Whittle 1981, Bensoussan-van Schuppen

The optimal partially observed control is

    u*_t = -B' P^µ_t x̂^µ_t

where

    dx̂^µ = ( [A + µ Y^µ Q] x̂^µ + Bu )dt + Y^µ C' (dy - C x̂^µ dt)

and

    Ẏ^µ = A Y^µ + Y^µ A' + µ Y^µ Q Y^µ + I - Y^µ C'C Y^µ

Significantly, this is not the Kalman filter. When µ → 0, LEQG → LQG, whose solution is expressed in terms of the Kalman filter.
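The claim that this is not the Kalman filter can be seen directly in the Ẏ^µ equation: the extra term µY^µQY^µ inflates the filter gain relative to the Kalman case µ = 0. A scalar sketch (illustrative values A = 0, C = Q = 1, not from the slides):

```python
def filter_var(mu, A=0.0, C=1.0, Q=1.0, Y0=1.0, T=5.0, n=50000):
    """Integrate Ydot = 2*A*Y + mu*Q*Y*Y + 1 - (C*Y)**2 forward from Y(0) = Y0
    (scalar risk-sensitive filter Riccati; mu = 0 is the Kalman covariance)."""
    dt = T / n
    Y = Y0
    for _ in range(n):
        Y += dt * (2 * A * Y + mu * Q * Y * Y + 1.0 - (C * Y) ** 2)
    return Y

print(filter_var(0.5), filter_var(0.0))
```

With these values the Kalman equilibrium is Y = 1, while for µ = 0.5 the equilibrium solves Y² = 1/(1 − µQ/C²) = 2, so the risk-sensitive filter runs a strictly larger "covariance".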

8 Some subsequent developments:

- LEQG: connection with dynamic games (Jacobson, 1973)
- LEQG: connection with robust control (H∞) (Glover-Doyle, 1988)
- Risk-sensitive control for nonlinear systems, state feedback: connection with dynamic games (Fleming-McEneaney 1992, James 1992, Nagai 1996)
- Risk-sensitive control for nonlinear systems, output feedback: solution and connection with dynamic games and nonlinear robust control (James-Baras-Elliott 1994, Charalambous-Hibey 1996, Dai Pra-Meneghini-Runggaldier)
- Robustness properties of the risk-sensitive criterion; also stochastic IQC (James, Fleming, Boel, Dupuis, Petersen, 1995, 1998)
- Risk-sensitive control for finance (Nagai 2000)
- Risk-sensitive control for quantum systems (James 2004)

9 Approaches to controller design

The basic aim of robust control is: design a controller to achieve satisfactory performance in the presence of disturbances and uncertainty. Stochastic control has similar aims.

10 H∞ robust control

Dynamics:

    ẋ_s = f(x_s) + g(x_s) d_s,   0 < s < T
    z_s = h(x_s)

with f(0) = 0, h(0) = 0, x_0 = 0; disturbance d(·), output z(·), map T_{d,z} : d(·) ↦ z(·).

H∞ norm:

    ||T_{d,z}|| = sup_{d ∈ L₂[0,T]} ( ∫_0^T |z(t)|² dt )^{1/2} / ( ∫_0^T |d(t)|² dt )^{1/2}

Note: ||T_{d,z}|| ≤ γ iff

    ∫_0^T |z|² dt ≤ γ² ∫_0^T |d|² dt   for all d ∈ L₂[0, T].

11 D-H² deterministic optimal control

Disturbance: none. Controller: gives optimal performance without special consideration of disturbances.

Optimal cost:

    W(x, t) = inf_u [ ∫_t^T L(x_s, u_s) ds + Φ(x_T) ]

Dynamics:

    ẋ_s = b(x_s, u_s),   t < s < T,   x_t = x

12 S-H² (RN) stochastic optimal control

Disturbance: noise. Controller: gives optimal average performance.

Optimal cost:

    W^ε(x, t) = inf_u E_{x,t}[ ∫_t^T L(x^ε_s, u_s) ds + Φ(x^ε_T) ]

Dynamics:

    dx^ε_s = b(x^ε_s, u_s) ds + √ε dB_s,   t < s < T,   x^ε_t = x

(ε > 0 - noise intensity)

13 RS stochastic risk-sensitive optimal control

Disturbance: noise. Controller: gives optimal average performance using an exponential cost (heavily penalizes large values).

Optimal cost:

    S^{µ,ε}(x, t) = inf_u E_{x,t}[ exp( (µ/ε)( ∫_t^T L(x^ε_s, u_s) ds + Φ(x^ε_T) ) ) ]

Dynamics:

    dx^ε_s = b(x^ε_s, u_s) ds + √ε dB_s,   t < s < T,   x^ε_t = x

(µ > 0 - risk sensitivity)
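The sense in which the exponential "heavily penalizes large values" can be made concrete: for a random total cost J with θ = µ/ε, the certainty-equivalent index (1/θ) log E[exp(θJ)] equals E[J] + (θ/2)Var(J) + higher-order terms, i.e. the risk-sensitive index adds a variance penalty on top of the mean. A Monte Carlo sketch with a toy Gaussian cost (assumed for illustration, not from the slides), for which the expansion is exact:

```python
import math
import random

random.seed(0)
theta = 0.2            # theta = mu/eps, the risk-sensitivity parameter
m, s = 1.0, 0.5        # toy Gaussian cost J ~ N(m, s^2)
N = 200000
samples = [random.gauss(m, s) for _ in range(N)]

# risk-sensitive index: (1/theta) * log E[exp(theta * J)]
mean_exp = sum(math.exp(theta * j) for j in samples) / N
rs_index = math.log(mean_exp) / theta

# for Gaussian J the index is exactly mean + (theta/2) * variance
exact = m + 0.5 * theta * s * s
print(rs_index, exact)
```

The index always exceeds the risk-neutral mean E[J] for θ > 0, which is why large (variable) costs are penalized more heavily than in S-H².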

14 H∞ robust control

Disturbance: L₂ signal. Controller: find u(·) such that the closed loop achieves a specified H∞ norm constraint (worst-case design):

    ||T_{d,z}|| ≤ γ

Dynamics:

    ẋ_s = b(x_s, u_s) + d_s,   t < s < T,   x_t = x
    z_s = h(x_s)

(γ = 1/√µ - H∞ constraint)

15 D-DG deterministic differential game

Disturbance: L₂ signal. Controller: gives optimal performance subject to opposing efforts of the disturbance (worst-case design).

Optimal cost:

    W^µ(x, t) = sup_d inf_u [ ∫_t^T ( L(x_s, u_s) - (1/2µ)|d_s|² ) ds + Φ(x_T) ]

Dynamics:

    ẋ_s = b(x_s, u_s) + d_s,   t < s < T,   x_t = x

16 S-DG stochastic differential game

Disturbances: noise and L₂ signal. Controller: gives optimal average performance subject to opposing efforts of the disturbance (worst-case design).

Optimal cost:

    W^{µ,ε}(x, t) = sup_d inf_u E_{x,t}[ ∫_t^T ( L(x^ε_s, u_s) - (1/2µ)|d_s|² ) ds + Φ(x^ε_T) ]

Dynamics:

    dx^ε_s = ( b(x^ε_s, u_s) + d_s ) ds + √ε dB_s,   t < s < T,   x^ε_t = x

17 Relationships among approaches

      RS  --(µ → 0)-->  S-H²
       |                  |
    (ε → 0)            (ε → 0)
       v                  v
     D-DG --(µ → 0)-->  D-H²

RS, S-DG, D-DG, and S-H² are perturbations of D-H².

18 (James, Fleming-McEneaney, Whittle)

    RS ≈ S-DG ≈ D-H² + (noise variance term) + (disturbance energy term)

Equivalence via the logarithmic transformation, dynamic programming PDEs, and a representation theorem. The expansion is valid for small noise intensity and small risk-sensitivity.

19

    S-H² ≈ D-H² + (noise variance term)
    H∞  ≈ D-H² + (disturbance energy term)
    RS  ≈ H² + H∞ ???

20 Some limit results (state feedback)

Assumptions:
- b(x, u) = f(x) + g(x)u, where f : Rⁿ → Rⁿ, g : Rⁿ → R^{n×m}, and their first-order derivatives Df, Dg are bounded and uniformly Lipschitz continuous.
- Control values: U ⊂ Rᵐ is compact and convex.
- Disturbance values: D = Rⁿ.
- Φ : Rⁿ → R is nonnegative; Φ and DΦ are bounded and uniformly Lipschitz continuous.

21
- L : Rⁿ × Rᵐ → R is C², nonnegative; L(·, u) and D_x L(·, u) are bounded and uniformly Lipschitz continuous, uniformly in u ∈ U; and D²_u L(x, ·) > 0, uniformly in x ∈ Rⁿ.
- There exists a locally Lipschitz u*(x, p) achieving the minimum in min_{u∈U} [ p·b(x, u) + L(x, u) ].

Use the Elliott-Kalton formulation for two-player zero-sum differential games (strategies, upper and lower values, etc).

For later use, assume also L(x, u) = (1/2)|u|² + V(x), the functions f, g, V and Φ are of class C^∞ with compact support, and u*(x, p) is of class C².

22 Viscosity solutions

Viscosity solutions were introduced by Crandall-Lions. The definition for a fully nonlinear parabolic PDE

    ∂v/∂t + F(x, Dv, D²v) = 0   in Rⁿ × (0, T)
    v(x, T) = Φ(x)   in Rⁿ

is as follows.

23 Definition

An upper semicontinuous (u.s.c.) function v (resp. lower semicontinuous (l.s.c.) function v) is called a viscosity subsolution (resp. supersolution) if v ≤ Φ (resp. v ≥ Φ) on Rⁿ × {T} and

    ∂φ/∂t(x, t) + F(x, Dφ(x, t), D²φ(x, t)) ≥ 0   (resp. ≤ 0)

for every smooth function φ and any local maximum (resp. minimum) (x, t) ∈ Rⁿ × [0, T) of v - φ.

A continuous function is called a viscosity solution if it is both a subsolution and a supersolution.

24 Comparison theorem

If v₁ is a subsolution and v₂ is a supersolution, then v₁ ≤ v₂ in Rⁿ × [0, T]. Thus any continuous viscosity solution is unique.

25 Dynamic programming PDEs

RS (ε > 0, µ > 0):

    ∂S^{µ,ε}/∂t + (ε/2)ΔS^{µ,ε} + min_{u∈U} [ DS^{µ,ε}·b(x, u) + (µ/ε) L(x, u) S^{µ,ε} ] = 0   in Rⁿ × (0, T)
    S^{µ,ε}(x, T) = exp( (µ/ε)Φ(x) )   in Rⁿ

Logarithmic transformation (Fleming, 1970s):

    W^{µ,ε}(x, t) = (ε/µ) log S^{µ,ε}(x, t)

26 S-DG (ε > 0, µ > 0):

    ∂W^{µ,ε}/∂t + (ε/2)ΔW^{µ,ε} + min_{u∈U} max_{d∈D} [ DW^{µ,ε}·(b(x, u) + d) + L(x, u) - (1/2µ)|d|² ] = 0   in Rⁿ × (0, T)
    W^{µ,ε}(x, T) = Φ(x)   in Rⁿ

Note:

    min_{u∈U} max_{d∈D} [ p·(b(x, u) + d) + L(x, u) - (1/2µ)|d|² ] = min_{u∈U} [ p·b(x, u) + L(x, u) ] + (µ/2)|p|²
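The Note follows by completing the square: the inner maximum over d of p·d − |d|²/(2µ) is attained at d* = µp and equals (µ/2)|p|². A quick numeric check with arbitrary scalar values (p = 1.3, µ = 0.7, chosen purely for illustration):

```python
# maximize g(d) = p*d - d*d/(2*mu) over a fine grid of scalar disturbances;
# the analytic maximum is (mu/2)*p*p, attained at d* = mu*p
p, mu = 1.3, 0.7
grid = [i * 1e-4 for i in range(-100000, 100001)]   # d in [-10, 10]
best = max(p * d - d * d / (2 * mu) for d in grid)
analytic = 0.5 * mu * p * p
print(best, analytic)
```

This is the quadratic term that turns the linear RS equation into the game Isaacs equation under the logarithmic transformation.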

27 Representation theorem

The viscosity formulation of stochastic differential games was developed by Fleming-Souganidis (1989).

Theorem (James, 1992; Fleming-McEneaney, 1992). The function W^{µ,ε} defined by the logarithmic transformation is the optimal cost function of the stochastic differential game defined above, i.e. RS ≡ S-DG.

Proof: Methods of Fleming (1971) imply |DW^{µ,ε}(x, t)| ≤ C for all (x, t) ∈ Rⁿ × [0, T]. In view of this bound, the set of disturbance values D can be taken to be bounded. The result then follows from Fleming-Souganidis (1989).

28 Limit PDEs

Obtained as ε → 0 and/or µ → 0.

D-DG (ε → 0, µ > 0):

    ∂W^µ/∂t + min_{u∈U} max_{d∈D} [ DW^µ·(b(x, u) + d) + L(x, u) - (1/2µ)|d|² ] = 0   in Rⁿ × (0, T)
    W^µ(x, T) = Φ(x)   in Rⁿ

29 S-H² (ε > 0, µ → 0):

    ∂W^ε/∂t + (ε/2)ΔW^ε + min_{u∈U} [ DW^ε·b(x, u) + L(x, u) ] = 0   in Rⁿ × (0, T)
    W^ε(x, T) = Φ(x)   in Rⁿ

D-H² (ε → 0, µ → 0):

    ∂W/∂t + min_{u∈U} [ DW·b(x, u) + L(x, u) ] = 0   in Rⁿ × (0, T)
    W(x, T) = Φ(x)   in Rⁿ

30 Limit theorems

The limit theorems depend on general viscosity limit techniques developed by Evans-Ishii (1984) and Barles-Perthame (1987).

Theorem (James, 1992). We have

    lim_{ε→0} W^{µ,ε}(x, t) = W^µ(x, t)

uniformly on compact subsets, where W^µ ∈ C(Rⁿ × [0, T]) is the unique bounded continuous viscosity solution of the corresponding PDE and is the optimal cost function of the deterministic differential game defined above, i.e. S-DG → D-DG as ε → 0.

31 Proof: The assumptions imply

    0 ≤ W^{µ,ε}(x, t) ≤ C   for all (x, t) ∈ Rⁿ × [0, T], µ, ε > 0.

The function

    v̄(x, t) = limsup_{ε→0, y→x, s→t} W^{µ,ε}(y, s)

is u.s.c. and a viscosity subsolution. The function

    v(x, t) = liminf_{ε→0, y→x, s→t} W^{µ,ε}(y, s)

is l.s.c. and a viscosity supersolution. By construction v ≤ v̄; by the comparison theorem v̄ ≤ v.

32 Therefore

    lim_{ε→0} W^{µ,ε}(x, t) = v = v̄ =: v.

From Evans-Souganidis (1984) the value W^µ of the D-DG is the unique viscosity solution, and hence W^µ = v.

Expansion of the optimal cost for S-DG:

    W^{aε,bε}(x, t) = W(x, t) + ε ( a W_g(x, t) + b W_n(x, t) ) + ...

[methods of Fleming, 1970s]

33 Robust interpretation

(Boel, Dupuis, James, Petersen 1998, 2000)

We wish to interpret the finiteness of the risk-sensitive criterion for a nominal system G₀ in terms of a performance bound for a perturbed system G̃.

Nominal system: w → G₀ → Z.

34 Nominal system dynamics:

    G₀ :  dX_t = b(X_t)dt + ε^{1/2} σ(X_t) dW_t
          Z_t = h(X_t)

Risk-sensitive criterion:

    S(x, T) := E_x[ exp( (1/(2γ²ε)) ∫_0^T |Z_t|² dt ) ],   γ² = 1/µ

35 The key tool is the representation [Dupuis-Ellis 1997]

    γ²ε log S(x, T) = sup_{v∈V} E_x[ (1/2) ∫_0^T ( |Z̃_t|² - γ²|v_t|² ) dt ]

where X̃_t is given by the perturbed system dynamics:

    G̃ :  dX̃_t = b(X̃_t)dt + σ(X̃_t)v_t dt + ε^{1/2} σ(X̃_t) dW_t
          Z̃_t = h(X̃_t)

36 Perturbed system: w → G̃ → Z̃, with the perturbation v fed back from X̃.

Robustness inequality:

    E_x[ (1/2) ∫_0^T |Z̃_t|² dt ] ≤ γ² E_x[ (1/2) ∫_0^T |v_t|² dt ] + γ²ε log S(x, T)

(analogous to the H∞ inequality)
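A Monte Carlo sanity check of the robustness inequality in a toy scalar case (b = 0, σ = 1, h(x) = x, constant perturbation v; all parameter values illustrative and not from the slides). In this case the nominal risk-sensitive index has the closed form S(0,T) = cos(T/γ)^{−1/2} for T < γπ/2, since ∫₀^T |Z_t|² dt is then a quadratic functional of Brownian motion.

```python
import math
import random

random.seed(1)
gamma, eps, T, v = 2.0, 0.1, 1.0, 1.0   # illustrative parameters
n_steps, n_paths = 200, 5000
dt = T / n_steps

# Monte Carlo estimate of E[(1/2) INT |Ztilde|^2 dt] for the perturbed system
# dXtilde = v dt + sqrt(eps) dW,  Ztilde = Xtilde
total = 0.0
for _ in range(n_paths):
    x, acc = 0.0, 0.0
    for _ in range(n_steps):
        acc += 0.5 * x * x * dt
        x += v * dt + math.sqrt(eps * dt) * random.gauss(0.0, 1.0)
    total += acc
lhs = total / n_paths

# closed form for the nominal risk-sensitive index: S(0,T) = cos(T/gamma)^(-1/2)
log_S = -0.5 * math.log(math.cos(T / gamma))
rhs = gamma ** 2 * (0.5 * v * v * T) + gamma ** 2 * eps * log_S
print(lhs, rhs)   # the robustness inequality asserts lhs <= rhs
```

The gap is large here because a constant v is far from the worst case; the supremum in the Dupuis-Ellis representation is attained by a feedback perturbation.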

37 Partially observed (output feedback) case

(James-Baras-Elliott, 1994)

Dynamics:

    dx_t = b(x_t, u_t) dt + √ε dw_t
    dỹ_t = h(x_t) dt + √ε dv_t

Objective (γ² = 1/µ):

    J(u) = E[ exp( (µ/ε)( ∫_0^T L(x_t, u_t) dt + Φ(x_T) ) ) ]

38 Information state

σ^{µ,ε}_t(x), the solution of the RS filtering problem (not a conditional probability):

    ⟨σ^{µ,ε}_t, η⟩ = E⁰[ η(x^ε_t) Z^ε_t exp( (µ/ε) ∫_0^t L(x^ε_s, u_s) ds ) | Y_t ]   for all η.

Dynamics (a stochastic PDE):

    dσ^{µ,ε}_t = ( A^{u_t} + (µ/ε) L^{u_t} ) σ^{µ,ε}_t dt + (1/ε) h σ^{µ,ε}_t dỹ_t,   σ^{µ,ε}_0 = ρ in L¹(Rⁿ)

Representation:

    J(u) = E⁰[ ⟨σ^{µ,ε}_T, exp( (µ/ε)Φ )⟩ ]

Linear case: Bensoussan-van Schuppen.

39 Dynamic programming

Value function:

    S^{µ,ε}(σ, t) = inf_{u ∈ Ũ_{t,T}} E⁰_{σ,t}[ ⟨σ^{µ,ε}_T, e^{(µ/ε)Φ}⟩ ]

Dynamic programming equation:

    ∂S^{µ,ε}/∂t + (1/2ε) D²S^{µ,ε}(hσ, hσ) + inf_{u∈U} { DS^{µ,ε}( A^u σ + (µ/ε) L^u σ ) } = 0
    S^{µ,ε}(σ, T) = ⟨σ, e^{(µ/ε)Φ}⟩

The optimal controller u*_t(σ) achieves the min in the d.p.e.: an infinite-dimensional, nonlinear, 2nd-order equation; information state feedback.

40 Small noise limit

Duality between max-plus and large deviations:

    ⟨σ, ν⟩ = ∫_{Rⁿ} σ(x)ν(x) dx,   (p, q) = sup_{x∈Rⁿ} { p(x) + q(x) }

    lim_{ε→0} (ε/µ) log ⟨ e^{(µ/ε)p}, e^{(µ/ε)q} ⟩ = (p, q)
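The duality pairing limit is a Laplace/large-deviations principle: as ε → 0 the integral ⟨e^{(µ/ε)p}, e^{(µ/ε)q}⟩ concentrates at the maximizer of p + q. A numeric sketch with quadratic test functions p, q (assumed for illustration), using a log-sum-exp shift to avoid overflow:

```python
import math

def pairing_limit(eps, mu=1.0, lo=-5.0, hi=5.0, n=200001):
    """(eps/mu) * log INT exp((mu/eps)(p(x)+q(x))) dx on a grid, via log-sum-exp."""
    p = lambda x: -x * x                  # illustrative test functions
    q = lambda x: -(x - 1.0) ** 2 + 1.0   # sup(p + q) = 0.5, attained at x = 0.5
    dx = (hi - lo) / (n - 1)
    f = [(mu / eps) * (p(lo + i * dx) + q(lo + i * dx)) for i in range(n)]
    m = max(f)  # shift exponents so the largest term is exp(0)
    log_int = m + math.log(sum(math.exp(v - m) for v in f) * dx)
    return (eps / mu) * log_int

sup_pq = 0.5
print(pairing_limit(1e-3), sup_pq)
```

The discrepancy is of order ε log ε (the Gaussian width correction), so the scaled log-integral converges to the max-plus pairing as ε shrinks.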

41 Limits

Information state limit:

    lim_{ε→0} (ε/µ) log σ^{µ,ε}_t(x) = p^µ_t(x)

Value function limit:

    lim_{ε→0} (ε/µ) log S^{µ,ε}( e^{(µ/ε)p}, t ) = W^µ(p, t)

Some consequences:
- Output feedback deterministic dynamic games (D-DG) can be solved using analogous, though deterministic, information state methods (cf. max-plus).
- The output feedback nonlinear H∞ control problem can be solved using the D-DG methods.

42 Risk-sensitive control of quantum systems

(James, current)

    system (atom): Hilbert space H, observable X   coupled to   bath (EM field): Fock space Γ, dB, dB†

System: e.g. an atom, which can be controlled; Hilbert space H. Bath: e.g. the electromagnetic field, continuously monitored; Hilbert space Γ (Fock space).

43 Dynamics on H ⊗ Γ [Hudson-Parthasarathy quantum stochastic DE, 1980s]:

    dU(t) = { -K(u(t))dt + L dB†(t) - L† dB(t) } U(t)

with initial condition U(0) = I, where K(u) = iH(u) + (1/2)L†L.

System operators evolve according to X(t) = j_t(u, X) = U†(t)(X ⊗ I)U(t), i.e.

    dX(t) + ( X(t)K(t) + K†(t)X(t) - L†(t)X(t)L(t) )dt = [X(t), L(t)]dB†(t) - [X(t), L†(t)]dB(t)

where L(t) = j_t(u, L), K(t) = j_t(u, K(u(t))).

44 Measurements of the real quadrature of the field: Q(t) = B(t) + B†(t),

    dY(t) = ( L(t) + L†(t) )dt + dQ(t)

(Y: output field; Q: input field.)

Risk-sensitive cost [James, 2004]:

    J^µ = ⟨ π₀ ⊗ vv†, R†(T) e^{µ C₂(T)} R(T) ⟩,   ⟨A, B⟩ = tr(A†B)

where

    dR(t)/dt = (µ/2) C₁(t) R(t),   C₁(t) = j_t(u, C₁(u(t))),   C₂(t) = j_t(u, C₂)

π₀ = initial system state; vv† = field vacuum state.

45 Stochastic representation:

    J^µ = E⁰[ ⟨σ^µ_T, e^{µC₂}⟩ ]

where P⁰ is standard Wiener measure, and the information state (an unnormalized density operator) satisfies

    dσ^µ_t + [ K σ^µ_t + σ^µ_t K† - L σ^µ_t L† ]dt = (µ/2)[ C₁ σ^µ_t + σ^µ_t C₁ ]dt + [ L σ^µ_t + σ^µ_t L† ] dY(t)

[modified stochastic master equation], with normalized state

    π^µ_t = σ^µ_t / ⟨σ^µ_t, 1⟩

[this risk-sensitive state is new to physics].

When µ → 0 we recover the Belavkin quantum filtering equation (stochastic master equation) of the 1980s [unnormalized state σ_t, normalized state π_t = σ_t / ⟨σ_t, 1⟩].

46 The dynamic programming equation is, formally,

    ∂S^µ/∂t(σ, t) + inf_{u∈U} L^{µ;u} S^µ(σ, t) = 0,   0 ≤ t < T
    S^µ(σ, T) = ⟨σ, e^{µC₂}⟩

Some related current work:
- Discrete time (James 2004)
- Robustness interpretation, discrete time (James-Petersen 2004)
- Robustness, linear/quadratic case (Doherty, d'Helon, James, Petersen, Wilson)

47 References

[1] T. Basar and P. Bernhard. H∞ Optimal Control and Related Minimax Design Problems. Birkhauser, Boston.
[2] A. Bensoussan and J. H. van Schuppen. Optimal control of partially observable stochastic systems with an exponential-of-integral performance index. SIAM Journal on Control and Optimization, 23.
[3] P. Dupuis and R.S. Ellis. A Weak Convergence Approach to the Theory of Large Deviations. Wiley, New York, 1997.
[4] W.H. Fleming and W.M. McEneaney. Risk-sensitive control on an infinite time horizon. SIAM J. Control and Optimization, 33.
[5] K. Glover and J. Doyle. State space formulae for all stabilizing controllers that satisfy an H∞ norm bound and relations to risk-sensitivity. Systems and Control Letters, 11, 1988.
[6] J.W. Helton and M.R. James. Extending H∞ Control to Nonlinear Systems: Control of Nonlinear Systems to Achieve Performance Objectives, volume 1 of Advances in Design and Control. SIAM, Philadelphia.
[7] D.H. Jacobson. Optimal stochastic linear systems with exponential performance criteria and their relation to deterministic differential games. IEEE Trans. Aut. Control, 18(2), 1973.
[8] M.R. James, J.S. Baras, and R.J. Elliott. Risk-sensitive control and dynamic games for partially observed discrete-time nonlinear systems. IEEE Trans. Automatic Control, 39(4), 1994.
[9] M.R. James. Asymptotic analysis of nonlinear stochastic risk-sensitive control and differential games. Mathematics of Control, Signals and Systems, 5(4), 1992.
[10] M.R. James. Risk-sensitive optimal control of quantum systems. Phys. Rev. A, 69:032108, 2004.
[11] P. Dai Pra, L. Meneghini, and W.J. Runggaldier. Connections between stochastic control and dynamic games. Math. Control, Signals and Systems, 9(4).
[12] A.J. van der Schaft. L₂-Gain and Passivity Techniques in Nonlinear Control. Springer Verlag, New York.
[13] P. Whittle. Risk-sensitive linear quadratic Gaussian control. Advances in Applied Probability, 13, 1981.
[14] G. Zames. On the input-output stability of time-varying nonlinear feedback systems. Part I: Conditions derived using concepts of loop gain, conicity, and positivity. Part II: Conditions involving circles in the frequency plane and sector nonlinearities. IEEE Trans. Aut. Control, 11.
[15] G. Zames. Feedback and optimal sensitivity: Model reference transformation, multiplicative seminorms and approximate inverses. IEEE Trans. Aut. Control, 26.
[16] C.D. Charalambous and J.L. Hibey. Minimum principle for partially observable nonlinear risk-sensitive control problems using measure-valued decompositions. Stochastics and Stochastics Reports, 57(3-4), 1996.
[17] W.H. Fleming and M.R. James. The risk-sensitive index and the H₂ and H∞ norms for nonlinear systems. Mathematics of Control, Signals, and Systems, 8(3), 1995.
[18] H. Nagai. Bellman equation of risk-sensitive control. SIAM J. Control Optim., 34(1), 1996.
[19] M.G. Crandall, L.C. Evans and P.L. Lions. Some properties of viscosity solutions of Hamilton-Jacobi equations. Trans. AMS, 282(2), 1984.
[20] M.G. Crandall, H. Ishii and P.L. Lions. User's guide to viscosity solutions of second order partial differential equations. CEREMADE Report No. 9039, 1990.
[21] P. Dupuis, M.R. James, and I.R. Petersen. Robust properties of risk-sensitive control. Math. Control, Systems and Signals, 13, 2000.
[22] M. Bardi and I. Capuzzo-Dolcetta. Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations. Birkhauser, Boston.
[23] W.H. Fleming and H.M. Soner. Controlled Markov Processes and Viscosity Solutions. Springer, New York.


More information

Calculating the domain of attraction: Zubov s method and extensions

Calculating the domain of attraction: Zubov s method and extensions Calculating the domain of attraction: Zubov s method and extensions Fabio Camilli 1 Lars Grüne 2 Fabian Wirth 3 1 University of L Aquila, Italy 2 University of Bayreuth, Germany 3 Hamilton Institute, NUI

More information

On the stability of receding horizon control with a general terminal cost

On the stability of receding horizon control with a general terminal cost On the stability of receding horizon control with a general terminal cost Ali Jadbabaie and John Hauser Abstract We study the stability and region of attraction properties of a family of receding horizon

More information

Asymptotic Perron Method for Stochastic Games and Control

Asymptotic Perron Method for Stochastic Games and Control Asymptotic Perron Method for Stochastic Games and Control Mihai Sîrbu, The University of Texas at Austin Methods of Mathematical Finance a conference in honor of Steve Shreve s 65th birthday Carnegie Mellon

More information

Weak convergence and large deviation theory

Weak convergence and large deviation theory First Prev Next Go To Go Back Full Screen Close Quit 1 Weak convergence and large deviation theory Large deviation principle Convergence in distribution The Bryc-Varadhan theorem Tightness and Prohorov

More information

Solution of Stochastic Optimal Control Problems and Financial Applications

Solution of Stochastic Optimal Control Problems and Financial Applications Journal of Mathematical Extension Vol. 11, No. 4, (2017), 27-44 ISSN: 1735-8299 URL: http://www.ijmex.com Solution of Stochastic Optimal Control Problems and Financial Applications 2 Mat B. Kafash 1 Faculty

More information

Dynamical systems with Gaussian and Levy noise: analytical and stochastic approaches

Dynamical systems with Gaussian and Levy noise: analytical and stochastic approaches Dynamical systems with Gaussian and Levy noise: analytical and stochastic approaches Noise is often considered as some disturbing component of the system. In particular physical situations, noise becomes

More information

Characterizing controllability probabilities of stochastic control systems via Zubov s method

Characterizing controllability probabilities of stochastic control systems via Zubov s method Characterizing controllability probabilities of stochastic control systems via Zubov s method Fabio Camilli Lars Grüne Fabian Wirth, Keywords: Stochastic differential equations, stochastic control system,

More information

Shape from Shading with discontinuous image brightness

Shape from Shading with discontinuous image brightness Shape from Shading with discontinuous image brightness Fabio Camilli Dip. di Matematica Pura e Applicata, Università dell Aquila (Italy), camilli@ing.univaq.it Emmanuel Prados UCLA Vision Lab. (USA), eprados@cs.ucla.edu

More information

A Model of Optimal Portfolio Selection under. Liquidity Risk and Price Impact

A Model of Optimal Portfolio Selection under. Liquidity Risk and Price Impact A Model of Optimal Portfolio Selection under Liquidity Risk and Price Impact Huyên PHAM Workshop on PDE and Mathematical Finance KTH, Stockholm, August 15, 2005 Laboratoire de Probabilités et Modèles Aléatoires

More information

An Uncertain Control Model with Application to. Production-Inventory System

An Uncertain Control Model with Application to. Production-Inventory System An Uncertain Control Model with Application to Production-Inventory System Kai Yao 1, Zhongfeng Qin 2 1 Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China 2 School of Economics

More information

Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations

Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations Martino Bardi Italo Capuzzo-Dolcetta Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations Birkhauser Boston Basel Berlin Contents Preface Basic notations xi xv Chapter I. Outline

More information

A generalization of Zubov s method to perturbed systems

A generalization of Zubov s method to perturbed systems A generalization of Zubov s method to perturbed systems Fabio Camilli, Dipartimento di Matematica Pura e Applicata, Università dell Aquila 674 Roio Poggio (AQ), Italy, camilli@ing.univaq.it Lars Grüne,

More information

Mean Field Games on networks

Mean Field Games on networks Mean Field Games on networks Claudio Marchi Università di Padova joint works with: S. Cacace (Rome) and F. Camilli (Rome) C. Marchi (Univ. of Padova) Mean Field Games on networks Roma, June 14 th, 2017

More information

Linear-Quadratic-Gaussian (LQG) Controllers and Kalman Filters

Linear-Quadratic-Gaussian (LQG) Controllers and Kalman Filters Linear-Quadratic-Gaussian (LQG) Controllers and Kalman Filters Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 204 Emo Todorov (UW) AMATH/CSE 579, Winter

More information

I R TECHNICAL RESEARCH REPORT. Risk-Sensitive Probability for Markov Chains. by Vahid R. Ramezani and Steven I. Marcus TR

I R TECHNICAL RESEARCH REPORT. Risk-Sensitive Probability for Markov Chains. by Vahid R. Ramezani and Steven I. Marcus TR TECHNICAL RESEARCH REPORT Risk-Sensitive Probability for Markov Chains by Vahid R. Ramezani and Steven I. Marcus TR 2002-46 I R INSTITUTE FOR SYSTEMS RESEARCH ISR develops, applies and teaches advanced

More information

INFINITE TIME HORIZON OPTIMAL CONTROL OF THE SEMILINEAR HEAT EQUATION

INFINITE TIME HORIZON OPTIMAL CONTROL OF THE SEMILINEAR HEAT EQUATION Nonlinear Funct. Anal. & Appl., Vol. 7, No. (22), pp. 69 83 INFINITE TIME HORIZON OPTIMAL CONTROL OF THE SEMILINEAR HEAT EQUATION Mihai Sîrbu Abstract. We consider here the infinite horizon control problem

More information

AN OVERVIEW OF STATIC HAMILTON-JACOBI EQUATIONS. 1. Introduction

AN OVERVIEW OF STATIC HAMILTON-JACOBI EQUATIONS. 1. Introduction AN OVERVIEW OF STATIC HAMILTON-JACOBI EQUATIONS JAMES C HATELEY Abstract. There is a voluminous amount of literature on Hamilton-Jacobi equations. This paper reviews some of the existence and uniqueness

More information

Further Results on the Bellman Equation for Optimal Control Problems with Exit Times and Nonnegative Instantaneous Costs

Further Results on the Bellman Equation for Optimal Control Problems with Exit Times and Nonnegative Instantaneous Costs Further Results on the Bellman Equation for Optimal Control Problems with Exit Times and Nonnegative Instantaneous Costs Michael Malisoff Systems Science and Mathematics Dept. Washington University in

More information

Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term

Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term 1 Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term Enrico Priola Torino (Italy) Joint work with G. Da Prato, F. Flandoli and M. Röckner Stochastic Processes

More information

Controlled Markov Processes and Viscosity Solutions

Controlled Markov Processes and Viscosity Solutions Controlled Markov Processes and Viscosity Solutions Wendell H. Fleming, H. Mete Soner Controlled Markov Processes and Viscosity Solutions Second Edition Wendell H. Fleming H.M. Soner Div. Applied Mathematics

More information

A MAXIMUM PRINCIPLE FOR SEMICONTINUOUS FUNCTIONS APPLICABLE TO INTEGRO-PARTIAL DIFFERENTIAL EQUATIONS

A MAXIMUM PRINCIPLE FOR SEMICONTINUOUS FUNCTIONS APPLICABLE TO INTEGRO-PARTIAL DIFFERENTIAL EQUATIONS Dept. of Math. University of Oslo Pure Mathematics ISBN 82 553 1382 6 No. 18 ISSN 0806 2439 May 2003 A MAXIMUM PRINCIPLE FOR SEMICONTINUOUS FUNCTIONS APPLICABLE TO INTEGRO-PARTIAL DIFFERENTIAL EQUATIONS

More information

The Convergence of the Minimum Energy Estimator

The Convergence of the Minimum Energy Estimator The Convergence of the Minimum Energy Estimator Arthur J. Krener Department of Mathematics, University of California, Davis, CA 95616-8633, USA, ajkrener@ucdavis.edu Summary. We show that under suitable

More information

An iterative procedure for constructing subsolutions of discrete-time optimal control problems

An iterative procedure for constructing subsolutions of discrete-time optimal control problems An iterative procedure for constructing subsolutions of discrete-time optimal control problems Markus Fischer version of November, 2011 Abstract An iterative procedure for constructing subsolutions of

More information

Numerical approximation for optimal control problems via MPC and HJB. Giulia Fabrini

Numerical approximation for optimal control problems via MPC and HJB. Giulia Fabrini Numerical approximation for optimal control problems via MPC and HJB Giulia Fabrini Konstanz Women In Mathematics 15 May, 2018 G. Fabrini (University of Konstanz) Numerical approximation for OCP 1 / 33

More information

A Narrow-Stencil Finite Difference Method for Hamilton-Jacobi-Bellman Equations

A Narrow-Stencil Finite Difference Method for Hamilton-Jacobi-Bellman Equations A Narrow-Stencil Finite Difference Method for Hamilton-Jacobi-Bellman Equations Xiaobing Feng Department of Mathematics The University of Tennessee, Knoxville, U.S.A. Linz, November 23, 2016 Collaborators

More information

A probabilistic approach to second order variational inequalities with bilateral constraints

A probabilistic approach to second order variational inequalities with bilateral constraints Proc. Indian Acad. Sci. (Math. Sci.) Vol. 113, No. 4, November 23, pp. 431 442. Printed in India A probabilistic approach to second order variational inequalities with bilateral constraints MRINAL K GHOSH,

More information

Markov Chain BSDEs and risk averse networks

Markov Chain BSDEs and risk averse networks Markov Chain BSDEs and risk averse networks Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) 2nd Young Researchers in BSDEs

More information

Zubov s method for perturbed differential equations

Zubov s method for perturbed differential equations Zubov s method for perturbed differential equations Fabio Camilli 1, Lars Grüne 2, Fabian Wirth 3 1 Dip. di Energetica, Fac. di Ingegneria Università de l Aquila, 674 Roio Poggio (AQ), Italy camilli@axcasp.caspur.it

More information

Exam February h

Exam February h Master 2 Mathématiques et Applications PUF Ho Chi Minh Ville 2009/10 Viscosity solutions, HJ Equations and Control O.Ley (INSA de Rennes) Exam February 2010 3h Written-by-hands documents are allowed. Printed

More information

Large deviations and averaging for systems of slow fast stochastic reaction diffusion equations.

Large deviations and averaging for systems of slow fast stochastic reaction diffusion equations. Large deviations and averaging for systems of slow fast stochastic reaction diffusion equations. Wenqing Hu. 1 (Joint work with Michael Salins 2, Konstantinos Spiliopoulos 3.) 1. Department of Mathematics

More information

mü/wm 9rwsm TITLE AND SUBTITLE COMPUTATIONAL NONLINEAR CONTROL

mü/wm 9rwsm TITLE AND SUBTITLE COMPUTATIONAL NONLINEAR CONTROL REPORT DOCUMENTATION PAGE Form Approved OMB No. 0704-0188 Public reporting burden for this collection of information is estimated to average l hour per response, including the time for reviewing instructions,

More information

Numerical methods for Mean Field Games: additional material

Numerical methods for Mean Field Games: additional material Numerical methods for Mean Field Games: additional material Y. Achdou (LJLL, Université Paris-Diderot) July, 2017 Luminy Convergence results Outline 1 Convergence results 2 Variational MFGs 3 A numerical

More information

Robust Control of Linear Quantum Systems

Robust Control of Linear Quantum Systems B. Ross Barmish Workshop 2009 1 Robust Control of Linear Quantum Systems Ian R. Petersen School of ITEE, University of New South Wales @ the Australian Defence Force Academy, Based on joint work with Matthew

More information

Prof. Erhan Bayraktar (University of Michigan)

Prof. Erhan Bayraktar (University of Michigan) September 17, 2012 KAP 414 2:15 PM- 3:15 PM Prof. (University of Michigan) Abstract: We consider a zero-sum stochastic differential controller-and-stopper game in which the state process is a controlled

More information

= m(0) + 4e 2 ( 3e 2 ) 2e 2, 1 (2k + k 2 ) dt. m(0) = u + R 1 B T P x 2 R dt. u + R 1 B T P y 2 R dt +

= m(0) + 4e 2 ( 3e 2 ) 2e 2, 1 (2k + k 2 ) dt. m(0) = u + R 1 B T P x 2 R dt. u + R 1 B T P y 2 R dt + ECE 553, Spring 8 Posted: May nd, 8 Problem Set #7 Solution Solutions: 1. The optimal controller is still the one given in the solution to the Problem 6 in Homework #5: u (x, t) = p(t)x k(t), t. The minimum

More information

LMI Methods in Optimal and Robust Control

LMI Methods in Optimal and Robust Control LMI Methods in Optimal and Robust Control Matthew M. Peet Arizona State University Lecture 15: Nonlinear Systems and Lyapunov Functions Overview Our next goal is to extend LMI s and optimization to nonlinear

More information

Balancing of Lossless and Passive Systems

Balancing of Lossless and Passive Systems Balancing of Lossless and Passive Systems Arjan van der Schaft Abstract Different balancing techniques are applied to lossless nonlinear systems, with open-loop balancing applied to their scattering representation.

More information

HAMILTON-JACOBI EQUATIONS : APPROXIMATIONS, NUMERICAL ANALYSIS AND APPLICATIONS. CIME Courses-Cetraro August 29-September COURSES

HAMILTON-JACOBI EQUATIONS : APPROXIMATIONS, NUMERICAL ANALYSIS AND APPLICATIONS. CIME Courses-Cetraro August 29-September COURSES HAMILTON-JACOBI EQUATIONS : APPROXIMATIONS, NUMERICAL ANALYSIS AND APPLICATIONS CIME Courses-Cetraro August 29-September 3 2011 COURSES (1) Models of mean field, Hamilton-Jacobi-Bellman Equations and numerical

More information

The Important State Coordinates of a Nonlinear System

The Important State Coordinates of a Nonlinear System The Important State Coordinates of a Nonlinear System Arthur J. Krener 1 University of California, Davis, CA and Naval Postgraduate School, Monterey, CA ajkrener@ucdavis.edu Summary. We offer an alternative

More information

Optimal Control Using an Algebraic Method for Control-Affine Nonlinear Systems. Abstract

Optimal Control Using an Algebraic Method for Control-Affine Nonlinear Systems. Abstract Optimal Control Using an Algebraic Method for Control-Affine Nonlinear Systems Chang-Hee Won and Saroj Biswas Department of Electrical and Computer Engineering Temple University 1947 N. 12th Street Philadelphia,

More information

Nonlinear Control Systems

Nonlinear Control Systems Nonlinear Control Systems António Pedro Aguiar pedro@isr.ist.utl.pt 3. Fundamental properties IST-DEEC PhD Course http://users.isr.ist.utl.pt/%7epedro/ncs2012/ 2012 1 Example Consider the system ẋ = f

More information

L -Bounded Robust Control of Nonlinear Cascade Systems

L -Bounded Robust Control of Nonlinear Cascade Systems L -Bounded Robust Control of Nonlinear Cascade Systems Shoudong Huang M.R. James Z.P. Jiang August 19, 2004 Accepted by Systems & Control Letters Abstract In this paper, we consider the L -bounded robust

More information

Deterministic Minimax Impulse Control

Deterministic Minimax Impulse Control Deterministic Minimax Impulse Control N. El Farouq, Guy Barles, and P. Bernhard March 02, 2009, revised september, 15, and october, 21, 2009 Abstract We prove the uniqueness of the viscosity solution of

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

HJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011

HJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011 Department of Probability and Mathematical Statistics Faculty of Mathematics and Physics, Charles University in Prague petrasek@karlin.mff.cuni.cz Seminar in Stochastic Modelling in Economics and Finance

More information

Nonlinear Control. Nonlinear Control Lecture # 3 Stability of Equilibrium Points

Nonlinear Control. Nonlinear Control Lecture # 3 Stability of Equilibrium Points Nonlinear Control Lecture # 3 Stability of Equilibrium Points The Invariance Principle Definitions Let x(t) be a solution of ẋ = f(x) A point p is a positive limit point of x(t) if there is a sequence

More information

On the infinity Laplace operator

On the infinity Laplace operator On the infinity Laplace operator Petri Juutinen Köln, July 2008 The infinity Laplace equation Gunnar Aronsson (1960 s): variational problems of the form S(u, Ω) = ess sup H (x, u(x), Du(x)). (1) x Ω The

More information

AN ASYMPTOTIC MEAN VALUE CHARACTERIZATION FOR p-harmonic FUNCTIONS. To the memory of our friend and colleague Fuensanta Andreu

AN ASYMPTOTIC MEAN VALUE CHARACTERIZATION FOR p-harmonic FUNCTIONS. To the memory of our friend and colleague Fuensanta Andreu AN ASYMPTOTIC MEAN VALUE CHARACTERIZATION FOR p-harmonic FUNCTIONS JUAN J. MANFREDI, MIKKO PARVIAINEN, AND JULIO D. ROSSI Abstract. We characterize p-harmonic functions in terms of an asymptotic mean value

More information

Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems

Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems p. 1/5 Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems p. 2/5 Time-varying Systems ẋ = f(t, x) f(t, x) is piecewise continuous in t and locally Lipschitz in x for all t

More information

Dynamical properties of Hamilton Jacobi equations via the nonlinear adjoint method: Large time behavior and Discounted approximation

Dynamical properties of Hamilton Jacobi equations via the nonlinear adjoint method: Large time behavior and Discounted approximation Dynamical properties of Hamilton Jacobi equations via the nonlinear adjoint method: Large time behavior and Discounted approximation Hiroyoshi Mitake 1 Institute of Engineering, Division of Electrical,

More information

High-Gain Observers in Nonlinear Feedback Control

High-Gain Observers in Nonlinear Feedback Control High-Gain Observers in Nonlinear Feedback Control Lecture # 1 Introduction & Stabilization High-Gain ObserversinNonlinear Feedback ControlLecture # 1Introduction & Stabilization p. 1/4 Brief History Linear

More information

Controlled Diffusions and Hamilton-Jacobi Bellman Equations

Controlled Diffusions and Hamilton-Jacobi Bellman Equations Controlled Diffusions and Hamilton-Jacobi Bellman Equations Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2014 Emo Todorov (UW) AMATH/CSE 579, Winter

More information

STOCHASTIC PERRON S METHOD AND VERIFICATION WITHOUT SMOOTHNESS USING VISCOSITY COMPARISON: OBSTACLE PROBLEMS AND DYNKIN GAMES

STOCHASTIC PERRON S METHOD AND VERIFICATION WITHOUT SMOOTHNESS USING VISCOSITY COMPARISON: OBSTACLE PROBLEMS AND DYNKIN GAMES STOCHASTIC PERRON S METHOD AND VERIFICATION WITHOUT SMOOTHNESS USING VISCOSITY COMPARISON: OBSTACLE PROBLEMS AND DYNKIN GAMES ERHAN BAYRAKTAR AND MIHAI SÎRBU Abstract. We adapt the Stochastic Perron s

More information

State-Feedback Optimal Controllers for Deterministic Nonlinear Systems

State-Feedback Optimal Controllers for Deterministic Nonlinear Systems State-Feedback Optimal Controllers for Deterministic Nonlinear Systems Chang-Hee Won*, Abstract A full-state feedback optimal control problem is solved for a general deterministic nonlinear system. The

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information