MEAN FIELD GAMES WITH MAJOR AND MINOR PLAYERS


1 MEAN FIELD GAMES WITH MAJOR AND MINOR PLAYERS René Carmona Department of Operations Research & Financial Engineering PACM Princeton University CEMRACS - Luminy, July 17, 2017

2 MFG WITH MAJOR AND MINOR PLAYERS SET-UP R.C. - G. Zhu, R.C. - P. Wang

State equations

  dX^0_t = b_0(t, X^0_t, µ_t, α^0_t) dt + σ_0(t, X^0_t, µ_t, α^0_t) dW^0_t
  dX_t = b(t, X_t, µ_t, X^0_t, α_t, α^0_t) dt + σ(t, X_t, µ_t, X^0_t, α_t, α^0_t) dW_t

Costs

  J^0(α^0, α) = E[ ∫_0^T f_0(t, X^0_t, µ_t, α^0_t) dt + g_0(X^0_T, µ_T) ]
  J(α^0, α) = E[ ∫_0^T f(t, X_t, µ_t, X^0_t, α_t, α^0_t) dt + g(X_T, µ_T) ]
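As a quick numerical illustration of this set-up, the coupled system can be simulated by approximating the flow µ_t with the empirical distribution of N minor players. The scalar mean-reverting coefficients and the zero controls below are made-up toy choices, not the model of the slides:

```python
import numpy as np

def simulate_major_minor(N=200, T=1.0, n_steps=100, seed=0):
    """Euler-Maruyama sketch of the major/minor system: the flow mu_t is
    approximated by the empirical mean of the N minor players (toy scalar
    linear coefficients and zero controls -- purely illustrative)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x0 = 0.0                      # major player state X^0_t
    x = np.zeros(N)               # minor player states X^i_t
    for _ in range(n_steps):
        m = x.mean()              # proxy for the mean of mu_t
        # assumed b_0(t, X^0, mu, a^0) = -X^0 + m and sigma_0 = 0.3
        x0 += (-x0 + m) * dt + 0.3 * np.sqrt(dt) * rng.standard_normal()
        # assumed b(t, X, mu, X^0, a, a^0) = -(X - X^0) and sigma = 0.3
        x += -(x - x0) * dt + 0.3 * np.sqrt(dt) * rng.standard_normal(N)
    return x0, x

x0, x = simulate_major_minor()
```

The empirical mean plays the role of the conditional law µ_t; propagation of chaos justifies this substitution only in the N → ∞ limit.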

3 OPEN LOOP VERSION OF THE MFG PROBLEM The controls used by the major player and the representative minor player are of the form:

  α^0_t = φ^0(t, W^0_[0,T]),  and  α_t = φ(t, W^0_[0,T], W_[0,T]),   (1)

for deterministic progressively measurable functions φ^0 : [0,T] × C([0,T]; R^d) → A^0 and φ : [0,T] × C([0,T]; R^d) × C([0,T]; R^d) → A.

4 THE MAJOR PLAYER BEST RESPONSE Assume the representative minor player uses the open loop control given by φ : (t, w^0, w) ↦ φ(t, w^0, w). The major player minimizes

  J^{φ,0}(α^0) = E[ ∫_0^T f_0(t, X^0_t, µ_t, α^0_t) dt + g_0(X^0_T, µ_T) ]

under the dynamical constraints:

  dX^0_t = b_0(t, X^0_t, µ_t, α^0_t) dt + σ_0(t, X^0_t, µ_t, α^0_t) dW^0_t
  dX_t = b(t, X_t, µ_t, X^0_t, φ(t, W^0_[0,T], W_[0,T]), α^0_t) dt + σ(t, X_t, µ_t, X^0_t, φ(t, W^0_[0,T], W_[0,T]), α^0_t) dW_t
  µ_t = L(X_t | W^0_[0,t]), the conditional distribution of X_t given W^0_[0,t].

Major player problem as the search for:

  φ^{0,*}(φ) = arg inf_{α^0_t = φ^0(t, W^0_[0,T])} J^{φ,0}(α^0)   (2)

Optimal control of the conditional McKean-Vlasov type!

5 THE REP. MINOR PLAYER BEST RESPONSE The system against which the best response is sought comprises a major player and a field of minor players different from the representative minor player. The major player uses the strategy α^0_t = φ^0(t, W^0_[0,T]); the representative of the field of minor players uses the strategy α_t = φ(t, W^0_[0,T], W_[0,T]).

State dynamics

  dX^0_t = b_0(t, X^0_t, µ_t, φ^0(t, W^0_[0,T])) dt + σ_0(t, X^0_t, µ_t, φ^0(t, W^0_[0,T])) dW^0_t
  dX_t = b(t, X_t, µ_t, X^0_t, φ(t, W^0_[0,T], W_[0,T]), φ^0(t, W^0_[0,T])) dt + σ(t, X_t, µ_t, X^0_t, φ(t, W^0_[0,T], W_[0,T]), φ^0(t, W^0_[0,T])) dW_t,

where µ_t = L(X_t | W^0_[0,t]) is the conditional distribution of X_t given W^0_[0,t]. Given φ^0 and φ, this is an SDE of (conditional) McKean-Vlasov type.

6 THE REP. MINOR PLAYER BEST RESPONSE (CONT.) The representative minor player chooses a strategy ᾱ_t = φ̄(t, W^0_[0,T], W̄_[0,T]) to minimize

  J^{φ^0,φ}(ᾱ) = E[ ∫_0^T f(t, X̄_t, µ_t, X^0_t, ᾱ_t, φ^0(t, W^0_[0,T])) dt + g(X̄_T, µ_T) ],

where the dynamics of the virtual state X̄_t are given by:

  dX̄_t = b(t, X̄_t, µ_t, X^0_t, φ̄(t, W^0_[0,T], W̄_[0,T]), φ^0(t, W^0_[0,T])) dt + σ(t, X̄_t, µ_t, X^0_t, φ̄(t, W^0_[0,T], W̄_[0,T]), φ^0(t, W^0_[0,T])) dW̄_t,

for a Wiener process W̄ = (W̄_t)_{0 ≤ t ≤ T} independent of the other Wiener processes. This optimization problem is NOT of McKean-Vlasov type: it is a classical optimal control problem with random coefficients.

  φ^*(φ^0, φ) = arg inf_{ᾱ_t = φ̄(t, W^0_[0,T], W̄_[0,T])} J^{φ^0,φ}(ᾱ)

7 NASH EQUILIBRIUM Search for a fixed point of the best response map:

  (φ̂^0, φ̂) = ( φ^{0,*}(φ̂), φ^*(φ̂^0, φ̂) ).

Fixed point in a space of controls, not measures!

8 CLOSED LOOP VERSIONS OF THE MFG PROBLEM

Closed Loop Version. Controls of the major player and the representative minor player are of the form:

  α^0_t = φ^0(t, X^0_[0,T], µ_t),  and  α_t = φ(t, X^0_[0,T], µ_t, X_[0,T]),

for deterministic progressively measurable functions φ^0 : [0,T] × C([0,T]; R^d) × P_2(R^d) → A^0 and φ : [0,T] × C([0,T]; R^d) × P_2(R^d) × C([0,T]; R^d) → A.

Markovian Version. Controls of the major player and the representative minor player are of the form:

  α^0_t = φ^0(t, X^0_t, µ_t),  and  α_t = φ(t, X^0_t, µ_t, X_t),

for deterministic feedback functions φ^0 : [0,T] × R^d × P_2(R^d) → A^0 and φ : [0,T] × R^d × P_2(R^d) × R^d → A.

9 NASH EQUILIBRIUM Search for a fixed point of the best response map:

  (φ̂^0, φ̂) = ( φ^{0,*}(φ̂), φ^*(φ̂^0, φ̂) ).

10 CONTRACT THEORY: A STACKELBERG VERSION R.C. - D. Possamaï - N. Touzi

State equation

  dX_t = σ(t, X_t, ν_t, α_t)[ λ(t, X_t, ν_t, α_t) dt + dW_t ],

where X_t is the agent's output, α_t the agent's effort (control), and ν_t the distribution of the output and effort (control) of the agent.

Rewards

  J^0(ξ) = E[ U_P(X_[0,T], ν_T, ξ) ]
  J(ξ, α) = E[ ∫_0^T f(t, X_t, ν_t, α_t) dt + U_A(ξ) ].

Given the choice of a contract ξ by the Principal, each agent in the field of exchangeable agents chooses an effort level α_t and meets his/her reservation price; this puts the field of agents in a (MF) Nash equilibrium. The Principal chooses the contract to maximize his/her expected utility.

11 LINEAR QUADRATIC MODELS

State dynamics

  dX^0_t = (L_0 X^0_t + B_0 α^0_t + F_0 X̄_t) dt + D_0 dW^0_t
  dX_t = (L X_t + B α_t + F X̄_t + G X^0_t) dt + D dW_t

where X̄_t = E[X_t | F^0_t], with (F^0_t)_t the filtration generated by W^0.

Costs

  J^0(α^0, α) = E ∫_0^T [ (X^0_t − H_0 X̄_t − η_0)ᵀ Q_0 (X^0_t − H_0 X̄_t − η_0) + (α^0_t)ᵀ R_0 α^0_t ] dt
  J(α^0, α) = E ∫_0^T [ (X_t − H X^0_t − H_1 X̄_t − η)ᵀ Q (X_t − H X^0_t − H_1 X̄_t − η) + α_tᵀ R α_t ] dt

in which Q_0, Q, R_0, R are symmetric matrices, and R_0, R are assumed to be positive definite.

12 EQUILIBRIA

Open Loop Version. Optimization problems + fixed point = large FBSDE; the affine FBSDE is solved by a large matrix Riccati equation.

Closed Loop Version. The fixed point step is more difficult, and the search is limited to controls of the form

  α^0_t = φ^0(t, X^0_t, X̄_t) = φ^0_0(t) + φ^0_1(t) X^0_t + φ^0_2(t) X̄_t
  α_t = φ(t, X_t, X^0_t, X̄_t) = φ_0(t) + φ_1(t) X_t + φ_2(t) X^0_t + φ_3(t) X̄_t

Optimization problems + fixed point = large FBSDE; the affine FBSDE is again solved by a large matrix Riccati equation.

The open loop and closed loop solutions are not the same!
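The reduction to a matrix Riccati equation can be illustrated numerically. The sketch below integrates a generic LQR-type Riccati ODE backward from a zero terminal condition with RK4; the block-matrix Riccati equation of the major/minor LQ model has the same structure but larger, model-specific coefficient matrices which are not reproduced here. The double-integrator matrices at the bottom are made-up test data:

```python
import numpy as np

def solve_riccati(A, B, Q, R, T=1.0, n_steps=1000):
    """Backward RK4 integration of the matrix Riccati ODE
         P'(t) = -(A'P + PA - P B R^{-1} B' P + Q),   P(T) = 0,
    the kind of equation the affine FBSDEs of the LQ models reduce to."""
    def f(P):
        return -(A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q)
    dt = T / n_steps
    P = np.zeros_like(Q)          # terminal condition P(T) = 0 (assumed)
    for _ in range(n_steps):      # step from t = T down to t = 0
        k1 = f(P)
        k2 = f(P - 0.5 * dt * k1)
        k3 = f(P - 0.5 * dt * k2)
        k4 = f(P - dt * k3)
        P = P - dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return P

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # toy double integrator
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P0 = solve_riccati(A, B, Q, R)
```

The returned P0 = P(0) stays symmetric positive semidefinite along the backward flow, which is the sanity check one would also run on the large block Riccati system.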

13 APPLICATION TO BEE SWARMING

V^{0,N}_t: velocity of the (major player) streaker bee at time t. V^{i,N}_t: velocity of the i-th worker bee, i = 1, …, N, at time t.

Linear dynamics

  dV^{0,N}_t = α^0_t dt + Σ_0 dW^0_t
  dV^{i,N}_t = α^i_t dt + Σ dW^i_t

Minimization of quadratic costs

  J^0 = E ∫_0^T ( λ_0 |V^{0,N}_t − ν_t|^2 + λ_1 |V^{0,N}_t − V̄^N_t|^2 + (1 − λ_0 − λ_1) |α^0_t|^2 ) dt

where V̄^N_t := (1/N) Σ_{i=1}^N V^{i,N}_t is the average velocity of the followers, [0,T] ∋ t ↦ ν_t ∈ R^d is a deterministic function (the leader's free will), and λ_0 and λ_1 are positive real numbers satisfying λ_0 + λ_1 ≤ 1.

  J^i = E ∫_0^T ( l_0 |V^{i,N}_t − V^{0,N}_t|^2 + l_1 |V^{i,N}_t − V̄^N_t|^2 + (1 − l_0 − l_1) |α^i_t|^2 ) dt

with l_0, l_1 ≥ 0 and l_0 + l_1 ≤ 1.

14 SAMPLE TRAJECTORIES IN EQUILIBRIUM ν(t) := [2π sin(2πt), 2π cos(2πt)]
FIGURE: Optimal velocity and trajectory of the followers and the leader. Left panel: k_0 = .8, k_1 = .19, l_0 = .19, l_1 = .8. Right panel: k_0 = .8, k_1 = .19, l_0 = .8.

15 SAMPLE TRAJECTORIES IN EQUILIBRIUM ν(t) := [2π sin(2πt), 2π cos(2πt)]
FIGURE: Optimal velocity and trajectory of the followers and the leader. Left panel: k_0 = .19, k_1 = .8, l_0 = .19, l_1 = .8. Right panel: k_0 = .19, k_1 = .8, l_0 = .8.

16 CONDITIONAL PROPAGATION OF CHAOS
FIGURE: Conditional correlation of 5 followers' velocities, for increasing population sizes N.

17 FINITE STATE SPACES: A CYBER SECURITY MODEL Kolokoltsov - Bensoussan, R.C. - P. Wang

N computers in a network (minor players); one hacker / attacker (major player). The action of the major player affects the minor players' states (even when N >> 1). The major player feels only µ^N_t, the empirical distribution of the minor players' states.

Finite state space: each computer is in one of 4 states: protected & infected, protected & susceptible to be infected, unprotected & infected, unprotected & susceptible to be infected. Continuous time Markov chain in E = {DI, DS, UI, US}. Each player's action is intended to affect the rates of change from one state to another, so as to minimize expected costs:

  J(α^0, α) = E[ ∫_0^T (k_D 1_D + k_I 1_I)(X_t) dt ]
  J^0(α^0, α) = E[ ∫_0^T ( f_0(µ_t) + k_H φ^0(µ_t) ) dt ]

18 FINITE STATE MEAN FIELD GAMES

State Dynamics: (X_t)_t is a continuous time Markov chain in E with Q-matrix (q_t(x, x'))_{t, x, x' ∈ E}.

Mean field structure of the Q-matrix:

  q_t(x, x') = λ_t(x, x', µ, α)

Control space: A ⊂ R^k, sometimes A finite, e.g. A = {0, 1}, or a function space.

Control strategies in feedback form: α_t = φ(t, X_t), for some φ : [0,T] × E → A.

Mean field interaction through empirical measures: µ ∈ P(E) ≅ (µ({x}))_{x ∈ E}.

Kolmogorov-Fokker-Planck equation: if µ_t = L(X_t),

  ∂_t µ_t({x}) = [(L^{µ_t, φ(t,·)}_t)* µ_t]({x}) = Σ_{x' ∈ E} µ_t({x'}) q^{µ_t, φ(t,·)}_t(x', x) = Σ_{x' ∈ E} µ_t({x'}) λ_t(x', x, µ_t, φ(t, x')),  x ∈ E.
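A minimal numerical sketch of this forward equation: each forward-Euler step assembles the controlled Q-matrix from a rate function and pushes the measure through its adjoint. The two-state rate function at the bottom is a made-up example, not the cyber-security model:

```python
import numpy as np

def kfp_step(mu, rates, phi, t, dt):
    """One forward-Euler step of the Kolmogorov-Fokker-Planck equation
         d/dt mu_t({x}) = sum_{x'} mu_t({x'}) lambda_t(x', x, mu_t, phi(t, x')).
    `rates(t, x_from, x_to, mu, a)` is an assumed user-supplied rate function."""
    d = len(mu)
    lam = np.array([[rates(t, i, j, mu, phi(t, i)) for j in range(d)]
                    for i in range(d)])
    np.fill_diagonal(lam, 0.0)
    np.fill_diagonal(lam, -lam.sum(axis=1))   # Q-matrix rows sum to zero
    return mu + dt * mu @ lam                  # mu @ lam is the adjoint action

# toy 2-state example: unit switching rates, control ignored
rates = lambda t, i, j, mu, a: 1.0 if i != j else 0.0
phi = lambda t, x: 0
mu = np.array([1.0, 0.0])
for k in range(100):
    mu = kfp_step(mu, rates, phi, k * 0.01, 0.01)
```

With symmetric unit rates the flow relaxes toward the uniform measure while conserving total mass, which is a useful check on any discretization of this equation.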

19 FINITE STATE MEAN FIELD GAMES: OPTIMIZATION

Hamiltonian

  H(t, x, µ, h, α) = Σ_{x' ∈ E} λ_t(x, x', µ, α) h(x') + f(t, x, µ, α).

Hamiltonian minimizer

  α̂(t, x, µ, h) = arg inf_{α ∈ A} H(t, x, µ, h, α).

Minimized Hamiltonian

  H*(t, x, µ, h) = inf_{α ∈ A} H(t, x, µ, h, α) = H(t, x, µ, h, α̂(t, x, µ, h)).

HJB equation

  ∂_t u^µ(t, x) + H*(t, x, µ_t, u^µ(t, ·)) = 0,  0 ≤ t ≤ T, x ∈ E,

with terminal condition u^µ(T, x) = g(x, µ_T).
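When the action set A is finite, as in the cyber-security model with A = {0, 1}, the minimizer α̂ and the minimized Hamiltonian can be computed by direct enumeration. The two-state rates and running cost below are made-up illustrations:

```python
def minimized_hamiltonian(t, x, mu, h, q, f, actions=(0, 1)):
    """Brute-force evaluation of
         H*(t,x,mu,h) = min_{a in A} [ sum_{x'} q(t,x,x',mu,a) h(x') + f(t,x,mu,a) ]
    where q returns the full Q-matrix entry (diagonal included).
    Returns the minimizer alpha-hat together with the minimal value."""
    vals = {a: sum(q(t, x, xp, mu, a) * h[xp] for xp in range(len(h)))
               + f(t, x, mu, a)
            for a in actions}
    a_hat = min(vals, key=vals.get)
    return a_hat, vals[a_hat]

# made-up 2-state example: the action a is the jump rate out of state 0,
# with a running cost of 0.1 * a for exerting effort
q = lambda t, x, xp, mu, a: (a if xp != x else -a) if x == 0 else 0.0
f = lambda t, x, mu, a: 0.1 * a
a_hat, h_star = minimized_hamiltonian(0.0, 0, [0.5, 0.5], [0.0, -1.0], q, f)
```

Here jumping to state 1 lowers the value function by 1 at an effort cost of 0.1, so the enumeration picks a = 1.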

20 TRANSITION RATES Q-MATRICES For α^0 = 0, the off-diagonal entries of λ_t(·, ·, µ, α, 0) are:

  DI → DS : q^D_rec
  DS → DI : α q^D_inf + β_DD µ({DI}) + β_UD µ({UI})
  UI → US : q^U_rec
  US → UI : α q^U_inf + β_UU µ({UI}) + β_DU µ({DI})

and for α^0 = 1, the attack raises the infection rates by λ:

  DI → DS : q^D_rec
  DS → DI : λ + α q^D_inf + β_DD µ({DI}) + β_UD µ({UI})
  UI → US : q^U_rec
  US → UI : λ + α q^U_inf + β_UU µ({UI}) + β_DU µ({DI})

In both Q-matrices, every diagonal entry equals the negative of the sum of the off-diagonal entries in its row.
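Assembling such a Q-matrix programmatically makes the row-sum convention explicit. All numeric parameter values below are made-up, and the attack term λ is assumed to enter the infection rates additively when the hacker attacks, matching the reading of the slide above:

```python
import numpy as np

STATES = ["DI", "DS", "UI", "US"]

def q_matrix(mu, alpha, attack, q_rec_D=0.5, q_inf_D=0.4, q_rec_U=0.4,
             q_inf_U=0.3, b_DD=0.1, b_UD=0.1, b_UU=0.2, b_DU=0.1, lam=0.8):
    """Assemble the 4x4 Q-matrix of the cyber-security model from its
    off-diagonal rates; the diagonal is set to the negative row sum so that
    every row sums to zero (illustrative parameter values only)."""
    i = {s: k for k, s in enumerate(STATES)}
    Q = np.zeros((4, 4))
    Q[i["DI"], i["DS"]] = q_rec_D
    Q[i["UI"], i["US"]] = q_rec_U
    Q[i["DS"], i["DI"]] = (alpha * q_inf_D + b_DD * mu[i["DI"]]
                           + b_UD * mu[i["UI"]] + attack * lam)
    Q[i["US"], i["UI"]] = (alpha * q_inf_U + b_UU * mu[i["UI"]]
                           + b_DU * mu[i["DI"]] + attack * lam)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

Q = q_matrix(mu=np.array([0.25, 0.25, 0.25, 0.25]), alpha=1, attack=1)
```

Zero row sums and nonnegative off-diagonal entries are exactly the properties that make Q a valid generator of a continuous time Markov chain.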

21 EQUILIBRIUM DISTRIBUTION OVER TIME WITH CONSTANT ATTACKER
FIGURE: Time evolution of the state distribution µ(t), with components µ(t)[DI], µ(t)[DS], µ(t)[UI], µ(t)[US], in three scenarios.

22 EQUILIBRIUM OPTIMAL FEEDBACK φ(t, ·)
FIGURE: Time evolution of the optimal feedback function. From left to right: φ(t, DI), φ(t, DS), φ(t, UI), and φ(t, US).

23 CONVERGENCE MAY BE ELUSIVE
FIGURE: From left to right, time evolution of the distribution µ_t for the parameters given in the text, after 1, 5, 2, and 1 iterations of the successive solutions of the HJB equation and the Kolmogorov-Fokker-Planck equation.

24 THE MASTER EQUATION EQUATION

  ∂_t U(t, x, µ) + H*(t, x, µ, U(t, ·, µ)) + Σ_{x' ∈ E} h*(t, µ, U(t, ·, µ))(x') ∂U(t, x, µ)/∂µ({x'}) = 0,

where the R^E-valued function h* is defined on [0,T] × P(E) × R^E by:

  h*(t, µ, u) = ∫_E λ_t(x, ·, µ, α̂(t, x, µ, u)) dµ(x) = Σ_{x ∈ E} λ_t(x, ·, µ, α̂(t, x, µ, u)) µ({x}).

System of Ordinary Differential Equations (ODEs): if and when the Master equation is solved,

  ∂_t µ_t({x}) = h*(t, µ_t, U(t, ·, µ_t))(x).
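Once a master field U is available, the equilibrium flow follows from the ODE system above by any ODE integrator. In the sketch below U, α̂ and the rates are mocked with trivial made-up functions, since solving the master equation itself is the hard part:

```python
import numpy as np

def flow_from_master_field(mu0, U, rates, alpha_hat, T=1.0, n_steps=200):
    """Given a (here: mocked) master field U(t, x, mu), integrate
         d/dt mu_t({x}) = h*(t, mu_t, U(t, ., mu_t))(x),
    h*(t,mu,u)(x) = sum_{x'} lambda_t(x', x, mu, alpha_hat(t,x',mu,u)) mu({x'}),
    by forward Euler, with the zero-row-sum diagonal convention."""
    d, dt, mu = len(mu0), T / n_steps, np.array(mu0, dtype=float)
    for k in range(n_steps):
        t = k * dt
        u = np.array([U(t, x, mu) for x in range(d)])
        lam = np.zeros((d, d))
        for xp in range(d):
            a = alpha_hat(t, xp, mu, u)
            for x in range(d):
                if x != xp:
                    lam[xp, x] = rates(t, xp, x, mu, a)
            lam[xp, xp] = -lam[xp].sum()
        mu = mu + dt * mu @ lam
    return mu

# mock example: flat field, constant minimizer, symmetric rates on 3 states
U = lambda t, x, mu: 0.0
alpha_hat = lambda t, x, mu, u: 0
rates = lambda t, i, j, mu, a: 0.5
mu_T = flow_from_master_field([1.0, 0.0, 0.0], U, rates, alpha_hat)
```

With symmetric rates the flow drifts toward the uniform distribution; states 1 and 2 stay exactly exchangeable, another cheap consistency check.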

25 µ_t EVOLUTION FROM THE MASTER EQUATION
FIGURE: Time evolution of the state distribution µ(t). As before, we used the initial conditions µ_0 = (.25, .25, .25, .25) on the left and µ_0 = (1, 0, 0, 0) on the right.

26 IN THE PRESENCE OF A MAJOR (HACKER) PLAYER
FIGURE: Time evolution in equilibrium of the distribution µ_t of the states of the computers in the network, for the initial condition µ_0 = (.25, .25, .25, .25): when the major player is not rewarded for its attacks, i.e. when f_0(µ) ≡ 0 (leftmost pane); in the absence of a major player and v = 0 (middle plot); and with f_0(µ) = k(µ({UI}) + µ({DI})) with k = .5 (rightmost plot).

27 POA BOUNDS FOR CONTINUOUS TIME MFGS

Price of Anarchy bounds compare the social welfare of a Nash equilibrium to what a central planner would achieve (Koutsoupias-Papadimitriou).

Usual game model for cyber security: a zero-sum game between attacker and network manager; compute the expected cost to the network for protection.

MFG model for cyber security: let the individual computer owners take care of their security; hope for a Nash equilibrium; compute the expected cost to the network for protection.

How much worse the Nash equilibrium does is the PoA.

28 POA BOUNDS FOR CONTINUOUS TIME MFGS WITH FINITE STATE SPACES

X_t = (X^1_t, …, X^N_t) state at time t, with X^i_t ∈ {e_1, …, e_d}. Use distributed feedback controls, for the state to be a continuous time Markov chain, with dynamics given by Q-matrices (q_t(x, x'))_{t, x, x' ∈ E}.

Empirical measures

  µ^N_x = (1/N) Σ_{i=1}^N δ_{x_i} = Σ_{l=1}^d p_l δ_{e_l}

where p_l = #{i; 1 ≤ i ≤ N, x_i = e_l}/N is the proportion of elements x_i of the sample which are equal to e_l.

Cost functionals: player i minimizes

  J^i(α^1, …, α^N) = E[ ∫_0^T f(t, X^i_t, µ^N_{X_t}, α^i_t) dt + g(X^i_T, µ^N_{X_T}) ].
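The empirical measure of a finite-state configuration is just the vector of proportions (p_1, …, p_d); a one-liner makes the definition concrete (the sample at the bottom is made-up):

```python
import numpy as np

def empirical_measure(states, d):
    """Empirical measure mu^N_x = (1/N) sum_i delta_{x_i}, returned as the
    vector of proportions p_l = #{i : x_i = e_l} / N over d states."""
    counts = np.bincount(states, minlength=d)
    return counts / len(states)

p = empirical_measure(np.array([0, 1, 1, 3, 0, 1]), d=4)
```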

29 SOCIAL COST If the N players use distributed Markovian control strategies of the form α^i_t = φ(t, X^i_t), we define the cost (per player) to the system as the quantity

  J^(N)_φ = (1/N) Σ_{i=1}^N J^i(α^1, …, α^N).

In the limit N → ∞ the social cost should be

  lim_{N→∞} J^(N)_φ = lim_{N→∞} (1/N) Σ_{i=1}^N E[ ∫_0^T f(t, X^i_t, µ^N_{X_t}, φ(t, X^i_t)) dt + g(X^i_T, µ^N_{X_T}) ]
                    = lim_{N→∞} E[ ∫_0^T < f(t, ·, µ^N_{X_t}, φ(t, ·)), µ^N_{X_t} > dt + < g(·, µ^N_{X_T}), µ^N_{X_T} > ],   (3)

if we use the notation < ϕ, ν > for the integral ∫ ϕ(z) ν(dz). Now if the µ^N_{X_t} converge toward a deterministic µ_t, the social cost becomes:

  SC_φ(µ) = ∫_0^T < f(t, ·, µ_t, φ(t, ·)), µ_t > dt + < g(·, µ_T), µ_T >.

30 ASYMPTOTIC REGIME N = ∞ Two alternatives:

1. φ is the optimal feedback function for a MFG equilibrium for which µ is the equilibrium flow of statistical distributions of the state, in which case we use the notation SC_MFG for SC_φ(µ); in equilibrium,

  E[ ∫_0^T f(t, X_t, µ_t, φ(t, X_t)) dt + g(X_T, µ_T) ] = ∫_0^T < f(t, ·, µ_t, φ(t, ·)), L(X_t) > dt + < g(·, µ_T), L(X_T) > = SC_φ(µ) = SC_MFG.

2. φ is the feedback (chosen by a central planner) minimizing the social cost SC_φ(µ) without having to be an MFG Nash equilibrium, in which case we use the notation SC_MKV for SC_φ(µ):

  φ̂ = arg inf_φ ∫_0^T < f(t, ·, µ_t, φ(t, ·)), µ_t > dt + < g(·, µ_T), µ_T >

where µ_t satisfies the Kolmogorov-Fokker-Planck forward dynamics.

31 POA: SOCIAL COST COMPUTATION Minimize

  ∫_0^T < f(t, ·, µ_t, φ(t, ·)), µ_t > dt + < g(·, µ_T), µ_T >

under the dynamical constraint

  ∂_t µ_t({x}) = [(L^{µ_t, φ(t,·)}_t)* µ_t]({x}) = Σ_{x' ∈ E} µ_t({x'}) λ(t, x', x, µ_t, φ(t, x')),  x ∈ E.

An ODE in the d-dimensional probability simplex S_d ⊂ R^d!

Hamiltonian

  H(t, µ, ϕ, φ) = < ϕ, (L^{µ,φ}_t)* µ > + < f(t, ·, µ, φ(·)), µ > = < L^{µ,φ}_t ϕ + f(t, ·, µ, φ(·)), µ >

and minimized Hamiltonian

  H*(t, µ, ϕ) = inf_{φ ∈ Ã} H(t, µ, ϕ, φ).

Assume the infimum is attained at a unique φ̂:

  H*(t, µ, ϕ) = H(t, µ, ϕ, φ̂(t, µ, ϕ)) = < L^{µ, φ̂(t,µ,ϕ)}_t ϕ + f(t, ·, µ, φ̂(t, µ, ϕ)(·)), µ >.

HJB equation

  ∂_t v(t, µ) + H*(t, µ, δv(t, µ)/δµ) = 0,  v(T, µ) = < g(·, µ), µ >.
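For a fixed feedback φ, the social cost SC_φ(µ) can be evaluated by integrating the Kolmogorov-Fokker-Planck dynamics under φ and accumulating < f, µ_t >; the price of anarchy is then the ratio of this quantity at the MFG equilibrium feedback to its value at the planner's minimizing feedback. The two-state rates and costs at the bottom are made-up illustrations:

```python
import numpy as np

def social_cost(phi, mu0, rates, f, g, T=1.0, n_steps=200):
    """SC_phi(mu) = int_0^T <f(t,.,mu_t,phi(t,.)), mu_t> dt + <g(.,mu_T), mu_T>,
    where mu_t solves the Kolmogorov-Fokker-Planck forward dynamics under phi
    (forward Euler; rate and cost functions are assumed user-supplied)."""
    d, dt, mu, cost = len(mu0), T / n_steps, np.array(mu0, float), 0.0
    for k in range(n_steps):
        t = k * dt
        cost += dt * sum(f(t, x, mu, phi(t, x)) * mu[x] for x in range(d))
        lam = np.array([[rates(t, i, j, mu, phi(t, i)) if i != j else 0.0
                         for j in range(d)] for i in range(d)])
        np.fill_diagonal(lam, -lam.sum(axis=1))
        mu = mu + dt * mu @ lam
    cost += sum(g(x, mu) * mu[x] for x in range(d))
    return cost

# made-up 2-state example: unit rates, pay 1 per unit time spent in state 1
rates = lambda t, i, j, mu, a: 1.0
f = lambda t, x, mu, a: float(x == 1)
g = lambda x, mu: 0.0
phi = lambda t, x: 0
c = social_cost(phi, [1.0, 0.0], rates, f, g)
```

Running it once with the equilibrium φ and once with the planner's φ̂ gives the two numbers SC_MFG and SC_MKV whose ratio bounds the PoA.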

32 REMARKS ON DERIVATIVES W.R.T. MEASURES

Standard identification of P(E) with the probability simplex S_d ⊂ R^d: µ ↔ p = (p_1, …, p_d) with p_i = µ({e_i}) for i = 1, …, d, i.e. µ = Σ_{i=1}^d p_i δ_{e_i}.

δv/δµ makes sense when v is defined on an open neighborhood of the probability simplex S_d; ∂v(t, µ)/∂µ({x'}) is the derivative of v with respect to the weight µ({x'}).

Important remark: L^{µ,φ}_t ϕ does not change if we add a constant to the function ϕ, and neither does φ̂(t, µ, ϕ).

Consequence (for numerical purposes): we can rewrite the HJB equation as

  ∂_t v(t, µ) + H*( t, µ, ( ∂v(t, µ)/∂µ({x'}) − ∂v(t, µ)/∂µ({x}) )_{x' ∈ E} ) = 0

and, for x' ≠ x, identify ∂v(t, µ)/∂µ({x'}) with the partial derivative of v(t, ·) with respect to µ({x'}) whenever v(t, ·) is regarded as a smooth function of the (d − 1)-tuple (µ({x'}))_{x' ∈ E\{x}}, which we can see as an element of the (d − 1)-dimensional domain

  S_{d−1,−} = { (p_1, …, p_{d−1}) ∈ [0,1]^{d−1} : Σ_{i=1}^{d−1} p_i ≤ 1 }.

33 MFGS OF TIMING WITH MAJOR AND MINOR PLAYERS The Example of Corporate Bonds

Major player = bond issuer. The bond is callable: the major player (issuer) chooses a stopping time at which to pay off the investors, stop coupon payments to the investors, and refinance his debt with better terms.

Minor players = field of investors. The bond is convertible: each minor player (investor) chooses a stopping time at which to convert the bond certificate into a fixed number (the conversion ratio) of stock shares, if and when owning the stock is more profitable, creating dilution of the stock.


More information

Variational approach to mean field games with density constraints

Variational approach to mean field games with density constraints 1 / 18 Variational approach to mean field games with density constraints Alpár Richárd Mészáros LMO, Université Paris-Sud (based on ongoing joint works with F. Santambrogio, P. Cardaliaguet and F. J. Silva)

More information

Network Games with Friends and Foes

Network Games with Friends and Foes Network Games with Friends and Foes Stefan Schmid T-Labs / TU Berlin Who are the participants in the Internet? Stefan Schmid @ Tel Aviv Uni, 200 2 How to Model the Internet? Normal participants : - e.g.,

More information

Multilevel Monte Carlo for Stochastic McKean-Vlasov Equations

Multilevel Monte Carlo for Stochastic McKean-Vlasov Equations Multilevel Monte Carlo for Stochastic McKean-Vlasov Equations Lukasz Szpruch School of Mathemtics University of Edinburgh joint work with Shuren Tan and Alvin Tse (Edinburgh) Lukasz Szpruch (University

More information

Collusion between the government and the enterprise in pollution management

Collusion between the government and the enterprise in pollution management 30 5 015 10 JOURNAL OF SYSTEMS ENGINEERING Vol.30 No.5 Oct. 015 1, (1., 400044;., 400044 :. Stackelberg,.,,,. : ; Stackelberg ; ; ; : F4 : A : 1000 5781(01505 0584 10 doi: 10.13383/j.cnki.jse.015.05.00

More information

Linear-Quadratic Stochastic Differential Games with General Noise Processes

Linear-Quadratic Stochastic Differential Games with General Noise Processes Linear-Quadratic Stochastic Differential Games with General Noise Processes Tyrone E. Duncan Abstract In this paper a noncooperative, two person, zero sum, stochastic differential game is formulated and

More information

The Pennsylvania State University The Graduate School MODELS OF NONCOOPERATIVE GAMES. A Dissertation in Mathematics by Deling Wei.

The Pennsylvania State University The Graduate School MODELS OF NONCOOPERATIVE GAMES. A Dissertation in Mathematics by Deling Wei. The Pennsylvania State University The Graduate School MODELS OF NONCOOPERATIVE GAMES A Dissertation in Mathematics by Deling Wei c 213 Deling Wei Submitted in Partial Fulfillment of the Requirements for

More information

Linear Filtering of general Gaussian processes

Linear Filtering of general Gaussian processes Linear Filtering of general Gaussian processes Vít Kubelka Advisor: prof. RNDr. Bohdan Maslowski, DrSc. Robust 2018 Department of Probability and Statistics Faculty of Mathematics and Physics Charles University

More information

The Mathematics of Continuous Time Contract Theory

The Mathematics of Continuous Time Contract Theory The Mathematics of Continuous Time Contract Theory Ecole Polytechnique, France University of Michigan, April 3, 2018 Outline Introduction to moral hazard 1 Introduction to moral hazard 2 3 General formulation

More information

The Wiener Itô Chaos Expansion

The Wiener Itô Chaos Expansion 1 The Wiener Itô Chaos Expansion The celebrated Wiener Itô chaos expansion is fundamental in stochastic analysis. In particular, it plays a crucial role in the Malliavin calculus as it is presented in

More information

DISCRETE-TIME STOCHASTIC MODELS, SDEs, AND NUMERICAL METHODS. Ed Allen. NIMBioS Tutorial: Stochastic Models With Biological Applications

DISCRETE-TIME STOCHASTIC MODELS, SDEs, AND NUMERICAL METHODS. Ed Allen. NIMBioS Tutorial: Stochastic Models With Biological Applications DISCRETE-TIME STOCHASTIC MODELS, SDEs, AND NUMERICAL METHODS Ed Allen NIMBioS Tutorial: Stochastic Models With Biological Applications University of Tennessee, Knoxville March, 2011 ACKNOWLEDGEMENT I thank

More information

Weakly interacting particle systems on graphs: from dense to sparse

Weakly interacting particle systems on graphs: from dense to sparse Weakly interacting particle systems on graphs: from dense to sparse Ruoyu Wu University of Michigan (Based on joint works with Daniel Lacker and Kavita Ramanan) USC Math Finance Colloquium October 29,

More information

Exponential Moving Average Based Multiagent Reinforcement Learning Algorithms

Exponential Moving Average Based Multiagent Reinforcement Learning Algorithms Exponential Moving Average Based Multiagent Reinforcement Learning Algorithms Mostafa D. Awheda Department of Systems and Computer Engineering Carleton University Ottawa, Canada KS 5B6 Email: mawheda@sce.carleton.ca

More information

Ambiguity and Information Processing in a Model of Intermediary Asset Pricing

Ambiguity and Information Processing in a Model of Intermediary Asset Pricing Ambiguity and Information Processing in a Model of Intermediary Asset Pricing Leyla Jianyu Han 1 Kenneth Kasa 2 Yulei Luo 1 1 The University of Hong Kong 2 Simon Fraser University December 15, 218 1 /

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

Session 1: Probability and Markov chains

Session 1: Probability and Markov chains Session 1: Probability and Markov chains 1. Probability distributions and densities. 2. Relevant distributions. 3. Change of variable. 4. Stochastic processes. 5. The Markov property. 6. Markov finite

More information

Dynamic Consistency for Stochastic Optimal Control Problems

Dynamic Consistency for Stochastic Optimal Control Problems Dynamic Consistency for Stochastic Optimal Control Problems Cadarache Summer School CEA/EDF/INRIA 2012 Pierre Carpentier Jean-Philippe Chancelier Michel De Lara SOWG June 2012 Lecture outline Introduction

More information

Memory and hypoellipticity in neuronal models

Memory and hypoellipticity in neuronal models Memory and hypoellipticity in neuronal models S. Ditlevsen R. Höpfner E. Löcherbach M. Thieullen Banff, 2017 What this talk is about : What is the effect of memory in probabilistic models for neurons?

More information

Risk-Averse Control of Partially Observable Markov Systems. Andrzej Ruszczyński. Workshop on Dynamic Multivariate Programming Vienna, March 2018

Risk-Averse Control of Partially Observable Markov Systems. Andrzej Ruszczyński. Workshop on Dynamic Multivariate Programming Vienna, March 2018 Risk-Averse Control of Partially Observable Markov Systems Workshop on Dynamic Multivariate Programming Vienna, March 2018 Partially Observable Discrete-Time Models Markov Process: fx t ; Y t g td1;:::;t

More information

Numerical methods for Mean Field Games: additional material

Numerical methods for Mean Field Games: additional material Numerical methods for Mean Field Games: additional material Y. Achdou (LJLL, Université Paris-Diderot) July, 2017 Luminy Convergence results Outline 1 Convergence results 2 Variational MFGs 3 A numerical

More information

Stochastic Maximum Principle and Dynamic Convex Duality in Continuous-time Constrained Portfolio Optimization

Stochastic Maximum Principle and Dynamic Convex Duality in Continuous-time Constrained Portfolio Optimization Stochastic Maximum Principle and Dynamic Convex Duality in Continuous-time Constrained Portfolio Optimization Yusong Li Department of Mathematics Imperial College London Thesis presented for examination

More information

Management of dam systems via optimal price control

Management of dam systems via optimal price control Available online at www.sciencedirect.com Procedia Computer Science 4 (211) 1373 1382 International Conference on Computational Science, ICCS 211 Management of dam systems via optimal price control Boris

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

Stochastic contraction BACS Workshop Chamonix, January 14, 2008

Stochastic contraction BACS Workshop Chamonix, January 14, 2008 Stochastic contraction BACS Workshop Chamonix, January 14, 2008 Q.-C. Pham N. Tabareau J.-J. Slotine Q.-C. Pham, N. Tabareau, J.-J. Slotine () Stochastic contraction 1 / 19 Why stochastic contraction?

More information

Alberto Bressan. Department of Mathematics, Penn State University

Alberto Bressan. Department of Mathematics, Penn State University Non-cooperative Differential Games A Homotopy Approach Alberto Bressan Department of Mathematics, Penn State University 1 Differential Games d dt x(t) = G(x(t), u 1(t), u 2 (t)), x(0) = y, u i (t) U i

More information

Stochastic Mean-field games and control

Stochastic Mean-field games and control Laboratory of Applied Mathematics University of Biskra, Algeria Doctoriales Nationales de Mathématiques ENS de Constantine October 28-31, 2017 Stochastic Control Part 1: Survey of Stochastic control Brownian

More information

Gillespie s Algorithm and its Approximations. Des Higham Department of Mathematics and Statistics University of Strathclyde

Gillespie s Algorithm and its Approximations. Des Higham Department of Mathematics and Statistics University of Strathclyde Gillespie s Algorithm and its Approximations Des Higham Department of Mathematics and Statistics University of Strathclyde djh@maths.strath.ac.uk The Three Lectures 1 Gillespie s algorithm and its relation

More information

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012 1 Stochastic Calculus Notes March 9 th, 1 In 19, Bachelier proposed for the Paris stock exchange a model for the fluctuations affecting the price X(t) of an asset that was given by the Brownian motion.

More information

Control Theory in Physics and other Fields of Science

Control Theory in Physics and other Fields of Science Michael Schulz Control Theory in Physics and other Fields of Science Concepts, Tools, and Applications With 46 Figures Sprin ger 1 Introduction 1 1.1 The Aim of Control Theory 1 1.2 Dynamic State of Classical

More information

Portfolio Optimization with unobservable Markov-modulated drift process

Portfolio Optimization with unobservable Markov-modulated drift process Portfolio Optimization with unobservable Markov-modulated drift process Ulrich Rieder Department of Optimization and Operations Research University of Ulm, Germany D-89069 Ulm, Germany e-mail: rieder@mathematik.uni-ulm.de

More information

Linear ODEs. Existence of solutions to linear IVPs. Resolvent matrix. Autonomous linear systems

Linear ODEs. Existence of solutions to linear IVPs. Resolvent matrix. Autonomous linear systems Linear ODEs p. 1 Linear ODEs Existence of solutions to linear IVPs Resolvent matrix Autonomous linear systems Linear ODEs Definition (Linear ODE) A linear ODE is a differential equation taking the form

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

OPTIMAL CONTROL SYSTEMS

OPTIMAL CONTROL SYSTEMS SYSTEMS MIN-MAX Harry G. Kwatny Department of Mechanical Engineering & Mechanics Drexel University OUTLINE MIN-MAX CONTROL Problem Definition HJB Equation Example GAME THEORY Differential Games Isaacs

More information

Partial Differential Equations with Applications to Finance Seminar 1: Proving and applying Dynkin s formula

Partial Differential Equations with Applications to Finance Seminar 1: Proving and applying Dynkin s formula Partial Differential Equations with Applications to Finance Seminar 1: Proving and applying Dynkin s formula Group 4: Bertan Yilmaz, Richard Oti-Aboagye and Di Liu May, 15 Chapter 1 Proving Dynkin s formula

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Mean Field Game Systems with Common Noise and Markovian Latent Processes

Mean Field Game Systems with Common Noise and Markovian Latent Processes arxiv:189.7865v2 [math.oc 29 Oct 218 Mean Field Game Systems with Common Noise and Markovian Latent Processes 1 Abstract Dena Firoozi, Peter E. Caines, Sebastian Jaimungal A class of non-cooperative stochastic

More information

Mean Field Consumption-Accumulation Optimization with HAR

Mean Field Consumption-Accumulation Optimization with HAR Mean Field Consumption-Accumulation Optimization with HARA Utility School of Mathematics and Statistics Carleton University, Ottawa Workshop on Mean Field Games and Related Topics 2 Padova, September 4-7,

More information

Information and Credit Risk

Information and Credit Risk Information and Credit Risk M. L. Bedini Université de Bretagne Occidentale, Brest - Friedrich Schiller Universität, Jena Jena, March 2011 M. L. Bedini (Université de Bretagne Occidentale, Brest Information

More information

arxiv: v4 [math.oc] 24 Sep 2016

arxiv: v4 [math.oc] 24 Sep 2016 On Social Optima of on-cooperative Mean Field Games Sen Li, Wei Zhang, and Lin Zhao arxiv:607.0482v4 [math.oc] 24 Sep 206 Abstract This paper studies the connections between meanfield games and the social

More information

Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations

Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations Department of Biomedical Engineering and Computational Science Aalto University April 28, 2010 Contents 1 Multiple Model

More information

Chapter Introduction. A. Bensoussan

Chapter Introduction. A. Bensoussan Chapter 2 EXPLICIT SOLUTIONS OFLINEAR QUADRATIC DIFFERENTIAL GAMES A. Bensoussan International Center for Decision and Risk Analysis University of Texas at Dallas School of Management, P.O. Box 830688

More information

A new approach for investment performance measurement. 3rd WCMF, Santa Barbara November 2009

A new approach for investment performance measurement. 3rd WCMF, Santa Barbara November 2009 A new approach for investment performance measurement 3rd WCMF, Santa Barbara November 2009 Thaleia Zariphopoulou University of Oxford, Oxford-Man Institute and The University of Texas at Austin 1 Performance

More information

A random perturbation approach to some stochastic approximation algorithms in optimization.

A random perturbation approach to some stochastic approximation algorithms in optimization. A random perturbation approach to some stochastic approximation algorithms in optimization. Wenqing Hu. 1 (Presentation based on joint works with Chris Junchi Li 2, Weijie Su 3, Haoyi Xiong 4.) 1. Department

More information

Quadratic BSDE systems and applications

Quadratic BSDE systems and applications Quadratic BSDE systems and applications Hao Xing London School of Economics joint work with Gordan Žitković Stochastic Analysis and Mathematical Finance-A Fruitful Partnership 25 May, 2016 1 / 23 Backward

More information

Optimal Control. Quadratic Functions. Single variable quadratic function: Multi-variable quadratic function:

Optimal Control. Quadratic Functions. Single variable quadratic function: Multi-variable quadratic function: Optimal Control Control design based on pole-placement has non unique solutions Best locations for eigenvalues are sometimes difficult to determine Linear Quadratic LQ) Optimal control minimizes a quadratic

More information

Mean Field Games on networks

Mean Field Games on networks Mean Field Games on networks Claudio Marchi Università di Padova joint works with: S. Cacace (Rome) and F. Camilli (Rome) C. Marchi (Univ. of Padova) Mean Field Games on networks Roma, June 14 th, 2017

More information

ON NON-UNIQUENESS AND UNIQUENESS OF SOLUTIONS IN FINITE-HORIZON MEAN FIELD GAMES

ON NON-UNIQUENESS AND UNIQUENESS OF SOLUTIONS IN FINITE-HORIZON MEAN FIELD GAMES ON NON-UNIQUENESS AND UNIQUENESS OF SOLUTIONS IN FINITE-HOIZON MEAN FIELD GAMES MATINO BADI AND MAKUS FISCHE Abstract. This paper presents a class of evolutive Mean Field Games with multiple solutions

More information

Formula Sheet for Optimal Control

Formula Sheet for Optimal Control Formula Sheet for Optimal Control Division of Optimization and Systems Theory Royal Institute of Technology 144 Stockholm, Sweden 23 December 1, 29 1 Dynamic Programming 11 Discrete Dynamic Programming

More information