On Stopping Times and Impulse Control with Constraint


1 On Stopping Times and Impulse Control with Constraint
Jose Luis Menaldi
Based on joint papers with M. Robin (2016, 2017)
Department of Mathematics, Wayne State University, Detroit, Michigan 48202, USA
IMA, Minneapolis, MN, May 7-11, 2018
Wed May 9, 2018, 10:00-10:50
JLM (WSU) Control Problems with Constraint 1 / 24

2 1.- Introduction / Table of Content
1.- Introduction: Stopping, Impulse, and Signal
2.- Stopping with Constraint
3.- Impulse Control with Constraint: Setting, Assumptions
4.- Formal Analysis, Dynamic Programming
5.- Solving HJB Equation
6.- Extensions
7.- Hybrid Control, Stopping, Impulse, Switching

3 1.- Introduction / Stopping & Impulse
Optimal Stopping Time & Impulse Control Problems
Control: θ = stop the evolution of a Markov-Feller process x_t on a Polish space.
Optimal problem = maximize/minimize the reward/cost functional
  J_x(θ, ψ) = E_x{ ∫_0^θ e^{-αt} f(x_t) dt + e^{-αθ} ψ(x_θ) },   (1)
with θ a stopping time, f, ψ ≥ 0 (running & terminal), α > 0 (discount).
Optimal cost: v(x) = inf{ J_x(θ, ψ) : θ stopping time }.
Cost functional (with interventions)
  J_x(ν) = E_x{ ∫_0^∞ e^{-αt} f(x_t^ν) dt + Σ_{i≥1} e^{-αϑ_i} c(x_{ϑ_i}^ν, ξ_i) },   (2)
for any impulse/switching control ν, and the optimal cost function v(x) = inf_ν J_x(ν).

4 1.- Introduction / Stopping & Impulse - Constraint
VI & QVI - Stopping & Impulse - Constraint
Dynamic programming gives an HJB equation (VI or QVI) of the form
  min{ Av - αv + f, ψ - v } = 0   or   min{ Av - αv + f, Mv - v } = 0,
with A = infinitesimal generator of the Markov-Feller process {x_t : t ∈ [0,∞[} (with states in E, a Polish space), and Mu(x) = inf_ξ{ c(x, ξ) + u(ξ) }.
This QVI can be solved by using the sequence of variational inequalities
  min{ Av_n - αv_n + f, Mv_{n-1} - v_n } = 0,
where v_n is the optimal cost for the stopping problem with stopping cost Mv_{n-1}.
Bensoussan and Lions (book 1978), Robin (thesis 1978), Menaldi (thesis 1980), etc., Peskir and Shiryaev (book 2006), etc. (an extremely long list, with variations).
Constraint: wait for a signal. For instance, the system evolves according to a Wiener process w_t (with drift b and diffusion σ) and the signal is given by the jumps of a Poisson process N_t, independent of the Wiener process. Dupuis and Wang (2002) [also Lempa (2012), Liang and Wei (2014)] studied this case for a geometric Brownian motion.
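The iterative scheme above can be sketched numerically. Below is a minimal sketch on a discretized chain; the transition matrix P, running cost f, impulse cost c0 and discount factor beta are illustrative stand-ins (not from the talk), and each variational inequality is solved by plain value iteration with obstacle M v_{n-1}.

```python
import numpy as np

# Illustrative data: reflected nearest-neighbor walk on 50 states,
# linear running cost, constant impulse cost c0.
N = 50
P = np.eye(N, k=1) * 0.5 + np.eye(N, k=-1) * 0.5   # nearest-neighbor walk
P[0, 0] += 0.5
P[-1, -1] += 0.5                                   # reflecting endpoints
beta = 0.8                                         # discrete discount factor
f = np.linspace(0.0, 2.0, N)                       # running cost
c0 = 0.5                                           # fixed cost per impulse

def M(u):
    # Intervention operator Mu(x) = inf_xi { c(x, xi) + u(xi) }; with the
    # constant cost c = c0 this is the same number at every state.
    return c0 + u.min()

def solve_vi(obstacle, n_iter=500):
    # Value iteration for the stopping problem u = min{ obstacle, f + beta*P u }.
    u = np.zeros(N)
    for _ in range(n_iter):
        u = np.minimum(obstacle, f + beta * (P @ u))
    return u

# v_n = optimal cost of the stopping problem with stopping cost M v_{n-1}.
v = f.max() / (1 - beta) * np.ones(N)              # start above any cost
for _ in range(500):
    v_new = solve_vi(np.full(N, M(v)))
    done = np.max(np.abs(v_new - v)) < 1e-10
    v = v_new
    if done:
        break
```

At convergence v solves the discrete QVI fixed point v = min{ Mv, f + beta P v }.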

5 1.- Introduction / Signal
Constraint: Wait for a Signal
The dynamic programming (or HJB) equation takes the form
  Au - αu = λ[u - ψ]^+, with f = 0,
where the verification theorem is based on an explicit solution of the HJB equation and on the solution of a discrete-time problem.
Objective: generalize the above problem in two directions:
- replace the 1-d Wiener process with a Markov-Feller process,
- replace the Poisson process by a more general counting process.
The simplest model: {τ_n : n ≥ 1} are the jumps of a Poisson process, i.e., the times between two consecutive jumps {T_1 = τ_1, T_2 = τ_2 - τ_1, T_3 = τ_3 - τ_2, ...} form an IID sequence, exponentially distributed. However, we focus first on {T_n : n ≥ 1} being a sequence of IID random variables with common law π (not exponential in general, i.e., not memoryless).
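The Dupuis-Wang situation above can be probed by Monte Carlo. The sketch below uses illustrative, assumed parameters: a drifted Brownian state, Poisson signals with rate lam, f = 0 and terminal reward psi(x) = x, under the simple rule "stop at the first signal", for which the discounted expectation is explicit.

```python
import numpy as np

# Assumed data: drift b, volatility sigma, signal rate lam, discount alpha.
rng = np.random.default_rng(1)
b, sigma, lam, alpha, x0 = 0.2, 0.3, 1.0, 0.5, 1.0
n_paths = 200_000

# Rule "stop at the first signal": tau ~ Exp(lam) independent of the Brownian
# path, and x_tau = x0 + b*tau + sigma*W_tau with W_tau ~ N(0, tau) given tau.
tau = rng.exponential(1.0 / lam, size=n_paths)
x_tau = x0 + b * tau + sigma * np.sqrt(tau) * rng.standard_normal(n_paths)
J_first_signal = np.mean(np.exp(-alpha * tau) * x_tau)

# For this linear psi the expectation is explicit:
#   E[e^{-alpha tau} x_tau] = x0*lam/(lam+alpha) + b*lam/(lam+alpha)^2.
J_exact = x0 * lam / (lam + alpha) + b * lam / (lam + alpha) ** 2
```

The Monte Carlo estimate should match the closed form up to sampling error; optimizing over which signal to stop at is exactly the constrained problem discussed in the talk.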

6 1.- Introduction / Time Elapsed Since Last Signal
Signal as a Process
Let us introduce a homogeneous Markov process {y_t : t ≥ 0}, y_t ∈ [0,∞[, independent of {x_t : t ≥ 0}: the time elapsed since the last signal. Almost surely, the càdlàg paths t ↦ y_t are piecewise differentiable with ẏ_t = 1, with jumps only back to zero, and extended infinitesimal generator
  A_1 ϕ(y) = ∂_y ϕ(y) + λ(y)[ϕ(0) - ϕ(y)], y ≥ 0,   (3)
where the intensity λ(y) is a Borel measurable function. Thus, the signals are defined as functionals of {y_t : t ≥ 0}, namely,
  τ_0 = 0, τ_n = inf{ t > τ_{n-1} : y_t = 0 }, n ≥ 1.   (4)
The pair {(x_t, y_t) : t ≥ 0} is a homogeneous Markov process in continuous time, and a signal arrives at a random instant τ if and only if y_τ = 0 (and τ > 0).
Stopping with Constraint
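The signal times (4) can be sampled directly from the hazard rate of y_t. A minimal sketch, with the illustrative increasing intensity lam(y) = lam0 + lam1*y (an assumption: the longer the wait, the likelier the signal), which can be inverted in closed form:

```python
import numpy as np

# Between signals dy/dt = 1, and a gap T satisfies
#   P(T > t) = exp(-int_0^t lam(s) ds),   lam(y) = lam0 + lam1*y.
rng = np.random.default_rng(2)
lam0, lam1 = 0.5, 2.0

def sample_gap(rng):
    # Invert int_0^T lam(s) ds = lam0*T + lam1*T^2/2 = E, with E ~ Exp(1).
    e = rng.exponential()
    return (-lam0 + np.sqrt(lam0 ** 2 + 2.0 * lam1 * e)) / lam1

gaps = np.array([sample_gap(rng) for _ in range(100_000)])
taus = np.cumsum(gaps)          # signal times tau_1 < tau_2 < ... as in (4)
```

With lam constant this reduces to the Poisson case; here the gaps are IID but not exponential, matching the generalization the talk aims at.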

7 2.- Stopping Problem / Setting
Stopping: Two Optimal Costs
C_b = C_b(E × [0,∞[) = continuous & bounded functions on E × [0,∞[; {(x_t, y_t) : t ≥ 0} is a Markov-Feller process, i.e., for any ϕ(x, y) in C_b,
  (x, y, t) ↦ E_{x,y}{ ϕ(x_t, y_t) } is continuous,   (5)
and
  α > 0 and f, ψ ≥ 0, f, ψ ∈ C_b(E × [0,∞[).   (6)
The cost function is
  J_{x,y}(θ, ψ) = E_{x,y}{ ∫_0^θ e^{-αt} f(x_t, y_t) dt + e^{-αθ} ψ(x_θ, y_θ) },
which yields an optimal cost
  v(x, y) = inf{ J_{x,y}(θ, ψ) : θ > 0, y_θ = 0 },   (7)
i.e., θ is any admissible stopping time, and an auxiliary optimal cost is defined as
  v_0(x, y) = inf{ J_{x,y}(θ, ψ) : θ ≥ 0, y_θ = 0 },   (8)
which is a homogeneous Markov model.

8 2.- Stopping Problem / HJB
Stopping: Dynamic Programming - HJB Equations - Optimal Stopping Cost
If τ = inf{ t > 0 : y_t = 0 }, then
  u_0(x, 0) = min{ ψ(x, 0), E_{x,0}{ ∫_0^τ e^{-αt} f(x_t, y_t) dt + e^{-ατ} u_0(x_τ, y_τ) } }   (9)
and
  u(x, y) = E_{x,y}{ ∫_0^τ e^{-αt} f(x_t, y_t) dt + e^{-ατ} min{ψ, u}(x_τ, y_τ) }   (10)
are the corresponding HJB equations, and both problems are related by the equation
  u(x, y) = E_{x,y}{ ∫_0^τ e^{-αt} f(x_t, y_t) dt + e^{-ατ} u_0(x_τ, y_τ) }.   (11)
Note that y_τ = 0 and u_0(x, y) = u(x, y) for any y > 0.

9 2.- Stopping Problem / Solving
Solving Optimal Stopping Problems
Theorem. Assume (5) & (6). Then the VIs (9) & (10) each have a unique solution in C_b(E × [0,∞[), which are the optimal costs (7) & (8). Moreover, the first exit time from the continuation region [u < ψ] is optimal (a discrete stopping time):
  θ̂ = inf{ t > 0 : u(x_t, y_t) = ψ(x_t, y_t), y_t = 0 },
  θ̂_0 = inf{ t ≥ 0 : u_0(x_t, y_t) = ψ(x_t, y_t), y_t = 0 },   (12)
are optimal, namely, u(x, y) = J_{x,y}(θ̂, ψ) and u_0(x, y) = J_{x,y}(θ̂_0, ψ). Furthermore, relation (11) holds. Also u_0 = min{u, ψ}, which may not belong to D(A_{x,y}) ∩ C_b(E × [0,∞[). However, the optimal cost u given by (7) belongs to D(A_{x,y}) and, for any (x, y) in E × [0,∞[,
  (HJB) A_{x,y} u(x, y) + αu(x, y) + λ(x, y)[u(x, 0) - ψ(x, y)]^+ = f(x, y).   (13)
IC w/constraint: Setting, Assumptions
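Equation (9) at y = 0 can be sketched as a discrete fixed point over signal times. The reduction below rests on stated assumptions not in the talk: exponential signal gaps, a finite toy kernel Q for the state observed at consecutive signals, and a discount that factors out as gamma = lam/(lam+alpha) (e.g. when the state moves only at signal times), with running-cost term r(x) = f(x)/(lam+alpha).

```python
import numpy as np

# Constrained stopping value at signal times:
#   u0 = min{ psi, r + gamma * Q u0 },  a contraction with factor gamma < 1.
N = 40
rng = np.random.default_rng(3)
Q = rng.random((N, N))
Q /= Q.sum(axis=1, keepdims=True)        # toy signal-to-signal kernel
lam, alpha = 1.0, 0.2
gamma = lam / (lam + alpha)
f = 0.2 + 0.8 * rng.random(N)            # running cost (illustrative)
r = f / (lam + alpha)                    # discounted running cost per gap
psi = np.abs(np.linspace(-1.0, 1.0, N))  # stopping cost (illustrative)

u0 = np.zeros(N)
for _ in range(500):                     # fixed-point iteration
    u0 = np.minimum(psi, r + gamma * (Q @ u0))
```

The resulting u0 sits below the obstacle psi and satisfies the discrete analogue of (9); stopping is optimal exactly where u0 = psi, mirroring (12).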

10 3.- Impulse Control / Assumptions
Impulse Control Setting - Evolution
If x_t represents the state of the uncontrolled system, then a sequence ν = {(ϑ_i, ξ_i) : i ≥ 1} of stopping times {ϑ_1 ≤ ϑ_2 ≤ ...} and E-valued random variables {ξ_1, ξ_2, ...} is called an impulse control. The controlled system {x_t^ν : t ≥ 0} behaves like x_t for t within [ϑ_{i-1}, ϑ_i[, i ≥ 1, and x_{ϑ_i}^ν = ξ_i; in short, it instantaneously moves from x_{ϑ_i-}^ν to ξ_i. A cost-per-impulse
  c : E × Γ → [c_0, ∞[, c continuous and c_0 > 0,   (14)
with Γ a closed subset of E (= the set of admissible jumps / where to jump).
{(x_t, y_t) : t ≥ 0} is a time-homogeneous Markov-Feller process, with infinitesimal generator, x ∈ E, y ≥ 0,
  A_{x,y} ϕ(x, y) = A_x ϕ(x, y) + ∂_y ϕ(x, y) + λ(x, y)[ϕ(x, 0) - ϕ(x, y)],   (15)
i.e., now the intensity λ depends also on x.

11 3.- Impulse Control / Setting
Cost per Intervention / Impulse Costs
First: ν = {(ϑ_i, ξ_i) : i ≥ 1} is an admissible impulse control if y_{ϑ_i} = 0, i ≥ 1, and ϑ_1 > 0. If ϑ_1 = 0 is allowed then ν is zero-admissible.
1st decision: choose an F-stopping time ϑ_1 and an F_{ϑ_1}-measurable rv ξ_1 : Ω → Γ ⊂ E; cost up to ϑ_1:
  J_{x,y}(ν, ϑ_1) = E^ν_{x,y}{ ∫_{ϑ_0}^{ϑ_1} f(x_t, y_t) e^{-αt} dt + e^{-αϑ_1} c(x_{ϑ_1}, ξ_1) }, ϑ_0 = 0.
2nd decision: choose an F-stopping time ϑ_2 and an F_{ϑ_2}-measurable rv ξ_2 : Ω → Γ ⊂ E; cost up to ϑ_2:
  J_{x,y}(ν, ϑ_2) = J_{x,y}(ν, ϑ_1) + E^ν_{x,y}{ ∫_{ϑ_1}^{ϑ_2} f(x_t, y_t) e^{-αt} dt + e^{-αϑ_2} c(x_{ϑ_2}, ξ_2) }.
Iterating, the cost up to ϑ_{k+1} (ϑ_{k+1} < ∞ means intervention):
  J_{x,y}(ν, ϑ_{k+1}) = J_{x,y}(ν, ϑ_k) + E^ν_{x,y}{ ∫_{ϑ_k}^{ϑ_{k+1}} f(x_t, y_t) e^{-αt} dt + e^{-αϑ_{k+1}} c(x_{ϑ_{k+1}}, ξ_{k+1}) }.   (16)

12 3.- Impulse Control / Setting (cont. 1)
Optimal Impulse Costs
Total cost: the limit J_{x,y}(ν) = lim_k J_{x,y}(ν, ϑ_{k+1}). However, it is convenient to construct (in a copy-space) a probability P^ν_{x,y} such that
  J_{x,y}(ν) = E^ν_{x,y}{ ∫_0^∞ e^{-αt} f(x_t^ν, y_t^ν) dt + Σ_{i=1}^∞ e^{-αϑ_i} c(x_{ϑ_i}^{i-1,ν}, ξ_i) },   (17)
with V (or V_0) = the set of admissible (or zero-admissible) impulse controls, all relative to the initial condition (x_0, y_0) = (x, y). Optimal cost:
  v(x, y) = inf{ J_{x,y}(ν) : ν ∈ V }, (x, y) ∈ E × [0,∞[,   (18)
and also the time-homogeneous impulse control problem, with an optimal cost:
  v_0(x, y) = inf{ J_{x,y}(ν) : ν ∈ V_0 }, (x, y) ∈ E × [0,∞[,   (19)
which is actually needed only for y = 0.
Formal Analysis, Dynamic Programming

13 4.- Formal Analysis / Dynamic Programming - IC
Dynamic Programming - State Model - Time Homogeneous
Two models, with the constraint "wait to intervene until a signal arrives":
(a) v(x, y), translated as "intervene only when y = 0 and t > 0";
(b) (Markov/stationary) v_0(x, y), translated as "intervene only when y = 0".
(1) Time homogeneous? v(x, y) → v(x, y, s), v(x, y, 0) = v(x, y) and, for s > 0,
  J_{x,y,s}(ν) = E^ν_{x,y}{ ∫_0^∞ e^{-α(t-s)} f(x^ν_{s+t}) dt + Σ_i e^{-α(ϑ_i-s)} c(x^{i-1,ν}_{s+ϑ_i}, ξ_i) },
  v(x, y, s) = inf{ J_{x,y,s}(ν) : ν any admissible impulse control },
where now admissible means "intervene only when y = 0 and t ≥ s".
(2) Multiple impulses? They are excluded from the optimal decision if
  c(x, ξ_1) + c(ξ_1, ξ_2) ≥ c(x, ξ_2), ∀x ∈ E, ξ_1, ξ_2 ∈ Γ.   (20)
Both (20) and v(x, y) ≥ v_0(x, y), ∀x ∈ E, y ≥ 0, with "=" if y > 0, hold true. Moreover
  v_0(x, 0) = min{ v(x, 0), inf_{ξ∈Γ}{ v(ξ, 0) + c(x, ξ) } }, x ∈ E.   (21)

14 4.- Formal Analysis / Dynamic Programming - Weak DP Equation
Dynamic Programming - Weak DP/HJB Equation
The relations:
  u(x, y) = E_{x,y}{ ∫_0^{τ_1} e^{-αt} f(x_t) dt + e^{-ατ_1} u_0(x_{τ_1}, 0) }.   (22)
An equation with u alone:
  u(x, y) = E_{x,y}{ ∫_0^{τ_1} e^{-αt} f(x_t) dt + e^{-ατ_1} min{ u(x_{τ_1}, 0), inf_{ξ_1∈Γ}{ u(ξ_1, 0) + c(x_{τ_1}, ξ_1) } } },
or equivalently
  u(x, y) = ( R( min{ u(·, 0), (Mu(·, 0)) } ) )(x, y), x ∈ E, y ≥ 0.   (23)
Also
  u_0(x, 0) = min{ u(x, 0), (Mu(·, 0))(x) } =
            = min{ (Mu(·, 0))(x), E_{x,0}{ ∫_0^{τ_1} e^{-αt} f(x_t) dt + e^{-ατ_1} u_0(x_{τ_1}, 0) } }.
Solving HJB Equation

15 5.- Solving HJB Equation / Existence and Uniqueness
HJB Equation - Existence and Uniqueness
Either Au^0 + αu^0 = f or
  u^0(x) = E_x{ ∫_0^∞ e^{-αt} f(x_t) dt }, x ∈ E,   (24)
and
  v, v_0 ∈ C(u^0) = { ϕ ∈ C(E) : 0 ≤ ϕ(x) ≤ u^0(x), ∀x ∈ E }.   (25)
HJB equation with u_0(x) = u_0(x, 0): either u_0 = min{ Mu_0, Ru_0 } or
  u_0(x) = min{ inf_{ξ∈Γ}{ u_0(ξ) + c(x, ξ) }, E_{x,0}{ ∫_0^{τ_1} e^{-αt} f(x_t) dt + e^{-ατ_1} u_0(x_{τ_1}) } }.   (26)
Consider the scheme u_0^n = min{ Mu_0^{n-1}, Ru_0^n }, with u_0^0 = u^0, n ≥ 1. This equation has a unique solution in C(E), and u_0^n = v^n(·, 0) with
  v^n(x, y) = inf_θ E_{x,y}{ ∫_0^θ e^{-αt} f(x_t) dt + e^{-αθ} Mv^{n-1}(x_θ, 0) },   (27)
where the minimization is over all zero-admissible stopping times θ, i.e., θ = τ_η for some discrete stopping time η.

16 5.- Solving HJB Equation / Existence and Uniqueness (Thm 1)
Existence and Uniqueness (Thm 1: u_0)
Theorem. Under the previous assumptions, the decreasing sequence {u_0^n : n ≥ 1} converges to u_0, and there are two constants C > 0 and ρ in ]0, 1[ such that
  |u_0^n(x) - u_0(x)| ≤ Cρ^n, ∀x ∈ E, n ≥ 1.   (28)
Moreover, the HJB equation (26) has one and only one solution u_0 within the interval C(u^0), and the representation
  u_0(x) = inf_θ E_{x,0}{ ∫_0^θ e^{-αt} f(x_t) dt + e^{-αθ} Mu_0(x_θ) }   (29)
holds true, where the minimization is over all zero-admissible stopping times θ, i.e., θ = τ_η for some discrete stopping time η.
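The geometric rate in the theorem can be observed on a toy instance of the scheme u_0^n = min{ Mu_0^{n-1}, Ru_0^n }. All data below (cyclic chain, costs, discount) are illustrative assumptions, with R realized by value iteration against the obstacle Mu_0^{n-1} = c0 + min u_0^{n-1}.

```python
import numpy as np

# Toy check of |u^n - u| <= C*rho^n on a finite cyclic chain.
N = 30
P = np.roll(np.eye(N), 1, axis=1)        # deterministic cyclic motion
beta = 0.6                               # discrete discount factor
f = 0.1 + np.sin(np.linspace(0.0, 2.0 * np.pi, N)) ** 2
c0 = 0.4                                 # fixed intervention cost

def R(obstacle, n_iter=200):
    # Value iteration for the stopping problem with the given obstacle.
    u = np.zeros(N)
    for _ in range(n_iter):
        u = np.minimum(obstacle, f + beta * (P @ u))
    return u

u = np.linalg.solve(np.eye(N) - beta * P, f)   # u^0: cost of never intervening
iterates = [u]
for _ in range(40):
    u = R(np.full(N, c0 + u.min()))            # obstacle M u_{n-1} = c0 + min u_{n-1}
    iterates.append(u)

u_inf = iterates[-1]
errors = [np.max(np.abs(w - u_inf)) for w in iterates[:-1]]
```

The sequence of iterates is monotone decreasing and the sup-norm errors decay geometrically, consistent with (28).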

17 5.- Solving HJB Equation / Existence and Uniqueness (Thm 2)
Existence and Uniqueness (Thm 2: u)
Theorem. Under the previous assumptions, u^n → u, and there are C > 0 and ρ in ]0, 1[ such that
  |u^n(x) - u(x)| ≤ Cρ^n, ∀x ∈ E, n ≥ 1.   (30)
Moreover, the HJB equation (23) has one and only one solution u within the interval C(u^0), and
  u(x, y) = inf_θ E_{x,y}{ ∫_0^θ e^{-αt} f(x_t) dt + e^{-αθ} Mu(x_θ) },   (31)
with θ admissible, i.e., θ = τ_η for some discrete stopping time η ≥ 1. Furthermore, u^n and u belong to D(A_{x,y}) ∩ C_b(E × [0,∞[) and
  A_{x,y} u(x, y) + αu(x, y) + λ(x, y)[ u(x, 0) - Mu(·, 0)(x) ]^+ = f(x),   (32)
  A_{x,y} u^n(x, y) + αu^n(x, y) + λ(x, y)[ u^n(x, 0) - Mu^{n-1}(·, 0)(x) ]^+ = f(x).   (33)

18 5.- Solving HJB Equation / Existence and Uniqueness (Thms 3 & 4)
Solution: Optimal Cost and Feedback (Thms 3 & 4)
Theorem. Under the previous assumptions, the unique solution of the HJB equation (23) is the optimal cost (18), i.e.,
  u(x, y) = inf{ J_{x,y}(ν) : ν any admissible impulse control },   (34)
for every (x, y) in E × [0,∞[.
Theorem. Under the previous assumptions, the first exit time of the continuation region [u < Mu] provides an optimal admissible impulse control.
Extensions

19 6.- Extensions / Other Impulse Control Problems
Extensions - Other Impulse Control Problems
Instead of a fixed Γ, one may use a variable Γ(x), Γ : E → 2^E, with Γ(x) closed for every x ∈ E. The impulse intervention operator
  Mϕ(x) = inf{ ϕ(ξ) + c(x, ξ) : ξ ∈ Γ(x) }, x ∈ E,
should have the properties
  (a) M maps C_b(E) continuously into itself,
  (b) there exists a Borel measurable minimizer for M,   (35)
where a minimizer means a Borel function ξ̂ : E × C_b(E) → E satisfying
  ξ̂(x, ϕ) ∈ Γ(x), Mϕ(x) = ϕ(ξ̂(x, ϕ)) + c(x, ξ̂(x, ϕ)), ∀x, ϕ.
Sometimes the space C_b(E) is replaced by another suitable space.
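On a finite state space the operator M and its minimizer can be written down directly. A small sketch with illustrative stand-ins for Γ, c and ϕ (none of these are from the talk):

```python
import numpy as np

# Intervention operator with a variable target set Gamma(x):
#   M phi(x) = inf{ phi(xi) + c(x, xi) : xi in Gamma(x) },
# together with a minimizer xi_hat(x, phi).
N = 20
states = np.arange(N)

def Gamma(x):
    # Admissible jump targets from x: any state strictly below x ({0} from 0).
    return states[states < x] if x > 0 else states[:1]

def cost(x, xi):
    return 0.5 + 0.1 * abs(int(x) - int(xi))   # c(x, xi) >= c0 = 0.5 > 0

def M(phi):
    # Return (M phi, xi_hat) evaluated at every state.
    Mphi = np.empty(N)
    xi_hat = np.empty(N, dtype=int)
    for x in states:
        targets = Gamma(x)
        vals = np.array([phi[xi] + cost(x, xi) for xi in targets])
        j = int(vals.argmin())
        Mphi[x], xi_hat[x] = vals[j], targets[j]
    return Mphi, xi_hat

phi = (states - 7) ** 2 / 10.0                 # an illustrative cost profile
Mphi, xi_hat = M(phi)
```

Here the minimizer is trivially measurable (argmin over a finite set); property (35)(b) is exactly what replaces this argmin in the general Borel setting.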

20 6.- Extensions / Unbounded & Other Signals
Unbounded & Other Signals
- Initially E compact, but also E = R^d or, in general, E locally compact.
- f with polynomial growth, in C_p(R^d), C_p(H), H = Hilbert space.
- Signals given by a sequence {T_1, T_2, ...} of independent identically distributed variables (IID conditionally on {x_t : t ≥ 0}), or by pure-jump Markov processes, semi-Markov processes, piecewise-deterministic Markov processes, and diffusion processes with jumps. Essentially, it suffices to add a new variable to the initial process {x_t : t ≥ 0} to reduce to the current situation.
- Instead of assuming y_t in [0,∞[, we can consider the case y_t in [0, ȳ[ for some finite ȳ > 0, and therefore π([0, ȳ[) = 1.
- In the case of IID variables independent of {x_t : t ≥ 0}, we can consider a general distribution π, without a density.
Hybrid Control, Stopping, Impulse, Switching

21 7.- Hybrid Control / Abstract Model
Hybrid Abstract Model
- a lot of references, many models, ... a book in preparation with Héctor (Jasso-Fuentes) & Maurice (Robin)
- time t in [0,∞[, state variable (x, n) in S ⊂ R^d × R^m
- trajectories piecewise continuous in x, piecewise constant in n
- the continuous-type component x may include some discrete evolution (i.e., taking discrete values) needed to trigger a discrete transition (or event), produced when x hits the set-interface D_n = { x : (x, n) ∈ D }
- n is sometimes called mode/regime; example of an evolution equation:
  (ẋ(t), ṅ(t)) = ( g(x(t), n(t_i), v(t)), 0 ), for t ∈ [t_i, t_{i+1}[,
  (x(t_i), n(t_i)) = ( X(x(t_i-), n(t_i-), k_i), N(x(t_i-), n(t_i-), k_i) ), i = 0, 1, ...,
  t_{i+1} = T(x(·), n(t_i), t_i, D) := inf{ t ≥ t_i : (x(t), n(t_i)) ∈ D }, if t_i < ∞,
with initial condition (x(0), n(0)) = (x_0, n_0) ∈ S, t_0 = 0.
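The evolution rule above can be sketched with a minimal deterministic instance. All data are assumed for illustration: a scalar x, two modes n in {0, 1}, a flow g pushing x up in mode 0 and down in mode 1, interfaces x = 1 (mode 0) and x = 0 (mode 1), and a reset map (X, N) that jumps x back and flips the mode.

```python
# Euler simulation of a two-mode hybrid system with interface-triggered resets.
dt = 1e-3

def g(x, n):
    return 1.0 if n == 0 else -1.0           # flow of the continuous phase

def hits_interface(x, n):
    return (n == 0 and x >= 1.0) or (n == 1 and x <= 0.0)

def reset(x, n):
    return (0.9 if n == 0 else 0.1, 1 - n)   # (X, N): land strictly off D, flip mode

x, n, t = 0.5, 0, 0.0
switch_times = []
while t < 5.0:
    x += g(x, n) * dt                        # continuous evolution between events
    t += dt
    if hits_interface(x, n):                 # discrete transition at the interface
        x, n = reset(x, n)
        switch_times.append(round(t, 3))
```

The reset landing strictly inside the domain (0.9 and 0.1, away from the interface) is exactly the kind of uniform gap condition assumed in the automaton case on the next slide.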

22 7.- Hybrid Control / Automaton Case
Automaton Case
Theorem. Under the assumptions
- (X, N) : D × K → S \ D, uniformly continuous,
- g : S × V → R^d, continuous,
- |g(x, n, v)| ≤ C(1 + |x|), ∀x, n, v,
- |g(x, n, v) - g(x', n, v)| ≤ M|x - x'|, ∀x, x', n, v,
- inf{ |ξ - X(x, n, k)| + |η - N(x, n, k)| : k ∈ K, (x, n) ∈ D, (ξ, η) ∈ D } ≥ c_0 > 0,
the trajectories are well defined.
Stochastic dynamics: SDEs, SPDEs, general Markov-Feller processes, ...

23 7.- Hybrid Control / General Case
General Case
The data is as follows:
- state-space S ⊂ R^d × R^m, open or closed,
- minimal set-interface D ⊂ S, closed,
- maximal set-interface D̄ ⊂ S, closed, D ⊂ D̄,
- control-spaces V ⊂ R^p and K ⊂ R^q, compact,
- set-constraints R_V ⊂ S × V and R_K ⊂ S × K, closed.
The dynamics follows the rule:
(1) a mandatory jump or switching is applied on D,
(2) a continuous evolution takes place on S \ D̄,
(3) the controller chooses either (1) or (2) on D̄ \ D.

24 7.- Hybrid Control / Stopping/Impulse/Switching
Stopping/Impulse/Switching with Constraint
Particular case: (x, y) ∈ S = E × [0,∞[, D = ∅, D̄ = E × {0}, i.e., the control is allowed only when y = 0 (the signal has arrived). In our optimal stopping time problem (with constraint) we have two costs, u and u_0, related by
  u(x, y) = E_{x,y}{ ∫_0^{τ_1} e^{-αt} f(x_t, y_t) dt + e^{-ατ_1} u_0(x_{τ_1}, 0) }, (x, y) ∈ E × [0,∞[.   (36)
The cost u_0 corresponds to the hybrid problem "control only when y = 0", but the cost that we are interested in is u.
Current works (besides those with M. Robin):
- in hybrid control (with Héctor Jasso-Fuentes): "Relaxation and linear programs in hybrid control problems", ...
- (with Héctor Jasso-Fuentes and Tomás Prieto-Rumeau): "Discrete-Time Hybrid Control in Borel Spaces", ...
T H A N K S!

ON LARGE DEVIATIONS OF MARKOV PROCESSES WITH DISCONTINUOUS STATISTICS 1 The Annals of Applied Probability 1998, Vol. 8, No. 1, 45 66 ON LARGE DEVIATIONS OF MARKOV PROCESSES WITH DISCONTINUOUS STATISTICS 1 By Murat Alanyali 2 and Bruce Hajek Bell Laboratories and University

More information

CHAPTER 3 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS. p. 1/73

CHAPTER 3 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS. p. 1/73 CHAPTER 3 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS p. 1/73 THE MAXIMUM PRINCIPLE: MIXED INEQUALITY CONSTRAINTS Mixed Inequality Constraints: Inequality constraints involving control and possibly

More information

Solutions For Stochastic Process Final Exam

Solutions For Stochastic Process Final Exam Solutions For Stochastic Process Final Exam (a) λ BMW = 20 0% = 2 X BMW Poisson(2) Let N t be the number of BMWs which have passes during [0, t] Then the probability in question is P (N ) = P (N = 0) =

More information

Bridging the Gap between Center and Tail for Multiscale Processes

Bridging the Gap between Center and Tail for Multiscale Processes Bridging the Gap between Center and Tail for Multiscale Processes Matthew R. Morse Department of Mathematics and Statistics Boston University BU-Keio 2016, August 16 Matthew R. Morse (BU) Moderate Deviations

More information

Singular Perturbations of Stochastic Control Problems with Unbounded Fast Variables

Singular Perturbations of Stochastic Control Problems with Unbounded Fast Variables Singular Perturbations of Stochastic Control Problems with Unbounded Fast Variables Joao Meireles joint work with Martino Bardi and Guy Barles University of Padua, Italy Workshop "New Perspectives in Optimal

More information

On Robust Arm-Acquiring Bandit Problems

On Robust Arm-Acquiring Bandit Problems On Robust Arm-Acquiring Bandit Problems Shiqing Yu Faculty Mentor: Xiang Yu July 20, 2014 Abstract In the classical multi-armed bandit problem, at each stage, the player has to choose one from N given

More information

Cross-Selling in a Call Center with a Heterogeneous Customer Population. Technical Appendix

Cross-Selling in a Call Center with a Heterogeneous Customer Population. Technical Appendix Cross-Selling in a Call Center with a Heterogeneous Customer opulation Technical Appendix Itay Gurvich Mor Armony Costis Maglaras This technical appendix is dedicated to the completion of the proof of

More information

University Of Calgary Department of Mathematics and Statistics

University Of Calgary Department of Mathematics and Statistics University Of Calgary Department of Mathematics and Statistics Hawkes Seminar May 23, 2018 1 / 46 Some Problems in Insurance and Reinsurance Mohamed Badaoui Department of Electrical Engineering National

More information

On continuous time contract theory

On continuous time contract theory Ecole Polytechnique, France Journée de rentrée du CMAP, 3 octobre, 218 Outline 1 2 Semimartingale measures on the canonical space Random horizon 2nd order backward SDEs (Static) Principal-Agent Problem

More information

Applied Analysis (APPM 5440): Final exam 1:30pm 4:00pm, Dec. 14, Closed books.

Applied Analysis (APPM 5440): Final exam 1:30pm 4:00pm, Dec. 14, Closed books. Applied Analysis APPM 44: Final exam 1:3pm 4:pm, Dec. 14, 29. Closed books. Problem 1: 2p Set I = [, 1]. Prove that there is a continuous function u on I such that 1 ux 1 x sin ut 2 dt = cosx, x I. Define

More information

SUPPLEMENT TO CONTROLLED EQUILIBRIUM SELECTION IN STOCHASTICALLY PERTURBED DYNAMICS

SUPPLEMENT TO CONTROLLED EQUILIBRIUM SELECTION IN STOCHASTICALLY PERTURBED DYNAMICS SUPPLEMENT TO CONTROLLED EQUILIBRIUM SELECTION IN STOCHASTICALLY PERTURBED DYNAMICS By Ari Arapostathis, Anup Biswas, and Vivek S. Borkar The University of Texas at Austin, Indian Institute of Science

More information

Solution of Stochastic Optimal Control Problems and Financial Applications

Solution of Stochastic Optimal Control Problems and Financial Applications Journal of Mathematical Extension Vol. 11, No. 4, (2017), 27-44 ISSN: 1735-8299 URL: http://www.ijmex.com Solution of Stochastic Optimal Control Problems and Financial Applications 2 Mat B. Kafash 1 Faculty

More information

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij Weak convergence and Brownian Motion (telegram style notes) P.J.C. Spreij this version: December 8, 2006 1 The space C[0, ) In this section we summarize some facts concerning the space C[0, ) of real

More information

Obliquely Reflected Brownian motions (ORBMs) in Non-Smooth Domains

Obliquely Reflected Brownian motions (ORBMs) in Non-Smooth Domains Obliquely Reflected Brownian motions (ORBMs) in Non-Smooth Domains Kavita Ramanan, Brown University Frontier Probability Days, U. of Arizona, May 2014 Why Study Obliquely Reflected Diffusions? Applications

More information

On the convergence of sequences of random variables: A primer

On the convergence of sequences of random variables: A primer BCAM May 2012 1 On the convergence of sequences of random variables: A primer Armand M. Makowski ECE & ISR/HyNet University of Maryland at College Park armand@isr.umd.edu BCAM May 2012 2 A sequence a :

More information

Obstacle problems for nonlocal operators

Obstacle problems for nonlocal operators Obstacle problems for nonlocal operators Camelia Pop School of Mathematics, University of Minnesota Fractional PDEs: Theory, Algorithms and Applications ICERM June 19, 2018 Outline Motivation Optimal regularity

More information

Fractional Laplacian

Fractional Laplacian Fractional Laplacian Grzegorz Karch 5ème Ecole de printemps EDP Non-linéaire Mathématiques et Interactions: modèles non locaux et applications Ecole Supérieure de Technologie d Essaouira, du 27 au 30 Avril

More information

arxiv: v2 [math.pr] 4 Feb 2009

arxiv: v2 [math.pr] 4 Feb 2009 Optimal detection of homogeneous segment of observations in stochastic sequence arxiv:0812.3632v2 [math.pr] 4 Feb 2009 Abstract Wojciech Sarnowski a, Krzysztof Szajowski b,a a Wroc law University of Technology,

More information

GENERALIZED RAY KNIGHT THEORY AND LIMIT THEOREMS FOR SELF-INTERACTING RANDOM WALKS ON Z 1. Hungarian Academy of Sciences

GENERALIZED RAY KNIGHT THEORY AND LIMIT THEOREMS FOR SELF-INTERACTING RANDOM WALKS ON Z 1. Hungarian Academy of Sciences The Annals of Probability 1996, Vol. 24, No. 3, 1324 1367 GENERALIZED RAY KNIGHT THEORY AND LIMIT THEOREMS FOR SELF-INTERACTING RANDOM WALKS ON Z 1 By Bálint Tóth Hungarian Academy of Sciences We consider

More information

Self-similar Markov processes

Self-similar Markov processes Self-similar Markov processes Andreas E. Kyprianou 1 1 Unversity of Bath Which are our favourite stochastic processes? Markov chains Diffusions Brownian motion Cts-time Markov processes with jumps Lévy

More information

Band Control of Mutual Proportional Reinsurance

Band Control of Mutual Proportional Reinsurance Band Control of Mutual Proportional Reinsurance arxiv:1112.4458v1 [math.oc] 19 Dec 2011 John Liu College of Business, City University of Hong Kong, Hong Kong Michael Taksar Department of Mathematics, University

More information

Average-cost temporal difference learning and adaptive control variates

Average-cost temporal difference learning and adaptive control variates Average-cost temporal difference learning and adaptive control variates Sean Meyn Department of ECE and the Coordinated Science Laboratory Joint work with S. Mannor, McGill V. Tadic, Sheffield S. Henderson,

More information

The strictly 1/2-stable example

The strictly 1/2-stable example The strictly 1/2-stable example 1 Direct approach: building a Lévy pure jump process on R Bert Fristedt provided key mathematical facts for this example. A pure jump Lévy process X is a Lévy process such

More information

Approximation of BSDEs using least-squares regression and Malliavin weights

Approximation of BSDEs using least-squares regression and Malliavin weights Approximation of BSDEs using least-squares regression and Malliavin weights Plamen Turkedjiev (turkedji@math.hu-berlin.de) 3rd July, 2012 Joint work with Prof. Emmanuel Gobet (E cole Polytechnique) Plamen

More information

Introduction to Optimal Control Theory and Hamilton-Jacobi equations. Seung Yeal Ha Department of Mathematical Sciences Seoul National University

Introduction to Optimal Control Theory and Hamilton-Jacobi equations. Seung Yeal Ha Department of Mathematical Sciences Seoul National University Introduction to Optimal Control Theory and Hamilton-Jacobi equations Seung Yeal Ha Department of Mathematical Sciences Seoul National University 1 A priori message from SYHA The main purpose of these series

More information

Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term

Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term 1 Strong uniqueness for stochastic evolution equations with possibly unbounded measurable drift term Enrico Priola Torino (Italy) Joint work with G. Da Prato, F. Flandoli and M. Röckner Stochastic Processes

More information

Statistical test for some multistable processes

Statistical test for some multistable processes Statistical test for some multistable processes Ronan Le Guével Joint work in progress with A. Philippe Journées MAS 2014 1 Multistable processes First definition : Ferguson-Klass-LePage series Properties

More information

Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS

Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS 63 2.1 Introduction In this chapter we describe the analytical tools used in this thesis. They are Markov Decision Processes(MDP), Markov Renewal process

More information

Joint work with Nguyen Hoang (Univ. Concepción, Chile) Padova, Italy, May 2018

Joint work with Nguyen Hoang (Univ. Concepción, Chile) Padova, Italy, May 2018 EXTENDED EULER-LAGRANGE AND HAMILTONIAN CONDITIONS IN OPTIMAL CONTROL OF SWEEPING PROCESSES WITH CONTROLLED MOVING SETS BORIS MORDUKHOVICH Wayne State University Talk given at the conference Optimization,

More information

Polynomial Preserving Jump-Diffusions on the Unit Interval

Polynomial Preserving Jump-Diffusions on the Unit Interval Polynomial Preserving Jump-Diffusions on the Unit Interval Martin Larsson Department of Mathematics, ETH Zürich joint work with Christa Cuchiero and Sara Svaluto-Ferro 7th General AMaMeF and Swissquote

More information

A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1

A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1 Chapter 3 A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1 Abstract We establish a change of variable

More information

Lecture 12: Detailed balance and Eigenfunction methods

Lecture 12: Detailed balance and Eigenfunction methods Lecture 12: Detailed balance and Eigenfunction methods Readings Recommended: Pavliotis [2014] 4.5-4.7 (eigenfunction methods and reversibility), 4.2-4.4 (explicit examples of eigenfunction methods) Gardiner

More information

Some aspects of the mean-field stochastic target problem

Some aspects of the mean-field stochastic target problem Some aspects of the mean-field stochastic target problem Quenched Mass ransport of Particles owards a arget Boualem Djehiche KH Royal Institute of echnology, Stockholm (Joint work with B. Bouchard and

More information

Numerical schemes of resolution of stochastic optimal control HJB equation

Numerical schemes of resolution of stochastic optimal control HJB equation Numerical schemes of resolution of stochastic optimal control HJB equation Elisabeth Ottenwaelter Journée des doctorants 7 mars 2007 Équipe COMMANDS Elisabeth Ottenwaelter (INRIA et CMAP) Numerical schemes

More information

BRANCHING PROCESSES 1. GALTON-WATSON PROCESSES

BRANCHING PROCESSES 1. GALTON-WATSON PROCESSES BRANCHING PROCESSES 1. GALTON-WATSON PROCESSES Galton-Watson processes were introduced by Francis Galton in 1889 as a simple mathematical model for the propagation of family names. They were reinvented

More information

Impulse Control of Stochastic Navier-Stokes Equations

Impulse Control of Stochastic Navier-Stokes Equations Wayne State University Mathematics Faculty Research Publications Mathematics 1-1-23 Impulse Control of Stochastic Navier-Stokes Equations J. L. Menaldi Wayne State University, menaldi@wayne.edu S. S. Sritharan

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

Partial Differential Equations

Partial Differential Equations M3M3 Partial Differential Equations Solutions to problem sheet 3/4 1* (i) Show that the second order linear differential operators L and M, defined in some domain Ω R n, and given by Mφ = Lφ = j=1 j=1

More information

Lecture Characterization of Infinitely Divisible Distributions

Lecture Characterization of Infinitely Divisible Distributions Lecture 10 1 Characterization of Infinitely Divisible Distributions We have shown that a distribution µ is infinitely divisible if and only if it is the weak limit of S n := X n,1 + + X n,n for a uniformly

More information

Jukka Lempa The Optimal Stopping Problem of Dupuis and Wang: A Generalization. Aboa Centre for Economics

Jukka Lempa The Optimal Stopping Problem of Dupuis and Wang: A Generalization. Aboa Centre for Economics Jukka Lempa The Optimal Stopping Problem of Dupuis and Wang: A Generalization Aboa Centre for Economics Discussion Paper No. 36 Turku 28 Copyright Author(s) ISSN 1796-3133 Turun kauppakorkeakoulun monistamo

More information

A Barrier Version of the Russian Option

A Barrier Version of the Russian Option A Barrier Version of the Russian Option L. A. Shepp, A. N. Shiryaev, A. Sulem Rutgers University; shepp@stat.rutgers.edu Steklov Mathematical Institute; shiryaev@mi.ras.ru INRIA- Rocquencourt; agnes.sulem@inria.fr

More information

Part I Stochastic variables and Markov chains

Part I Stochastic variables and Markov chains Part I Stochastic variables and Markov chains Random variables describe the behaviour of a phenomenon independent of any specific sample space Distribution function (cdf, cumulative distribution function)

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

Exponential martingales: uniform integrability results and applications to point processes

Exponential martingales: uniform integrability results and applications to point processes Exponential martingales: uniform integrability results and applications to point processes Alexander Sokol Department of Mathematical Sciences, University of Copenhagen 26 September, 2012 1 / 39 Agenda

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

Numerical Solutions to Partial Differential Equations

Numerical Solutions to Partial Differential Equations Numerical Solutions to Partial Differential Equations Zhiping Li LMAM and School of Mathematical Sciences Peking University The Residual and Error of Finite Element Solutions Mixed BVP of Poisson Equation

More information

Markov processes and queueing networks

Markov processes and queueing networks Inria September 22, 2015 Outline Poisson processes Markov jump processes Some queueing networks The Poisson distribution (Siméon-Denis Poisson, 1781-1840) { } e λ λ n n! As prevalent as Gaussian distribution

More information

On the Multi-Dimensional Controller and Stopper Games

On the Multi-Dimensional Controller and Stopper Games On the Multi-Dimensional Controller and Stopper Games Joint work with Yu-Jui Huang University of Michigan, Ann Arbor June 7, 2012 Outline Introduction 1 Introduction 2 3 4 5 Consider a zero-sum controller-and-stopper

More information

IEOR 6711, HMWK 5, Professor Sigman

IEOR 6711, HMWK 5, Professor Sigman IEOR 6711, HMWK 5, Professor Sigman 1. Semi-Markov processes: Consider an irreducible positive recurrent discrete-time Markov chain {X n } with transition matrix P (P i,j ), i, j S, and finite state space.

More information

1. Stochastic Process

1. Stochastic Process HETERGENEITY IN QUANTITATIVE MACROECONOMICS @ TSE OCTOBER 17, 216 STOCHASTIC CALCULUS BASICS SANG YOON (TIM) LEE Very simple notes (need to add references). It is NOT meant to be a substitute for a real

More information

Numerical Methods for Optimal Control Problems. Part I: Hamilton-Jacobi-Bellman Equations and Pontryagin Minimum Principle

Numerical Methods for Optimal Control Problems. Part I: Hamilton-Jacobi-Bellman Equations and Pontryagin Minimum Principle Numerical Methods for Optimal Control Problems. Part I: Hamilton-Jacobi-Bellman Equations and Pontryagin Minimum Principle Ph.D. course in OPTIMAL CONTROL Emiliano Cristiani (IAC CNR) e.cristiani@iac.cnr.it

More information

On dynamic stochastic control with control-dependent information

On dynamic stochastic control with control-dependent information On dynamic stochastic control with control-dependent information 12 (Warwick) Liverpool 2 July, 2015 1 {see arxiv:1503.02375} 2 Joint work with Matija Vidmar (Ljubljana) Outline of talk Introduction Some

More information

Asymptotic properties of maximum likelihood estimator for the growth rate for a jump-type CIR process

Asymptotic properties of maximum likelihood estimator for the growth rate for a jump-type CIR process Asymptotic properties of maximum likelihood estimator for the growth rate for a jump-type CIR process Mátyás Barczy University of Debrecen, Hungary The 3 rd Workshop on Branching Processes and Related

More information