On the Duality of Optimal Control Problems with Stochastic Differential Equations
Technical report IDE0831, October 17, 2008

On the Duality of Optimal Control Problems with Stochastic Differential Equations

Master's Thesis in Financial Mathematics
Tony Huschto
School of Information Science, Computer and Electrical Engineering, Halmstad University
On the Duality of Optimal Control Problems with Stochastic Differential Equations

Tony Huschto

Halmstad University Project Report IDE0831
Master's Thesis in Financial Mathematics, 15 ECTS credits
Supervisor: Prof. Dr. S. Pickenhain
Examiner: Prof. L.A. Bordag
External referee: Prof. V.N. Roubtsov

October 17, 2008

Department of Mathematics, Physics and Electrical Engineering
School of Information Science, Computer and Electrical Engineering
Halmstad University
Preface

This thesis arose in the course of the Master's Programme in Financial Mathematics at Halmstad University. It links lectures in Mathematical Methods of Portfolio Optimization with stochastic differential equations and duality, and should appeal to readers interested in those topics. As my field of study in Germany is primarily optimisation, this thesis is more closely connected to that topic than to financial mathematics proper, but it certainly opens up great possibilities in this area of research.

In connection with this work I would like to express my gratitude to my supervisor Prof. Dr. Sabine Pickenhain for her support and all those enriching discussions, to Prof. Ljudmila A. Bordag for giving me the opportunity to participate in this programme, and to the other tutors for their help along the way. I would also like to thank my family for their encouragement, my friends and the wonderful people I have met in Sweden, and especially my girlfriend Claudia for her love.
Abstract

The main achievement of this work is the development of a duality theory for optimal control problems with stochastic differential equations. Starting from the Hamilton-Jacobi-Bellman equation, we establish a dual problem to a given stochastic control problem and also generalise the assembled theory.
Contents

Introduction
1 An introduction to duality
2 Stochastic basics
  2.1 Stochastic processes and Brownian motion
  2.2 Itô integral and the Itô formula
  2.3 Stochastic differential equations and diffusions
3 The duality of the stochastic control problem
  3.1 The problem
  3.2 The Bellman principle
  3.3 The Hamilton-Jacobi-Bellman equation
  3.4 The dual problem
  3.5 A generalisation of the condition (3.17)
4 Economic examples
Conclusion and outlook
Appendix A: The royal road of Carathéodory
Appendix B: The Dirichlet-Poisson problem
Frequently used notation and symbols
Bibliography
Introduction

In this thesis we want to develop the connection between stochastics, optimal control, and duality. Based on the Hamilton-Jacobi-Bellman approach, we try to construct a dual problem to a given stochastic control problem
\[ J(X_t, u_t) = \mathbb{E}^x\Big[\int_0^\tau r(s, X_s, u_s)\, ds + g(\tau, X_\tau)\, \mathbb{1}_{\{\tau<\infty\}}\Big] \to \sup!, \tag{0.1} \]
with respect to all controls $u \in U$, where $U$ is the control space, $r$ is a profit rate function, and $g$ is a bequest function. The $n$-dimensional stochastic process $X_t$ is given by the stochastic differential equation
\[ dX_t = b(t, X_t, u_t)\, dt + \sigma(t, X_t, u_t)\, dB_t \tag{0.2} \]
starting at $X_0 = x$, where $B_t$ is an $m$-dimensional Brownian motion. Another aim of this thesis is, after finding a theory for constructing a dual problem to the given one, to weaken the assumptions on the dual variable in order to make our results more general. The starting points of this theory can be found in [Ø] and [KK].

In the first chapter of this work a short introduction to duality is given; it follows closely along [P2]. After that the needed theory in stochastics is described, following [Ø], [F], [J], [N], and [E2]. With this knowledge we finally come to the stochastic control theory, which is discussed in chapter three. We state the problem, give an approach using the Bellman principle, and examine the Hamilton-Jacobi-Bellman equation; these fundamentals of the development of the duality theory can be found in [Ø] and [KK]. In the following section we describe the construction of the dual problem in two different cases: first when the process $X_t$ remains for all times $t$ within a domain $G \subseteq \mathbb{R}_+ \times \mathbb{R}^n$, and then for the case of a bounded domain $G$ and therefore a time at which the process exits from this domain. Before we come to some economic examples, like the linear stochastic regulator problem and finally one problematic case, we generalise the theory as intended and extend it to less restricted dual variable functions.
In the end a short appendix on the royal road of Carathéodory and the Dirichlet-Poisson problem is given.
1 An introduction to duality

The concept of duality appears in many parts of mathematics, like group theory, optimisation, or the calculus of variations. This chapter gives a short outline of this idea.

Definition 1.1 Let $f$ and $g$ be real functionals, that is $f: X \to \mathbb{R}^1$, $g: Y \to \mathbb{R}^1$, where $X$ and $Y$ are function spaces. Then
\[ f(x) \to \inf!, \text{ with respect to all } x \in X, \tag{P} \]
is called the primal problem to
\[ g(y) \to \sup!, \text{ w.r.t. all } y \in Y, \tag{D} \]
if
\[ \inf_{x \in X} f(x) \ge \sup_{y \in Y} g(y). \tag{1.1} \]
This is the weak duality relation, and (D) is called a dual problem to (P). Additionally, we have the following conventions: if $X = \emptyset$, then $\inf f(x) = +\infty$; and if $Y = \emptyset$, then $\sup g(y) = -\infty$.

Definition 1.2 If equality holds in (1.1), we obtain strong duality.

In conclusion we get that, if there exist $\hat{x} \in X$, $\hat{y} \in Y$ with $f(\hat{x}) = g(\hat{y})$, then $\hat{x}$ is a global optimal solution of (P) and $\hat{y}$ is a global optimal solution of (D).

Based on the problem (P), to find the infimum of $f(x)$ w.r.t. $x \in X$, we can construct a dual problem in three steps. First of all, we assume that $X$ can be written as $X = X_0 \cap X_1$, where $X_0$ and $X_1$ are arbitrary in the beginning; their structure might be suggested by (P). In the second step we establish the so-called claim of equivalence: we take a set $Y$ and a real functional $\Phi$ on $X_0 \times Y$ with the property
\[ \inf_{x \in X} f(x) = \inf_{x \in X_0} \sup_{y \in Y} \Phi(x, y). \tag{1.2} \]
Finally, we calculate
\[ g(y) = \inf_{x \in X_0} \Phi(x, y). \tag{1.3} \]
Theorem 1.1 With the given construction,
\[ g(y) = \inf_{x \in X_0} \Phi(x, y) \to \max!, \text{ w.r.t. } y \in Y, \tag{1.4} \]
is a dual problem to (P).

Proof: For arbitrary sets $X_0$ and $Y$ we have
\[ \inf_{x \in X_0} \sup_{y \in Y} \Phi(x, y) \ge \sup_{y \in Y} \inf_{x \in X_0} \Phi(x, y), \tag{1.5} \]
because
\[ \sup_{y \in Y} \Phi(x, y) \ge \Phi(x, y), \quad \forall x \in X_0,\ y \in Y, \]
hence
\[ \inf_{x \in X_0} \sup_{y \in Y} \Phi(x, y) \ge \inf_{x \in X_0} \Phi(x, y), \quad \forall y \in Y, \]
and taking the supremum over $y$ yields (1.5). Therefore, we obtain
\[ \inf_{x \in X} f(x) = \inf_{x \in X_0} \sup_{y \in Y} \Phi(x, y) \ge \sup_{y \in Y} \inf_{x \in X_0} \Phi(x, y) = \sup_{y \in Y} g(y). \qquad \square \]

Corollary 1.1 If $Y$ and $\Phi$ satisfy the claim of equivalence and $(x^*, y^*) \in X_0 \times Y$ is a saddle point of $\Phi$, that is
\[ \Phi(x^*, y) \le \Phi(x^*, y^*) \le \Phi(x, y^*), \quad \forall x \in X_0,\ y \in Y, \tag{1.6} \]
then we have strong duality between (P) and (D).

Proof: Because of (1.6) we have
\[ \Phi(x^*, y^*) = \inf_{x \in X_0} \Phi(x, y^*) \le \sup_{y \in Y} \inf_{x \in X_0} \Phi(x, y) = \sup_{y \in Y} g(y), \]
but on the other side (1.6) also entails
\[ \Phi(x^*, y^*) = \sup_{y \in Y} \Phi(x^*, y) \ge \inf_{x \in X_0} \sup_{y \in Y} \Phi(x, y) = \inf_{x \in X} f(x). \]
Thus, we obtain
\[ \inf_{x \in X} f(x) \le \Phi(x^*, y^*) \le \sup_{y \in Y} g(y), \]
and together with the weak duality condition (1.1), $\inf_{x \in X} f(x) = \sup_{y \in Y} g(y)$. $\square$

Corollary 1.2 With every dual problem $g(y) \to \sup!$, w.r.t. $y \in Y$, the problem
\[ \tilde{g}(\tilde{y}) \to \sup!, \text{ w.r.t. } \tilde{y} \in \tilde{Y} \subseteq Y, \quad \tilde{g}(\tilde{y}) \le g(\tilde{y}),\ \forall \tilde{y} \in \tilde{Y}, \]
is also a dual problem to (P).
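The three-step construction above can be made concrete on a small numerical example. The following sketch is not from the thesis: it takes the toy primal problem $f(x) = x^2$ on the feasible set $\{x = 1\}$, splits $X = X_0 \cap X_1$ with $X_0 = \mathbb{R}$, and moves the constraint into a Lagrangian-type functional $\Phi(x, y) = x^2 + y(x - 1)$, for which the saddle point $(x^*, y^*) = (1, -2)$ gives strong duality. All grids and tolerances are illustrative choices.

```python
import numpy as np

# Toy primal problem (illustrative, not from the thesis): f(x) = x^2 on
# X = {x : x = 1}. Choose X0 = R and Phi(x, y) = x^2 + y*(x - 1) on X0 x Y.
xs = np.linspace(-5.0, 5.0, 2001)
ys = np.linspace(-5.0, 5.0, 2001)

# Phi on a grid: rows index y, columns index x.
phi = xs[None, :] ** 2 + ys[:, None] * (xs[None, :] - 1.0)

# Dual function g(y) = inf_x Phi(x, y); analytically g(y) = -y^2/4 - y.
g = phi.min(axis=1)

primal = 1.0                 # inf f over the feasible set {x = 1}
dual = g.max()               # sup_y g(y), attained near y* = -2

weak_duality_holds = dual <= primal + 1e-9
gap = primal - dual          # ~0 here: strong duality via the saddle point
```

Weak duality (Theorem 1.1) guarantees `dual <= primal` for any choice of `Phi`; the vanishing gap is special to this example, as predicted by Corollary 1.1.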
2 Stochastic basics

As we examine the connection between stochastics and optimal control, this chapter briefly presents the most important stochastic notions used in this thesis.

2.1 Stochastic processes and Brownian motion

Definition 2.1 Let $\Omega$ be a given set. A σ-algebra $\mathcal{F}$ on $\Omega$ is a family $\mathcal{F}$ of subsets of $\Omega$ with the following properties:
(i) $\emptyset \in \mathcal{F}$,
(ii) $F \in \mathcal{F} \Rightarrow F^C \in \mathcal{F}$, where $F^C = \Omega \setminus F$ is the complement of $F$ in $\Omega$,
(iii) $A_1, A_2, \ldots \in \mathcal{F} \Rightarrow A := \bigcup_{i=1}^{\infty} A_i \in \mathcal{F}$.
The pair $(\Omega, \mathcal{F})$ is called a measurable space. A probability measure $\mathbb{P}$ on a measurable space $(\Omega, \mathcal{F})$ is a function $\mathbb{P}: \mathcal{F} \to [0, 1]$ such that
(1) $\mathbb{P}(\emptyset) = 0$, $\mathbb{P}(\Omega) = 1$,
(2) if $A_1, A_2, \ldots \in \mathcal{F}$ and $\{A_i\}$ is disjoint, then $\mathbb{P}\big(\bigcup_{i=1}^{\infty} A_i\big) = \sum_{i=1}^{\infty} \mathbb{P}(A_i)$.
The triple $(\Omega, \mathcal{F}, \mathbb{P})$ is called a probability space. It is called a complete probability space if $\mathcal{F}$ contains all subsets $H \subseteq \Omega$ with $\mathbb{P}$-outer measure zero, that is with
\[ \mathbb{P}^*(H) = \inf\{\mathbb{P}(F) \mid F \in \mathcal{F},\ H \subseteq F\} = 0. \]

Definition 2.2 The subsets $F \subseteq \Omega$ which belong to $\mathcal{F}$ are called $\mathcal{F}$-measurable sets. In a context of probability these sets are called events, and we use the interpretation $\mathbb{P}(F) =$ "the probability that $F$ occurs". If $\mathbb{P}(F) = 1$, we say that $F$ occurs with probability 1, or almost surely (a.s.).
Definition 2.3 Given a family $\mathcal{U}$ of subsets of $\Omega$, there is a smallest σ-algebra $\mathcal{H}_{\mathcal{U}}$ containing $\mathcal{U}$:
\[ \mathcal{H}_{\mathcal{U}} = \bigcap \{\mathcal{H} \mid \mathcal{H} \text{ σ-algebra on } \Omega,\ \mathcal{U} \subseteq \mathcal{H}\}. \]
We call $\mathcal{H}_{\mathcal{U}}$ the σ-algebra generated by $\mathcal{U}$. If $\mathcal{U}$ is the collection of all open subsets of a topological space $\Omega$ (for example the space $\mathbb{R}^n$), then $\mathcal{B} = \mathcal{H}_{\mathcal{U}}$ is called the Borel σ-algebra on $\Omega$, and the elements $B \in \mathcal{B}$ are called Borel sets. $\mathcal{B}$ contains all open sets, all closed sets, all countable unions of closed sets, all countable intersections of such countable unions, etc.

Definition 2.4 If $(\Omega, \mathcal{F}, \mathbb{P})$ is a given probability space, then a function $Y: \Omega \to \mathbb{R}^n$ is called $\mathcal{F}$-measurable if
\[ Y^{-1}(U) = \{\omega \in \Omega \mid Y(\omega) \in U\} \in \mathcal{F} \]
for all open sets $U \subseteq \mathbb{R}^n$ (or, equivalently, for all Borel sets $U$).

Definition 2.5 If $X: \Omega \to \mathbb{R}^n$ is any function, then the σ-algebra $\mathcal{H}_X$ generated by $X$ is the smallest σ-algebra on $\Omega$ containing all the sets $X^{-1}(U)$, $U \subseteq \mathbb{R}^n$ open.

From now on we denote by $(\Omega, \mathcal{F}, \mathbb{P})$ a given complete probability space.

Definition 2.6 A random variable $X$ is an $\mathcal{F}$-measurable function $X: \Omega \to \mathbb{R}^n$. Every random variable induces a probability measure $\mu_X$ on $\mathbb{R}^n$, defined by
\[ \mu_X(B) = \mathbb{P}(X^{-1}(B)). \]
$\mu_X$ is called the distribution of $X$.

Definition 2.7 If $\int_\Omega |X(\omega)|\, d\mathbb{P}(\omega) < \infty$, then the number
\[ \mathbb{E}[X] = \int_\Omega X(\omega)\, d\mathbb{P}(\omega) = \int_{\mathbb{R}^n} x\, d\mu_X(x) \]
is the expectation of $X$ (w.r.t. $\mathbb{P}$).

With all of these preliminary notions we can finally define a stochastic process:

Definition 2.8 Let $(\Omega, \mathcal{F}, \mathbb{P})$ be given. A family of random variables $(X_t)_{t \in I}$ is called a stochastic process (with $I = [0, \infty)$, $I = \mathbb{R}$, $I = \mathbb{N}$, or $I = [0, T]$). We can describe $(X_t)_{t \in I}$ as a function $X: I \times \Omega \to \mathbb{R}$ with $X(t, \omega) = X_t(\omega)$. For every $t \in I$, $X(t, \cdot) = X_t$ is a random variable; for every $\omega \in \Omega$, $X(\cdot, \omega): I \to \mathbb{R}$ is called a path (or trajectory) of $X$.
Definition 2.9 Again let $(\Omega, \mathcal{F}, \mathbb{P})$ be given. A family $(\mathcal{F}_t)_{t \in I}$ of sub-σ-algebras of $\mathcal{F}$ with $\mathcal{F}_t \subseteq \mathcal{F}_s\ (\subseteq \mathcal{F})$ for all $t \le s$ is called a filtration of $\mathcal{F}$.

Definition 2.10 If $(\mathcal{F}_t)_{t \in I}$ is a filtration and $(X_t)_{t \in I}$ a stochastic process, then $(X_t)_{t \in I}$ is called adapted to $(\mathcal{F}_t)_{t \in I}$ if, for all $t \in I$, the random variable $X_t$ is $\mathcal{F}_t$-measurable.

The space $\mathbb{R}^I = \{f \mid f: I \to \mathbb{R}\}$ of all functions from $I$ to $\mathbb{R}$ includes all paths of a stochastic process. Let $\mathcal{B}(\mathbb{R}^I)$ be the σ-algebra of the cylinder sets in $\mathbb{R}^I$, that is, the smallest σ-algebra over $\mathbb{R}^I$ which contains all sets of the form
\[ \{f \in \mathbb{R}^I \mid (f(t_1), \ldots, f(t_n)) \in A\}, \quad n \in \mathbb{N},\ t_1, \ldots, t_n \in I,\ A \in \mathcal{B}^n. \]
Further, let $(X_t)_{t \in I}$ be given and
\[ \mathbb{P}_{t_1,\ldots,t_n}(A) = \mathbb{P}\big(\{\omega \in \Omega \mid (X_{t_1}(\omega), \ldots, X_{t_n}(\omega)) \in A\}\big), \tag{2.1} \]
$n \in \mathbb{N}$, $t_1, \ldots, t_n \in I$, $A \in \mathcal{B}^n$. Then $\mathbb{P}_{t_1,\ldots,t_n}$ is a probability measure on $(\mathbb{R}^n, \mathcal{B}^n)$ and satisfies the conditions of the following definition.

Definition 2.11 Let $(\mathbb{P}_\tau)_{\tau \in T}$ be a family of probability measures, where $T$ is the set of all finite sequences of distinct elements of $I$, and $\mathbb{P}_\tau$ is a probability measure on $(\mathbb{R}^n, \mathcal{B}^n)$ if $|\tau| = n$. Then $(\mathbb{P}_\tau)_{\tau \in T}$ is called consistent if
(i) $\mathbb{P}_{t_1,\ldots,t_n,t_{n+1}}(A \times \mathbb{R}) = \mathbb{P}_{t_1,\ldots,t_n}(A)$, $A \in \mathcal{B}^n$,
(ii) $\mathbb{P}_{t_1,\ldots,t_n}(A_1 \times \ldots \times A_n) = \mathbb{P}_{t_{\Pi(1)},\ldots,t_{\Pi(n)}}(A_{\Pi(1)} \times \ldots \times A_{\Pi(n)})$, $A_k \in \mathcal{B}$,
for all $n$, $(t_1, \ldots, t_n) \in T$, and all permutations $(\Pi(1), \ldots, \Pi(n))$ of $\{1, \ldots, n\}$.

Theorem 2.1 (Kolmogorov's existence theorem) Let $(\mathbb{P}_\tau)_{\tau \in T}$ be a consistent family of probability measures. Then there exist a probability measure $\mathbb{P}$ on $(\mathbb{R}^I, \mathcal{B}(\mathbb{R}^I))$ and a stochastic process $(X_t)_{t \in I}$ on $(\mathbb{R}^I, \mathcal{B}(\mathbb{R}^I), \mathbb{P})$ such that, for all $n \in \mathbb{N}$, $(t_1, \ldots, t_n) \in T$,
\[ \mathbb{P}\big(\{\omega \in \mathbb{R}^I \mid (X_{t_1}(\omega), \ldots, X_{t_n}(\omega)) \in A\}\big) = \mathbb{P}_{t_1,\ldots,t_n}(A), \quad A \in \mathcal{B}^n. \tag{2.2} \]

Finally, we define the Brownian motion and give some properties of this special process.

Definition 2.12 A stochastic process $(B_t)_{t \in [0,\infty)}$ is called a Brownian motion (or Wiener process) if
(1) $B_0 = 0$,
(2) $(B_t)$ is a process with independent increments, i.e., for all $t_0 < t_1 < \ldots < t_n$, the random variables $B_{t_1} - B_{t_0}, \ldots, B_{t_n} - B_{t_{n-1}}$ are independent,
(3) $(B_t)$ is a process with stationary increments, i.e., for all $t, s, h \ge 0$ we have $B_t - B_s \sim B_{t+h} - B_{s+h}$,
(4) $B_t \sim N(0, t)$ for all $t \ge 0$, that is, $B_t$ is normally distributed with expectation zero and variance $t$,
(5) $\mathbb{P}$-almost all paths are continuous.

Now we can easily verify the following theorem.

Theorem 2.2 If $(B_t)_{t \ge 0}$ is a Brownian motion, then we obtain
(i) $B_t - B_s \sim N(0, t - s)$ for $t > s$,
(ii) $\mathrm{Cov}(B_t, B_s) = \min\{t, s\}$.

Proof: For $t > s$ we get by the definition of a Brownian motion
\[ B_t - B_s \sim B_{t-s} - B_0 = B_{t-s} \sim N(0, t - s). \]
Assume $s < t$; thus $B_t - B_s \sim N(0, t - s)$ and $B_s \sim N(0, s)$. Since $B_s$ and $B_t - B_s$ are independent, $\mathrm{Cov}(B_t - B_s, B_s) = 0$. Then we conclude
\[ \mathrm{Cov}(B_t, B_s) = \mathbb{E}[B_t B_s] - \mathbb{E}[B_t]\,\mathbb{E}[B_s] = \mathbb{E}[(B_t - B_s + B_s) B_s] = \underbrace{\mathbb{E}[(B_t - B_s) B_s]}_{=0} + \underbrace{\mathbb{E}[B_s^2]}_{=\mathrm{Var}[B_s] = s} = s = \min\{s, t\}. \]
Analogously, we can repeat the calculation above for $t < s$ to complete the proof. $\square$

Theorem 2.3 A Brownian motion is a Gaussian process, i.e., all its finite-dimensional distributions are normal.

Definition 2.13 A stochastic process $(X_t)_{t \ge 0}$ is called λ-selfsimilar for $\lambda \in (0, 1)$ if for all $n \in \mathbb{N}$, $t_1, \ldots, t_n \ge 0$, and for all $\tau > 0$,
\[ \big(\tau^\lambda X_{t_1}, \ldots, \tau^\lambda X_{t_n}\big) \sim \big(X_{\tau t_1}, \ldots, X_{\tau t_n}\big). \tag{2.3} \]

Theorem 2.4 A Brownian motion $(B_t)_{t \ge 0}$ is $\frac{1}{2}$-selfsimilar.
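Theorem 2.2(ii) is easy to check by Monte Carlo simulation. The following sketch, with arbitrarily chosen sample times, path count, and grid, estimates $\mathrm{Cov}(B_t, B_s)$ from simulated paths and compares it with $\min\{t, s\}$:

```python
import numpy as np

rng = np.random.default_rng(0)

n_paths, n_steps, T = 200_000, 100, 2.0
dt = T / n_steps

# Simulate Brownian paths as cumulative sums of independent N(0, dt) increments.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(increments, axis=1)          # B[:, k] is B at time (k+1)*dt

s, t = 0.6, 1.5                            # arbitrary sample times, s < t
i_s, i_t = round(s / dt) - 1, round(t / dt) - 1
Bs, Bt = B[:, i_s], B[:, i_t]

cov_est = np.mean(Bs * Bt)                 # estimates Cov(B_t, B_s); means are 0
var_t_est = np.mean(Bt ** 2)               # estimates Var(B_t) = t
```

With these sample sizes the estimates agree with $\min\{s,t\} = 0.6$ and $t = 1.5$ to roughly two decimal places.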
Proof: If $X \sim N(0, \Sigma)$ and $Y = AX$ for some suitable matrix $A$, where
\[ \Sigma = \begin{pmatrix} \sigma_{11} & \cdots & \sigma_{1n} \\ \vdots & & \vdots \\ \sigma_{n1} & \cdots & \sigma_{nn} \end{pmatrix}, \]
then we know that $Y \sim N(0, A \Sigma A^T)$. Now we consider
\[ (B_{t_1}, \ldots, B_{t_n})^T \sim N(0, \Sigma) \quad \text{with} \quad \sigma_{ij} = \mathrm{Cov}(B_{t_i}, B_{t_j}) = \min\{t_i, t_j\}, \]
which is justified by Theorem 2.2. Let $\tau > 0$. Then
\[ \big(\sqrt{\tau}\, B_{t_1}, \ldots, \sqrt{\tau}\, B_{t_n}\big)^T = A\, (B_{t_1}, \ldots, B_{t_n})^T \sim N(0, A \Sigma A^T), \quad A = \sqrt{\tau}\, I. \]
As $A = \sqrt{\tau}\, I$ and $A^T = A$, we get $A \Sigma A^T = \tau \Sigma =: \tilde{\Sigma}$, and therefore $\tilde{\sigma}_{ij} = \tau \sigma_{ij} = \min\{\tau t_i, \tau t_j\}$. Thus, $\sqrt{\tau}\, B_{t_i} \sim N(0, \tau t_i) \sim B_{\tau t_i}$. $\square$

To complete this introduction to stochastic processes and Brownian motion we give the following theorem.

Theorem 2.5 Let $(B_t)_{t \ge 0}$ be a Brownian motion and $0 = t_0 < t_1 < \ldots < t_n = T$ a decomposition of $[0, T]$. Set
\[ \Delta_k B = B_{t_k} - B_{t_{k-1}}, \quad \Delta_k = t_k - t_{k-1}, \quad k \in \{1, \ldots, n\}. \]
Then the quadratic variation $Q_n(T) = \sum_{k=1}^n (\Delta_k B)^2$ satisfies
\[ \mathbb{E}[Q_n(T)] = T, \qquad Q_n(T) \xrightarrow{\ \mathbb{P}\ } T. \tag{2.4} \]

Proof:
\[ \mathbb{E}[Q_n(T)] = \mathbb{E}\Big[\sum_{k=1}^n (\Delta_k B)^2\Big] = \sum_{k=1}^n \mathbb{E}\big[(B_{t_k} - B_{t_{k-1}})^2\big] = \sum_{k=1}^n \mathrm{Var}\big[B_{t_k} - B_{t_{k-1}}\big] = \sum_{k=1}^n \mathrm{Var}\big[B_{t_k - t_{k-1}}\big] = \sum_{k=1}^n (t_k - t_{k-1}) = T. \]
Further,
\[ \mathrm{Var}[Q_n(T)] = \mathrm{Var}\Big[\sum_{k=1}^n (\Delta_k B)^2\Big] = \sum_{k=1}^n \mathrm{Var}\big[(\Delta_k B)^2\big] = \sum_{k=1}^n \Big( \mathbb{E}\big[(\Delta_k B)^4\big] - \big(\mathbb{E}[(\Delta_k B)^2]\big)^2 \Big) = \sum_{k=1}^n \Big( (\Delta_k)^2\, \mathbb{E}\big[B_1^4\big] - (\Delta_k)^2 \Big) = \sum_{k=1}^n \big( 3 (\Delta_k)^2 - (\Delta_k)^2 \big) = 2 \sum_{k=1}^n (\Delta_k)^2, \]
where we used the self-similarity $\Delta_k B \sim \sqrt{\Delta_k}\, B_1$ and $\mathbb{E}[B_1^4] = 3$. Now let $(\tau_n)_{n=1}^\infty$ be a sequence of partitions
\[ \tau_n:\ 0 = t_0^{(n)} < t_1^{(n)} < \ldots < t_n^{(n)} = T \]
with
\[ \mathrm{diam}\, \tau_n = \max_{k \in \{1,\ldots,n\}} \big( t_k^{(n)} - t_{k-1}^{(n)} \big) \longrightarrow 0 \quad \text{as } n \to \infty. \]
We abbreviate $\Delta_k^{(n)} = t_k^{(n)} - t_{k-1}^{(n)}$ and conclude
\[ \mathrm{Var}[Q_n(T)] = 2 \sum_{k=1}^n \big(\Delta_k^{(n)}\big)^2 \le 2\, \mathrm{diam}\, \tau_n \sum_{k=1}^n \Delta_k^{(n)} = 2\, \mathrm{diam}\, \tau_n \cdot T \longrightarrow 0. \]
Because of
\[ \mathbb{E}\big[(Q_n(T) - T)^2\big] = \mathbb{E}\big[(Q_n(T) - \mathbb{E}[Q_n(T)])^2\big] = \mathrm{Var}[Q_n(T)] \longrightarrow 0 \]
and, by Chebyshev's inequality,
\[ \mathbb{P}\big(|Q_n(T) - T| \ge \varepsilon\big) \le \frac{\mathbb{E}[(Q_n(T) - T)^2]}{\varepsilon^2} \longrightarrow 0, \quad \forall \varepsilon > 0, \]
we have $Q_n(T) \to T$ in $L^2$ and in probability. $\square$
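The convergence of the quadratic variation, together with the variance formula $\mathrm{Var}[Q_n(T)] = 2 \sum_k (\Delta_k)^2$, can be illustrated numerically. The sketch below (partition sizes and path counts are arbitrary choices) uses uniform partitions, for which the theory predicts $\mathrm{Var}[Q_n(T)] = 2T^2/n$:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1.0

def quadratic_variation(n, n_paths=20_000):
    """Sample Q_n(T): sum of squared increments over a uniform n-step partition."""
    dB = rng.normal(0.0, np.sqrt(T / n), size=(n_paths, n))
    return (dB ** 2).sum(axis=1)

Q10, Q1000 = quadratic_variation(10), quadratic_variation(1000)

# E[Q_n] = T for every n; Var[Q_n] = 2 T^2 / n shrinks as the mesh refines.
mean_err = abs(Q1000.mean() - T)
spread10, spread1000 = Q10.var(), Q1000.var()
```

Refining the partition from 10 to 1000 steps shrinks the sample variance by roughly a factor of 100, matching $2T^2/n$, while the mean stays pinned at $T$.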
2.2 Itô integral and the Itô formula

With the last statements given above we can take a closer look at the term
\[ \int_0^T B_t\, dB_t. \tag{2.5} \]
Let $\tau_n:\ 0 = t_0 < t_1 < \ldots < t_n = T$, $\Delta_k B = B_{t_k} - B_{t_{k-1}}$, and $\Delta_k = t_k - t_{k-1}$. Using the identity $b(a - b) = \frac{1}{2}(a^2 - b^2) - \frac{1}{2}(a - b)^2$, we obtain
\[ S_n = \sum_{k=1}^n B_{t_{k-1}} \big( B_{t_k} - B_{t_{k-1}} \big) = \frac{1}{2} \sum_{k=1}^n \big( B_{t_k}^2 - B_{t_{k-1}}^2 \big) - \frac{1}{2} \sum_{k=1}^n \big( B_{t_k} - B_{t_{k-1}} \big)^2 = \frac{1}{2} B_T^2 - \frac{1}{2} Q_n(T). \]
This leads to
\[ \lim_{n \to \infty} \mathbb{E}\Big[ \Big( S_n - \frac{1}{2}\big( B_T^2 - T \big) \Big)^2 \Big] = \lim_{n \to \infty} \mathbb{E}\Big[ \Big( -\frac{1}{2} Q_n(T) + \frac{1}{2} T \Big)^2 \Big] = \frac{1}{4} \lim_{n \to \infty} \mathbb{E}\big[ (Q_n(T) - T)^2 \big] = 0. \]
Hence,
\[ \int_0^T B_t\, dB_t = \frac{1}{2} \big( B_T^2 - T \big), \tag{2.6} \]
with the limit taken in $L^2$. We can see easily that this integral does not follow the regular rules of integration. It is the first example of an Itô integral, which will now be described a little more precisely.

Let $B_t$ be a Brownian motion on the filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$. Let the integrand $g(t, X_t)$ be adapted to $\mathcal{F}_t$, meaning that $g(t, X_t)$ is measurable w.r.t. $\mathcal{F}_t$ and independent of the future. Further, assume
\[ \mathbb{E}\Big[ \int_0^T (g(t, X_t))^2\, dt \Big] < \infty, \]
so that the integral
\[ \int_0^T g(t, X_t)\, dB_t \tag{2.7} \]
exists. We can verify this by first assuming a simple function $g$, i.e. $g(t, X_t) = g_k$ on each interval of some subdivision of $[0, T]$, and thereafter approximating an arbitrary $g$ by simple functions. Before we come to the important Itô formula, here are some properties of the Itô integral; they can be shown without much effort.
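Before listing them, the explicit value (2.6) can be checked against simulation. The sketch below (discretisation and path count are arbitrary choices) forms the left-endpoint sums $S_n$ and compares them with $\frac{1}{2}(B_T^2 - T)$, and also checks the Itô isometry for the integrand $g = B$, for which $\mathbb{E}[(\int_0^T B\, dB)^2] = \int_0^T t\, dt = T^2/2$:

```python
import numpy as np

rng = np.random.default_rng(2)

n_paths, n, T = 50_000, 1000, 1.0
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
B = np.cumsum(dB, axis=1)
B_prev = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])   # left endpoints B_{t_{k-1}}

# Itô sum S_n = sum_k B_{t_{k-1}} (B_{t_k} - B_{t_{k-1}}), cf. (2.5)-(2.6)
ito_sum = (B_prev * dB).sum(axis=1)
closed_form = 0.5 * (B[:, -1] ** 2 - T)                   # (1/2)(B_T^2 - T)

l2_error = np.mean((ito_sum - closed_form) ** 2)          # theory: T^2 / (2n)

# Itô isometry with g = B: E[(int B dB)^2] = E[int B^2 dt] = T^2 / 2
isometry_lhs = np.mean(ito_sum ** 2)
isometry_rhs = T ** 2 / 2
```

Note that the left endpoint is essential: evaluating at the right endpoint instead converges to the Stratonovich-type value $\frac{1}{2} B_T^2 + \frac{1}{2} T$.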
\[ \int_0^T (a g + b h)\, dB_t = a \int_0^T g\, dB_t + b \int_0^T h\, dB_t, \quad \text{where } a, b \text{ are constants}, \]
\[ \mathbb{E}\Big[ \int_0^T g\, dB_t \Big] = 0, \]
\[ \mathbb{E}\Big[ \Big( \int_0^T g\, dB_t \Big)^2 \Big] = \mathbb{E}\Big[ \int_0^T g^2\, dt \Big] \quad \text{(the Itô isometry)}, \]
\[ \mathbb{E}\Big[ \int_0^T g\, dB_t \int_0^T h\, dB_t \Big] = \mathbb{E}\Big[ \int_0^T g h\, dt \Big], \]
and $t \mapsto \int_0^t g\, dB_s$ has continuous trajectories.

Definition 2.14 Let $B_t$ be a one-dimensional Brownian motion on $(\Omega, \mathcal{F}, \mathbb{P})$. A (one-dimensional) Itô process (or stochastic integral) is a stochastic process $X_t$ on $(\Omega, \mathcal{F}, \mathbb{P})$ of the form
\[ X_t = X_0 + \int_0^t b(s, \omega)\, ds + \int_0^t \sigma(s, \omega)\, dB_s, \tag{2.8} \]
where
\[ \mathbb{P}\Big( \int_0^t |b(s, \omega)|\, ds < \infty \Big) = 1, \qquad \mathbb{P}\Big( \int_0^t (\sigma(s, \omega))^2\, ds < \infty \Big) = 1. \]
If $X_t$ is an Itô process of the form (2.8), it is sometimes written in the shorter differential form
\[ dX_t = b\, dt + \sigma\, dB_t. \tag{2.9} \]

Theorem 2.6 (The Itô formula) Suppose $X_t$ is a stochastic process given by the Itô differential $dX_t = b(t, \omega)\, dt + \sigma(t, \omega)\, dB_t$. Then for any function $f(t, x): \mathbb{R}_+ \times \mathbb{R} \to \mathbb{R}$, $f \in C^2([0, \infty) \times \mathbb{R})$, we find that $Y_t = f(t, X_t)$ is again an Itô process and
\[ dY_t = df(t, X_t) = \Big( \frac{\partial f}{\partial t} + b(t, \omega) \frac{\partial f}{\partial x} + \frac{1}{2} (\sigma(t, \omega))^2 \frac{\partial^2 f}{\partial x^2} \Big) dt + \sigma(t, \omega) \frac{\partial f}{\partial x}\, dB_t. \tag{2.10} \]
Before we can prove this theorem we need two special cases.

Lemma 2.1 Let $B$ be a Brownian motion. Then
(i) $d(B^2) = 2B\, dB + dt$,
(ii) $d(tB) = B\, dt + t\, dB$.

Proof: The first part follows directly from equation (2.6). To prove (ii) consider a sequence of partitions of $[0, T]$,
\[ P_n = \{ 0 = t_0^n < t_1^n < \ldots < t_{m_n}^n = T \}, \]
with mesh $|P_n| \to 0$. Then, with the limit taken in $L^2$,
\[ \int_0^T t\, dB = \lim_{n \to \infty} \sum_{k=0}^{m_n - 1} t_k^n \big( B(t_{k+1}^n) - B(t_k^n) \big). \]
Since $t \mapsto B(t)$ is continuous a.s.,
\[ \int_0^T B\, dt = \lim_{n \to \infty} \sum_{k=0}^{m_n - 1} B(t_{k+1}^n) \big( t_{k+1}^n - t_k^n \big). \]
This holds since for a.e. $\omega$ the sum is an ordinary Riemann sum approximation, where we may evaluate the continuous integrand at the right-hand endpoint. By adding these formulas we obtain
\[ \int_0^T t\, dB + \int_0^T B\, dt = \lim_{n \to \infty} \sum_{k=0}^{m_n - 1} \Big( t_k^n \big( B(t_{k+1}^n) - B(t_k^n) \big) + B(t_{k+1}^n) \big( t_{k+1}^n - t_k^n \big) \Big) = \lim_{n \to \infty} \sum_{k=0}^{m_n - 1} \Big( B(t_{k+1}^n)\, t_{k+1}^n - B(t_k^n)\, t_k^n \Big) = \lim_{n \to \infty} \Big( B(t_{m_n}^n)\, t_{m_n}^n - B(t_0^n)\, t_0^n \Big) = B(T)\, T. \qquad \square \]

Lemma 2.2 (Itô product rule) Let $B$ be a Brownian motion and suppose
\[ dX_1 = b_1\, dt + \sigma_1\, dB, \qquad dX_2 = b_2\, dt + \sigma_2\, dB, \]
for $0 \le t \le T$, with $b_i \in L^1(0, T)$, $\sigma_i \in L^2(0, T)$, $i = 1, 2$. Then
\[ d(X_1 X_2) = X_2\, dX_1 + X_1\, dX_2 + \sigma_1 \sigma_2\, dt. \tag{2.11} \]
Proof: Choose $0 \le r \le T$, and assume for simplicity $X_1(0) = X_2(0) = 0$ and $b_i(t) \equiv b_i$, $\sigma_i(t) \equiv \sigma_i$, where $b_i, \sigma_i$ are time-independent, $\mathcal{F}(0)$-measurable random variables. Then
\[ X_i(t) = b_i t + \sigma_i B(t), \quad t \ge 0,\ i = 1, 2. \]
Hence,
\[ \int_0^r X_2\, dX_1 + \int_0^r X_1\, dX_2 + \int_0^r \sigma_1 \sigma_2\, dt = \int_0^r X_2 (b_1\, dt + \sigma_1\, dB) + \int_0^r X_1 (b_2\, dt + \sigma_2\, dB) + \sigma_1 \sigma_2 r \]
\[ = \int_0^r (X_1 b_2 + X_2 b_1)\, dt + \int_0^r (X_1 \sigma_2 + X_2 \sigma_1)\, dB + \sigma_1 \sigma_2 r \]
\[ = \int_0^r \big( 2 b_1 b_2 t + b_2 \sigma_1 B + b_1 \sigma_2 B \big)\, dt + \int_0^r \big( b_1 \sigma_2 t + b_2 \sigma_1 t + 2 \sigma_1 \sigma_2 B \big)\, dB + \sigma_1 \sigma_2 r \]
\[ = b_1 b_2 r^2 + \int_0^r (b_1 \sigma_2 + b_2 \sigma_1) B\, dt + \int_0^r (b_1 \sigma_2 + b_2 \sigma_1) t\, dB + 2 \sigma_1 \sigma_2 \int_0^r B\, dB + \sigma_1 \sigma_2 r. \]
With Lemma 2.1 we conclude
\[ \int_0^r X_2\, dX_1 + \int_0^r X_1\, dX_2 + \int_0^r \sigma_1 \sigma_2\, dt = b_1 b_2 r^2 + (b_1 \sigma_2 + b_2 \sigma_1)\, r B(r) + \sigma_1 \sigma_2 (B(r))^2 - \sigma_1 \sigma_2 r + \sigma_1 \sigma_2 r = X_1(r)\, X_2(r). \]
In the case when we integrate from $s$ to $r$, $X_1(s), X_2(s)$ are arbitrary, and $b_i, \sigma_i$ are constant, $\mathcal{F}(s)$-measurable random variables, the proof is similar. Now let $b_i, \sigma_i$ be step processes and apply the previous calculation on each subinterval $[t_k, t_{k+1})$ on which $b_i$ and $\sigma_i$ are constant. In the general case we select step processes $b_i^n \in L^1(0, T)$, $\sigma_i^n \in L^2(0, T)$ with
\[ \mathbb{E}\Big[ \int_0^T |b_i^n - b_i|\, dt \Big] \to 0, \qquad \mathbb{E}\Big[ \int_0^T (\sigma_i^n - \sigma_i)^2\, dt \Big] \to 0, \quad \text{for } n \to \infty. \]
Define
\[ X_i^n(t) = X_i(0) + \int_0^t b_i^n\, ds + \int_0^t \sigma_i^n\, dB(s) \]
and apply the previous step to $X_i^n(\cdot)$ on $(s, r)$. By passing to limits we get the formula
\[ X_1(r)\, X_2(r) = X_1(s)\, X_2(s) + \int_s^r X_1\, dX_2 + \int_s^r X_2\, dX_1 + \int_s^r \sigma_1 \sigma_2\, dt. \qquad \square \]

Proof (of Itô's formula): Suppose
\[ dX = b\, dt + \sigma\, dB_t, \]
with $b \in L^1(0, T)$ and $\sigma \in L^2(0, T)$. We begin with a function $f(x) = x^m$ for $m \in \mathbb{N}$ and claim that
\[ d(X^m) = m X^{m-1}\, dX + \frac{1}{2} m (m-1) X^{m-2} \sigma^2\, dt. \]
This obviously holds for $m = 0, 1$; the case $m = 2$ follows by the Itô product rule. Now we prove the statement by induction, assuming it for $m - 1$:
\[ d(X^m) = d(X X^{m-1}) = X\, d(X^{m-1}) + X^{m-1}\, dX + (m-1) X^{m-2} \sigma^2\, dt \]
\[ = X \Big( (m-1) X^{m-2}\, dX + \frac{1}{2} (m-1)(m-2) X^{m-3} \sigma^2\, dt \Big) + (m-1) X^{m-2} \sigma^2\, dt + X^{m-1}\, dX \]
\[ = m X^{m-1}\, dX + \Big( \frac{1}{2} (m-1)(m-2) + (m-1) \Big) X^{m-2} \sigma^2\, dt = m X^{m-1}\, dX + \frac{1}{2} m (m-1) X^{m-2} \sigma^2\, dt. \]
Hence Itô's formula holds for the functions $f(x) = x^m$, and since the differentiation operator is linear it is also valid for all polynomials in $x$.

In the next step suppose $f(t, x) = p(t) q(x)$, where $p$ and $q$ are polynomials. Then
\[ df(t, X) = d(p(t) q(X)) = q(X)\, dp(t) + p(t)\, dq(X) = q(X) p'(t)\, dt + p(t) \Big( q'(X)\, dX + \frac{1}{2} q''(X) \sigma^2\, dt \Big) = \frac{\partial f}{\partial t}\, dt + \frac{\partial f}{\partial x}\, dX + \frac{\sigma^2}{2} \frac{\partial^2 f}{\partial x^2}\, dt. \]
This verifies Itô's formula for $f(t, x) = p(t) q(x)$, and thus also for any function of the form
\[ f(t, x) = \sum_{i=1}^m p_i(t) q_i(x), \]
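A quick sanity check of the Itô correction term is the exponential martingale: applying Itô's formula to $f(t, x) = e^{x - t/2}$ with $X = B$ (so $b = 0$, $\sigma = 1$) gives $dY = (-\frac{1}{2} Y + \frac{1}{2} Y)\, dt + Y\, dB = Y\, dB$; the drift cancels exactly and $\mathbb{E}[Y_T] = Y_0 = 1$. The simulation below (path count and horizon are arbitrary choices) verifies this, and shows that without the correction $-t/2$ the expectation is $e^{T/2}$ instead:

```python
import numpy as np

rng = np.random.default_rng(3)

n_paths, T = 200_000, 1.0
B_T = rng.normal(0.0, np.sqrt(T), size=n_paths)   # exact B_T, no discretisation

# f(t, x) = exp(x - t/2): by Itô's formula dY = Y dB, so E[Y_T] = 1.
Y_T = np.exp(B_T - T / 2)
mean_Y = Y_T.mean()

# Dropping the Itô correction term -t/2: E[exp(B_T)] = exp(T/2) != 1.
mean_naive = np.exp(B_T).mean()
```

The gap between `mean_naive` and 1 is exactly the effect of the second-order term $\frac{1}{2} \sigma^2 f_{xx}\, dt$ that distinguishes the Itô calculus from the classical chain rule.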
where the $p_i$ and $q_i$ are polynomials. The last step is the following: let $f$ be given as in Itô's formula; then there exists a sequence of polynomials $f_n$ such that
\[ f_n \to f, \quad \frac{\partial f_n}{\partial t} \to \frac{\partial f}{\partial t}, \quad \frac{\partial f_n}{\partial x} \to \frac{\partial f}{\partial x}, \quad \frac{\partial^2 f_n}{\partial x^2} \to \frac{\partial^2 f}{\partial x^2}, \]
uniformly on compact subsets of $[0, T] \times \mathbb{R}$. From the previous calculations we obtain, for all $0 \le r \le T$,
\[ f_n(r, X(r)) - f_n(0, X(0)) = \int_0^r \Big( \frac{\partial f_n}{\partial t} + b \frac{\partial f_n}{\partial x} + \frac{1}{2} \sigma^2 \frac{\partial^2 f_n}{\partial x^2} \Big) dt + \int_0^r \sigma \frac{\partial f_n}{\partial x}\, dB(t) \quad \text{a.s.} \]
In the end we pass to the limit $n \to \infty$ and obtain the statement of the theorem. $\square$

Now we want to examine the situation in higher dimensions. To this end let $B_t = (B_t^1, \ldots, B_t^d)^T$ be a $d$-dimensional Brownian motion. If each of the processes $b_i(t, \omega)$ and $\sigma_{ij}(t, \omega)$, $i \in \{1, \ldots, n\}$, $j \in \{1, \ldots, d\}$, satisfies the conditions of Definition 2.14, then we can consider the $n$ Itô processes
\[ dX_t^1 = b_1\, dt + \sigma_{11}\, dB_t^1 + \ldots + \sigma_{1d}\, dB_t^d, \]
\[ \vdots \tag{2.12} \]
\[ dX_t^n = b_n\, dt + \sigma_{n1}\, dB_t^1 + \ldots + \sigma_{nd}\, dB_t^d, \]
or, in matrix notation,
\[ dX_t = b\, dt + \sigma\, dB_t. \tag{2.13} \]
Such a process $X_t$ is called an $n$-dimensional Itô process.

Theorem 2.7 (Itô's formula in $n$ dimensions) If $X_t = (X_t^1, \ldots, X_t^n)^T$ is an Itô process as above and $f(t, x): \mathbb{R}_+ \times \mathbb{R}^n \to \mathbb{R}$, $f \in C^2([0, \infty) \times \mathbb{R}^n)$, then $Y_t = f(t, X_t)$ is again an Itô process, and the multidimensional version of the Itô formula is given by
\[ dY_t = \frac{\partial f}{\partial t}\, dt + \sum_{i=1}^n \frac{\partial f}{\partial x_i}\, dX_t^i + \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 f}{\partial x_i \partial x_j}\, dX_t^i\, dX_t^j. \tag{2.14} \]

Proof: This proof is similar to the one-dimensional version and can be found in, e.g., [N]. $\square$
2.3 Stochastic differential equations and diffusions

We now take a look at possible solutions $X_t(\omega)$ of the stochastic differential equation
\[ \frac{dX_t}{dt} = b(t, X_t) + \sigma(t, X_t)\, W_t, \tag{2.15} \]
where $W_t$ is one-dimensional white noise. The Itô interpretation of this formula is that $X_t$ satisfies the stochastic integral equation
\[ X_t = X_0 + \int_0^t b(s, X_s)\, ds + \int_0^t \sigma(s, X_s)\, dB_s, \]
or
\[ dX_t = b(t, X_t)\, dt + \sigma(t, X_t)\, dB_t \tag{2.16} \]
if we want to write it in differential form. Hence, we obtain (2.16) from (2.15) by merely replacing the white noise $W_t$ by $\frac{dB_t}{dt}$ and multiplying by $dt$.

Definition 2.15 $X_t$ is a solution to the stochastic differential equation
\[ dX_t = b(t, X_t)\, dt + \sigma(t, X_t)\, dB_t, \qquad X_0 = x_0, \tag{2.17} \]
if $X_t$ is measurable with respect to $\mathcal{F}_t = \mathcal{F}(X_0, B_s, s \le t)$ and $b(t, X_t) \in L^1(0, T)$ a.s., $\sigma(t, X_t) \in L^2(0, T)$ a.s.

Before we finally come to our main topic, we give a short introduction to (Itô) diffusions. In a stochastic differential equation of the form
\[ dX_t = b(t, X_t)\, dt + \sigma(t, X_t)\, dB_t, \]
where $X_t \in \mathbb{R}^n$, $b(t, X_t) \in \mathbb{R}^n$, $\sigma(t, X_t) \in \mathbb{R}^{n \times m}$, and $B_t$ is an $m$-dimensional Brownian motion, we call $b$ the drift coefficient and $\sigma$ the diffusion coefficient. Sometimes the term $\frac{1}{2} \sigma \sigma^T$ is also called the diffusion coefficient. Hence, we can interpret the solution of a stochastic differential equation as the motion of a small particle in a moving fluid, in other words, as a diffusion.

We will also need some important properties and theorems, which are given now. Let $Q^x$ denote the probability law of a given Itô diffusion $(X_t)_{t \ge 0}$ when its initial value is $X_0 = x \in \mathbb{R}^n$, and denote the expectation w.r.t. $Q^x$ by $\mathbb{E}^x[\cdot]$. Further, we have already introduced the σ-algebra generated by $B_r$ for $r \le t$, that is, $\mathcal{F}_t$. Similarly, let $\mathcal{M}_t$ be the σ-algebra generated by $X_r$ for $r \le t$. We know that $X_t$ is measurable w.r.t. $\mathcal{F}_t$; thus $\mathcal{M}_t \subseteq \mathcal{F}_t$.
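Equation (2.16) suggests the standard way to simulate a diffusion: discretise time and replace $dt$ and $dB_t$ by finite increments (the Euler-Maruyama scheme, which is not introduced in this thesis but is the natural numerical counterpart of (2.16)). The sketch below applies it to geometric Brownian motion $dX = \mu X\, dt + \sigma X\, dB$ with arbitrarily chosen coefficients, a case where the exact solution $X_T = x_0 \exp((\mu - \sigma^2/2) T + \sigma B_T)$ is known and can be evaluated on the same noise path:

```python
import numpy as np

rng = np.random.default_rng(4)

mu, sigma, x0, T, n = 0.1, 0.3, 1.0, 1.0, 2000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)

# Euler-Maruyama for dX = mu*X dt + sigma*X dB (geometric Brownian motion)
x = x0
for k in range(n):
    x = x + mu * x * dt + sigma * x * dB[k]

# Exact solution driven by the same Brownian path
B_T = dB.sum()
x_exact = x0 * np.exp((mu - 0.5 * sigma ** 2) * T + sigma * B_T)

abs_err = abs(x - x_exact)   # pathwise (strong) error, small for fine steps
```

The $\sigma^2/2$ correction in the exact solution's exponent is again the Itô term of Theorem 2.6; omitting it would bias the comparison systematically.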
We can now show that $X_t$ satisfies the Markov property, meaning that the future behaviour of the process, given what has happened up to time $t$, is the same as the behaviour obtained when starting the process at $X_t$ (see [Ø]).
Theorem 2.8 (The Markov property for Itô diffusions) Let $f$ be a bounded Borel function, $f: \mathbb{R}^n \to \mathbb{R}$. Then, for $t, h \ge 0$,
\[ \mathbb{E}^x\big[ f(X_{t+h}) \mid \mathcal{F}_t \big](\omega) = \mathbb{E}^{X_t(\omega)}\big[ f(X_h) \big]. \tag{2.18} \]

But we can generalise this a little more. The strong Markov property states that (2.18) also holds if the time $t$ is replaced by a random time $\tau(\omega)$, called a stopping time.

Definition 2.16 Let $(\mathcal{H}_t)$ be an increasing family of σ-algebras (of subsets of $\Omega$). A function $\tau: \Omega \to [0, \infty]$ is called a stopping time w.r.t. $(\mathcal{H}_t)$ if
\[ \{\omega \mid \tau(\omega) \le t\} \in \mathcal{H}_t, \quad \forall t \ge 0. \]

Theorem 2.9 (The strong Markov property for Itô diffusions) Let $f$ be a bounded Borel function on $\mathbb{R}^n$ and $\tau$ a stopping time w.r.t. $\mathcal{F}_t$ with $\tau < \infty$ a.s. Then
\[ \mathbb{E}^x\big[ f(X_{\tau+h}) \mid \mathcal{F}_\tau \big] = \mathbb{E}^{X_\tau}\big[ f(X_h) \big], \quad h > 0. \tag{2.19} \]

The next important notion is the generator of an Itô diffusion.

Definition 2.17 Let $(X_t)$ be an Itô diffusion in $\mathbb{R}^n$. The infinitesimal generator $A$ of $X_t$ is defined by
\[ Af(x) = \lim_{t \downarrow 0} \frac{\mathbb{E}^x[f(X_t)] - f(x)}{t}, \quad x \in \mathbb{R}^n. \tag{2.20} \]

With this we can create a connection between $A$ and the coefficients $b$ and $\sigma$ in the stochastic differential equation defining $X_t$. For this we need the following theorem.

Theorem 2.10 Let $Y_t$ be an Itô process in $\mathbb{R}^n$ of the form
\[ Y_t(\omega) = x + \int_0^t u(s, \omega)\, ds + \int_0^t v(s, \omega)\, dB_s(\omega), \]
where $x = Y_0$ and $B_t$ is $m$-dimensional. Let $f \in C_0^2(\mathbb{R}^n)$ and let $\tau$ be a stopping time w.r.t. $(\mathcal{F}_t)$ with $\mathbb{E}^x[\tau] < \infty$. Further assume that $u$ and $v$ are bounded on the set of $(t, \omega)$ such that $Y_t(\omega)$ belongs to the support of $f$. If $\mathbb{E}^x$ denotes the expectation w.r.t. the natural probability law $R^x$ of $Y_t$ starting at $x$, then
\[ \mathbb{E}^x[f(Y_\tau)] = f(x) + \mathbb{E}^x\Big[ \int_0^\tau \Big( \sum_{i=1}^n u_i(s, \omega) \frac{\partial f}{\partial x_i}(Y_s) + \frac{1}{2} \sum_{i,j=1}^n (v v^T)_{ij}(s, \omega) \frac{\partial^2 f}{\partial x_i \partial x_j}(Y_s) \Big) ds \Big]. \tag{2.21} \]
Proof: First we apply Itô's formula to $Z = f(Y)$ to obtain
\[ dZ = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(Y)\, dY_i + \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 f}{\partial x_i \partial x_j}(Y)\, dY_i\, dY_j = \sum_{i=1}^n u_i \frac{\partial f}{\partial x_i}\, dt + \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 f}{\partial x_i \partial x_j}\, (v\, dB)_i (v\, dB)_j + \sum_{i=1}^n \frac{\partial f}{\partial x_i}\, (v\, dB)_i, \]
where we suppressed the index $t$ and let $Y_1, \ldots, Y_n$ and $B_1, \ldots, B_m$ denote the coordinates of $Y$ and $B$. Further,
\[ (v\, dB)_i (v\, dB)_j = \Big( \sum_{k=1}^m v_{ik}\, dB_k \Big) \Big( \sum_{l=1}^m v_{jl}\, dB_l \Big) = \sum_{k=1}^m v_{ik} v_{jk}\, dt = (v v^T)_{ij}\, dt, \]
and thus we obtain
\[ f(Y_t) = f(Y_0) + \int_0^t \Big( \sum_{i=1}^n u_i \frac{\partial f}{\partial x_i} + \frac{1}{2} \sum_{i,j=1}^n (v v^T)_{ij} \frac{\partial^2 f}{\partial x_i \partial x_j} \Big) ds + \sum_{i=1}^n \sum_{k=1}^m \int_0^t v_{ik} \frac{\partial f}{\partial x_i}\, dB_k. \]
Hence,
\[ \mathbb{E}^x[f(Y_\tau)] = f(x) + \mathbb{E}^x\Big[ \int_0^\tau \Big( \sum_{i=1}^n u_i \frac{\partial f}{\partial x_i}(Y) + \frac{1}{2} \sum_{i,j=1}^n (v v^T)_{ij} \frac{\partial^2 f}{\partial x_i \partial x_j}(Y) \Big) ds \Big] + \sum_{i=1}^n \sum_{k=1}^m \mathbb{E}^x\Big[ \int_0^\tau v_{ik} \frac{\partial f}{\partial x_i}(Y)\, dB_k \Big]. \]
If $g$ is a bounded Borel function with $|g| \le M$, then for all integers $q$ we get
\[ \mathbb{E}^x\Big[ \int_0^{\tau \wedge q} g(Y_s)\, dB_s \Big] = \mathbb{E}^x\Big[ \int_0^q \mathbb{1}_{\{s < \tau\}}\, g(Y_s)\, dB_s \Big] = 0, \]
since $g(Y_s)$ and $\mathbb{1}_{\{s < \tau\}}$ are $\mathcal{F}_s$-measurable. Moreover,
\[ \mathbb{E}^x\Big[ \Big( \int_0^\tau g(Y_s)\, dB_s - \int_0^{\tau \wedge q} g(Y_s)\, dB_s \Big)^2 \Big] = \mathbb{E}^x\Big[ \int_{\tau \wedge q}^\tau g^2(Y_s)\, ds \Big] \le M^2\, \mathbb{E}^x[\tau - \tau \wedge q] \longrightarrow 0 \quad \text{as } q \to \infty. \]
This yields
\[ \mathbb{E}^x\Big[ \int_0^\tau g(Y_s)\, dB_s \Big] = \lim_{q \to \infty} \mathbb{E}^x\Big[ \int_0^{\tau \wedge q} g(Y_s)\, dB_s \Big] = 0. \]
With this result we finally get the statement of Theorem 2.10. $\square$

Theorem 2.11 Let $X_t$ be the Itô diffusion
\[ dX_t = b(X_t)\, dt + \sigma(X_t)\, dB_t. \]
If $f \in C_0^2(\mathbb{R}^n)$, then the limit in Definition 2.17 exists and
\[ Af(x) = \sum_{i=1}^n b_i(x) \frac{\partial f}{\partial x_i} + \frac{1}{2} \sum_{i,j=1}^n (\sigma \sigma^T)_{ij}(x) \frac{\partial^2 f}{\partial x_i \partial x_j}. \tag{2.22} \]

Finally, we need

Theorem 2.12 (Dynkin's formula) Let $f \in C_0^2(\mathbb{R}^n)$. Suppose $\tau$ is a stopping time with $\mathbb{E}^x[\tau] < \infty$. Then
\[ \mathbb{E}^x[f(X_\tau)] = f(x) + \mathbb{E}^x\Big[ \int_0^\tau Af(X_s)\, ds \Big]. \tag{2.23} \]
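Both the generator formula (2.22) and Dynkin's formula can be sanity-checked for the simplest diffusion, $X_t = x + B_t$ in one dimension, where $Af = \frac{1}{2} f''$. Taking $f(x) = x^2$ (ignoring the compact-support requirement, which is harmless for this moment computation) gives $Af \equiv 1$, so Dynkin's formula with the deterministic stopping time $\tau = t$ predicts $\mathbb{E}^x[X_t^2] = x^2 + t$. A Monte Carlo sketch with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(5)

# X_t = x0 + B_t: generator A f = (1/2) f''.  For f(x) = x^2, A f = 1, and
# Dynkin's formula with tau = t gives E^x[f(X_t)] = x0^2 + t.
x0, t, n_paths = 1.5, 0.8, 400_000
X_t = x0 + rng.normal(0.0, np.sqrt(t), size=n_paths)

dynkin_lhs = np.mean(X_t ** 2)
dynkin_rhs = x0 ** 2 + t

# Generator as the limit (2.20): (E^x[f(X_h)] - f(x0)) / h -> A f(x0) = 1.
h = 0.01
X_h = x0 + rng.normal(0.0, np.sqrt(h), size=n_paths)
gen_est = (np.mean(X_h ** 2) - x0 ** 2) / h
```

For this particular $f$ the difference quotient in (2.20) is unbiased for every $h$; in general one would also see an $O(h)$ discretisation error on top of the Monte Carlo noise.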
3 The duality of the stochastic control problem

3.1 The problem

As we are trying to develop a duality theory for optimal control problems with stochastic differential equations, we first have to describe a regular optimal control problem. We consider an independent variable $t \in [0, T]$, where $T$ can be fixed or variable. Then we look at a problem of the type
\[ J(x, u) = \int_0^T r(t, x(t), u(t))\, dt \to \min!, \]
w.r.t. state variables $x$ and control variables $u$, $x = (x_1, \ldots, x_n) \in X$, $u = (u_1, \ldots, u_r) \in U$. These variables satisfy the state equations
\[ \dot{x}(t) = g(t, x(t), u(t)) \quad \text{a.e. on } (0, T) \]
and the control constraints $u(t) \in U(t) \subseteq \mathbb{R}^r$. Further, there can be boundary conditions of the kind
\[ c_k(x(0), x(T)) = 0, \text{ for } k \text{ in some index set } I_1, \qquad c_l(x(0), x(T)) \le 0, \text{ for } l \in I_2. \]
Additionally, there can be state constraints as well.

The main idea in our case is to replace the state equation $\dot{x} = g$ by a stochastic differential equation. Suppose the state at time $t$ is described by an Itô process $X_t$,
\[ dX_t = dX_t^u = b(t, X_t, u_t)\, dt + \sigma(t, X_t, u_t)\, dB_t, \tag{3.1} \]
where $X_t \in \mathbb{R}^n$, $b: \mathbb{R}_+ \times \mathbb{R}^n \times U \to \mathbb{R}^n$, $\sigma: \mathbb{R}_+ \times \mathbb{R}^n \times U \to \mathbb{R}^{n \times m}$, and $B_t$ is an $m$-dimensional Brownian motion. Here $u_t \in U$ is a parameter within the Borel set $U$; it can be used to control the process $X_t$. Hence it is a stochastic process itself and must be measurable w.r.t. $\mathcal{F}_t$. Thus, by (3.1) we define a stochastic integral, and we can denote a solution $(X_t)_{t \ge 0}$ of (3.1) with $X_0 = x$ by
\[ X_t = x + \int_0^t b(s, X_s, u_s)\, ds + \int_0^t \sigma(s, X_s, u_s)\, dB_s. \]
Let the probability law of $X_t$ starting at $x$ be denoted by $Q^x$. Let the given functions $r: \mathbb{R}_+ \times \mathbb{R}^n \times U \to \mathbb{R}$ (the profit rate function) and $g: \mathbb{R}_+ \times \mathbb{R}^n \to \mathbb{R}$ (the bequest function) be continuous, and let $G$ be a fixed domain in $\mathbb{R}_+ \times \mathbb{R}^n$. The first exit time of the process from $G$ is denoted by $\tau$, meaning
\[ \tau = \inf\{ r > 0 \mid (r, X_r) \notin G \}. \tag{3.2} \]
Suppose
\[ \mathbb{E}^x\Big[ \int_0^\tau |r(s, X_s, u_s)|\, ds + |g(\tau, X_\tau)|\, \mathbb{1}_{\{\tau < \infty\}} \Big] < \infty, \quad \forall x, u; \]
then we can define the performance function $J(X_t, u_t)$ by
\[ J(X_t, u_t) = \mathbb{E}^x\Big[ \int_0^\tau r(s, X_s, u_s)\, ds + g(\tau, X_\tau)\, \mathbb{1}_{\{\tau < \infty\}} \Big]. \tag{3.3} \]
This leads directly to our stochastic optimal control problem, namely to find the number $\Phi(X_t)$ and a control $u^*$ such that
\[ \Phi(X_t) = \sup_{u \in \mathcal{A}} J(X_t, u) = J(X_t, u^*(X_t)), \tag{3.4} \]
where the supremum is taken over a given family $\mathcal{A}$ of admissible controls, contained in the set of $\mathcal{F}_t$-adapted processes $u_t \in U$. Such a control $u^*$, if it exists, is called optimal, and $\Phi$ is then the optimal performance function. In this thesis we only consider Markov controls $u$, that is, controls of the form $u(t, \omega) = u_0(t, X_t(\omega))$ for some $u_0: \mathbb{R}^{n+1} \to U$ which does not depend on the starting point but only on the state of the system at time $t$. In the following we do not write $u_0$ but denote the Markov control directly by $u(t, X_t)$.

3.2 The Bellman principle

Our first approach to solving the problem $J(X_t, u_t) \to \sup!$ uses the value function, also called the Bellman function, and the so-called Bellman principle (see [KK]). As this theory for stochastic control problems does not differ much from the deterministic version, we follow the considerations directly without introducing them separately. The Bellman function is defined as
\[ V(t, \xi) = \sup_{u \in \mathcal{A},\, X_t = \xi} J(t, X_t, u_t) = \sup_{u \in \mathcal{A},\, X_t = \xi} \mathbb{E}^{t,x}\Big[ \int_t^\tau r(s, X_s, u_s)\, ds + g(\tau, X_\tau)\, \mathbb{1}_{\{\tau < \infty\}} \Big], \tag{3.5} \]
for $(t, \xi) \in G$, where $J$ differs slightly from the previous section: the integral within the expectation does not begin at time $t = 0$ but at time $t$. We denote this by writing $J(t, X_t, u_t)$ and, analogously, $\mathbb{E}^{t,x}$. The $n$-dimensional process $X_t$ is defined by
\[ dX_t^i = b^i(t, X_t, u_t)\, dt + \sum_{k=1}^m \sigma^{ik}(t, X_t, u_t)\, dB_t^k, \tag{3.6} \]
where $B_t$ is an $m$-dimensional Brownian motion. Additionally, we note that $t$ and $\xi$ are fixed in equation (3.5); this means we try to find the supremum over processes $X$ that take the value $\xi$ at time $t$. Thus, we can obtain
\[ V(t, \xi) = \sup_{u \in \mathcal{A},\, X_t = \xi} \mathbb{E}^{t,x}\Big[ \int_t^\theta r(s, X_s, u_s)\, ds + V(\theta, X_\theta) \Big], \quad \theta \in [t, \tau]. \tag{3.7} \]
The Bellman principle now states that we can obtain the maximum of $J$ by taking the supremum over the combined strategy "choose the control $u$ on $[t, \theta]$ and behave optimally on the interval $[\theta, \tau]$". To use the Bellman principle we shall assume that $V \in C^2$, that the operations performed are allowed, and that the appearing stochastic integrals have expectation zero. With the Itô formula we calculate
\[ dV(\theta, X_\theta) = \frac{\partial V}{\partial t}\, dt + \sum_{i=1}^n \frac{\partial V}{\partial \xi_i}\, dX_t^i + \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 V}{\partial \xi_i \partial \xi_j}\, dX_t^i\, dX_t^j, \]
or, equivalently,
\[ V(\theta, X_\theta) = V(t, \xi) + \int_t^\theta V_t(s, X_s)\, ds + \sum_{i=1}^n \int_t^\theta b^i(s, X_s, u_s)\, V_{\xi_i}(s, X_s)\, ds + \sum_{i=1}^n \int_t^\theta V_{\xi_i}(s, X_s) \sum_{k=1}^m \sigma^{ik}(s, X_s, u_s)\, dB_s^k + \frac{1}{2} \sum_{i,j=1}^n \int_t^\theta (\sigma \sigma^T)_{ij}(s, X_s, u_s)\, V_{\xi_i \xi_j}(s, X_s)\, ds. \tag{3.8} \]
By inserting this into (3.7) and using the properties of the Itô integral (the stochastic integral has expectation zero) we get
\[ V(t, \xi) = \sup_{u \in \mathcal{A},\, X_t = \xi} \mathbb{E}^{t,x}\Big[ \int_t^\theta r(s, X_s, u_s)\, ds + V(t, \xi) + \int_t^\theta V_t(s, X_s)\, ds + \sum_{i=1}^n \int_t^\theta b^i(s, X_s, u_s)\, V_{\xi_i}(s, X_s)\, ds + \frac{1}{2} \sum_{i,j=1}^n \int_t^\theta (\sigma \sigma^T)_{ij}(s, X_s, u_s)\, V_{\xi_i \xi_j}(s, X_s)\, ds \Big]. \]
As the term $V(t,\xi)$ is independent of $u$, we can subtract it on both sides of the equation to obtain
\[ 0 = \sup_{\substack{u\in\mathcal{A}\\ X_t=\xi}} E^{t,x}\Big[\int_t^\theta \Big\{ r(s,X_s,u_s) + V_t(s,X_s) + \sum_{i=1}^n b^i(s,X_s,u_s)\,V_{\xi_i}(s,X_s) + \frac12\sum_{i,j=1}^n (\sigma\sigma^T)^{ij}(s,X_s,u_s)\,V_{\xi_i\xi_j}(s,X_s) \Big\}\,ds\Big]. \]
Now we divide the last equation by $(\theta - t)$ and take the limit $\theta \downarrow t$. Formally, this gives
\[ \begin{aligned} 0 &= \sup_{\substack{u\in\mathcal{A}\\ X_t=\xi}} E^{t,x}\Big[\lim_{\theta\downarrow t}\frac{1}{\theta-t}\int_t^\theta \Big\{ r(s,X_s,u_s) + V_t(s,X_s) + \sum_{i=1}^n b^i(s,X_s,u_s)\,V_{\xi_i}(s,X_s) + \frac12\sum_{i,j=1}^n (\sigma\sigma^T)^{ij}(s,X_s,u_s)\,V_{\xi_i\xi_j}(s,X_s) \Big\}\,ds\Big] \\ &= \sup_{\substack{u\in\mathcal{A}\\ X_t=\xi}} E^{t,x}\Big[ r(t,X_t,u_t) + V_t(t,X_t) + \sum_{i=1}^n b^i(t,X_t,u_t)\,V_{\xi_i}(t,X_t) + \frac12\sum_{i,j=1}^n (\sigma\sigma^T)^{ij}(t,X_t,u_t)\,V_{\xi_i\xi_j}(t,X_t) \Big] \end{aligned} \tag{3.9} \]
as we apply the mean value theorem. Since we know both the value of $X_t$ at time $t$ and $u_t$, we can drop the expectation in equation (3.9). Further, we only have to maximise w.r.t. all admissible initial values of the control $u$, that is, w.r.t. all $v \in U$ and not w.r.t. all $u \in \mathcal{A}$. This is true because only the initial values of the controls enter equation (3.9). Hence, we obtain
\[ 0 = \sup_{v\in U}\Big\{ r(t,\xi,v) + V_t(t,\xi) + \sum_{i=1}^n b^i(t,\xi,v)\,V_{\xi_i}(t,\xi) + \frac12\sum_{i,j=1}^n (\sigma\sigma^T)^{ij}(t,\xi,v)\,V_{\xi_i\xi_j}(t,\xi) \Big\}. \tag{3.10} \]
This leads us directly to the Hamilton-Jacobi-Bellman equation, which we discuss in the next section. But already here we notice that, compared to the deterministic case, we have an additional term: we need the second partial derivatives of the value function $V$. We will return to this a little later when we develop the duality theory.
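The pointwise maximisation in (3.10) can be made concrete with a small numerical sketch (my own illustration, not part of the thesis). For a linear-quadratic model in the spirit of Example 4.1 below — state equation $dX_t = v\,dt + dB_t$, reward rate $r(t,\xi,v) = -a v^2$, bequest $g(\xi) = b\xi$ — the ansatz $V(t,\xi) = b\xi + b^2(T-t)/(4a)$ solves (3.10), and a brute-force search over a discretised control set recovers the maximiser $v = b/(2a)$; all parameter values below are assumptions:

```python
# Brute-force check of the HJB relation (3.10) for a hypothetical
# linear-quadratic model (cf. Example 4.1): dX_t = v dt + dB_t,
# reward rate r(t, xi, v) = -a v^2, bequest g(xi) = b xi.
a, b, T = 1.0, 2.0, 1.0

# Ansatz for the value function: V(t, xi) = b xi + b^2 (T - t) / (4a).
V_t = -b**2 / (4.0 * a)   # partial derivative w.r.t. t
V_xi = b                  # partial derivative w.r.t. xi
V_xixi = 0.0              # second derivative vanishes for this ansatz

def hjb_term(v):
    """r(t, xi, v) + V_t + v * V_xi + 0.5 * sigma^2 * V_xixi, with sigma = 1."""
    return -a * v * v + V_t + v * V_xi + 0.5 * V_xixi

# Maximise over a discretised control set U = [-5, 5].
controls = [i / 1000.0 - 5.0 for i in range(10001)]
v_star = max(controls, key=hjb_term)

print(v_star)            # expect b / (2a) = 1.0
print(hjb_term(v_star))  # expect 0: the supremum in (3.10) vanishes
```

Since the supremum of the bracket is exactly zero, the ansatz indeed satisfies (3.10) at this point; for other candidate functions the same search would report a nonzero residual.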
Here we finally see that we can obtain the value function $V(t,\xi)$ by maximising the Hamilton-Jacobi-Bellman equation, inserting the so-obtained optimum $u^*$ into the equation, and solving the resulting partial differential equation with boundary conditions. For examples, see the next chapter.

3.3 The Hamilton-Jacobi-Bellman equation

To make equation (3.10) a little shorter and easier to survey, we define for $v \in U$ and $S \in C^2(\mathbb{R}\times\mathbb{R}^n)$
\[ (L^v S)(t,\xi) = S_t(t,\xi) + \sum_{i=1}^n b^i(t,\xi,v)\,\frac{\partial S}{\partial \xi_i} + \frac12\sum_{i,j=1}^n (\sigma\sigma^T)^{ij}(t,\xi,v)\,\frac{\partial^2 S}{\partial \xi_i\,\partial \xi_j}. \tag{3.11} \]
For each $v$ the solution $(t,X_t)$ is an Itô diffusion with a generator $A$ given by $(AS)(t,\xi) = (L^{v(t,\xi)}S)(t,\xi)$. With this knowledge we can state the first important theorem, following [Ø].

Theorem 3.1 (The Hamilton-Jacobi-Bellman equation I)
Define $\Phi(t,\xi) = \sup\{J(\xi,v) \mid v = v(X_t) \text{ Markov control}\}$. Assume $\Phi \in C^2(G)\cap C(\bar G)$ satisfies
\[ E^x\Big[ |\Phi(\alpha,X_\alpha)| + \int_0^\alpha |(L^v\Phi)(t,X_t)|\,dt \Big] < \infty \]
for all bounded stopping times $\alpha \le \tau$, all $(t,\xi)\in G$, and all $v\in U$. Further, suppose that an optimal Markov control $u^*$ exists and that $\partial G$ is regular for the solution $(t,X_t)^{u^*}$, that is, $Q^x(\tau = 0) = 1$ for $(t,\xi)\in\partial G$. Then
\[ \sup_{v\in U}\{ r(t,\xi,v) + (L^v\Phi)(t,\xi) \} = 0, \quad (t,\xi)\in G, \tag{3.12} \]
and
\[ \Phi(t,\xi) = g(t,\xi), \quad (t,\xi)\in\partial G. \tag{3.13} \]
The supremum in (3.12) is attained for $v = u^*(t,\xi)$, where $u^*$ is an optimal Markov control. This means
\[ r(t,\xi,u^*(t,\xi)) + (L^{u^*(t,\xi)}\Phi)(t,\xi) = 0, \quad (t,\xi)\in G. \tag{3.14} \]

Proof: As $u^*$ is optimal, we get
\[ \Phi(t,\xi) = J(t,\xi,u^*(t,\xi)) = E^x\Big[\int_0^\tau r(s,X_s,u^*(s,X_s))\,ds + g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big]. \]
If $(t,\xi)$ is an element of $\partial G$, then $\tau = 0$ a.s. $Q^x$ since $\partial G$ is regular. Therefore, we obtain $\Phi(t,\xi) = g(t,\xi)$ for $(t,\xi)\in\partial G$, and thus (3.13). By the solution of the Dirichlet-Poisson problem (see Appendix B) we get
\[ (L^{u^*(t,\xi)}\Phi)(t,\xi) = -r(t,\xi,u^*(t,\xi)), \quad (t,\xi)\in G, \]
which proves (3.14).

For the remaining statement we fix $(t,\xi)\in G$ and choose a Markov control $v$. Let $\alpha \le \tau$ be a bounded stopping time. By the strong Markov property (see Theorem 2.9) we have
\[ E^x[\theta_\tau \psi \mid \mathcal{F}_\tau] = E^{X_\tau}[\psi] \]
for any stopping time $\tau$ and all bounded $\psi\in\mathcal{H}$, where $\mathcal{H}$ is the set of all real $\mathcal{F}^\infty$-measurable functions; $\theta$ denotes the shift operator. Further we have
\[ \theta_\beta \eta\,\mathbb{1}_{\{\beta<\infty\}} = g(X_{\tau_H^\beta})\,\mathbb{1}_{\{\tau_H^\beta<\infty\}}, \]
where $H\subset\mathbb{R}^n$ is measurable, $\tau_H$ is the first exit time from $H$ for an Itô diffusion $X_t$, $\beta$ is another stopping time, $\eta$ is defined as $\eta = g(X_{\tau_H})\,\mathbb{1}_{\{\tau_H<\infty\}}$ for a bounded continuous function $g$, and $\tau_H^\beta = \inf\{t>\beta \mid X_t\notin H\}$. The last property needed is that for $\zeta = \int_0^\tau g(X_s)\,ds$ we have
\[ \theta_r\,\zeta = \int_r^\tau g(X_s)\,ds. \]
Now we can calculate
\[ \begin{aligned} E^x[J(\alpha,X_\alpha,v)] &= E^x\Big[ E^{X_\alpha}\Big[\int_0^\tau r(s,X_s,v)\,ds + g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big]\Big] \\ &= E^x\Big[ E^x\Big[\theta_\alpha\Big(\int_0^\tau r(s,X_s,v)\,ds + g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big)\,\Big|\,\mathcal{F}_\alpha\Big]\Big] \\ &= E^x\Big[ E^x\Big[\int_\alpha^\tau r(s,X_s,v)\,ds + g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\,\Big|\,\mathcal{F}_\alpha\Big]\Big] \\ &= E^x\Big[\int_0^\tau r(s,X_s,v)\,ds + g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}} - \int_0^\alpha r(s,X_s,v)\,ds\Big] \\ &= J(t,\xi,v) - E^x\Big[\int_0^\alpha r(s,X_s,v)\,ds\Big]. \end{aligned} \]
So,
\[ J(t,\xi,v) = E^x\Big[\int_0^\alpha r(s,X_s,v)\,ds\Big] + E^x[J(\alpha,X_\alpha,v)]. \]
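The identity just derived — the expected reward splits at a stopping time $\alpha$ into the reward accumulated up to $\alpha$ plus the expected remaining value — can be checked by simulation. The following sketch is my own illustration, not from the thesis: it takes the toy uncontrolled case $dX_s = dB_s$ with $r(s,X_s) = X_s$ and a fixed horizon $\tau = T$, for which $J(t,\xi) = \xi\,(T-t)$ in closed form, and verifies the decomposition at a deterministic intermediate time $\alpha$ by Monte Carlo (all numbers are assumptions):

```python
import random

# Monte Carlo illustration of J(t, xi, v) = E^x[int_t^alpha r ds] + E^x[J(alpha, X_alpha, v)]
# for the toy choice dX_s = dB_s, r(s, X_s) = X_s, horizon tau = T.
# Then J(t, xi) = xi * (T - t), which we verify at an intermediate alpha.
random.seed(0)
t, T, xi = 0.0, 1.0, 0.5
alpha = 0.5 * (t + T)          # deterministic intermediate stopping time
n_paths, n_steps = 20000, 100
dt = (alpha - t) / n_steps

def J(s, x):                   # closed form for this toy example
    return x * (T - s)

total = 0.0
for _ in range(n_paths):
    x, running = xi, 0.0
    for _ in range(n_steps):   # Euler scheme for X up to time alpha
        running += x * dt      # accumulate int_t^alpha X_s ds
        x += random.gauss(0.0, dt ** 0.5)
    total += running + J(alpha, x)

estimate = total / n_paths
print(estimate)                # should be close to J(t, xi) = 0.5
```

Both terms have the correct expectation under the Euler scheme, so the estimate converges to $J(t,\xi) = \xi(T-t) = 0.5$ as the number of paths grows.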
[Figure: the domain $G$ in the $(s,y)$-plane with the subdomain $W$, the starting point $(t,x)$, and the cut-off time $t_1$.]

Let $W\subset G$ be of the form $W = \{(s,y)\in G \mid s < t_1\}$, where $t < t_1$, and put $\alpha = \inf\{t \mid (t,X_t)\notin W\}$. Suppose that an optimal control $u^*(s,y)$ exists and choose
\[ \tilde v(s,y) = \begin{cases} v & \text{if } (s,y)\in W, \\ u^*(s,y) & \text{if } (s,y)\in G\setminus W, \end{cases} \]
where $v\in U$ is arbitrary. Then
\[ \Phi(\alpha,X_\alpha) = J(\alpha,X_\alpha,u^*(\alpha,X_\alpha)) = J(\alpha,X_\alpha,\tilde v(\alpha,X_\alpha)), \]
and hence,
\[ \Phi(t,\xi) \ge J(t,\xi,\tilde v) = E^x\Big[\int_0^\alpha r(s,X_s,v)\,ds\Big] + E^x[\Phi(\alpha,X_\alpha)]. \]
Since $\Phi\in C^2(G)$ we can use Dynkin's formula (2.23) and get
\[ E^x[\Phi(\alpha,X_\alpha)] = \Phi(t,\xi) + E^x\Big[\int_0^\alpha (L^v\Phi)(s,X_s)\,ds\Big]. \]
This yields
\[ \Phi(t,\xi) \ge E^x\Big[\int_0^\alpha r(s,X_s,v)\,ds\Big] + \Phi(t,\xi) + E^x\Big[\int_0^\alpha (L^v\Phi)(s,X_s)\,ds\Big], \]
or, equivalently,
\[ 0 \ge E^x\Big[\int_0^\alpha \{r(s,X_s,v) + (L^v\Phi)(s,X_s)\}\,ds\Big]. \]
Consequently, we obtain
\[ \frac{E^x\big[\int_0^\alpha \{r(s,X_s,v) + (L^v\Phi)(s,X_s)\}\,ds\big]}{E^x[\alpha]} \le 0
\]
for all such $W$. When we finally take the limit $t_1 \downarrow t$, we obtain, since $r(\cdot,\cdot,v)$ and $(L^v\Phi)(\cdot,\cdot)$ are continuous at $(t,\xi)$, that
\[ r(t,\xi,v) + (L^v\Phi)(t,\xi) \le 0, \]
which, combined with (3.14), gives (3.12). $\square$

The statement of this theorem is that if an optimal control $u^*$ exists, then its value $v$ at the point $(t,\xi)$ is a point where the function
\[ v \mapsto r(t,\xi,v) + (L^v\Phi)(t,\xi), \quad v\in U, \]
attains its maximum (see [Ø]). The original stochastic optimal control problem thus reduces to finding the maximum of this real function over $U$. But the theorem above only states that it is necessary for $u^*$ to maximise this function, whereas the verification theorem given in the next section states that this is also sufficient: if we find some $u^*(t,\xi)$ at each point $(t,\xi)$ such that $r(t,\xi,v) + (L^v\Phi)(t,\xi)$ is maximal, and this maximum is zero, then $u^*$ will also be an optimal control.

3.4 The dual problem

Using the previous investigations we can now develop a duality theory. Let us first consider the case when $G$ is an unbounded domain within the space $\mathbb{R}\times\mathbb{R}^n$, and let us observe this problem for a fixed end time $T$. Then our stochastic control problem is given by
\[ J(X_t,u_t) = E^x\Big[\int_0^T r(s,X_s,u_s)\,ds\Big] \to \sup!, \tag{3.15} \]
where $r : \mathbb{R}_+\times\mathbb{R}^n\times U \to \mathbb{R}$ is again the profit rate function and the process $X_t$, defined by
\[ dX_t = b(t,X_t,u_t)\,dt + \sigma(t,X_t,u_t)\,dB_t, \]
starts at $t = 0$ and proceeds for all $t$ within $G\subset\mathbb{R}_+\times\mathbb{R}^n$. We can transform our maximisation problem into minimising the negative objective; this results in
\[ J(X_t,u_t) = E^x\Big[-\int_0^T r(s,X_s,u_s)\,ds\Big] \to \inf! \tag{3.16} \]
Thus, we get for a function
\[ S \in C^2(G)\cap C(\bar G) \tag{3.17} \]
by using an idea similar to the royal road of Carathéodory (see Appendix A) and suppressing the dependencies of $b$ and $\sigma$ on time, process, and control:
\[ \begin{aligned} J(X_t,u_t) &= E^x\Big[-\int_0^T r(s,X_s,u_s)\,ds\Big] \\ &= E^x\Big[-\int_0^T r(s,X_s,u_s)\,ds \pm \int_0^T \Big\{ S_t(s,X_s) + \sum_{i=1}^n b^i\,\frac{\partial S}{\partial\xi_i}(s,X_s) + \frac12\sum_{i,j=1}^n (\sigma\sigma^T)^{ij}\,\frac{\partial^2 S}{\partial\xi_i\partial\xi_j}(s,X_s) \Big\}\,ds\Big] \\ &= E^x\Big[-\int_0^T \Big\{ r(s,X_s,u_s) + S_t(s,X_s) + \sum_{i=1}^n b^i\,\frac{\partial S}{\partial\xi_i}(s,X_s) + \frac12\sum_{i,j=1}^n (\sigma\sigma^T)^{ij}\,\frac{\partial^2 S}{\partial\xi_i\partial\xi_j}(s,X_s) \Big\}\,ds \\ &\qquad\quad + \int_0^T \Big\{ S_t(s,X_s) + \sum_{i=1}^n b^i\,\frac{\partial S}{\partial\xi_i}(s,X_s) + \frac12\sum_{i,j=1}^n (\sigma\sigma^T)^{ij}\,\frac{\partial^2 S}{\partial\xi_i\partial\xi_j}(s,X_s) \Big\}\,ds + \int_0^T \sum_{i=1}^n \frac{\partial S}{\partial\xi_i}(s,X_s)\sum_{l=1}^m \sigma^{il}\,dB_s^l\Big] \\ &= E^x\Big[-\int_0^T \{ r(s,X_s,u_s) + (L^{u_s}S)(s,X_s) \}\,ds + \int_0^T \Big\{ S_t(s,X_s)\,ds + \sum_{i=1}^n \frac{\partial S}{\partial\xi_i}(s,X_s)\,dX_s^i + \frac12\sum_{i,j=1}^n (\sigma\sigma^T)^{ij}\,\frac{\partial^2 S}{\partial\xi_i\partial\xi_j}(s,X_s)\,ds \Big\}\Big] \\ &= E^x\Big[-\int_0^T \{ r(s,X_s,u_s) + (L^{u_s}S)(s,X_s) \}\,ds + \int_0^T dS(s,X_s)\Big] \\ &= \int_\Omega \underbrace{-\int_0^T \{ r(s,X(s,\omega),u_s) + (L^{u_s}S)(s,X(s,\omega)) \}\,ds}_{\ge 0}\,dQ^x(\omega) + E^x\Big[\int_0^T dS(s,X_s)\Big] \\ &\ge E^x\Big[\int_0^T dS(s,X_s)\Big] = E^x[S(T,X_T) - S(0,x_0)] \\ &\ge E^x\Big[\inf_{\zeta\in R(t)\cap G}\{ S(T,\zeta_T) - S(0,\zeta_0) \}\Big] = \inf_{\zeta\in R(t)\cap G}\{ S(T,\zeta_T) - S(0,\zeta_0) \} =: \Lambda(S), \end{aligned} \]
where $R(t) = \{\zeta_t\in\mathbb{R}^n \mid X_t(\omega) = \zeta_t\}$ is the set of accessibility of the stochastic process. This calculation holds by applying the multi-dimensional Itô formula to $S(t,X_t)$ and assuming $r(t,\xi,v) + (L^vS)(t,\xi) \le 0$ for all $(t,\xi)\in G$. We can do this because for every point $\xi$ we have a continuous realisation of a process that leads through this point. This will lead us to the verification theorem of the Hamilton-Jacobi-Bellman equation afterwards. Furthermore, in the penultimate step we eliminated the randomness to generalise our dual problem. To this end we introduced vectors $\zeta_t$ which take values of the process $X_t$.

This means we get the following dual problem to our original problem (3.16):
\[ \Lambda(S) \to \sup! \tag{3.18} \]
w.r.t. all $S \in \Gamma = \{ S\in C^2(G)\cap C(\bar G) \mid r(t,\xi,v) + (L^vS)(t,\xi) \le 0 \text{ for } (t,\xi)\in G \}$.

It is important to notice that we automatically obtained the condition $S\in C^2(G)$. This is different from the deterministic case, where $S$ only needs to be a linear function. To summarise the considerations above we can formulate the following theorem.

Theorem 3.2 Let $S\in\Gamma$ and let $u^*$ be an admissible Markov control for the stochastic control problem (3.16). If we have
(i) $\sup_{v\in U}\{ r(t,\xi,v) + (L^vS)(t,\xi) \} = r(t,\xi,u^*(t,\xi)) + (L^{u^*(t,\xi)}S)(t,\xi)$,
(ii) $r(t,\xi,u^*(t,\xi)) + (L^{u^*(t,\xi)}S)(t,\xi) = 0$,
(iii) $\Lambda(S) = E^x[S(T,X_T^{u^*})] - S(0,x_0)$,
then $u^*$ is optimal, for all $(t,\xi)\in G$.

Proof: If the stated conditions hold, we have equality in the estimation above. $\square$

As a reminder we note that this is true because of the properties and the use of the stochastic differential equation, continuous trajectories, and the introduced set of accessibility.

However, in general we cannot guarantee that the domain $G$ in which our process develops is unbounded. Therefore, let us assume $G$ to be fixed in $\mathbb{R}_+\times\mathbb{R}^n$ and let $\tau$ be the first time the process $X_t$ exits from $G$. Then our original problem is
\[ J(X_t,u_t) = E^x\Big[\int_0^\tau r(s,X_s,u_s)\,ds + g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big] \to \sup!,
\]
in which, as in the previous sections, we take the bequest function $g : \mathbb{R}_+\times\mathbb{R}^n\to\mathbb{R}$ into account. Again, we can reformulate this as a minimisation problem
\[ J(X_t,u_t) = E^x\Big[-\int_0^\tau r(s,X_s,u_s)\,ds - g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big] \to \inf! \tag{3.19} \]
and apply the same estimation steps as in the case of an unbounded $G$ to obtain (again suppressing the dependencies of $b$ and $\sigma$ on time, process, and control)
\[ \begin{aligned} J(X_t,u_t) &= E^x\Big[-\int_0^\tau r(s,X_s,u_s)\,ds - g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big] \\ &= E^x\Big[-\int_0^\tau r(s,X_s,u_s)\,ds - g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}} \pm \int_0^\tau \Big\{ S_t(s,X_s) + \sum_{i=1}^n b^i\,\frac{\partial S}{\partial\xi_i}(s,X_s) + \frac12\sum_{i,j=1}^n (\sigma\sigma^T)^{ij}\,\frac{\partial^2 S}{\partial\xi_i\partial\xi_j}(s,X_s) \Big\}\,ds\Big] \\ &= E^x\Big[-\int_0^\tau \{ r(s,X_s,u_s) + (L^{u_s}S)(s,X_s) \}\,ds + \int_0^\tau dS(s,X_s) - g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big] \\ &= \int_\Omega \underbrace{-\int_0^\tau \{ r(s,X(s,\omega),u_s) + (L^{u_s}S)(s,X(s,\omega)) \}\,ds}_{\ge 0}\,dQ^x(\omega) + E^x\Big[\int_0^\tau dS(s,X_s) - g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big] \\ &\ge E^x\Big[\int_0^\tau dS(s,X_s) - g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big] \\ &= E^x\Big[\mathbb{1}_{\{\tau=\infty\}}\int_0^\tau dS(s,X_s) + \mathbb{1}_{\{\tau<\infty\}}\int_0^\tau dS(s,X_s) - g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big] \\ &= E^x\Big[\mathbb{1}_{\{\tau=\infty\}}\Big(\lim_{\tau\to\infty} S(\tau,X_\tau) - S(0,x_0)\Big) + \mathbb{1}_{\{\tau<\infty\}}\big(-S(0,x_0)\big)\Big] \\ &\ge E^x\Big[\inf_{\zeta\in R(t)\cap G}\Big\{ \mathbb{1}_{\{\tau=\infty\}}\Big(\lim_{\tau\to\infty} S(\tau,\zeta_\tau) - S(0,\zeta_0)\Big) - \mathbb{1}_{\{\tau<\infty\}}\,S(0,\zeta_0) \Big\}\Big] \\ &= \inf_{\zeta\in R(t)\cap G}\Big\{ Q^x(\tau=\infty)\Big(\lim_{\tau\to\infty} S(\tau,\zeta_\tau) - S(0,\zeta_0)\Big) - Q^x(\tau<\infty)\,S(0,\zeta_0) \Big\} =: \Lambda(S), \end{aligned} \]
where $R(t) = \{\zeta_t\in\mathbb{R}^n \mid X_t(\omega) = \zeta_t\}$.
We deduced this because we know that
\[ \lim_{t\to\tau} S(t,X_t) = g(\tau,X_\tau), \quad\text{respectively}\quad S(t,\xi) = g(t,\xi), \quad (t,\xi)\in\partial G. \]
Again, we disposed of the randomness by introducing a vector $\zeta_t\in\mathbb{R}^n$ and arrive at the dual problem (3.18) once more. Of course, we have to bear in mind that in this case $G$ is a bounded domain and the function $\Lambda$, therefore, is defined in a slightly different way. In analogy to Theorem 3.2 we get

Theorem 3.3 Let $S\in\Gamma$ and let $u^*$ be an admissible Markov control for the stochastic control problem (3.19). If we have
(i) $\sup_{v\in U}\{ r(t,\xi,v) + (L^vS)(t,\xi) \} = r(t,\xi,u^*(t,\xi)) + (L^{u^*(t,\xi)}S)(t,\xi)$,
(ii) $r(t,\xi,u^*(t,\xi)) + (L^{u^*(t,\xi)}S)(t,\xi) = 0$,
(iii) $\Lambda(S) = E^x\Big[\mathbb{1}_{\{\tau=\infty\}}\Big(\lim_{\tau\to\infty} S(\tau,X_\tau^{u^*}) - S(0,x_0)\Big) + \mathbb{1}_{\{\tau<\infty\}}\big(-S(0,x_0)\big)\Big]$,
then $u^*$ is optimal, for all $(t,\xi)\in G$.

Again, the proof is simple because for $u^*$ the estimates in the calculation above are sharp.

With these considerations we can finally formulate a verification theorem for the Hamilton-Jacobi-Bellman equation (see [Ø]):

Theorem 3.4 (The Hamilton-Jacobi-Bellman equation II — a verification theorem)
Let $S\in C^2(G)\cap C(\bar G)$ be such that, for all $v\in U$,
\[ r(t,\xi,v) + (L^vS)(t,\xi) \le 0, \quad (t,\xi)\in G, \tag{3.20} \]
with boundary values
\[ \lim_{t\to\tau} S(t,X_t) = g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}} \quad\text{a.s. } Q^x, \tag{3.21} \]
and such that $\{ S(\tilde\tau,X_{\tilde\tau}) \mid \tilde\tau \text{ stopping time},\ \tilde\tau \le \tau \}$ is uniformly $Q^x$-integrable for all Markov controls $v$ and all $(t,\xi)\in G$. Then
\[ S(t,\xi) \ge J(\xi,v) \quad\text{for all Markov controls } v \text{ and all } (t,\xi)\in G. \tag{3.22} \]
If for each $(t,\xi)\in G$ we have found $u^*(t,\xi)$ such that
\[ r(t,\xi,u^*(t,\xi)) + (L^{u^*(t,\xi)}S)(t,\xi) = 0, \tag{3.23} \]
and $\{ S(\tilde\tau,X_{\tilde\tau})^{u^*} \mid \tilde\tau \text{ stopping time},\ \tilde\tau \le \tau \}$ is uniformly $Q^x$-integrable for all $(t,\xi)\in G$, then $u = u^*(t,\xi)$ is a Markov control such that
\[ S(t,\xi) = J(\xi,u^*), \tag{3.24} \]
and hence, if $u^*$ is admissible, then $u^*$ must be an optimal control and
\[ S(t,\xi) = \Phi(t,\xi). \tag{3.25} \]

Proof: Assume that $S$ satisfies (3.20) and (3.21), and let $v$ be a Markov control. Then we have $(L^vS)(\cdot,\cdot) \le -r(\cdot,\cdot,v)$ in $G$, and by Dynkin's formula (2.23) we obtain
\[ E^x[S(T_R,X_{T_R})] = S(t,\xi) + E^x\Big[\int_0^{T_R} (L^vS)(s,X_s)\,ds\Big] \le S(t,\xi) - E^x\Big[\int_0^{T_R} r(s,X_s,v)\,ds\Big], \]
where $T_R = \min\{ R, \tau, \inf\{t>0 \mid |X_t| \ge R\} \}$ for $R < \infty$. Together with
\[ E^x\Big[\int_0^\tau r(s,X_s,v)\,ds + g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big] < \infty \quad\text{for all } t, x, v, \]
condition (3.21), the integrability condition on $\{ S(\tilde\tau,X_{\tilde\tau}) \mid \tilde\tau \text{ stopping time},\ \tilde\tau \le \tau \}$, and the Fatou lemma, this gives
\[ S(t,\xi) \ge \liminf_{R\to\infty} E^x\Big[\int_0^{T_R} r(s,X_s,v)\,ds + S(T_R,X_{T_R})\Big] \ge E^x\Big[\int_0^\tau r(s,X_s,v)\,ds + g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big] = J(t,\xi,v), \]
and therefore (3.22). If $u^*$ is such that (3.23) and the integrability condition on $\{ S(\tilde\tau,X_{\tilde\tau})^{u^*} \mid \tilde\tau \text{ stopping time},\ \tilde\tau \le \tau \}$ hold, then we have
\[ E^x[S(T_R,X_{T_R})] = S(t,\xi) - E^x\Big[\int_0^{T_R} r(s,X_s,u^*)\,ds\Big], \]
and hence,
\[ S(t,\xi) = E^x\Big[\int_0^\tau r(s,X_s,u^*)\,ds + g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big] = J(t,\xi,u^*). \quad\square \]

In conclusion we see that we could develop a dual problem to our original stochastic control problem. But we have to accept that it is a very strong requirement for $S$ to be an element of $C^2(G)\cap C(\bar G)$. In the next section we want to weaken this demand.
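The weak duality established above — every admissible control bounds the dual value from above, with equality at the optimum — can be illustrated numerically. The sketch below is my own construction: the model is the quadratic-control problem of Example 4.1 with the terminal term $bX_T$ absorbed into the profit rate $r(v) = -av^2 + bv$, and all concrete numbers are assumptions. It exhibits a feasible $S\in\Gamma$, computes $\Lambda(S)$, and checks $J \ge \Lambda(S)$ with equality at $u^* = b/(2a)$:

```python
# Hypothetical finite-horizon check of weak duality (cf. Theorem 3.2),
# using the quadratic-control model of Example 4.1 rewritten with
# profit rate r(v) = -a v^2 + b v (terminal term b X_T absorbed into
# the integral); the parameter values are assumptions, not from the thesis.
a, b, T = 1.0, 2.0, 1.0

# Feasible S(t, xi) = c (T - t) with c = b^2 / (4a):
# r(v) + (L^v S)(t, xi) = -a v^2 + b v - c <= 0 for every v.
c = b * b / (4.0 * a)
Lambda_S = 0.0 - c * T      # Lambda(S) = S(T, .) - S(0, .) = -c T

def J_cost(v):
    """Cost E^x[-int_0^T r ds] for a constant control v (deterministic
    here, because r does not depend on the state)."""
    return T * (a * v * v - b * v)

# Weak duality: J_cost(v) >= Lambda(S) for every control value ...
gap = min(J_cost(i / 100.0 - 5.0) for i in range(1001)) - Lambda_S
print(gap >= 0.0)           # True

# ... with equality at the optimal control u* = b / (2a).
print(abs(J_cost(b / (2.0 * a)) - Lambda_S) < 1e-12)  # True
```

The choice $c = b^2/(4a)$ is the smallest constant making $S$ feasible, which is exactly why the duality gap closes; any larger $c$ would keep $S\in\Gamma$ but lower $\Lambda(S)$.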
3.5 A generalisation of the condition (3.17)

Let us first consider again the case when $X_t$ proceeds within $G$ for all times $t$. Let $Y$ be the set of all functions $S$ defined in the following way: there is a decomposition of $[0,T]$ into finitely many subintervals $[t_0=0,t_1], [t_1,t_2], \ldots, [t_{p-1},t_p=T]$ such that $S(t,X_t)\in C^2(G_i)\cap C(\bar G_i)$ for $G_i := \{(t,\xi)\in G \mid t\in[t_{i-1},t_i]\}$. Now we can estimate as before:
\[ \begin{aligned} J(X_t,u_t) &= E^x\Big[-\int_0^T r(s,X_s,u_s)\,ds\Big] \\ &\ge E^x\Big[\int_0^T \Big\{ S_t(s,X_s)\,ds + \sum_{i=1}^n \frac{\partial S}{\partial\xi_i}(s,X_s)\,dX_s^i + \frac12\sum_{i,j=1}^n (\sigma\sigma^T)^{ij}(s,X_s,u)\,\frac{\partial^2 S}{\partial\xi_i\partial\xi_j}\,ds \Big\}\Big] \\ &= E^x\Big[\sum_{i=0}^{p-1}\int_{t_i}^{t_{i+1}} dS(s,X_s)\Big] \\ &= E^x\Big[ S(T,X_T) - S(0,x_0) + \sum_{i=1}^{p-1}\big( S(t_i-,X_{t_i}) - S(t_i+,X_{t_i}) \big)\Big] \\ &\ge E^x\Big[\inf_{\zeta\in R(t)\cap G}\Big\{ S(T,\zeta_T) - S(0,\zeta_0) + \sum_{i=1}^{p-1}\big( S(t_i-,\zeta_{t_i}) - S(t_i+,\zeta_{t_i}) \big)\Big\}\Big] \\ &= \inf_{\zeta\in R(t)\cap G}\Big\{ S(T,\zeta_T) - S(0,\zeta_0) + \sum_{i=1}^{p-1}\big( S(t_i-,\zeta_{t_i}) - S(t_i+,\zeta_{t_i}) \big)\Big\} =: \Lambda(S), \end{aligned} \]
where again $R(t) = \{\zeta_t\in\mathbb{R}^n \mid X_t(\omega) = \zeta_t\}$.

In the case of a bounded domain $G$, and therefore by considering the control problem only up to the time $\tau$ when our process $X_t$ exits from $G$ for the first time, the argumentation and calculation are closely related. We consider the set $N$ of all times $t$ such that $S(t,X_t)\notin C^2(G)\cap C(\bar G)$. Further, we assume that $Q^x(N) = 0$, or, correspondingly, $(t,X_t)\in G$ a.s. Then we
conclude
\[ \begin{aligned} J(X_t,u_t) &= E^x\Big[-\int_0^\tau r(s,X_s,u_s)\,ds - g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big] \\ &\ge E^x\Big[\int_0^\tau \Big\{ S_t(s,X_s)\,ds + \sum_{i=1}^n \frac{\partial S}{\partial\xi_i}(s,X_s)\,dX_s^i + \frac12\sum_{i,j=1}^n (\sigma\sigma^T)^{ij}\,\frac{\partial^2 S}{\partial\xi_i\partial\xi_j}(s,X_s)\,ds \Big\} - g(\tau,X_\tau)\,\mathbb{1}_{\{\tau<\infty\}}\Big] \\ &= E^x\Big[\mathbb{1}_{\{\tau=\infty\}}\int_0^\tau dS(s,X_s) + \mathbb{1}_{\{\tau<\infty\}}\int_0^\tau dS(s,X_s) - \mathbb{1}_{\{\tau<\infty\}}\,g(\tau,X_\tau)\Big] \\ &= E^x\Big[\mathbb{1}_{\{\tau=\infty\}}\int_0^\tau dS(s,X_s) + \mathbb{1}_{\{\tau<\infty\}}\Big(\sum_{i=0}^{p}\int_{t_i}^{t_{i+1}} dS(s,X_s) - g(\tau,X_\tau)\Big)\Big] \\ &= E^x\Big[\mathbb{1}_{\{\tau=\infty\}}\int_0^\tau dS(s,X_s) + \mathbb{1}_{\{\tau<\infty\}}\Big(\sum_{i=1}^{p}\big( S(t_i-,X_{t_i}) - S(t_i+,X_{t_i}) \big) - S(0,x_0)\Big)\Big] \\ &= E^x\Big[\mathbb{1}_{\{\tau=\infty\}}\Big(\sum_{t\in N}\big( S(t-,X_t) - S(t+,X_t) \big)\Big) + \mathbb{1}_{\{\tau<\infty\}}\Big(\sum_{i=1}^{p}\big( S(t_i-,X_{t_i}) - S(t_i+,X_{t_i}) \big) - S(0,x_0)\Big)\Big] \\ &\ge E^x\Big[\inf_{\zeta\in R(t)\cap G}\Big\{ \mathbb{1}_{\{\tau=\infty\}}\Big(\sum_{t\in N}\big( S(t-,\zeta_t) - S(t+,\zeta_t) \big)\Big) + \mathbb{1}_{\{\tau<\infty\}}\Big(\sum_{i=1}^{p}\big( S(t_i-,\zeta_{t_i}) - S(t_i+,\zeta_{t_i}) \big) - S(0,\zeta_0)\Big) \Big\}\Big] \\ &= \inf_{\zeta\in R(t)\cap G}\Big\{ Q^x(\tau=\infty)\Big(\sum_{t\in N}\big( S(t-,\zeta_t) - S(t+,\zeta_t) \big)\Big) + Q^x(\tau<\infty)\Big(\sum_{i=1}^{p}\big( S(t_i-,\zeta_{t_i}) - S(t_i+,\zeta_{t_i}) \big) - S(0,\zeta_0)\Big) \Big\} =: \Lambda(S), \end{aligned} \]
where we need $R(t) = \{\zeta_t\in\mathbb{R}^n \mid X_t(\omega) = \zeta_t\}$. We get this estimation because in the case $\tau < \infty$ we have a finite number $p$ of points $t$ for which $S(t,X_t)\notin C^2(G)\cap C(\bar G)$, and because $S(t,\xi) = g(t,\xi)$ for
$(t,\xi)\in\partial G$.
4 Economic examples

In this final chapter we want to examine a few problems which arise in economics. For a deeper insight see [Ø], [KK], and [B].

Example 4.1 (Maximisation of the expected value with quadratic control costs)
This first example can be found in [KK]. We consider a process $X_t$ given by
\[ X_t = x_0 + B_t + \int_0^t u_s\,ds, \tag{4.1} \]
where $B_t$ is a one-dimensional Brownian motion and the control action is to vary the intensity of the drift process. Now let the choice of $u_t$ result in costs of the form $a\,u_t^2$; then our goal is not only to reach a large value $X_T$ of the process at the end time $T$, but also to mind the control costs. Thus, we want to minimise
\[ E^x\Big[\int_0^T a\,u_s^2\,ds - b\,X_T\Big], \quad a, b > 0. \tag{4.2} \]
At first sight we notice that under special requirements on $u_t$ we have
\[ E[X_T] = x_0 + E\Big[\int_0^T u_s\,ds\Big]. \]
This means we can rewrite (4.2) as
\[ E^x\Big[\int_0^T \{ a\,u_s^2 - b\,u_s \}\,ds\Big] - b\,x_0. \tag{4.3} \]
Minimising the integrand in (4.3) w.r.t. $u_t$ we obtain the optimal control
\[ u_t^* = \frac{b}{2a}. \]
But the aim of this example is to solve the occurring problem by applying the verification theorem (Theorem 3.4). As mentioned before, we have the cost functional
\[ J(X_t,u_t) = E^x\Big[\int_0^T a\,u_s^2\,ds - b\,X_T\Big]
\]
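The candidate $u_t^* = b/(2a)$ from (4.3) can also be checked by simulating (4.1) directly. The following Monte Carlo sketch is my own illustration (the parameter values are assumptions, not from [KK]); it estimates the cost under the constant control $u^*$ and compares it with the value $-b^2T/(4a) - b\,x_0$ obtained from the pointwise minimisation:

```python
import random

# Monte Carlo sketch of Example 4.1: simulate X_t = x0 + B_t + int_0^t u ds
# under the constant control u* = b/(2a) and compare the cost
# E[int_0^T a u^2 ds - b X_T] with the value -b^2 T/(4a) - b x0
# predicted by the pointwise minimisation in (4.3).
random.seed(1)
a, b, T, x0 = 1.0, 2.0, 1.0, 0.0
u_star = b / (2.0 * a)
predicted = -b * b * T / (4.0 * a) - b * x0

n_paths, n_steps = 20000, 100
dt = T / n_steps
total = 0.0
for _ in range(n_paths):
    x = x0
    for _ in range(n_steps):   # Euler scheme: dX = u* dt + dB
        x += u_star * dt + random.gauss(0.0, dt ** 0.5)
    total += a * u_star**2 * T - b * x   # running cost is deterministic here

estimate = total / n_paths
print(estimate, predicted)     # the estimate should be close to predicted
```

With these numbers the predicted cost is $-1$, and the sample mean fluctuates around it with a standard error of roughly $0.014$, so the pointwise minimisation and the simulation agree.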
Numerical Approximations of Stochastic Optimal Stopping and Control Problems David Šiška Doctor of Philosophy University of Edinburgh 9th November 27 Abstract We study numerical approximations for the
More informationBernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012
1 Stochastic Calculus Notes March 9 th, 1 In 19, Bachelier proposed for the Paris stock exchange a model for the fluctuations affecting the price X(t) of an asset that was given by the Brownian motion.
More informationMaximum Process Problems in Optimal Control Theory
J. Appl. Math. Stochastic Anal. Vol. 25, No., 25, (77-88) Research Report No. 423, 2, Dept. Theoret. Statist. Aarhus (2 pp) Maximum Process Problems in Optimal Control Theory GORAN PESKIR 3 Given a standard
More informationNOTES ON CALCULUS OF VARIATIONS. September 13, 2012
NOTES ON CALCULUS OF VARIATIONS JON JOHNSEN September 13, 212 1. The basic problem In Calculus of Variations one is given a fixed C 2 -function F (t, x, u), where F is defined for t [, t 1 ] and x, u R,
More informationNested Uncertain Differential Equations and Its Application to Multi-factor Term Structure Model
Nested Uncertain Differential Equations and Its Application to Multi-factor Term Structure Model Xiaowei Chen International Business School, Nankai University, Tianjin 371, China School of Finance, Nankai
More information1. Stochastic Process
HETERGENEITY IN QUANTITATIVE MACROECONOMICS @ TSE OCTOBER 17, 216 STOCHASTIC CALCULUS BASICS SANG YOON (TIM) LEE Very simple notes (need to add references). It is NOT meant to be a substitute for a real
More informationThe Wiener Itô Chaos Expansion
1 The Wiener Itô Chaos Expansion The celebrated Wiener Itô chaos expansion is fundamental in stochastic analysis. In particular, it plays a crucial role in the Malliavin calculus as it is presented in
More informationStochastic Processes
Stochastic Processes A very simple introduction Péter Medvegyev 2009, January Medvegyev (CEU) Stochastic Processes 2009, January 1 / 54 Summary from measure theory De nition (X, A) is a measurable space
More informationRandom Process Lecture 1. Fundamentals of Probability
Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus
More informationFunctional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals
Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico
More informationn E(X t T n = lim X s Tn = X s
Stochastic Calculus Example sheet - Lent 15 Michael Tehranchi Problem 1. Let X be a local martingale. Prove that X is a uniformly integrable martingale if and only X is of class D. Solution 1. If If direction:
More informationNotes on Measure, Probability and Stochastic Processes. João Lopes Dias
Notes on Measure, Probability and Stochastic Processes João Lopes Dias Departamento de Matemática, ISEG, Universidade de Lisboa, Rua do Quelhas 6, 1200-781 Lisboa, Portugal E-mail address: jldias@iseg.ulisboa.pt
More information1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989),
Real Analysis 2, Math 651, Spring 2005 April 26, 2005 1 Real Analysis 2, Math 651, Spring 2005 Krzysztof Chris Ciesielski 1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer
More informationIn terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.
1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t
More informationµ X (A) = P ( X 1 (A) )
1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration
More information{σ x >t}p x. (σ x >t)=e at.
3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ
More informationOptimal Stopping under Adverse Nonlinear Expectation and Related Games
Optimal Stopping under Adverse Nonlinear Expectation and Related Games Marcel Nutz Jianfeng Zhang First version: December 7, 2012. This version: July 21, 2014 Abstract We study the existence of optimal
More informationIntroduction to numerical simulations for Stochastic ODEs
Introduction to numerical simulations for Stochastic ODEs Xingye Kan Illinois Institute of Technology Department of Applied Mathematics Chicago, IL 60616 August 9, 2010 Outline 1 Preliminaries 2 Numerical
More informationNotes on Measure Theory and Markov Processes
Notes on Measure Theory and Markov Processes Diego Daruich March 28, 2014 1 Preliminaries 1.1 Motivation The objective of these notes will be to develop tools from measure theory and probability to allow
More informationBrownian Motion. Chapter Stochastic Process
Chapter 1 Brownian Motion 1.1 Stochastic Process A stochastic process can be thought of in one of many equivalent ways. We can begin with an underlying probability space (Ω, Σ,P and a real valued stochastic
More informationUniversal examples. Chapter The Bernoulli process
Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial
More informationn [ F (b j ) F (a j ) ], n j=1(a j, b j ] E (4.1)
1.4. CONSTRUCTION OF LEBESGUE-STIELTJES MEASURES In this section we shall put to use the Carathéodory-Hahn theory, in order to construct measures with certain desirable properties first on the real line
More informationLecture 4: Ito s Stochastic Calculus and SDE. Seung Yeal Ha Dept of Mathematical Sciences Seoul National University
Lecture 4: Ito s Stochastic Calculus and SDE Seung Yeal Ha Dept of Mathematical Sciences Seoul National University 1 Preliminaries What is Calculus? Integral, Differentiation. Differentiation 2 Integral
More informationRough paths methods 4: Application to fbm
Rough paths methods 4: Application to fbm Samy Tindel Purdue University University of Aarhus 2016 Samy T. (Purdue) Rough Paths 4 Aarhus 2016 1 / 67 Outline 1 Main result 2 Construction of the Levy area:
More informationStochastic Analysis I S.Kotani April 2006
Stochastic Analysis I S.Kotani April 6 To describe time evolution of randomly developing phenomena such as motion of particles in random media, variation of stock prices and so on, we have to treat stochastic
More information2 Statement of the problem and assumptions
Mathematical Notes, 25, vol. 78, no. 4, pp. 466 48. Existence Theorem for Optimal Control Problems on an Infinite Time Interval A.V. Dmitruk and N.V. Kuz kina We consider an optimal control problem on
More information1.1 Definition of BM and its finite-dimensional distributions
1 Brownian motion Brownian motion as a physical phenomenon was discovered by botanist Robert Brown as he observed a chaotic motion of particles suspended in water. The rigorous mathematical model of BM
More informationOPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS
APPLICATIONES MATHEMATICAE 29,4 (22), pp. 387 398 Mariusz Michta (Zielona Góra) OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS Abstract. A martingale problem approach is used first to analyze
More informationStochastic Differential Equations
CHAPTER 1 Stochastic Differential Equations Consider a stochastic process X t satisfying dx t = bt, X t,w t dt + σt, X t,w t dw t. 1.1 Question. 1 Can we obtain the existence and uniqueness theorem for
More informationOptimal stopping for non-linear expectations Part I
Stochastic Processes and their Applications 121 (2011) 185 211 www.elsevier.com/locate/spa Optimal stopping for non-linear expectations Part I Erhan Bayraktar, Song Yao Department of Mathematics, University
More informationLecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1
Random Walks and Brownian Motion Tel Aviv University Spring 011 Lecture date: May 0, 011 Lecture 9 Instructor: Ron Peled Scribe: Jonathan Hermon In today s lecture we present the Brownian motion (BM).
More information02. Measure and integral. 1. Borel-measurable functions and pointwise limits
(October 3, 2017) 02. Measure and integral Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2017-18/02 measure and integral.pdf]
More informationTHEOREMS, ETC., FOR MATH 515
THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every
More informationNumerical Approximations of Stochastic Optimal Stopping and Control Problems
Numerical Approximations of Stochastic Optimal Stopping and Control Problems David Šiška Doctor of Philosophy University of Edinburgh 9th November 27 Abstract We study numerical approximations for the
More informationAlbert N. Shiryaev Steklov Mathematical Institute. On sharp maximal inequalities for stochastic processes
Albert N. Shiryaev Steklov Mathematical Institute On sharp maximal inequalities for stochastic processes joint work with Yaroslav Lyulko, Higher School of Economics email: albertsh@mi.ras.ru 1 TOPIC I:
More information3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?
MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable
More informationA TWO PARAMETERS AMBROSETTI PRODI PROBLEM*
PORTUGALIAE MATHEMATICA Vol. 53 Fasc. 3 1996 A TWO PARAMETERS AMBROSETTI PRODI PROBLEM* C. De Coster** and P. Habets 1 Introduction The study of the Ambrosetti Prodi problem has started with the paper
More informationON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME
ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems
More informationPreliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 2012
Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 202 The exam lasts from 9:00am until 2:00pm, with a walking break every hour. Your goal on this exam should be to demonstrate mastery of
More informationSTOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI
STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI Contents Preface 1 1. Introduction 1 2. Preliminaries 4 3. Local martingales 1 4. The stochastic integral 16 5. Stochastic calculus 36 6. Applications
More informationBrownian Motion and Conditional Probability
Math 561: Theory of Probability (Spring 2018) Week 10 Brownian Motion and Conditional Probability 10.1 Standard Brownian Motion (SBM) Brownian motion is a stochastic process with both practical and theoretical
More informationCHAPTER 3 Further properties of splines and B-splines
CHAPTER 3 Further properties of splines and B-splines In Chapter 2 we established some of the most elementary properties of B-splines. In this chapter our focus is on the question What kind of functions
More information