Jan Kallsen. Stochastic Optimal Control in Mathematical Finance


Jan Kallsen

Stochastic Optimal Control in Mathematical Finance

CAU zu Kiel, WS 15/16, as of April 21, 2016

Contents

0 Motivation

I Discrete time

1 Recap of stochastic processes
  1.1 Processes, stopping times, martingales
  1.2 Stochastic integration
  1.3 Conditional jump distribution
  1.4 Essential supremum
2 Dynamic Programming
3 Optimal Stopping
4 Markovian situation
5 Stochastic Maximum Principle

II Continuous time

6 Recap of stochastic processes
  Continuous semimartingales
  Processes, stopping times, martingales
  Brownian motion
  Quadratic variation
  Square-integrable martingales
  Stopping times
  Stochastic integral
  Differential notation
  Itō processes
  Itō diffusions
  Doléans exponential
  Martingale representation
  Change of measure
7 Dynamic programming
8 Optimal stopping
9 Markovian situation
  Stochastic control
  Optimal stopping
10 Stochastic maximum principle

Bibliography

Chapter 0

Motivation

In Mathematical Finance one often faces optimization problems of various kinds, in particular when it comes to choosing trading strategies with in some sense maximal utility or minimal risk. The choice of an optimal exercise time of an American option belongs to this category as well. Such problems can be tackled with different methods. We distinguish two main approaches, which are discussed both in discrete and in continuous time. As a motivation we first consider the simple situation of maximizing a deterministic function of one or several variables.

Example 0.1

1. (Direct approach) Suppose that the goal is to maximize a function

$(x, \alpha) \mapsto \sum_{t=1}^T f(t, x_{t-1}, \alpha_t) + g(x_T)$

over all $x = (x_1, \dots, x_T) \in (\mathbb{R}^d)^T$, $\alpha = (\alpha_1, \dots, \alpha_T) \in A^T$ such that

$\Delta x_t := x_t - x_{t-1} = \delta(x_{t-1}, \alpha_t), \quad t = 1, \dots, T,$

for some given function $\delta : \mathbb{R}^d \times \mathbb{R}^m \to \mathbb{R}^d$. The initial value $x_0 \in \mathbb{R}^d$, the state space of controls $A \subset \mathbb{R}^m$ and the objective functions $f : \{1, \dots, T\} \times \mathbb{R}^d \times A \to \mathbb{R}$, $g : \mathbb{R}^d \to \mathbb{R}$ are supposed to be given. The approach in Chapters 2 and 7 below corresponds to finding the maximum directly, without relying on smoothness or convexity of the functions $f, g, \delta$ or on topological properties of $A$. Rather, the idea is to reduce the problem to a sequence of simpler optimizations in just one $A$-valued variable $\alpha_t$.

2. (Lagrange multiplier approach) Since the problem above concerns constrained optimization, Lagrange multiplier techniques may make sense. To this end, define the Lagrange function

$L(x, \alpha, y) := \sum_{t=1}^T f(t, x_{t-1}, \alpha_t) + g(x_T) - \sum_{t=1}^T y_t^\top \big( \Delta x_t - \delta(x_{t-1}, \alpha_t) \big)$

on $(\mathbb{R}^d)^T \times A^T \times (\mathbb{R}^d)^T$. The usual first-order conditions lead us to look for a candidate $(x^\star, \alpha^\star, y^\star) \in (\mathbb{R}^d)^T \times A^T \times (\mathbb{R}^d)^T$ satisfying

(a) $\Delta x^\star_t = \delta(x^\star_{t-1}, \alpha^\star_t)$ for $t = 1, \dots, T$, where we set $x^\star_0 := x_0$,

(b) $y^\star_T = \nabla g(x^\star_T)$,

(c) $\Delta y^\star_t = -\nabla_x H(t, x^\star_{t-1}, \alpha^\star_t)$ for $t = 1, \dots, T$, where we set

$H(t, \xi, a) := f(t, \xi, a) + y_t^{\star\top} \delta(\xi, a)$

and $\nabla_x H$ denotes the gradient of $H$ viewed as a function of its second argument,

(d) $\alpha^\star_t$ maximizes $a \mapsto H(t, x^\star_{t-1}, a)$ on $A$ for $t = 1, \dots, T$.

Provided that some convexity conditions hold, (a)-(d) are in fact sufficient for optimality of $\alpha^\star$:

Lemma 0.2 Suppose that the set $A$ is convex, that $\xi \mapsto g(\xi)$ and $(\xi, a) \mapsto H(t, \xi, a)$, $t = 1, \dots, T$, are concave, and that $\xi \mapsto g(\xi)$ and $\xi \mapsto H(t, \xi, a)$, $t = 1, \dots, T$, $a \in A$, are differentiable. If Conditions (a)-(d) hold, then $(x^\star, \alpha^\star)$ is optimal for the problem in Example 0.1.

Proof. For any competitor $(x, \alpha)$ satisfying the constraints set $h(t, \xi) := \sup_{a \in A} H(t, \xi, a)$. Condition (d) yields $h(t, x^\star_{t-1}) = H(t, x^\star_{t-1}, \alpha^\star_t)$ for $t = 1, \dots, T$. We have

$\sum_{t=1}^T f(t, x_{t-1}, \alpha_t) + g(x_T) - \bigg( \sum_{t=1}^T f(t, x^\star_{t-1}, \alpha^\star_t) + g(x^\star_T) \bigg)$
$= \sum_{t=1}^T \Big( H(t, x_{t-1}, \alpha_t) - H(t, x^\star_{t-1}, \alpha^\star_t) - y_t^{\star\top} (\Delta x_t - \Delta x^\star_t) \Big) + g(x_T) - g(x^\star_T)$
$= \sum_{t=1}^T \Big( H(t, x_{t-1}, \alpha_t) - h(t, x_{t-1}) + h(t, x_{t-1}) - h(t, x^\star_{t-1}) - y_t^{\star\top} (\Delta x_t - \Delta x^\star_t) \Big) + g(x_T) - g(x^\star_T)$
$\leq \sum_{t=1}^T \Big( \nabla_x h(t, x^\star_{t-1})^\top (x_{t-1} - x^\star_{t-1}) - y_t^{\star\top} (\Delta x_t - \Delta x^\star_t) \Big) + \nabla g(x^\star_T)^\top (x_T - x^\star_T)$ (0.1)
$= \sum_{t=1}^T \Big( -\Delta y_t^{\star\top} (x_{t-1} - x^\star_{t-1}) - y_t^{\star\top} \Delta(x_t - x^\star_t) \Big) + y_T^{\star\top} (x_T - x^\star_T)$ (0.2)
$= y_0^{\star\top} (x_0 - x_0) = 0,$

where existence of $\nabla_x h(t, x^\star_{t-1})$, inequality (0.1) as well as equation (0.2) follow from Lemma 0.3 below and the concavity of $g$; the last equality is obtained by summation by parts, using $x^\star_0 = x_0$.

Under some more convexity (e.g. if $\delta$ is affine and $f(t, \cdot, \cdot)$ is concave for $t = 1, \dots, T$), the Lagrange multiplier solves some dual minimisation problem. This happens e.g. in the stochastic examples in Chapter 5.
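The direct approach of Example 0.1 can be made concrete numerically: the problem is solved backwards, one $A$-valued variable $\alpha_t$ at a time. The following is a minimal sketch of this idea on a grid for $d = m = 1$; the concrete $f$, $g$, $\delta$ and the discretization are illustrative assumptions, not taken from these notes.

```python
import numpy as np

# Backward induction for Example 0.1 with d = m = 1 (all concrete choices are
# illustrative assumptions): maximize sum_t f(t, x_{t-1}, a_t) + g(x_T)
# subject to x_t = x_{t-1} + delta(x_{t-1}, a_t).

T = 5
A = np.linspace(-1.0, 1.0, 41)        # discretized control set A
X = np.linspace(-3.0, 3.0, 121)       # discretized state space

def f(t, x, a):                        # running objective
    return -0.1 * a**2

def g(x):                              # terminal objective
    return -(x - 1.0)**2

def delta(x, a):                       # dynamics increment
    return a + 0.0 * x

v = g(X)                               # value function at time T on the grid
policy = []
for t in range(T, 0, -1):
    x_next = X[:, None] + delta(X[:, None], A[None, :])
    v_next = np.interp(x_next.ravel(), X, v).reshape(x_next.shape)
    q = f(t, X[:, None], A[None, :]) + v_next   # one-period problem in a_t
    policy.insert(0, A[np.argmax(q, axis=1)])   # maximizer as function of x_{t-1}
    v = q.max(axis=1)

print("value starting from x_0 = 0:", np.interp(0.0, X, v))
```

With these illustrative choices the computed policy steers the state towards 1 while penalizing large controls; the point is only that each step is a one-variable maximization over $A$.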

The following lemma is a version of the envelope theorem, which makes a statement on the derivative of the maximum of a parametrised function.

Lemma 0.3 Let $A$ be a convex set, $f : \mathbb{R}^d \times A \to \mathbb{R} \cup \{-\infty\}$ a concave function, and $\bar f(x) := \sup_{a \in A} f(x, a)$, $x \in \mathbb{R}^d$. Then $\bar f$ is concave. Suppose in addition that, for some fixed $x \in \mathbb{R}^d$, the optimizer $a^\star := \operatorname{argmax}_{a \in A} f(x, a)$ exists and $\tilde x \mapsto f(\tilde x, a^\star)$ is differentiable in $x$. Then $\bar f$ is differentiable in $x$ with derivative

$D_i \bar f(x) = D_i f(x, a^\star), \quad i = 1, \dots, d.$ (0.3)

Proof. One easily verifies that $\bar f$ is concave. For $h \in \mathbb{R}^d$ we have

$\bar f(x + yh) \geq f(x + yh, a^\star) = f(x, a^\star) + y \sum_{i=1}^d D_i f(x, a^\star) h_i + o(y)$

as $y \in \mathbb{R}$ tends to 0. In view of [HUL13, Proposition I.1.1.4], concavity of $\bar f$ implies that we actually have

$\bar f(x + yh) \leq f(x, a^\star) + y \sum_{i=1}^d D_i f(x, a^\star) h_i$

and hence differentiability of $\bar f$ in $x$ with derivative (0.3).

In the remainder of this course we consider optimization in a dynamic stochastic setup. Green parts in these notes are skipped either because they are assumed to be known (Chapters 1 and 6) or for lack of time. Comments are welcome, in particular if they concern errors in this text.
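Equation (0.3) is easy to test numerically. The following sketch compares a finite-difference derivative of $\bar f$ with the partial derivative of $f$ at the maximizer; the concrete concave $f$ is an illustrative assumption, not from the notes.

```python
import numpy as np

# Envelope theorem check for f(x, a) = -(x - a)^2 - a^2 (illustrative choice):
# a*(x) = x/2 and fbar(x) = sup_a f(x, a) = -x^2/2, so D fbar(x) = -x = D_x f(x, a*).

A = np.linspace(-5.0, 5.0, 100001)

def f(x, a):
    return -(x - a)**2 - a**2

def fbar(x):
    return f(x, A).max()

x = 1.3
a_star = A[np.argmax(f(x, A))]

eps = 1e-4
d_fbar = (fbar(x + eps) - fbar(x - eps)) / (2 * eps)   # numerical D fbar(x)
d_f = -2 * (x - a_star)                                 # exact D_x f(x, a*)
print(d_fbar, d_f)                                      # both approximately -1.3
```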

Part I

Discrete time

Chapter 1

Recap of stochastic processes

The theory of stochastic processes deals with random functions of time, as e.g. asset prices, interest rates, or trading strategies. As is true for Mathematical Finance as well, it can be developed in both discrete and continuous time. Actual calculations are sometimes easier and more transparent in continuous-time models, but the theory typically requires less background in discrete time.

1.1 Processes, stopping times, martingales

The natural starting point in probability theory is a probability space $(\Omega, \mathscr{F}, P)$. The more or less abstract sample space $\Omega$ stands for the possible outcomes of the random experiment. It could e.g. contain all conceivable sample paths of a stock price process. The probability measure $P$ assigns probabilities to subsets of outcomes. For measure-theoretic reasons it is typically impossible to assign probabilities to all subsets of $\Omega$ in a consistent manner. As a way out one specifies a $\sigma$-field $\mathscr{F}$, i.e. a collection of subsets of $\Omega$ which is closed under countable set operations as e.g. $\cup$, $\cap$, $\setminus$, $^c$. The probability $P(F)$ is defined only for events $F \in \mathscr{F}$.

Random variables $X$ are functions of the outcome $\omega \in \Omega$. Typically its values $X(\omega)$ are numbers but they may also be vectors or even functions, in which case $X$ is a random vector resp. process. We denote by $E(X)$, $\operatorname{Var}(X)$ the expected value and variance of a real-valued random variable. Accordingly, $E(X)$, $\operatorname{Cov}(X)$ denote the expectation vector and covariance matrix of a random vector $X$.

For static random experiments one needs to consider only two states of information. Before the experiment nothing precise is known about the outcome, only probabilities and expected values can be assigned. After the experiment the outcome is completely determined. In dynamic random experiments as e.g. stock markets the situation is more involved. During the time interval of observation, some random events (e.g. yesterday's stock returns) have already happened and can be considered as deterministic, whereas others (e.g. tomorrow's stock returns) still belong to the unknown future. As time passes, more and more information is accumulated. This increasing knowledge is expressed mathematically in terms of a filtration $\mathbb{F} = (\mathscr{F}_t)_{t \geq 0}$, i.e. an increasing sequence of sub-$\sigma$-fields of $\mathscr{F}$. The collection of events $\mathscr{F}_t$ stands for the observable information up to time $t$.

The statement $F \in \mathscr{F}_t$ means that the random event $F$ (e.g. $F = \{$stock return positive at time $t-1\}$) is no longer random at time $t$: we know for sure whether it is true or not. If our observable information is e.g. given by the evolution of the stock price, then $\mathscr{F}_t$ contains all events that can be expressed in terms of the stock price up to time $t$. The quadruple $(\Omega, \mathscr{F}, \mathbb{F}, P)$ is called a filtered probability space. We consider it to be fixed during most of the following. Often one assumes $\mathscr{F}_0 = \{\emptyset, \Omega\}$, i.e. $\mathscr{F}_0$ is the trivial $\sigma$-field corresponding to no prior information.

As time passes, not only the observable information but also probabilities and expectations of future events change. E.g. our conception of the terminal stock price evolves gradually from vague ideas to perfect knowledge. This is modelled mathematically in terms of conditional expectations. The conditional expectation $E(X|\mathscr{F}_t)$ of a random variable $X$ is its expected value given the information up to time $t$. As such, it is not a number but itself a random variable which may depend on the randomness up to time $t$, e.g. on the stock price up to $t$ in the above example. Mathematically speaking, $Y = E(X|\mathscr{F}_t)$ is $\mathscr{F}_t$-measurable, which means that $\{Y \in B\} := \{\omega \in \Omega : Y(\omega) \in B\} \in \mathscr{F}_t$ for any reasonable (i.e. Borel) set $B$. Accordingly, the conditional probability $P(F|\mathscr{F}_t)$ denotes the probability of an event $F \in \mathscr{F}$ given the information up to time $t$. As is true for conditional expectation, it is not a number but an $\mathscr{F}_t$-measurable random variable.

Formally, the conditional expectation $E(X|\mathscr{F}_t)$ is defined as the unique $\mathscr{F}_t$-measurable random variable $Y$ such that $E(XZ) = E(YZ)$ for any bounded, $\mathscr{F}_t$-measurable random variable $Z$. It can also be interpreted as the best prediction of $X$ given $\mathscr{F}_t$. Indeed, if $E(X^2) < \infty$, then $E(X|\mathscr{F}_t)$ minimizes the mean squared difference $E((X - Z)^2)$ among all $\mathscr{F}_t$-measurable random variables $Z$.

Strictly speaking, $E(X|\mathscr{F}_t)$ is unique only up to a set of probability 0, i.e. any two versions $Y, \tilde{Y}$ satisfy $P(Y \neq \tilde{Y}) = 0$. In these notes we do not make such fine distinctions. Equalities, inequalities etc. are always meant to hold only almost surely, i.e. up to a set of probability 0.

A few rules on conditional expectations are used over and over again. E.g. we have

$E(X|\mathscr{F}_t) = E(X)$ (1.1)

if $\mathscr{F}_t = \{\emptyset, \Omega\}$ is the trivial $\sigma$-field representing no information on random events. More generally, (1.1) holds if $X$ and $\mathscr{F}_t$ are stochastically independent, i.e. if $P(\{X \in B\} \cap F) = P(X \in B)P(F)$ for any reasonable (i.e. Borel) set $B$ and any $F \in \mathscr{F}_t$. On the other hand we have $E(X|\mathscr{F}_t) = X$ and more generally

$E(XY|\mathscr{F}_t) = X E(Y|\mathscr{F}_t)$

if $X$ is $\mathscr{F}_t$-measurable, i.e. known at time $t$. The law of iterated expectations tells us that

$E\big(E(X|\mathscr{F}_t)\big|\mathscr{F}_s\big) = E(X|\mathscr{F}_s)$

for $s \leq t$. Almost as a corollary we have

$E\big(E(X|\mathscr{F}_t)\big) = E(X).$

Finally, the conditional expectation shares many properties of the expectation, e.g. it is linear and monotone in $X$ and it satisfies monotone and dominated convergence, Fatou's lemma, Jensen's inequality etc.
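The defining property and the best-prediction interpretation of conditional expectation can be checked by simulation in a tiny two-period coin-toss model; the concrete random variables below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# Two independent fair coin tosses; F_1 is generated by the first toss.
eps1 = rng.choice([-1.0, 1.0], size=n)
eps2 = rng.choice([-1.0, 1.0], size=n)
X = eps1 + eps2 + eps1 * eps2                # an F_2-measurable random variable

# E(X | F_1) is a function of eps1: average X over all paths with the same eps1.
Y = np.where(eps1 == 1.0, X[eps1 == 1.0].mean(), X[eps1 == -1.0].mean())

# Law of iterated expectations: E(E(X|F_1)) = E(X).
print(Y.mean(), X.mean())                    # approximately equal

# Best prediction: E(X|F_1) minimizes E((X - Z)^2) over F_1-measurable Z.
for Z in (Y, np.zeros(n), 2.0 * eps1):
    print(((X - Z)**2).mean())               # smallest for Z = Y
```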

Recall that the probability of a set can be expressed as the expectation of an indicator function via $P(F) = E(1_F)$. This suggests to use the relation

$P(F|\mathscr{F}_t) := E(1_F|\mathscr{F}_t)$ (1.2)

to define conditional probabilities in terms of conditional expectation. Of course, we would like $P(F|\mathscr{F}_t)$ to be a probability measure when it is considered as a function of $F$. This property, however, is not as evident as it seems because of the null sets involved in the definition of conditional expectation. We do not worry about technical details here and assume instead that we are given a regular conditional probability, i.e. a version of $P(F|\mathscr{F}_t)(\omega)$ which, for any fixed $\omega$, is a probability measure when viewed as a function of $F$. Such a regular version exists in all instances where it is used in these notes. In line with (1.2) we denote by

$P^{X|\mathscr{F}_t}(B) := P(X \in B|\mathscr{F}_t) := E(1_B(X)|\mathscr{F}_t)$

the conditional law of $X$ given $\mathscr{F}_t$. A useful rule states that

$E(f(X, Y)|\mathscr{F}_t) = \int f(x, Y)\, P^{X|\mathscr{F}_t}(dx)$ (1.3)

for real-valued functions $f$ and $\mathscr{F}_t$-measurable random variables $Y$. If $X$ is stochastically independent of $\mathscr{F}_t$, we have $P^{X|\mathscr{F}_t} = P^X$, i.e. the conditional law of $X$ coincides with the law of $X$. In this case, (1.3) turns into

$E(f(X, Y)|\mathscr{F}_t) = \int f(x, Y)\, P^X(dx)$ (1.4)

for $\mathscr{F}_t$-measurable random variables $Y$.

A stochastic process $X = (X(t))_{t \geq 0}$ is a collection of random variables $X(t)$, indexed by time $t$. In this section, the time set is assumed to be $\mathbb{N} = \{0, 1, 2, \dots\}$; afterwards we consider continuous time $\mathbb{R}_+ = [0, \infty)$. As noted earlier, a stochastic process $X = (X(t))_{t \geq 0}$ can be interpreted as a random function of time. Indeed, $X(\omega, t)$ is a function of $t$ (or a sequence in the current discrete case) for fixed $\omega$. Sometimes, it is also convenient to interpret a process $X$ as a real-valued function on the product space $\Omega \times \mathbb{N}$ or $\Omega \times \mathbb{R}_+$, respectively. In the discrete-time case we use the notation $\Delta X(t) := X(t) - X(t-1)$. Moreover we denote by $X_- = (X_-(t))_{t \geq 0}$ the process

$X_-(t) := X(t-1)$ for $t \geq 1$ and $X_-(0) := X(0)$. (1.5)

We will only consider processes which are consistent with the information structure, i.e. $X(t)$ is supposed to be observable at time $t$. Mathematically speaking, we assume $X(t)$ to be $\mathscr{F}_t$-measurable for any $t$. Such processes $X$ are called adapted to the filtration $\mathbb{F}$. There is in fact a minimal filtration $\mathbb{F}$ such that $X$ is $\mathbb{F}$-adapted. Formally, this filtration is given by

$\mathscr{F}_t = \sigma(X(s) : s \leq t),$ (1.6)

i.e. $\mathscr{F}_t$ is the smallest $\sigma$-field such that all $X(s)$, $s \leq t$, are $\mathscr{F}_t$-measurable. Intuitively, this means that the only information on random events is coming from observing the process $X$. One calls $\mathbb{F}$ the filtration generated by $X$.

For some processes one actually needs a stronger notion of measurability than adaptedness, namely predictability. A stochastic process $X$ is called predictable if $X(t)$ is known already one period in advance, i.e. $X(t)$ is $\mathscr{F}_{t-1}$-measurable. The use of this notion will become clearer in Section 1.2.

Example 1.1 (Random walk and geometric random walk) We call an adapted process $X$ a random walk relative to $\mathbb{F}$ if the increments $\Delta X(t)$, $t \geq 1$, are identically distributed and independent of $\mathscr{F}_{t-1}$. We obtain such a process if $\Delta X(t)$, $t \geq 1$, are independent and identically distributed (i.i.d.) random variables and if the filtration $\mathbb{F}$ is generated by $X$. Similarly, we call a positive adapted process $X$ a geometric random walk relative to $\mathbb{F}$ if the relative increments

$\frac{\Delta X(t)}{X(t-1)} = \frac{X(t)}{X(t-1)} - 1,$ (1.7)

$t \geq 1$, are identically distributed and independent of $\mathscr{F}_{t-1}$. A process $X$ is a geometric random walk if and only if $\log X$ is a random walk or, equivalently, $X(t) = \exp(Y(t))$ for some random walk $Y$. Indeed, the random variables in (1.7) are identically distributed and independent of $\mathscr{F}_{t-1}$ if and only if this holds for

$\Delta \log X(t) = \log X(t) - \log X(t-1) = \log\Big(\frac{\Delta X(t)}{X(t-1)} + 1\Big), \quad t \geq 1.$

Random walks and geometric random walks represent processes of constant growth in an additive or multiplicative sense, respectively. Simple asset price models are often of geometric random walk type.

A stopping time $\tau$ is a random variable whose values are times, i.e. are in $\mathbb{N} \cup \{\infty\}$ in the discrete case. Additionally one requires that $\tau$ is consistent with the information structure $\mathbb{F}$. More precisely, one assumes that $\{\tau = t\} \in \mathscr{F}_t$ or, equivalently, $\{\tau \leq t\} \in \mathscr{F}_t$ for any $t$. Intuitively, this means that the decision to say "stop!" right now can only be based on our current information. As an example consider the first time $\tau$ when the observed stock price hits the level 100. Even though this time is random and not known in advance, we obviously know $\tau$ in the instant it occurs. The situation is different if we define $\tau$ to be the instant one period before the stock hits 100. Since we cannot look into the future, we only know $\tau$ one period after it has happened. Consequently, this random variable is not a stopping time. Stopping times occur naturally in finance, e.g. in the context of American options, but they also play an important technical role in stochastic calculus. As indicated above, the time when some adapted process first hits a given set is a stopping time:

Lemma 1.2 Let $X$ be some adapted process and $B$ a Borel set. Then

$\tau := \inf\{t \geq 0 : X(t) \in B\}$

is a stopping time.

Proof. By adaptedness, we have $\{X(s) \in B\} \in \mathscr{F}_s \subset \mathscr{F}_t$, $s \leq t$, and hence

$\{\tau \leq t\} = \bigcup_{s=0}^t \{X(s) \in B\} \in \mathscr{F}_t.$

Occasionally, it turns out to be important to freeze a process at a stopping time. For any adapted process $X$ and any stopping time $\tau$, the process stopped at $\tau$ is defined as

$X^\tau(t) := X(\tau \wedge t),$

where we use the notation $a \wedge b := \min(a, b)$ as usual. The stopped process $X^\tau$ remains constant on the level $X(\tau)$ after time $\tau$. It is easy to see that it is adapted as well.

The concept of martingales is central to stochastic calculus and finance. A martingale (resp. submartingale, supermartingale) is an adapted process $X$ that is integrable in the sense that $E|X(t)| < \infty$ for any $t$ and satisfies

$E(X(t)|\mathscr{F}_s) = X(s)$ (resp. $\geq X(s)$, $\leq X(s)$) (1.8)

for $s \leq t$. If $X$ is a martingale, then the best prediction for future values is the present level. If e.g. the price process of an asset is a martingale, then it is neither going up nor down on average. In that sense, it corresponds to a fair game. By contrast, submartingales resp. supermartingales may increase resp. decrease on average. They correspond to favourable resp. unfavourable games.

The concept of a martingale is global in the sense that (1.8) must be satisfied for any $s \leq t$. If we restrict attention to the case $s = t - 1$, we obtain the slightly more general local counterpart. A local martingale (resp. local submartingale, local supermartingale) is an adapted process $X$ which satisfies $E|X(0)| < \infty$, $E(|X(t)|\,|\mathscr{F}_{t-1}) < \infty$ and

$E(X(t)|\mathscr{F}_{t-1}) = X(t-1)$ (resp. $\geq X(t-1)$, $\leq X(t-1)$) (1.9)

for any $t = 1, 2, \dots$ In discrete time the difference between martingales and local martingales is minor:

Lemma 1.3 Any integrable local martingale (in the sense that $E|X(t)| < \infty$ for any $t$) is a martingale. An analogous statement holds for sub- and supermartingales.

Proof. This follows by induction from the law of iterated expectations. Corresponding statements hold for sub-/supermartingales.

Integrability in Lemma 1.3 holds e.g. if $X$ is nonnegative. The above classes of processes are stable under stopping in the sense of the following lemma, which has a natural economic interpretation: you cannot turn a fair game into e.g. a strictly favourable one by stopping to play at some reasonable time.

Lemma 1.4 Let $\tau$ denote a stopping time. If $X$ is a martingale (resp. sub-/supermartingale), so is $X^\tau$. A corresponding statement holds for local martingales and local sub-/supermartingales.

Proof. We start by verifying the integrability conditions. For martingales resp. sub-/supermartingales, $E|X(t)| < \infty$ implies

$E|X^\tau(t)| \leq E\bigg(\sum_{s=0}^t |X(s)|\bigg) < \infty.$

For local martingales resp. local sub-/supermartingales, $E(|X(t)|\,|\mathscr{F}_{t-1}) < \infty$ yields

$E(|X^\tau(t)|\,|\mathscr{F}_{t-1}) \leq \sum_{s=0}^{t-1} |X(s)| + E(|X(t)|\,|\mathscr{F}_{t-1}) < \infty.$

In order to verify (1.9), observe that $\{\tau \geq t\} = \{\tau \leq t-1\}^c \in \mathscr{F}_{t-1}$ implies

$E(X^\tau(t) 1_{\{\tau \geq t\}}|\mathscr{F}_{t-1}) = E(X(t) 1_{\{\tau \geq t\}}|\mathscr{F}_{t-1}) = E(X(t)|\mathscr{F}_{t-1}) 1_{\{\tau \geq t\}} = X(t-1) 1_{\{\tau \geq t\}} = X^\tau(t-1) 1_{\{\tau \geq t\}}$

(resp. $\geq$, $\leq$ in the sub-/supermartingale case). For $s < t$ we have $\{\tau = s\} \in \mathscr{F}_s \subset \mathscr{F}_{t-1}$ and hence

$E(X^\tau(t) 1_{\{\tau = s\}}|\mathscr{F}_{t-1}) = E(X(s) 1_{\{\tau = s\}}|\mathscr{F}_{t-1}) = X(s) 1_{\{\tau = s\}} = X^\tau(t-1) 1_{\{\tau = s\}}.$

Together we obtain

$E(X^\tau(t)|\mathscr{F}_{t-1}) = \sum_{s=0}^{t-1} E(X^\tau(t) 1_{\{\tau = s\}}|\mathscr{F}_{t-1}) + E(X^\tau(t) 1_{\{\tau \geq t\}}|\mathscr{F}_{t-1}) = \sum_{s=0}^{t-1} X^\tau(t-1) 1_{\{\tau = s\}} + X^\tau(t-1) 1_{\{\tau \geq t\}} = X^\tau(t-1)$

(resp. $\geq$, $\leq$).

For later use, we note that a supermartingale with constant expectation is actually a martingale.

Lemma 1.5 If $X$ is a supermartingale and $T \geq 0$ with $E(X(T)) = E(X(0))$, then (1.8) holds with equality for any $s \leq t \leq T$.

Proof. The supermartingale property means that $E((X(t) - X(s)) 1_A) \leq 0$ for any $s \leq t$ and any $A \in \mathscr{F}_s$. Since

$0 = E(X(T)) - E(X(0)) = E(X(T) - X(t)) + E((X(t) - X(s)) 1_A) + E((X(t) - X(s)) 1_{A^c}) + E(X(s) - X(0)) \leq 0$

for any $s \leq t \leq T$ and any event $A \in \mathscr{F}_s$, the four nonpositive summands must actually be 0. This yields $E((X(t) - X(s)) 1_A) = 0$ and hence the assertion.

The following technical result is used in Chapter 3.

Lemma 1.6 Let $X$ be a supermartingale, $Y$ a martingale, $t \leq T$, and $A \in \mathscr{F}_t$ with $X(t) = Y(t)$ on $A$ and $X(T) \geq Y(T)$. Then $X(s) = Y(s)$ on $A$ for $t \leq s \leq T$. The statement remains to hold if we only require $X^T - X^t$, $Y^T - Y^t$ instead of $X$, $Y$ to be a supermartingale resp. martingale.

Proof. From $X(s) - Y(s) \geq E(X(T) - Y(T)|\mathscr{F}_s) \geq 0$ and $E((X(s) - Y(s)) 1_A) \leq E((X(t) - Y(t)) 1_A) = 0$ it follows that $(X(s) - Y(s)) 1_A = 0$.

One easily verifies that an integrable random walk $X$ is a martingale if and only if the increments $\Delta X(t)$ have expectation 0. An analogous result holds for integrable geometric random walks whose relative increments $\Delta X(t)/X(t-1)$ have vanishing mean. For the martingale property to hold, one actually does not need the increments resp. relative increments of $X$ to be identically distributed.

If $\xi$ denotes an integrable random variable, then it naturally induces a martingale $X$, namely

$X(t) = E(\xi|\mathscr{F}_t).$

$X$ is called the martingale generated by $\xi$. If the time horizon is finite, i.e. we consider the time set $\{0, 1, \dots, T-1, T\}$ rather than $\mathbb{N}$, then any martingale is generated by some random variable, namely by $X(T)$. This ceases to be true for infinite time horizon. E.g. random walks are not generated by a single random variable unless they are constant.

Example 1.7 (Density process) A probability measure $Q$ on $(\Omega, \mathscr{F})$ is called equivalent to $P$ (written $Q \sim P$) if the events of probability 0 are the same under $P$ and $Q$. By the Radon-Nikodym theorem, $Q$ has a $P$-density and vice versa, i.e. there are some unique random variables $\frac{dQ}{dP}$, $\frac{dP}{dQ}$ such that

$Q(F) = E_P\Big(1_F \frac{dQ}{dP}\Big), \qquad P(F) = E_Q\Big(1_F \frac{dP}{dQ}\Big)$

for any set $F \in \mathscr{F}$, where $E_P$, $E_Q$ denote expectation under $P$ and $Q$, respectively. $P, Q$ are in fact equivalent if and only if such mutual densities exist, in which case we have $\frac{dP}{dQ} = 1/\frac{dQ}{dP}$. The martingale $Z$ generated by $\frac{dQ}{dP}$ is called the density process of $Q$, i.e. we have

$Z(t) = E_P\Big(\frac{dQ}{dP}\Big|\mathscr{F}_t\Big).$

One easily verifies that $Z(t)$ coincides with the density of the restricted measures $Q|\mathscr{F}_t$ relative to $P|\mathscr{F}_t$, i.e. $Z(t)$ is $\mathscr{F}_t$-measurable and $Q(F) = E_P(1_F Z(t))$ holds for any event $F \in \mathscr{F}_t$. Note further that $Z$ and the density process $Y$ of $P$ relative to $Q$ are reciprocal to each other because

$Z(t) = \frac{d(Q|\mathscr{F}_t)}{d(P|\mathscr{F}_t)} = 1\Big/\frac{d(P|\mathscr{F}_t)}{d(Q|\mathscr{F}_t)} = 1/Y(t).$

The density process $Z$ can be used to compute conditional expectations relative to $Q$. Indeed, the generalized Bayes rule

$E_Q(\xi|\mathscr{F}_t) = \frac{E_P\big(\xi \frac{dQ}{dP}\big|\mathscr{F}_t\big)}{Z(t)}$ (1.10)

holds for sufficiently integrable random variables $\xi$ because

$E_Q(\xi \zeta) = E_P\Big(\xi \zeta \frac{dQ}{dP}\Big) = E_P\bigg(E_P\Big(\xi \frac{dQ}{dP}\Big|\mathscr{F}_t\Big) \zeta\bigg) = E_P\Bigg(\frac{E_P\big(\xi \frac{dQ}{dP}\big|\mathscr{F}_t\big)}{Z(t)}\, \zeta\, Z(t)\Bigg) = E_Q\Bigg(\frac{E_P\big(\xi \frac{dQ}{dP}\big|\mathscr{F}_t\big)}{Z(t)}\, \zeta\Bigg)$

for any bounded $\mathscr{F}_t$-measurable $\zeta$. Similarly, one shows

$E_Q(\xi|\mathscr{F}_s) = \frac{E_P(\xi Z(t)|\mathscr{F}_s)}{Z(s)}$ (1.11)

for $s \leq t$ and $\mathscr{F}_t$-measurable random variables $\xi$.

Martingales are expected to stay on the current level on average. More general processes may show an increasing, decreasing or possibly variable trend. This fact is expressed formally by a variant of Doob's decomposition. The idea is to decompose the increment $\Delta X(t)$ of an arbitrary process into a predictable trend component $\Delta A^X(t)$ and a random deviation $\Delta M^X(t)$ from this short-time prediction.

Lemma 1.8 (Canonical decomposition) Any integrable adapted process $X$ (i.e. with $E|X(t)| < \infty$ for any $t$) can be uniquely decomposed as

$X = X(0) + M^X + A^X$

with some martingale $M^X$ and some predictable process $A^X$ satisfying $M^X(0) = A^X(0) = 0$. We call $A^X$ the compensator of $X$.

Proof. Define $A^X(t) = \sum_{s=1}^t \Delta A^X(s)$ by $\Delta A^X(s) := E(\Delta X(s)|\mathscr{F}_{s-1})$ and $M^X := X - X(0) - A^X$. Predictability of $A^X$ is obvious. The integrability of $X$ implies that of $A^X$ and thus of $M^X$. The latter is a martingale because

$E(M^X(t)|\mathscr{F}_{t-1}) = M^X(t-1) + E(\Delta X(t) - \Delta A^X(t)|\mathscr{F}_{t-1}) = M^X(t-1) + E(\Delta X(t)|\mathscr{F}_{t-1}) - E\big(E(\Delta X(t)|\mathscr{F}_{t-1})\big|\mathscr{F}_{t-1}\big) = M^X(t-1).$

Conversely, for any decomposition as in Lemma 1.8 we have

$E(\Delta X(t)|\mathscr{F}_{t-1}) = E(\Delta M^X(t)|\mathscr{F}_{t-1}) + E(\Delta A^X(t)|\mathscr{F}_{t-1}) = \Delta A^X(t),$

which means that it coincides with the decomposition in the first part of the proof.

Note that uniqueness of the decomposition still holds if we only require $M^X$ to be a local martingale. In this relaxed sense, it suffices to assume $E(|\Delta X(t)|\,|\mathscr{F}_{t-1}) < \infty$ for any $t$ in order to define the compensator $A^X$. If $X$ is a submartingale resp. supermartingale, then $A^X$ is increasing resp. decreasing. This is the case commonly referred to as Doob's decomposition. E.g. the compensator of an integrable random walk $X$ equals

$A^X(t) = \sum_{s=1}^t E(\Delta X(s)|\mathscr{F}_{s-1}) = t\, E(\Delta X(1)).$

1.2 Stochastic integration

Gains from trade in dynamic portfolios can be expressed in terms of stochastic integrals, which are nothing else than sums in discrete time.

Definition 1.9 Let $X$ be an adapted and $\varphi$ a predictable (or at least also an adapted) process. The stochastic integral of $\varphi$ relative to $X$ is the adapted process $\varphi \bullet X$ defined as

$\varphi \bullet X(t) := \sum_{s=1}^t \varphi(s) \Delta X(s).$

If both $\varphi = (\varphi_1, \dots, \varphi_d)$ and $X = (X_1, \dots, X_d)$ are vector-valued processes, we define $\varphi \bullet X$ to be the real-valued process given by

$\varphi \bullet X(t) := \sum_{s=1}^t \sum_{i=1}^d \varphi_i(s) \Delta X_i(s).$ (1.12)

In order to motivate this definition, let us interpret $X(t)$ as the price of a stock at time $t$. We invest in this stock using the trading strategy $\varphi$, i.e. $\varphi(t)$ denotes the number of shares we own at time $t$. Due to the price move from $X(t-1)$ to $X(t)$ our wealth changes by $\varphi(t)(X(t) - X(t-1)) = \varphi(t) \Delta X(t)$ in the period between $t-1$ and $t$.

Consequently, the integral $\varphi \bullet X(t)$ stands for the cumulative gains from trade up to time $t$. If we invest in a portfolio of several stocks, both the trading strategy $\varphi$ and the price process $X$ are vector-valued. $\varphi_i(t)$ now stands for the number of shares of stock $i$ and $X_i(t)$ for its price. In order to compute the total gains of the portfolio, we must sum up the gains $\varphi_i(t) \Delta X_i(t)$ in each single stock, which leads to (1.12).

For the above reasoning to make sense, one must be careful about the order in which things happen at time $t$. If $\varphi(t)(X(t) - X(t-1))$ is meant to stand for the gains at time $t$, we obviously have to buy the portfolio $\varphi(t)$ before prices change from $X(t-1)$ to $X(t)$. Put differently, we must choose $\varphi(t)$ already at the end of period $t-1$, right after the stock price has attained the value $X(t-1)$. This choice can only be based on information up to time $t-1$ and in particular not on $X(t)$, which is as yet unknown. This motivates why one typically requires trading strategies to be predictable rather than adapted. The purely mathematical definition of $\varphi \bullet X$, however, makes sense regardless of any measurability assumption.

The covariation process $[X, Y]$ of adapted processes $X, Y$ is defined as

$[X, Y](t) := \sum_{s=1}^t \Delta X(s) \Delta Y(s).$

Its compensator

$\langle X, Y \rangle(t) := \sum_{s=1}^t E(\Delta X(s) \Delta Y(s)|\mathscr{F}_{s-1})$

is called the predictable covariation process if it exists. In the special case $X = Y$ one refers to the quadratic variation resp. predictable quadratic variation of $X$. If $X, Y$ are martingales, their predictable covariation can be viewed as a dynamic analogue of the covariance of two random variables.

We are now ready to state a few properties of stochastic integration:

Lemma 1.10 For adapted processes $X, Y$ and predictable processes $\varphi, \psi$ we have:

1. $\varphi \bullet X$ is linear in $\varphi$ and $X$.

2. $[X, Y]$ and $\langle X, Y \rangle$ are symmetric and linear in $X$ and $Y$.

3. $\psi \bullet (\varphi \bullet X) = (\psi \varphi) \bullet X$.

4. $[\varphi \bullet X, Y] = \varphi \bullet [X, Y]$.

5. $\langle \varphi \bullet X, Y \rangle = \varphi \bullet \langle X, Y \rangle$ whenever the predictable covariations are defined.

6. (Integration by parts)

$XY = X(0)Y(0) + X_- \bullet Y + Y \bullet X$ (1.13)
$\phantom{XY:} = X(0)Y(0) + X_- \bullet Y + Y_- \bullet X + [X, Y].$

7. If $X$ is a local martingale, so is $\varphi \bullet X$.

8. If $\varphi \geq 0$ and $X$ is a local sub-/supermartingale, $\varphi \bullet X$ is a local sub-/supermartingale as well.

9. $A^{\varphi \bullet X} = \varphi \bullet A^X$ if the compensator $A^X$ exists in the relaxed sense following Lemma 1.8.

10. If $X, Y$ are martingales with $E(X(t)^2) < \infty$ and $E(Y(t)^2) < \infty$ for any $t$, the process $XY - \langle X, Y \rangle$ is a martingale, i.e. $\langle X, Y \rangle$ is the compensator of $XY$.

Proof.

1. This is obvious from the definition.

2. This is obvious from the definition as well.

3. This follows from

$\Delta\big(\psi \bullet (\varphi \bullet X)\big)(t) = \psi(t) \Delta(\varphi \bullet X)(t) = \psi(t) \varphi(t) \Delta X(t) = \Delta\big((\psi \varphi) \bullet X\big)(t).$

4. This follows from

$\Delta[\varphi \bullet X, Y](t) = \Delta(\varphi \bullet X)(t) \Delta Y(t) = \varphi(t) \Delta[X, Y](t) = \Delta\big(\varphi \bullet [X, Y]\big)(t).$

5. Predictability of $\varphi$ yields

$\Delta\langle \varphi \bullet X, Y \rangle(t) = E\big(\Delta(\varphi \bullet X)(t) \Delta Y(t)\big|\mathscr{F}_{t-1}\big) = E\big(\varphi(t) \Delta X(t) \Delta Y(t)\big|\mathscr{F}_{t-1}\big) = \varphi(t) E(\Delta X(t) \Delta Y(t)|\mathscr{F}_{t-1}) = \Delta\big(\varphi \bullet \langle X, Y \rangle\big)(t).$

6. The first equation is

$X(t)Y(t) = X(0)Y(0) + \sum_{s=1}^t \big( X(s)Y(s) - X(s-1)Y(s-1) \big)$
$= X(0)Y(0) + \sum_{s=1}^t X(s-1)\big(Y(s) - Y(s-1)\big) + \sum_{s=1}^t \big(X(s) - X(s-1)\big)Y(s)$
$= X(0)Y(0) + X_- \bullet Y(t) + Y \bullet X(t).$

The second follows from

$Y \bullet X(t) = Y_- \bullet X(t) + \Delta Y \bullet X(t) = Y_- \bullet X(t) + [X, Y](t).$

7. Predictability of $\varphi$ and (1.9) yield

$E(\varphi \bullet X(t)|\mathscr{F}_{t-1}) = E\big(\varphi \bullet X(t-1) + \varphi(t)(X(t) - X(t-1))\big|\mathscr{F}_{t-1}\big) = \varphi \bullet X(t-1) + \varphi(t)\big(E(X(t)|\mathscr{F}_{t-1}) - X(t-1)\big) = \varphi \bullet X(t-1).$

8. This follows along the same lines as 7.

9. This follows from 7 because $\varphi \bullet X = \varphi \bullet M^X + \varphi \bullet A^X$ is the canonical decomposition of $\varphi \bullet X$.

10. This follows from statements 6 and 7.

If they make sense, the above rules also hold for vector-valued processes, e.g. $\psi \bullet (\varphi \bullet X) = (\psi \varphi) \bullet X$ if both $\varphi, X$ are $\mathbb{R}^d$-valued.

Itō's formula is probably the most important rule in continuous-time stochastic calculus. This motivates why we state its obvious discrete-time counterpart here.

Lemma 1.11 (Itō's formula) If $X$ is an $\mathbb{R}^d$-valued adapted process and $f : \mathbb{R}^d \to \mathbb{R}$ a differentiable function, then

$f(X(t)) = f(X(0)) + \sum_{s=1}^t \big( f(X(s)) - f(X(s-1)) \big)$
$= f(X(0)) + Df(X_-) \bullet X(t) + \sum_{s=1}^t \Big( f(X(s)) - f(X(s-1)) - Df(X(s-1))^\top \Delta X(s) \Big),$ (1.14)

where $Df(x)$ denotes the derivative or gradient of $f$ in $x$.

Proof. The first statement is obvious. The second follows from the definition of the stochastic integral.

If the increments $\Delta X(s)$ are small and $f$ is sufficiently smooth, we may use the second-order Taylor expansion

$f(X(s)) \approx f(X(s-1)) + f'(X(s-1)) \Delta X(s) + \tfrac{1}{2} f''(X(s-1)) \Delta X(s)^2$

in the univariate case, which leads to

$f(X(t)) \approx f(X(0)) + f'(X_-) \bullet X(t) + \sum_{s=1}^t \tfrac{1}{2} f''(X(s-1)) \Delta X(s)^2 = f(X(0)) + f'(X_-) \bullet X(t) + \tfrac{1}{2} f''(X_-) \bullet [X, X](t).$

If $X$ is vector-valued, we obtain accordingly

$f(X(t)) \approx f(X(0)) + Df(X_-) \bullet X(t) + \tfrac{1}{2} \sum_{i,j=1}^d D_{ij} f(X_-) \bullet [X_i, X_j](t).$ (1.15)

Processes of multiplicative structure are called stochastic exponentials.

Definition 1.12 Let $X$ be an adapted process. The unique adapted process $Z$ satisfying

$Z = 1 + Z_- \bullet X$

is called the stochastic exponential of $X$ and is written $\mathscr{E}(X)$.

The stochastic exponential can easily be motivated from a financial point of view. Suppose that 1€ earns the possibly random interest $\Delta X(t)$ in period $t$, i.e. 1€ at time $t-1$ turns into $(1 + \Delta X(t))$€ at time $t$. Then 1€ at time 0 runs up to $\mathscr{E}(X)(t)$€ at time $t$. It is easy to compute $\mathscr{E}(X)$ explicitly:

Lemma 1.13 We have

$\mathscr{E}(X)(t) = \prod_{s=1}^t \big(1 + \Delta X(s)\big),$

where the product is set to 1 for $t = 0$.

Proof. For $Z(t) = \prod_{s=1}^t (1 + \Delta X(s))$ we have

$\Delta Z(t) = Z(t) - Z(t-1) = Z(t-1) \Delta X(t)$

and hence

$Z(t) = Z(0) + \sum_{s=1}^t \Delta Z(s) = 1 + \sum_{s=1}^t Z(s-1) \Delta X(s) = 1 + Z_- \bullet X(t).$

The previous lemma implies that the stochastic exponential of a random walk $X$ with increments $\Delta X(t) > -1$ is a geometric random walk. More specifically, one can write any geometric random walk $Z$ alternatively in exponential or stochastic exponential form, namely

$Z = e^X = Z(0)\, \mathscr{E}(\tilde{X})$

with random walks $X, \tilde{X}$, respectively. $X$ and $\tilde{X}$ are related to each other via

$\Delta \tilde{X}(t) = e^{\Delta X(t)} - 1 \quad \text{resp.} \quad \Delta X(t) = \log\big(1 + \Delta \tilde{X}(t)\big).$

If the increments $\Delta X(s)$ are small enough, we can use the approximation

$\log\big(1 + \Delta X(s)\big) \approx \Delta X(s) - \tfrac{1}{2} \Delta X(s)^2$

and obtain

$\mathscr{E}(X)(t) = \prod_{s=1}^t \big(1 + \Delta X(s)\big) = \exp\bigg( \sum_{s=1}^t \log\big(1 + \Delta X(s)\big) \bigg) \approx \exp\bigg( \sum_{s=1}^t \Big( \Delta X(s) - \tfrac{1}{2} \Delta X(s)^2 \Big) \bigg) = \exp\Big( X(t) - X(0) - \tfrac{1}{2} [X, X](t) \Big).$

The product of stochastic exponentials is again a stochastic exponential. Observe the similarity of the following result to the rule $e^x e^y = e^{x+y}$ for exponential functions.

Lemma 1.14 (Yor's formula)

$\mathscr{E}(X)\, \mathscr{E}(Y) = \mathscr{E}\big(X + Y + [X, Y]\big)$

holds for any two adapted processes $X, Y$.

Proof. Let $Z := \mathscr{E}(X)\mathscr{E}(Y)$. Integration by parts and the other statements of Lemma 1.10 yield

$Z = Z(0) + \mathscr{E}(X)_- \bullet \mathscr{E}(Y) + \mathscr{E}(Y)_- \bullet \mathscr{E}(X) + [\mathscr{E}(X), \mathscr{E}(Y)]$
$= 1 + \big(\mathscr{E}(X)_- \mathscr{E}(Y)_-\big) \bullet Y + \big(\mathscr{E}(Y)_- \mathscr{E}(X)_-\big) \bullet X + \big(\mathscr{E}(X)_- \mathscr{E}(Y)_-\big) \bullet [X, Y]$
$= 1 + Z_- \bullet \big(X + Y + [X, Y]\big),$

which implies that $Z = \mathscr{E}(X + Y + [X, Y])$.

If an adapted process $Z$ does not attain the value 0, it can be written as $Z = Z(0)\mathscr{E}(X)$ with some unique process $X$ satisfying $X(0) = 0$. This process $X$ is naturally called the stochastic logarithm $\mathscr{L}(Z)$ of $Z$. We have

$\mathscr{L}(Z) = \frac{1}{Z_-} \bullet Z.$

Indeed, $X = \frac{1}{Z_-} \bullet Z$ satisfies

$\frac{Z_-}{Z(0)} \bullet X = \frac{Z_-}{Z(0)} \bullet \Big( \frac{1}{Z_-} \bullet Z \Big) = \frac{1}{Z(0)} \bullet Z = \frac{Z - Z(0)}{Z(0)} = \frac{Z}{Z(0)} - 1$

and hence $Z/Z(0) = \mathscr{E}(X)$, as claimed.

Changes of the underlying probability measure play an important role in Mathematical Finance. Since the notion of a martingale involves expectation, it is not invariant under such measure changes.

Lemma 1.15 Let $Q \sim P$ be a probability measure with density process $Z$. An adapted process $X$ is a $Q$-martingale (resp. $Q$-local martingale) if and only if $XZ$ is a $P$-martingale (resp. $P$-local martingale).

Proof. $X$ is a $Q$-local martingale if and only if

$E_Q(X(t)|\mathscr{F}_{t-1}) = X(t-1)$ (1.16)

for any $t$. By Bayes' rule (1.10) the left-hand side equals $E(X(t)Z(t)|\mathscr{F}_{t-1})/Z(t-1)$. Hence (1.16) is equivalent to

$E(X(t)Z(t)|\mathscr{F}_{t-1}) = X(t-1)Z(t-1),$

which is the local martingale property of $XZ$ relative to $P$. The integrability property for martingales (cf. Lemma 1.3) is shown similarly.

A martingale $X$ may possibly show a trend under the new probability measure $Q$. This trend can be expressed in terms of predictable covariation.

Lemma 1.16 Let $Q \sim P$ be a probability measure with density process $Z$. Moreover, suppose that $X$ is a $P$-martingale. If $X$ is $Q$-integrable, its $Q$-compensator equals $\langle \mathscr{L}(Z), X \rangle$.

Proof. Since $A := \langle \mathscr{L}(Z), X \rangle = \frac{1}{Z_-} \bullet \langle Z, X \rangle$ is a predictable process and by the proof of Lemma 1.8, it suffices to show that $X - X(0) - A$ is a $Q$-local martingale. By Lemma 1.15 this amounts to proving that $Z(X - X(0) - A)$ is a $P$-local martingale. Integration by parts yields

$Z(X - X(0) - A) = Z_- \bullet X + (X - X(0))_- \bullet Z + [Z, X - X(0)] - Z_- \bullet A - A \bullet Z.$ (1.17)

The integrals relative to $X$ and $Z$ are local martingales. Moreover,

$Z_- \bullet A = Z_- \bullet \Big( \frac{1}{Z_-} \bullet \langle Z, X \rangle \Big) = \langle Z, X \rangle$

is the compensator of $[Z, X] = [Z, X - X(0)]$, which implies that the difference $[Z, X - X(0)] - Z_- \bullet A$ is a local martingale. Altogether, the right-hand side of (1.17) is indeed a local martingale.

1.3 Conditional jump distribution

For later use we study the conditional law of the jumps of a stochastic process. We are particularly interested in the Markovian case where this law depends only on the current state of the process.

Definition 1.17 Let $X$ be an adapted process with values in $E \subset \mathbb{R}^d$. We call the mapping

$K^X(t, B) := P(\Delta X(t) \in B|\mathscr{F}_{t-1}) := E\big(1_B(\Delta X(t))\big|\mathscr{F}_{t-1}\big), \quad t = 1, \dots, T, \text{ Borel sets } B \subset \mathbb{R}^d,$

the conditional jump distribution of $X$. As usual, we skip the argument $\omega$ in the notation.

If the conditional jump distribution depends on $(\omega, t)$ only through $X(t-1)(\omega)$, i.e. if it is of the form

$K^X(t, B) = \kappa(X(t-1), B)$

with some deterministic function $\kappa$, we call the process Markov. In this case we define the generator $G$ of the Markov process, which is an operator mapping functions $f : E \to \mathbb{R}$ on the like. It is defined by the equation

$Gf(x) = \int \big( f(x + y) - f(x) \big) \kappa(x, dy).$

Random walks have particularly simple conditional jump distributions.

Lemma 1.18 (Random walk) An adapted process $X$ is a random walk if and only if its conditional jump distribution is of the form $K^X(t, B) = Q(B)$ for some probability measure $Q$ which does not depend on $(\omega, t)$. In this case $Q$ is the law of $\Delta X(1)$. In particular, random walks are Markov processes.

Proof. If $X$ is a random walk, we have $P^{\Delta X(t)|\mathscr{F}_{t-1}} = P^{\Delta X(t)} = P^{\Delta X(1)}$ since $\Delta X(t)$ is independent of $\mathscr{F}_{t-1}$ and has the same law for all $t$. Conversely,

$P(\{\Delta X(t) \in B\} \cap A) = E\big(1_B(\Delta X(t)) 1_A\big) = \int E\big(1_B(\Delta X(t))\big|\mathscr{F}_{t-1}\big) 1_A \, dP = \int Q(B) 1_A \, dP = Q(B) P(A)$

for $A \in \mathscr{F}_{t-1}$. For $A = \Omega$ we obtain $P(\Delta X(t) \in B) = Q(B) = P(\Delta X(1) \in B)$. Hence $\Delta X(t)$ is independent of $\mathscr{F}_{t-1}$ and has the same law for all $t$.

The conditional jump distribution of a geometric random walk is also easily obtained from its definition.

Example 1.19 (Geometric random walk) The jump characteristic of a geometric random walk $X$ is given by

$K^X(t, B) = \varrho\big(\{x \in \mathbb{R} : X(t-1)(x - 1) \in B\}\big),$

where $\varrho$ denotes the distribution of $X(1)/X(0)$. In particular, it is a Markov process as well. Indeed, we have

$E\big(1_B(\Delta X(t))\big|\mathscr{F}_{t-1}\big) = E\bigg( 1_B\Big( X(t-1) \Big( \frac{X(t)}{X(t-1)} - 1 \Big) \Big) \bigg| \mathscr{F}_{t-1} \bigg) = \int 1_B\big(X(t-1)(x - 1)\big) \, \varrho(dx)$

by (1.4) and the fact that $X(t)/X(t-1)$ has law $\varrho$ and is independent of $\mathscr{F}_{t-1}$.
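For a concrete Markov process the generator is a simple finite sum. The following sketch evaluates $Gf(x)$ for an illustrative jump law $\kappa$ (an assumption chosen for the example) and confirms by Monte Carlo that it equals the conditional expected increment of $f(X)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Markov process on R: given X(t-1) = x, the jump Delta X(t) equals +1 with
# probability p(x) and -1 otherwise, i.e. kappa(x, .) puts mass p(x) on +1.
# (Illustrative choice of kappa.)
def p(x):
    return 1.0 / (1.0 + x**2)

def G(f, x):
    # G f(x) = int (f(x + y) - f(x)) kappa(x, dy)
    return p(x) * (f(x + 1.0) - f(x)) + (1.0 - p(x)) * (f(x - 1.0) - f(x))

f = lambda x: x**2
x = 2.0

# Monte Carlo check of G f(x) = E(f(X(t)) - f(X(t-1)) | X(t-1) = x):
jumps = np.where(rng.random(10**6) < p(x), 1.0, -1.0)
print(G(f, x), (f(x + jumps) - f(x)).mean())   # both approximately -1.4
```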

For the following we define the identity process $I$ via $I(t) := t$. The characteristics can be used to compute the compensator of an adapted process.

Lemma 1.20 (Compensator) If $X$ is an integrable adapted process, then its compensator $A^X$ and its conditional jump distribution $K^X$ are related to each other via

$A^X = a^X \bullet I$

with

$a^X(t) := \int x \, K^X(t, dx).$

Proof. By definition of the compensator we have

$\Delta A^X(t) = E(\Delta X(t)|\mathscr{F}_{t-1}) = \int x \, K^X(t, dx) = a^X(t) = \Delta(a^X \bullet I)(t).$

Since the predictable covariation is a compensator, it can also be expressed in terms of compensators and conditional jump distributions.

Lemma 1.21 (Predictable covariation) The predictable covariation of adapted processes $X, Y$ and of their martingale parts $M^X, M^Y$ is given by

$\langle X, Y \rangle = c^{XY} \bullet I, \qquad \langle M^X, M^Y \rangle = \hat{c}^{XY} \bullet I$

with

$c^{XY}(t) := \int xy \, K^{(X,Y)}(t, d(x, y)),$
$\hat{c}^{XY}(t) := \int xy \, K^{(X,Y)}(t, d(x, y)) - a^X(t) a^Y(t),$

provided that the integrals exist.

Proof. The first statement follows similarly as Lemma 1.20 by observing that $\Delta\langle X, Y \rangle(t) = E(\Delta X(t) \Delta Y(t)|\mathscr{F}_{t-1})$. The second in turn follows from the first and from Lemma 1.20 because

$\Delta\langle M^X, M^Y \rangle(t) = E\big(\Delta M^X(t) \Delta M^Y(t)\big|\mathscr{F}_{t-1}\big)$
$= E(\Delta X(t) \Delta Y(t)|\mathscr{F}_{t-1}) - E(\Delta X(t)|\mathscr{F}_{t-1}) \Delta A^Y(t) - \Delta A^X(t) E(\Delta Y(t)|\mathscr{F}_{t-1}) + \Delta A^X(t) \Delta A^Y(t)$
$= \Delta\langle X, Y \rangle(t) - \Delta A^X(t) \Delta A^Y(t)$
$= \int xy \, K^{(X,Y)}(t, d(x, y)) - a^X(t) a^Y(t).$
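For a random walk, Lemmas 1.20 and 1.21 reduce to constants: $a^X(t) = E(\Delta X(1))$ and $\hat{c}^{XX}(t) = \operatorname{Var}(\Delta X(1))$, so $A^X(t) = t E(\Delta X(1))$ and $\langle M^X, M^X \rangle(t) = t \operatorname{Var}(\Delta X(1))$. The following Monte Carlo sketch (the jump distribution is an illustrative assumption) checks that $M^X$ and $(M^X)^2 - \langle M^X, M^X \rangle$ are indeed centred:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, T = 10**5, 20

# Random walk with Delta X(t) uniform on {0, 1, 2, 3} (illustrative choice).
dX = rng.integers(0, 4, size=(n_paths, T)).astype(float)
X = dX.cumsum(axis=1)

a = 1.5        # a^X(t) = E(Delta X(1))
c_hat = 1.25   # hat c^XX(t) = Var(Delta X(1))

t = np.arange(1, T + 1)
M = X - a * t  # martingale part M^X = X - A^X with A^X(t) = a t

# E(M(t)) = 0 and E(M(t)^2 - <M^X, M^X>(t)) = 0 for all t (Lemma 1.10, item 10):
print(np.abs(M.mean(axis=0)).max())                    # close to 0
print(np.abs((M**2).mean(axis=0) - c_hat * t).max())   # close to 0
```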

Let us rephrase the integration by parts rule in terms of characteristics.

Lemma 1.22 We have

$a^{XY}(t) = X(t-1)\, a^Y(t) + Y(t-1)\, a^X(t) + c^{XY}(t)$

provided that $X$, $Y$ and $XY$ are integrable adapted processes.

Proof. Computing the compensators of

$XY = X(0)Y(0) + X_- \bullet Y + Y_- \bullet X + [X, Y]$

yields

$a^{XY} \bullet I = (X_- a^Y) \bullet I + (Y_- a^X) \bullet I + c^{XY} \bullet I$

by Lemmas 1.10, 1.20 and 1.21. Considering increments yields the claim.

1.4 Essential supremum

In the context of optimal control we need to consider suprema of possibly uncountably many random variables. To this end, let $\mathscr{G}$ denote a sub-$\sigma$-field of $\mathscr{F}$ and $(X_i)_{i \in I}$ a family of $\mathscr{G}$-measurable random variables with values in $[-\infty, \infty]$.

Definition 1.23 A $\mathscr{G}$-measurable random variable $Y$ is called the essential supremum of $(X_i)_{i \in I}$ if $Y \geq X_i$ almost surely for any $i \in I$ and $Y \leq Z$ almost surely for any $\mathscr{G}$-measurable random variable $Z$ such that $Z \geq X_i$ almost surely for any $i \in I$. We write $Y =: \operatorname{ess\,sup}_{i \in I} X_i$.

Lemma 1.24

1. The essential supremum exists. It is almost surely unique in the sense that any two random variables $Y, \tilde{Y}$ as in Definition 1.23 coincide almost surely.

2. There is a countable subset $J \subset I$ such that $\operatorname{ess\,sup}_{i \in J} X_i = \operatorname{ess\,sup}_{i \in I} X_i$.

Proof. By considering $\tilde{X}_i := \arctan X_i$ instead of $X_i$, we may assume w.l.o.g. that the $X_i$ all have values in the same bounded interval. Observe that $\sup_{i \in C} X_i$ is a $\mathscr{G}$-measurable random variable for any countable subset $C$ of $I$. We denote the set of all such countable subsets of $I$ by $\mathscr{C}$. Consider a sequence $(C_n)_{n \in \mathbb{N}}$ in $\mathscr{C}$ such that

$E\Big(\sup_{i \in C_n} X_i\Big) \to \sup_{C \in \mathscr{C}} E\Big(\sup_{i \in C} X_i\Big).$

Then $C_\infty := \bigcup_{n \in \mathbb{N}} C_n$ is a countable subset of $I$ satisfying $E(\sup_{i \in C_\infty} X_i) = \sup_{C \in \mathscr{C}} E(\sup_{i \in C} X_i)$. We show that $Y := \sup_{i \in C_\infty} X_i$ meets the requirements of an essential supremum. Indeed, for fixed $i \in I$ we have $Y \leq Y \vee X_i$ and $E(Y) = E(Y \vee X_i) < \infty$, which implies that $Y = Y \vee X_i$ and hence $X_i \leq Y$ almost surely. On the other hand, we have $Z \geq X_i$, $i \in I$, and hence $Z \geq Y$ almost surely for any $Z$ as in the definition.

The following result helps to approximate the essential supremum by a sequence of random variables.

Lemma 1.25 Suppose that $(X_i)_{i \in I}$ has the lattice property, i.e. for any $i, j \in I$ there exists $k \in I$ such that $X_i \vee X_j \leq X_k$ almost surely. Then there is a sequence $(i_n)_{n \in \mathbb{N}}$ in $I$ such that $X_{i_n} \uparrow \operatorname{ess\,sup}_{i \in I} X_i$ almost surely.

Proof. Choose a sequence $(j_n)_{n \in \mathbb{N}}$ such that $J = \{j_n : n \in \mathbb{N}\}$ holds for the countable set $J \subset I$ in statement 2 of Lemma 1.24. By the lattice property we can choose $(i_n)_{n \in \mathbb{N}}$ recursively such that $X_{i_n} \geq X_{i_{n-1}} \vee X_{j_n}$ almost surely. Then $(X_{i_n})_{n \in \mathbb{N}}$ is nondecreasing with $\lim_n X_{i_n} \geq \sup_n X_{j_n} = \operatorname{ess\,sup}_{i \in I} X_i$ almost surely, and the converse inequality holds because $X_{i_n} \leq \operatorname{ess\,sup}_{i \in I} X_i$ for all $n$.

If the lattice property holds, essential supremum and expectation can be interchanged.

Lemma 1.26 Let $X_i \geq 0$, $i \in I$, or $E(\operatorname{ess\,sup}_{i \in I} X_i) < \infty$. If $(X_i)_{i \in I}$ has the lattice property, then

$E\Big(\operatorname{ess\,sup}_{i \in I} X_i \Big| \mathscr{H}\Big) = \operatorname{ess\,sup}_{i \in I} E(X_i|\mathscr{H})$

for any sub-$\sigma$-field $\mathscr{H}$ of $\mathscr{F}$.

Proof. Since $E(\operatorname{ess\,sup}_{i \in I} X_i|\mathscr{H}) \geq E(X_j|\mathscr{H})$ a.s. for any $j \in I$, we obviously have

$E\Big(\operatorname{ess\,sup}_{i \in I} X_i \Big| \mathscr{H}\Big) \geq \operatorname{ess\,sup}_{i \in I} E(X_i|\mathscr{H}).$

Let $(i_n)_{n \in \mathbb{N}}$ be a sequence as in the previous lemma. Then $X_{i_n} \uparrow \operatorname{ess\,sup}_{i \in I} X_i$ a.s., and monotone resp. dominated convergence imply

$E(X_{i_n}|\mathscr{H}) \to E\Big(\operatorname{ess\,sup}_{i \in I} X_i \Big| \mathscr{H}\Big)$

and hence the claim.
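Before moving on, it is worth noting that the pathwise identities of Section 1.2 (Lemma 1.13, Yor's formula, the stochastic logarithm) can be verified mechanically on a simulated path; the increment distribution below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 10

def stoch_exp(dX):
    # E(X)(t) = prod_{s <= t} (1 + Delta X(s)), with E(X)(0) = 1 (Lemma 1.13)
    return np.concatenate([[1.0], np.cumprod(1.0 + dX)])

dX = rng.normal(0.0, 0.1, T)
dY = rng.normal(0.0, 0.1, T)

# Yor's formula: E(X) E(Y) = E(X + Y + [X, Y]) with Delta[X, Y](t) = dX(t) dY(t).
lhs = stoch_exp(dX) * stoch_exp(dY)
rhs = stoch_exp(dX + dY + dX * dY)
print(np.abs(lhs - rhs).max())          # 0 up to rounding error

# Stochastic logarithm: L(Z) = (1/Z_) . Z recovers X from Z = Z(0) E(X).
Z = 5.0 * stoch_exp(dX)
dL = np.diff(Z) / Z[:-1]                # Delta L(Z)(t) = Delta Z(t) / Z(t-1)
print(np.abs(dL - dX).max())            # 0 up to rounding error
```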

Chapter 2

Dynamic Programming

Since we consider discrete-time stochastic control in this part, we work on a filtered probability space $(\Omega, \mathscr{F}, \mathbb{F}, P)$ with filtration $\mathbb{F} = (\mathscr{F}_t)_{t=0,1,\dots,T}$. For simplicity, we assume $\mathscr{F}_0$ to be trivial, i.e. all $\mathscr{F}_0$-measurable random variables are deterministic. By (1.1) this implies $E(X|\mathscr{F}_0) = E(X)$ for any random variable $X$.

Our goal is to maximize some expected reward $E(u(\alpha))$ over controls $\alpha \in A$. The set $A$ of admissible controls is a subset of all $\mathbb{R}^m$-valued adapted processes and it is assumed to be stable under bifurcation, i.e. for any stopping time $\tau$, any event $B \in \mathscr{F}_\tau$, and any $\alpha, \tilde{\alpha} \in A$ with $\alpha^\tau = \tilde{\alpha}^\tau$, the process $\alpha\,\tau_B\,\tilde{\alpha}$ defined by

$(\alpha\,\tau_B\,\tilde{\alpha})(t) := 1_{B^c} \alpha(t) + 1_B \tilde{\alpha}(t)$

is again an admissible control. Intuitively, this means that the decision how to continue may depend on the observations so far. Moreover, we suppose that $\alpha(0)$ coincides for all controls $\alpha \in A$.

The reward is expressed by some reward function $u : \Omega \times (\mathbb{R}^m)^{\{0,1,\dots,T\}} \to \mathbb{R} \cup \{-\infty\}$. For fixed $\alpha \in A$, we use the shorthand $u(\alpha)$ for the random variable $\omega \mapsto u(\omega, \alpha(\omega))$. The reward is meant to refer to some fixed time $T \in \mathbb{N}$, which is expressed mathematically by the assumption that $u(\alpha)$ is $\mathscr{F}_T$-measurable for any $\alpha \in A$.

Example 2.1 Typically, the reward function is of the form

$u(\alpha) = \sum_{t=1}^T f(t, X^\alpha(t-1), \alpha(t)) + g(X^\alpha(T))$

for some functions $f : \{1, \dots, T\} \times \mathbb{R}^d \times \mathbb{R}^m \to \mathbb{R} \cup \{-\infty\}$, $g : \mathbb{R}^d \to \mathbb{R} \cup \{-\infty\}$, and $\mathbb{R}^d$-valued adapted controlled processes $X^\alpha$.

Definition 2.2 We call $\alpha^\star \in A$ an optimal control if it maximizes $E(u(\alpha))$ over all $\alpha \in A$, where we set $E(u(\alpha)) := -\infty$ if $E(u(\alpha)^-) = \infty$. Moreover, the value process of the optimization problem is the family $(J(\cdot, \alpha))_{\alpha \in A}$ of adapted processes defined via

$J(t, \alpha) := \operatorname{ess\,sup}\big\{ E(u(\tilde{\alpha})|\mathscr{F}_t) : \tilde{\alpha} \in A \text{ with } \tilde{\alpha}^t = \alpha^t \big\}$

for $t \in \{0, \dots, T\}$, $\alpha \in A$.

The value process is characterized by some martingale/supermartingale property:

Theorem 2.3 Suppose that $J(0) := \sup_{\alpha \in A} E(u(\alpha)) \neq \pm\infty$.

1. For any admissible control $\alpha$ with $E(u(\alpha)) > -\infty$, the process $(J(t, \alpha))_{t \in \{0,\dots,T\}}$ is a supermartingale with terminal value $J(T, \alpha) = u(\alpha)$. If $\alpha^\star$ is an optimal control, $(J(t, \alpha^\star))_{t \in \{0,\dots,T\}}$ is a martingale.

2. Suppose that $(\tilde{J}(\cdot, \alpha))_{\alpha \in A}$ is a family of processes such that

(a) $\tilde{J}(0, \alpha)$ coincides for all $\alpha \in A$ and is denoted as $\tilde{J}(0)$,

(b) $(\tilde{J}(t, \alpha))_{t \in \{0,\dots,T\}}$ is a supermartingale with terminal value $\tilde{J}(T, \alpha) = u(\alpha)$ for any admissible control $\alpha$ with $E(u(\alpha)) > -\infty$,

(c) $(\tilde{J}(t, \alpha^\star))_{t \in \{0,\dots,T\}}$ is a martingale for some admissible control $\alpha^\star$.

Then $\alpha^\star$ is optimal and $\tilde{J}(t, \alpha^\star) = J(t, \alpha^\star)$ for $t = 0, \dots, T$.

Proof.

1. Adaptedness and terminal value of $J(\cdot, \alpha)$ are evident. In order to show the supermartingale property, let $t \in \{0, \dots, T\}$. Stability under bifurcation implies that the set of all $E(u(\tilde{\alpha})|\mathscr{F}_t)$ with $\tilde{\alpha} \in A$ satisfying $\tilde{\alpha}^t = \alpha^t$ has the lattice property. By Lemma 1.25 there exists a sequence of admissible controls $\alpha_n$ with $\alpha_n^t = \alpha^t$ and $E(u(\alpha_n)|\mathscr{F}_t) \uparrow J(t, \alpha)$. For $s \leq t$ we have

$E\big( E(u(\alpha_n)|\mathscr{F}_t) \big| \mathscr{F}_s \big) = E(u(\alpha_n)|\mathscr{F}_s) \leq J(s, \alpha).$

By monotone convergence we obtain the supermartingale property $E(J(t, \alpha)|\mathscr{F}_s) \leq J(s, \alpha)$.

Let $\alpha^\star$ be an optimal control. Since $J(\cdot, \alpha^\star)$ is a supermartingale, the martingale property follows from

$J(0, \alpha^\star) = \sup_{\alpha \in A} E(u(\alpha)) = E(u(\alpha^\star)) = E(J(T, \alpha^\star))$

and Lemma 1.5.

2. The supermartingale property implies that

$E(u(\alpha)) = E(\tilde{J}(T, \alpha)) \leq \tilde{J}(0, \alpha) = \tilde{J}(0)$

for any admissible control $\alpha$. Since equality holds for $\alpha^\star$, we have that $\alpha^\star$ is optimal. By statement 1, $J(\cdot, \alpha^\star)$ is a martingale with terminal value $u(\alpha^\star)$. Since the same is true for $\tilde{J}(\cdot, \alpha^\star)$ by assumptions 2(b,c), we have

$\tilde{J}(t, \alpha^\star) = E(u(\alpha^\star)|\mathscr{F}_t) = J(t, \alpha^\star)$

for $t = 0, \dots, T$.

The previous theorem does not immediately lead to an optimal control, but it often helps in order to verify that some candidate control is in fact optimal.
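For reward functions as in Example 2.1 with Markovian dynamics and i.i.d. noise, the value process of Theorem 2.3 can be computed by backward induction, optimizing one period at a time exactly as in the deterministic sketch of Chapter 0 but with an expectation over the noise. All concrete choices below (dynamics, rewards, grids) are illustrative assumptions.

```python
import numpy as np

T = 5
A = np.linspace(0.0, 1.0, 21)                    # control grid
X = np.linspace(-2.0, 12.0, 141)                 # state grid
eps, prob = np.array([-1.0, 1.0]), np.array([0.5, 0.5])   # i.i.d. noise

def f(t, x, a):                                  # running reward
    return -0.5 * a**2

def g(x):                                        # terminal reward
    return x

def step(x, a, e):                               # X^alpha(t) = X^alpha(t-1) + a (1 + e)
    return x + a * (1.0 + e)

J = g(X)
for t in range(T, 0, -1):
    # q(x, a) = f(t, x, a) + E[ J(t, step(x, a, eps)) ]
    q = f(t, X[:, None], A[None, :])
    for e, pr in zip(eps, prob):
        x_next = step(X[:, None], A[None, :], e)
        q = q + pr * np.interp(x_next.ravel(), X, J).reshape(x_next.shape)
    J = q.max(axis=1)                            # J(t-1, x) on the grid

print("optimal expected reward from x_0 = 0:", np.interp(0.0, X, J))
```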

Remark 2.4

1. It may happen that the supremum in the definition of the optimal value is not a maximum, i.e. an optimal control does not exist. In this case Theorem 2.3 cannot be applied. Sometimes this problem can be circumvented by considering a certain closure of the set of admissible controls which does in fact contain the optimizer. If this is not feasible, a variation of Theorem 2.3(2) without assumption 2(c) may be of interest. The supermartingale property 2(b) of the candidate value process ensures that $\tilde{J}(0)$ is an upper bound of the optimal value $J(0)$. If, for any $\varepsilon > 0$, one can find an admissible control $\alpha^\varepsilon$ with $\tilde{J}(0) \leq E(\tilde{J}(T, \alpha^\varepsilon)) + \varepsilon$, then $\tilde{J}(0) = J(0)$ and the $\alpha^\varepsilon$ yield a sequence of controls approaching this optimal value.

2. The above setup allows for a straightforward extension to infinite time horizon $T = \infty$.

As an example we consider the Merton problem to maximize expected logarithmic utility of terminal wealth.

Example 2.5 (Logarithmic utility of terminal wealth) An investor trades in a market consisting of a constant bank account and a stock whose price at time $t$ equals

$S(t) = S(0)\mathscr{E}(X)(t) = S(0) \prod_{s=1}^t \big(1 + \Delta X(s)\big)$

with $\Delta X(t) > -1$. Given that $\varphi(t)$ denotes the number of shares in the investor's portfolio from time $t-1$ to $t$, the profits from the stock investment in this period are $\varphi(t) \Delta S(t)$. If $v_0 > 0$ denotes the investor's initial endowment, her wealth at any time $t$ amounts to

$V_\varphi(t) := v_0 + \sum_{s=1}^t \varphi(s) \Delta S(s) = v_0 + \varphi \bullet S(t).$ (2.1)

We assume that the investor's goal is to maximize the expected logarithmic utility $E(\log V_\varphi(T))$ of wealth at time $T$. To this end, we assume that the stock price process $S$ is exogenously given and the investor's set of admissible controls is

$A := \{\varphi \text{ predictable} : V_\varphi \geq 0 \text{ and } \varphi(0) = 0\}.$

It turns out that the problem becomes more transparent if we consider the relative portfolio

$\pi(t) := \varphi(t) \frac{S(t-1)}{V_\varphi(t-1)}, \quad t = 1, \dots, T,$ (2.2)

i.e. the fraction of wealth invested in the stock at $t-1$. Starting with $v_0$, the stock holdings $\varphi(t)$ and the wealth process $V_\varphi(t)$ are recovered from $\pi$ via

$V_\varphi(t) = v_0 \mathscr{E}(\pi \bullet X)(t) = v_0 \prod_{s=1}^t \big( 1 + \pi(s) \Delta X(s) \big)$ (2.3)

and

$\varphi(t) = \pi(t) \frac{V_\varphi(t-1)}{S(t-1)} = \pi(t) v_0 \frac{\mathscr{E}(\pi \bullet X)(t-1)}{S(t-1)}.$

Indeed, (2.3) follows from

$\Delta V_\varphi(t) = \varphi(t) \Delta S(t) = V_\varphi(t-1) \pi(t) \frac{\Delta S(t)}{S(t-1)} = V_\varphi(t-1) \pi(t) \Delta X(t).$

If $T = 1$, a simple calculation shows that the investor should buy $\varphi^\star(1) = \pi^\star(1) v_0 / S(0)$ shares at time 0, where the optimal fraction $\pi^\star(1)$ maximizes the function $\gamma \mapsto E(\log(1 + \gamma \Delta X(1)))$. We guess that the same essentially holds for multi-period markets, i.e. we assume that the optimal relative portfolio is obtained as the maximizer $\pi^\star(\omega, t)$ of the mapping

$\gamma \mapsto E\big( \log(1 + \gamma \Delta X(t)) \big| \mathscr{F}_{t-1} \big)(\omega).$ (2.4)

The corresponding candidate value process is

$J(t, \varphi) := E\bigg( \log\bigg( V_\varphi(t) \prod_{s=t+1}^T \big( 1 + \pi^\star(s) \Delta X(s) \big) \bigg) \bigg| \mathscr{F}_t \bigg)$ (2.5)
$\phantom{J(t, \varphi) :=} = \log V_\varphi(t) + E\bigg( \sum_{s=t+1}^T \log\big( 1 + \pi^\star(s) \Delta X(s) \big) \bigg| \mathscr{F}_t \bigg).$

Observe that

$E(J(t, \varphi)|\mathscr{F}_{t-1}) = \log V_\varphi(t-1) + E\bigg( \log\Big( 1 + \frac{\varphi(t) S(t-1)}{V_\varphi(t-1)} \Delta X(t) \Big) \bigg| \mathscr{F}_{t-1} \bigg) + E\bigg( \sum_{s=t+1}^T \log\big( 1 + \pi^\star(s) \Delta X(s) \big) \bigg| \mathscr{F}_{t-1} \bigg)$
$\leq \log V_\varphi(t-1) + E\big( \log(1 + \pi^\star(t) \Delta X(t)) \big| \mathscr{F}_{t-1} \big) + E\bigg( \sum_{s=t+1}^T \log\big( 1 + \pi^\star(s) \Delta X(s) \big) \bigg| \mathscr{F}_{t-1} \bigg)$
$= J(t-1, \varphi)$

for any $t \geq 1$ and $\varphi \in A$, with equality for the candidate optimizer $\varphi^\star$ satisfying $\varphi^\star(t) = \pi^\star(t) V_{\varphi^\star}(t-1)/S(t-1)$. By Theorem 2.3, we conclude that $\varphi^\star$ is indeed optimal.

Note that the optimizer or, more precisely, the optimal fraction of wealth invested in the stock depends only on the local dynamics of the stock. This myopic property holds only for logarithmic utility.

The following variation of Example 2.5 considers utility of consumption rather than terminal wealth.

Example 2.6 (Logarithmic utility of consumption) In the market of the previous example the investor now spends $c(t)$ currency units at any time $t \geq 1$. We assume that utility is derived from this consumption rather than terminal wealth, i.e. the goal is to maximize

$E\bigg( \sum_{t=1}^T \log c(t) + \log V_{\varphi,c}(T) \bigg)$ (2.6)

subject to the affordability constraint that the investor's aggregate consumption cannot exceed her cumulative profits:

$0 \leq V_{\varphi,c}(t) := v_0 + \varphi \bullet S(t) - \sum_{s=1}^t c(s).$

The last term $V_{\varphi,c}(T)$ in (2.6) refers to consumption of the remaining wealth at the end. The investor's set of admissible controls is

$A := \big\{ (\varphi, c) \text{ predictable} : V_{\varphi,c} \geq 0,\ (\varphi, c)(0) = 0,\ c \geq 0 \big\}.$

We try to come up with a reasonable candidate $(\varphi^\star, c^\star)$ for the optimal control. As in the previous example, matters simplify in relative terms. We write

$\kappa(t) := \frac{c(t)}{V_{\varphi,c}(t-1)}, \quad t = 1, \dots, T,$ (2.7)

for the fraction of wealth that is consumed and

$\pi(t) := \varphi(t) \frac{S(t-1)}{V_{\varphi,c}(t-1) - c(t)} = \varphi(t) \frac{S(t-1)}{V_{\varphi,c}(t-1)(1 - \kappa(t))}$ (2.8)

for the relative portfolio. Since wealth after consumption at time $t-1$ is now $V_{\varphi,c}(t-1) - c(t)$, the denominator in (2.8) had to be adjusted. Similarly to (2.3), the wealth is given by

$V_{\varphi,c}(t) = v_0 \prod_{s=1}^t \big( 1 + \pi(s) \Delta X(s) \big) \big( 1 - \kappa(s) \big) = v_0\, \mathscr{E}(\pi \bullet X)(t)\, \mathscr{E}(-\kappa \bullet I)(t).$ (2.9)

We guess that the same relative portfolio as in the previous example is optimal in this modified setup, which leads to the candidate

$\varphi^\star(t) = \pi^\star(t) \frac{V_{\varphi^\star,c^\star}(t-1) - c^\star(t)}{S(t-1)}.$

Moreover, it may seem natural that the investor tries to spread consumption of wealth evenly over time. This idea leads to $\kappa^\star(t) = 1/(T + 2 - t)$ and hence

$c^\star(t) = \frac{V_{\varphi^\star,c^\star}(t-1)}{T + 2 - t}$

because, at time $t-1$, there are $T + 2 - t$ periods left for consumption (consumption at times $t, \dots, T$ plus final consumption of the terminal wealth).

This candidate pair $(\varphi^\star, c^\star)$ corresponds to the candidate value process

$J(t, (\varphi, c)) := \sum_{s=1}^t \log c(s) + (T + 1 - t) \log V_{\varphi,c}(t)$
$\phantom{J(t, (\varphi, c)) :=} + E\bigg( \sum_{r=t+1}^T \Big( (T + 1 - r) \log\big( 1 + \pi^\star(r) \Delta X(r) \big) + (T + 1 - r) \log \frac{T + 1 - r}{T + 2 - r} - \log(T + 2 - r) \Big) \bigg| \mathscr{F}_t \bigg),$

which is obtained if, starting from $t+1$, we invest the candidate fraction $\pi^\star$ of wealth in the stock and consume at any time $s \geq t+1$ the candidate fraction $\kappa^\star(s) = 1/(T + 2 - s)$ of wealth.

In order to verify optimality, observe that

$E\big( J(t, (\varphi, c)) \big| \mathscr{F}_{t-1} \big) = \sum_{s=1}^{t-1} \log c(s) + (T + 2 - t) \log V_{\varphi,c}(t-1) + (T + 1 - t) E\big( \log(1 + \pi(t) \Delta X(t)) \big| \mathscr{F}_{t-1} \big)$
$\phantom{=} + E\big( (T + 1 - t) \log(1 - \kappa(t)) + \log \kappa(t) \big| \mathscr{F}_{t-1} \big)$
$\phantom{=} + E\bigg( \sum_{s=t+1}^T \Big( (T + 1 - s) \log\big( 1 + \pi^\star(s) \Delta X(s) \big) + (T + 1 - s) \log \frac{T + 1 - s}{T + 2 - s} - \log(T + 2 - s) \Big) \bigg| \mathscr{F}_{t-1} \bigg)$
$\leq \sum_{s=1}^{t-1} \log c(s) + (T + 2 - t) \log V_{\varphi,c}(t-1) + (T + 1 - t) E\big( \log(1 + \pi^\star(t) \Delta X(t)) \big| \mathscr{F}_{t-1} \big)$
$\phantom{\leq} + (T + 1 - t) \log\Big( 1 - \frac{1}{T + 2 - t} \Big) - \log(T + 2 - t)$
$\phantom{\leq} + E\bigg( \sum_{s=t+1}^T \Big( (T + 1 - s) \log\big( 1 + \pi^\star(s) \Delta X(s) \big) + (T + 1 - s) \log \frac{T + 1 - s}{T + 2 - s} - \log(T + 2 - s) \Big) \bigg| \mathscr{F}_{t-1} \bigg)$
$= J(t-1, (\varphi, c))$

for any admissible control $(\varphi, c)$, where $\pi(t), \kappa(t)$ are defined as in (2.8), (2.7). Indeed, the inequality holds because $\pi^\star(t)$ maximizes $\gamma \mapsto E(\log(1 + \gamma \Delta X(t))|\mathscr{F}_{t-1})$ and $1/(T + 2 - t)$ maximizes $\delta \mapsto (T + 1 - t) \log(1 - \delta) + \log \delta$. Again, equality holds if $(\varphi, c) = (\varphi^\star, c^\star)$. By Theorem 2.3, we conclude that $(\varphi^\star, c^\star)$ is indeed optimal.

The optimal consumption rate changes slightly if the objective is to maximize

$E\bigg( \sum_{t=1}^T e^{-\delta(t-1)} \log c(t) + e^{-\delta T} \log V_{\varphi,c}(T) \bigg)$

with some impatience rate $\delta \geq 0$. The two previous examples allow for a straightforward extension to $d > 1$ assets.
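For a two-point distribution of the returns $\Delta X(t)$, the optimal fraction $\pi^\star$ of Examples 2.5 and 2.6 solves a one-dimensional concave problem with an explicit solution, and the optimal consumption fractions follow the schedule $\kappa^\star(t) = 1/(T + 2 - t)$. A small sketch (all numbers are illustrative assumptions):

```python
import numpy as np

# Delta X(t) = +u with probability p and -d otherwise, i.i.d. (illustrative numbers).
u, d, p = 0.3, 0.3, 0.55

def expected_log_growth(gamma):
    # gamma -> E log(1 + gamma * Delta X(1)), the criterion in (2.4)
    return p * np.log(1.0 + gamma * u) + (1.0 - p) * np.log(1.0 - gamma * d)

# pi* by grid search ...
grid = np.linspace(0.0, 1.0, 10001)
pi_star = grid[np.argmax(expected_log_growth(grid))]

# ... and in closed form (first-order condition for two-point returns):
pi_closed = (p * u - (1.0 - p) * d) / (u * d)
print(pi_star, pi_closed)                 # both approximately 1/3

# Example 2.6: consume the fraction kappa*(t) = 1/(T+2-t) of current wealth.
T = 5
kappa = [1.0 / (T + 2 - t) for t in range(1, T + 1)]
print(kappa)   # [1/6, 1/5, 1/4, 1/3, 1/2]: wealth is spread evenly over the
               # remaining consumption dates
```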

Chapter 3

Optimal Stopping

An important subclass of control problems concerns optimal stopping, i.e. given some time horizon $T$ and some adapted process $X$ with $E(\sup_{t \in \{0,\dots,T\}} |X(t)|) < \infty$, the goal is to maximise the expected reward

$\tau \mapsto E(X(\tau))$ (3.1)

over all stopping times $\tau$ with values in $\{0, \dots, T\}$.

Remark 3.1 In the spirit of Chapter 2, a stopping time $\tau$ can be identified with the corresponding adapted process $\alpha(t) := 1_{\{t \leq \tau\}}$, and hence $X(\tau) = X(0) + \alpha \bullet X(T)$. Put differently,

$A := \{\alpha \text{ predictable} : \alpha\ \{0,1\}\text{-valued, decreasing, } \alpha(0) = 1\}$

and $u(\alpha) := X(0) + \alpha \bullet X(T)$ in Chapter 2 lead to the above optimal stopping problem.

Definition 3.2 The Snell envelope of $X$ is the adapted process $V$ defined as

$V(t) := \operatorname{ess\,sup}\big\{ E(X(\tau)|\mathscr{F}_t) : \tau \text{ stopping time with values in } \{t, t+1, \dots, T\} \big\}.$ (3.2)

The Snell envelope represents the maximal expected reward if we start at time $t$ and have not stopped yet. The following martingale criterion may be helpful to verify the optimality of a candidate stopping time. We will apply its continuous-time version in two examples in Chapter 8.

Proposition 3.3

1. Let $\tau$ be a stopping time with values in $\{0, \dots, T\}$. If $V$ is an adapted process such that $V^\tau$ is a martingale, $V(\tau) = X(\tau)$, and $V(0) = M(0)$ for some martingale (or at least supermartingale) $M \geq X$, then $\tau$ is optimal for (3.1) and $V$ coincides up to time $\tau$ with the Snell envelope of Definition 3.2.

2. More generally, let $\tau$ be a stopping time with values in $\{t, \dots, T\}$ and $A \in \mathscr{F}_t$. If $V$ is an adapted process such that $V^\tau - V^t$ is a martingale, $V(\tau) = X(\tau)$, and $V(t) = M(t)$ on $A$ for some martingale (or at least supermartingale) $M$ with $M \geq X$ on $A$, then $V(s)$ coincides on $A$ for $t \leq s \leq \tau$ with the Snell envelope of Definition 3.2, and $\tau$ is optimal on $A$ for (3.2), i.e. it maximises $E(X(\tau)|\mathscr{F}_t)$ on $A$.

Proof. 1. This follows from the second statement.
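Numerically, the Snell envelope (3.2) can be computed by the standard backward recursion $V(T) = X(T)$, $V(t) = \max\big(X(t), E(V(t+1)|\mathscr{F}_t)\big)$. A minimal sketch for an American put in a binomial model follows; all parameters are illustrative assumptions.

```python
import numpy as np

# Snell envelope of X(t) = (K - S(t))^+ (American put payoff, zero interest rate)
# in a binomial model under the risk-neutral measure; all parameters illustrative.
T, S0, up, down, K = 4, 100.0, 1.1, 0.9, 100.0
q = (1.0 - down) / (up - down)             # risk-neutral up-probability, here 0.5

def stock(t):
    # recombining tree: node j of S(t) corresponds to j up-moves
    j = np.arange(t + 1)
    return S0 * up**j * down**(t - j)

payoff = lambda s: np.maximum(K - s, 0.0)

# Backward recursion V(T) = X(T), V(t) = max(X(t), E(V(t+1) | F_t)).
V = payoff(stock(T))
for t in range(T - 1, -1, -1):
    cont = q * V[1:] + (1.0 - q) * V[:-1]   # E(V(t+1) | F_t), node by node
    V = np.maximum(payoff(stock(t)), cont)

# V(0) is the value of the stopping problem; an optimal stopping time is the
# first time t with X(t) = V(t).
print("V(0) =", V[0])
```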


More information

Simple Consumption / Savings Problems (based on Ljungqvist & Sargent, Ch 16, 17) Jonathan Heathcote. updated, March The household s problem X

Simple Consumption / Savings Problems (based on Ljungqvist & Sargent, Ch 16, 17) Jonathan Heathcote. updated, March The household s problem X Simple Consumption / Savings Problems (based on Ljungqvist & Sargent, Ch 16, 17) subject to for all t Jonathan Heathcote updated, March 2006 1. The household s problem max E β t u (c t ) t=0 c t + a t+1

More information

UTILITY OPTIMIZATION IN A FINITE SCENARIO SETTING

UTILITY OPTIMIZATION IN A FINITE SCENARIO SETTING UTILITY OPTIMIZATION IN A FINITE SCENARIO SETTING J. TEICHMANN Abstract. We introduce the main concepts of duality theory for utility optimization in a setting of finitely many economic scenarios. 1. Utility

More information

arxiv: v1 [math.pr] 24 Sep 2018

arxiv: v1 [math.pr] 24 Sep 2018 A short note on Anticipative portfolio optimization B. D Auria a,b,1,, J.-A. Salmerón a,1 a Dpto. Estadística, Universidad Carlos III de Madrid. Avda. de la Universidad 3, 8911, Leganés (Madrid Spain b

More information

HJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011

HJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011 Department of Probability and Mathematical Statistics Faculty of Mathematics and Physics, Charles University in Prague petrasek@karlin.mff.cuni.cz Seminar in Stochastic Modelling in Economics and Finance

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Point Process Control

Point Process Control Point Process Control The following note is based on Chapters I, II and VII in Brémaud s book Point Processes and Queues (1981). 1 Basic Definitions Consider some probability space (Ω, F, P). A real-valued

More information

Review of Optimization Methods

Review of Optimization Methods Review of Optimization Methods Prof. Manuela Pedio 20550 Quantitative Methods for Finance August 2018 Outline of the Course Lectures 1 and 2 (3 hours, in class): Linear and non-linear functions on Limits,

More information

1.1 Definition of BM and its finite-dimensional distributions

1.1 Definition of BM and its finite-dimensional distributions 1 Brownian motion Brownian motion as a physical phenomenon was discovered by botanist Robert Brown as he observed a chaotic motion of particles suspended in water. The rigorous mathematical model of BM

More information

Stochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik

Stochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik Stochastic Processes Winter Term 2016-2017 Paolo Di Tella Technische Universität Dresden Institut für Stochastik Contents 1 Preliminaries 5 1.1 Uniform integrability.............................. 5 1.2

More information

Optimal stopping for non-linear expectations Part I

Optimal stopping for non-linear expectations Part I Stochastic Processes and their Applications 121 (2011) 185 211 www.elsevier.com/locate/spa Optimal stopping for non-linear expectations Part I Erhan Bayraktar, Song Yao Department of Mathematics, University

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal

More information

Lecture 6 Basic Probability

Lecture 6 Basic Probability Lecture 6: Basic Probability 1 of 17 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 6 Basic Probability Probability spaces A mathematical setup behind a probabilistic

More information

STAT 331. Martingale Central Limit Theorem and Related Results

STAT 331. Martingale Central Limit Theorem and Related Results STAT 331 Martingale Central Limit Theorem and Related Results In this unit we discuss a version of the martingale central limit theorem, which states that under certain conditions, a sum of orthogonal

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

THE ASYMPTOTIC ELASTICITY OF UTILITY FUNCTIONS AND OPTIMAL INVESTMENT IN INCOMPLETE MARKETS 1

THE ASYMPTOTIC ELASTICITY OF UTILITY FUNCTIONS AND OPTIMAL INVESTMENT IN INCOMPLETE MARKETS 1 The Annals of Applied Probability 1999, Vol. 9, No. 3, 94 95 THE ASYMPTOTIC ELASTICITY OF UTILITY FUNCTIONS AND OPTIMAL INVESTMENT IN INCOMPLETE MARKETS 1 By D. Kramkov 2 and W. Schachermayer Steklov Mathematical

More information

Generalized Gaussian Bridges of Prediction-Invertible Processes

Generalized Gaussian Bridges of Prediction-Invertible Processes Generalized Gaussian Bridges of Prediction-Invertible Processes Tommi Sottinen 1 and Adil Yazigi University of Vaasa, Finland Modern Stochastics: Theory and Applications III September 1, 212, Kyiv, Ukraine

More information

4. Conditional risk measures and their robust representation

4. Conditional risk measures and their robust representation 4. Conditional risk measures and their robust representation We consider a discrete-time information structure given by a filtration (F t ) t=0,...,t on our probability space (Ω, F, P ). The time horizon

More information

JUSTIN HARTMANN. F n Σ.

JUSTIN HARTMANN. F n Σ. BROWNIAN MOTION JUSTIN HARTMANN Abstract. This paper begins to explore a rigorous introduction to probability theory using ideas from algebra, measure theory, and other areas. We start with a basic explanation

More information

(A n + B n + 1) A n + B n

(A n + B n + 1) A n + B n 344 Problem Hints and Solutions Solution for Problem 2.10. To calculate E(M n+1 F n ), first note that M n+1 is equal to (A n +1)/(A n +B n +1) with probability M n = A n /(A n +B n ) and M n+1 equals

More information

ONLINE APPENDIX TO: NONPARAMETRIC IDENTIFICATION OF THE MIXED HAZARD MODEL USING MARTINGALE-BASED MOMENTS

ONLINE APPENDIX TO: NONPARAMETRIC IDENTIFICATION OF THE MIXED HAZARD MODEL USING MARTINGALE-BASED MOMENTS ONLINE APPENDIX TO: NONPARAMETRIC IDENTIFICATION OF THE MIXED HAZARD MODEL USING MARTINGALE-BASED MOMENTS JOHANNES RUF AND JAMES LEWIS WOLTER Appendix B. The Proofs of Theorem. and Proposition.3 The proof

More information

Nested Uncertain Differential Equations and Its Application to Multi-factor Term Structure Model

Nested Uncertain Differential Equations and Its Application to Multi-factor Term Structure Model Nested Uncertain Differential Equations and Its Application to Multi-factor Term Structure Model Xiaowei Chen International Business School, Nankai University, Tianjin 371, China School of Finance, Nankai

More information

Example I: Capital Accumulation

Example I: Capital Accumulation 1 Example I: Capital Accumulation Time t = 0, 1,..., T < Output y, initial output y 0 Fraction of output invested a, capital k = ay Transition (production function) y = g(k) = g(ay) Reward (utility of

More information

P (A G) dp G P (A G)

P (A G) dp G P (A G) First homework assignment. Due at 12:15 on 22 September 2016. Homework 1. We roll two dices. X is the result of one of them and Z the sum of the results. Find E [X Z. Homework 2. Let X be a r.v.. Assume

More information

Minimal Sufficient Conditions for a Primal Optimizer in Nonsmooth Utility Maximization

Minimal Sufficient Conditions for a Primal Optimizer in Nonsmooth Utility Maximization Finance and Stochastics manuscript No. (will be inserted by the editor) Minimal Sufficient Conditions for a Primal Optimizer in Nonsmooth Utility Maximization Nicholas Westray Harry Zheng. Received: date

More information

Sensitivity analysis of the expected utility maximization problem with respect to model perturbations

Sensitivity analysis of the expected utility maximization problem with respect to model perturbations Sensitivity analysis of the expected utility maximization problem with respect to model perturbations Mihai Sîrbu, The University of Texas at Austin based on joint work with Oleksii Mostovyi University

More information

Estimates for probabilities of independent events and infinite series

Estimates for probabilities of independent events and infinite series Estimates for probabilities of independent events and infinite series Jürgen Grahl and Shahar evo September 9, 06 arxiv:609.0894v [math.pr] 8 Sep 06 Abstract This paper deals with finite or infinite sequences

More information

ADVANCED PROBABILITY: SOLUTIONS TO SHEET 1

ADVANCED PROBABILITY: SOLUTIONS TO SHEET 1 ADVANCED PROBABILITY: SOLUTIONS TO SHEET 1 Last compiled: November 6, 213 1. Conditional expectation Exercise 1.1. To start with, note that P(X Y = P( c R : X > c, Y c or X c, Y > c = P( c Q : X > c, Y

More information

A new approach for investment performance measurement. 3rd WCMF, Santa Barbara November 2009

A new approach for investment performance measurement. 3rd WCMF, Santa Barbara November 2009 A new approach for investment performance measurement 3rd WCMF, Santa Barbara November 2009 Thaleia Zariphopoulou University of Oxford, Oxford-Man Institute and The University of Texas at Austin 1 Performance

More information

Verona Course April Lecture 1. Review of probability

Verona Course April Lecture 1. Review of probability Verona Course April 215. Lecture 1. Review of probability Viorel Barbu Al.I. Cuza University of Iaşi and the Romanian Academy A probability space is a triple (Ω, F, P) where Ω is an abstract set, F is

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise

Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise Robert J. Elliott 1 Samuel N. Cohen 2 1 Department of Commerce, University of South Australia 2 Mathematical Insitute, University

More information

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012 1 Stochastic Calculus Notes March 9 th, 1 In 19, Bachelier proposed for the Paris stock exchange a model for the fluctuations affecting the price X(t) of an asset that was given by the Brownian motion.

More information

Inference for Stochastic Processes

Inference for Stochastic Processes Inference for Stochastic Processes Robert L. Wolpert Revised: June 19, 005 Introduction A stochastic process is a family {X t } of real-valued random variables, all defined on the same probability space

More information

If g is also continuous and strictly increasing on J, we may apply the strictly increasing inverse function g 1 to this inequality to get

If g is also continuous and strictly increasing on J, we may apply the strictly increasing inverse function g 1 to this inequality to get 18:2 1/24/2 TOPIC. Inequalities; measures of spread. This lecture explores the implications of Jensen s inequality for g-means in general, and for harmonic, geometric, arithmetic, and related means in

More information

Stochastic integration. P.J.C. Spreij

Stochastic integration. P.J.C. Spreij Stochastic integration P.J.C. Spreij this version: April 22, 29 Contents 1 Stochastic processes 1 1.1 General theory............................... 1 1.2 Stopping times...............................

More information

Optimal Control. Macroeconomics II SMU. Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112

Optimal Control. Macroeconomics II SMU. Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112 Optimal Control Ömer Özak SMU Macroeconomics II Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112 Review of the Theory of Optimal Control Section 1 Review of the Theory of Optimal Control Ömer

More information

The strictly 1/2-stable example

The strictly 1/2-stable example The strictly 1/2-stable example 1 Direct approach: building a Lévy pure jump process on R Bert Fristedt provided key mathematical facts for this example. A pure jump Lévy process X is a Lévy process such

More information

where u is the decision-maker s payoff function over her actions and S is the set of her feasible actions.

where u is the decision-maker s payoff function over her actions and S is the set of her feasible actions. Seminars on Mathematics for Economics and Finance Topic 3: Optimization - interior optima 1 Session: 11-12 Aug 2015 (Thu/Fri) 10:00am 1:00pm I. Optimization: introduction Decision-makers (e.g. consumers,

More information

Lecture 4 An Introduction to Stochastic Processes

Lecture 4 An Introduction to Stochastic Processes Lecture 4 An Introduction to Stochastic Processes Prof. Massimo Guidolin Prep Course in Quantitative Methods for Finance August-September 2017 Plan of the lecture Motivation and definitions Filtrations

More information

Worst Case Portfolio Optimization and HJB-Systems

Worst Case Portfolio Optimization and HJB-Systems Worst Case Portfolio Optimization and HJB-Systems Ralf Korn and Mogens Steffensen Abstract We formulate a portfolio optimization problem as a game where the investor chooses a portfolio and his opponent,

More information

Deterministic Dynamic Programming

Deterministic Dynamic Programming Deterministic Dynamic Programming 1 Value Function Consider the following optimal control problem in Mayer s form: V (t 0, x 0 ) = inf u U J(t 1, x(t 1 )) (1) subject to ẋ(t) = f(t, x(t), u(t)), x(t 0

More information

A Note on the Central Limit Theorem for a Class of Linear Systems 1

A Note on the Central Limit Theorem for a Class of Linear Systems 1 A Note on the Central Limit Theorem for a Class of Linear Systems 1 Contents Yukio Nagahata Department of Mathematics, Graduate School of Engineering Science Osaka University, Toyonaka 560-8531, Japan.

More information

Properties of an infinite dimensional EDS system : the Muller s ratchet

Properties of an infinite dimensional EDS system : the Muller s ratchet Properties of an infinite dimensional EDS system : the Muller s ratchet LATP June 5, 2011 A ratchet source : wikipedia Plan 1 Introduction : The model of Haigh 2 3 Hypothesis (Biological) : The population

More information

Self-Concordant Barrier Functions for Convex Optimization

Self-Concordant Barrier Functions for Convex Optimization Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization

More information

A D VA N C E D P R O B A B I L - I T Y

A D VA N C E D P R O B A B I L - I T Y A N D R E W T U L L O C H A D VA N C E D P R O B A B I L - I T Y T R I N I T Y C O L L E G E T H E U N I V E R S I T Y O F C A M B R I D G E Contents 1 Conditional Expectation 5 1.1 Discrete Case 6 1.2

More information

1 Markov decision processes

1 Markov decision processes 2.997 Decision-Making in Large-Scale Systems February 4 MI, Spring 2004 Handout #1 Lecture Note 1 1 Markov decision processes In this class we will study discrete-time stochastic systems. We can describe

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems

More information

Universal examples. Chapter The Bernoulli process

Universal examples. Chapter The Bernoulli process Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

n E(X t T n = lim X s Tn = X s

n E(X t T n = lim X s Tn = X s Stochastic Calculus Example sheet - Lent 15 Michael Tehranchi Problem 1. Let X be a local martingale. Prove that X is a uniformly integrable martingale if and only X is of class D. Solution 1. If If direction:

More information

CIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc

CIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc CIMPA SCHOOL, 27 Jump Processes and Applications to Finance Monique Jeanblanc 1 Jump Processes I. Poisson Processes II. Lévy Processes III. Jump-Diffusion Processes IV. Point Processes 2 I. Poisson Processes

More information

University of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming

University of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming University of Warwick, EC9A0 Maths for Economists 1 of 63 University of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming Peter J. Hammond Autumn 2013, revised 2014 University of

More information

Basic Definitions: Indexed Collections and Random Functions

Basic Definitions: Indexed Collections and Random Functions Chapter 1 Basic Definitions: Indexed Collections and Random Functions Section 1.1 introduces stochastic processes as indexed collections of random variables. Section 1.2 builds the necessary machinery

More information

Notes on Random Variables, Expectations, Probability Densities, and Martingales

Notes on Random Variables, Expectations, Probability Densities, and Martingales Eco 315.2 Spring 2006 C.Sims Notes on Random Variables, Expectations, Probability Densities, and Martingales Includes Exercise Due Tuesday, April 4. For many or most of you, parts of these notes will be

More information

The Pedestrian s Guide to Local Time

The Pedestrian s Guide to Local Time The Pedestrian s Guide to Local Time Tomas Björk, Department of Finance, Stockholm School of Economics, Box 651, SE-113 83 Stockholm, SWEDEN tomas.bjork@hhs.se November 19, 213 Preliminary version Comments

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

Optimal Stopping and Applications

Optimal Stopping and Applications Optimal Stopping and Applications Alex Cox March 16, 2009 Abstract These notes are intended to accompany a Graduate course on Optimal stopping, and in places are a bit brief. They follow the book Optimal

More information

CONVERGENCE OF RANDOM SERIES AND MARTINGALES

CONVERGENCE OF RANDOM SERIES AND MARTINGALES CONVERGENCE OF RANDOM SERIES AND MARTINGALES WESLEY LEE Abstract. This paper is an introduction to probability from a measuretheoretic standpoint. After covering probability spaces, it delves into the

More information

The Skorokhod problem in a time-dependent interval

The Skorokhod problem in a time-dependent interval The Skorokhod problem in a time-dependent interval Krzysztof Burdzy, Weining Kang and Kavita Ramanan University of Washington and Carnegie Mellon University Abstract: We consider the Skorokhod problem

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text

More information

Math 735: Stochastic Analysis

Math 735: Stochastic Analysis First Prev Next Go To Go Back Full Screen Close Quit 1 Math 735: Stochastic Analysis 1. Introduction and review 2. Notions of convergence 3. Continuous time stochastic processes 4. Information and conditional

More information

STAT331 Lebesgue-Stieltjes Integrals, Martingales, Counting Processes

STAT331 Lebesgue-Stieltjes Integrals, Martingales, Counting Processes STAT331 Lebesgue-Stieltjes Integrals, Martingales, Counting Processes This section introduces Lebesgue-Stieltjes integrals, and defines two important stochastic processes: a martingale process and a counting

More information

Gaussian, Markov and stationary processes

Gaussian, Markov and stationary processes Gaussian, Markov and stationary processes Gonzalo Mateos Dept. of ECE and Goergen Institute for Data Science University of Rochester gmateosb@ece.rochester.edu http://www.ece.rochester.edu/~gmateosb/ November

More information

A numerical method for solving uncertain differential equations

A numerical method for solving uncertain differential equations Journal of Intelligent & Fuzzy Systems 25 (213 825 832 DOI:1.3233/IFS-12688 IOS Press 825 A numerical method for solving uncertain differential equations Kai Yao a and Xiaowei Chen b, a Department of Mathematical

More information

(B(t i+1 ) B(t i )) 2

(B(t i+1 ) B(t i )) 2 ltcc5.tex Week 5 29 October 213 Ch. V. ITÔ (STOCHASTIC) CALCULUS. WEAK CONVERGENCE. 1. Quadratic Variation. A partition π n of [, t] is a finite set of points t ni such that = t n < t n1

More information

Order book resilience, price manipulation, and the positive portfolio problem

Order book resilience, price manipulation, and the positive portfolio problem Order book resilience, price manipulation, and the positive portfolio problem Alexander Schied Mannheim University Workshop on New Directions in Financial Mathematics Institute for Pure and Applied Mathematics,

More information

Utility Maximization in Hidden Regime-Switching Markets with Default Risk

Utility Maximization in Hidden Regime-Switching Markets with Default Risk Utility Maximization in Hidden Regime-Switching Markets with Default Risk José E. Figueroa-López Department of Mathematics and Statistics Washington University in St. Louis figueroa-lopez@wustl.edu pages.wustl.edu/figueroa

More information

of space-time diffusions

of space-time diffusions Optimal investment for all time horizons and Martin boundary of space-time diffusions Sergey Nadtochiy and Michael Tehranchi October 5, 2012 Abstract This paper is concerned with the axiomatic foundation

More information

Risk-Minimality and Orthogonality of Martingales

Risk-Minimality and Orthogonality of Martingales Risk-Minimality and Orthogonality of Martingales Martin Schweizer Universität Bonn Institut für Angewandte Mathematik Wegelerstraße 6 D 53 Bonn 1 (Stochastics and Stochastics Reports 3 (199, 123 131 2

More information