Optimal Stopping Problems for Time-Homogeneous Diffusions: a Review
Jesper Lund Pedersen
ETH, Zürich

The first part of this paper summarises the essential facts of general optimal stopping theory for time-homogeneous diffusion processes in R^n. The results displayed are stated in slightly greater generality, but in such a way that they are neither too restrictive nor too complicated. The second part presents equations for the value function and the optimal stopping boundary as a free-boundary (Stefan) problem and further presents the principle of smooth fit. This part is illustrated by examples where the focus is on optimal stopping problems for the maximum process associated with a one-dimensional diffusion.

1. Introduction

This paper reviews some methodologies used in optimal stopping problems for diffusion processes in R^n. The first aim is to give a quick review of the general optimal stopping theory by introducing the fundamental concepts of excessive and superharmonic functions. The second aim is to introduce the common technique of transforming the optimal stopping problem into a free-boundary (Stefan) problem, so that explicit or numerical computation of the value function and the optimal stopping boundary becomes possible in specific problems.

Problems of optimal stopping have a long history in probability theory and have been widely studied by many authors. Results on optimal stopping were first developed in the discrete case. The first formulations of optimal stopping problems for discrete-time stochastic processes arose in connection with sequential analysis in mathematical statistics, where the number of observations is not fixed in advance (that is, it is a random number) but is determined by the behaviour of the observed data. These results can be found in Wald [33]. Snell [3] obtained the first general results of optimal stopping theory for stochastic processes in discrete time.
For a survey of optimal stopping for Markov sequences see Shiryaev [29] and the references therein. The first general results on optimal stopping problems for continuous-time Markov processes were obtained by Dynkin [5] using the fundamental concepts of excessive and superharmonic functions. There is an abundance of work in general optimal stopping theory using these concepts, but one standard master reference is the monograph of Shiryaev [29], where the definitive results of general optimal stopping theory are stated and which also contains an extensive list of references on this topic. Another thorough exposition is found in El Karoui [6]. This method gives results on the existence and uniqueness of an optimal stopping time under very general conditions on the gain function and the Markov process. Generally, however, the method is very difficult to apply when solving a specific problem.

2000 Mathematics Subject Classification. Primary 60G40, 60J60. Secondary 60J65.
Key words and phrases. Optimal stopping, diffusion, Brownian motion, superharmonic (excessive) functions, free-boundary (Stefan) problem, the principle of smooth fit, maximum process, the maximality principle.

In a concrete problem with a smooth gain function
and a continuous Markov process, it is a common technique to formulate the optimal stopping problem as a free-boundary problem for the value function and the optimal stopping boundary, along with a non-trivial boundary condition, the principle of smooth fit (also called smooth pasting [29] or the high contact principle [17]). The principle of smooth fit says that the first derivatives of the value function and the gain function agree at the optimal stopping boundary (the boundary of the domain of continued observation). The principle was first applied by Mikhalevich [15] (under the leadership of Kolmogorov) to concrete problems in sequential analysis, and later independently by Chernoff [1] and Lindley [13]. McKean [14] applied the principle to the American option problem. Other important papers in this respect are Grigelionis & Shiryaev [11] and van Moerbeke [32]. For a complete account of the subject and an extensive bibliography see Shiryaev [29]. Peskir & Shiryaev [24] introduced the principle of continuous fit for solving sequential testing problems for Poisson processes (processes with jumps).

The background for solving concrete optimal stopping problems is the following. Up to and including the seventies, the concrete optimal stopping problems investigated were for one-dimensional diffusions, where the gain process contained two terms: a function of the time and the process, and a path-dependent integral of the process (see, among others, Taylor [31], Shepp [25] and Davis [3]). In the nineties the maximum process (a path-dependent functional) associated with a one-dimensional diffusion was studied in optimal stopping. Jacka [12] treated the case of reflected Brownian motion and later Dubins, Shepp & Shiryaev [4] treated the case of Bessel processes. In both papers the motivation was to obtain sharp maximal inequalities, and guessing the nature of the optimal stopping boundary solved the problem.
Graversen & Peskir [8] formulated the maximality principle for the optimal stopping boundary in the context of geometric Brownian motion. Peskir [23] showed that the maximality principle is equivalent to the superharmonic characterisation of the value function from the general optimal stopping theory, and this led to the solution of the problem for a general diffusion ([23] also contains many references on this subject). In recent work, Graversen, Peskir & Shiryaev [1] formulated and solved an optimal stopping problem where the gain process is not adapted to the filtration.

Optimal stopping problems appear in many connections and have a wide range of applications, from theoretical to applied problems. The following applications illustrate this point.

Mathematical finance. The valuation of American options is based on solving optimal stopping problems and is prominent in modern optimal stopping theory. The literature devoted to pricing American options is extensive; for an account of the subject see the survey of Myneni [16] and the references therein. The most famous result in this direction is that of McKean [14], solving the standard American option in the Black-Scholes model. This example can further serve to determine the right time to sell a stock (Øksendal [17]). In Shepp & Shiryaev [26] the value of the Russian option is computed in the Black-Scholes model (see Example 6.5). The payoff of this option is the maximum value of the asset between the purchase time and the exercise time.

Sharp inequalities. Optimal stopping problems are a natural tool for deriving sharp versions of known inequalities, as well as for deducing new sharp inequalities. By this method Davis [3] derived sharp inequalities for reflected Brownian motion. Jacka [12] and Dubins, Shepp & Shiryaev [4] derived sharp maximal inequalities for reflected Brownian motion and for Bessel processes, respectively.
In the same direction see Graversen & Peskir [7] and [9] (Doob's inequality for Brownian motion and the Hardy-Littlewood inequality, respectively) and Pedersen [18] (Doob's inequality for Bessel processes).
Optimal prediction. The development of optimal prediction of an anticipated functional of a continuous-time process was recently initiated in Graversen, Peskir & Shiryaev [1] (see Example 6.7). The general optimal stopping theory cannot be applied in this case since, due to the anticipated variable, the gain process is not adapted to the filtration. The problem under consideration in [1] is to stop a Brownian path as close as possible to the unknown ultimate maximum height of the path, where closeness is measured by the mean-square distance. This problem was extended in Pedersen [2] to cases where closeness is measured by an L^q distance and by a probability distance. These problems can be viewed as making an optimal decision based on a prediction of the future behaviour of the observable motion; for example, a trader faced with a decision on anticipated market movements without knowing the exact date of the optimal occurrence. The argument carries over to other applied problems where such a prediction plays a role.

Mathematical statistics. The Bayesian approach to the sequential analysis of problems of testing two statistical hypotheses can be carried out by reducing the initial problems to optimal stopping problems. Testing two hypotheses about the mean value of a Wiener process with drift was solved by Mikhalevich [15] and Shiryaev [28]. Peskir & Shiryaev [24] solved the problem of testing two hypotheses on the intensity of a Poisson process. Another problem in this direction is the quickest detection problem (disruption problem). Shiryaev [27] investigated the problem of detecting (raising an alarm at) a change in the mean value of a Brownian motion with drift, with minimal error (false alarm). Again, a thorough exposition of the subject can be found in Shiryaev [29].

The remainder of this paper is structured as follows. The next section introduces the formulation of the optimal stopping problem under consideration.
The concepts of excessive and superharmonic functions, with some basic results, can be found in Section 3. The main theorem on optimal stopping of diffusions is the point of discussion in Section 4. In Section 5, the optimal stopping problem is transformed into a free-boundary problem and the principle of smooth fit is introduced. The paper concludes with some examples in Section 6, where the focus is on optimal stopping problems for the maximum process associated with a diffusion.

2. Formulation of the problem

Let (X_t)_{t≥0} be a time-homogeneous diffusion process with state space R^n associated with the infinitesimal generator

  L_X = Σ_{i=1}^n μ_i(x) ∂/∂x_i + (1/2) Σ_{i,j=1}^n (σσ^T)_{i,j}(x) ∂²/(∂x_i ∂x_j)

for x ∈ R^n, where μ : R^n → R^n and σ : R^n → R^{n×m} are continuous and σσ^T is nonnegative definite. See Øksendal [17] for conditions on μ(·) and σ(·) that ensure existence and uniqueness of the diffusion process. Let (Z_t) be the diffusion process depending on both time and space (and hence not a time-homogeneous diffusion) given by Z_t = (t, X_t), which under P_z starts at z = (t, x). Thus (Z_t) is a diffusion process in R_+ × R^n associated with the infinitesimal generator

  L_Z = ∂/∂t + L_X

for z = (t, x) ∈ R_+ × R^n.
The optimal stopping problem to be studied in later sections is of the following kind. Let G : R_+ × R^n → R be a gain function, which will be specified later. Consider the optimal stopping problem for the diffusion (Z_t) with the value function given by

  (2.1)  V(z) = sup_τ E_z[ G(Z_τ) ]

where the supremum is taken over all stopping times τ for (Z_t). At the elements ω ∈ Ω where τ(ω) = ∞, set G(Z_τ) to be 0. There are two problems to be solved in connection with problem (2.1). The first is to compute the value function V, and the second is to find an optimal stopping time, that is, a stopping time τ* for (Z_t) such that V(z) = E_z[ G(Z_{τ*}) ]. Note that an optimal stopping time may fail to exist and, when it exists, need not be unique.

3. Excessive and superharmonic functions

This section introduces the two fundamental concepts of excessive and superharmonic functions, which are the basic concepts used in the next section for a characterisation of the value function in (2.1). For the facts presented here, and for a complete account (including proofs) of this subject, consult Shiryaev [29]. In the main theorem of the next section it is assumed that the gain function belongs to the following class of functions. Let L(Z) be the class consisting of all lower semicontinuous functions H : R_+ × R^n → (−∞, ∞] satisfying either of the following two conditions:

  (3.1)  E_z[ sup_{s≥0} H(Z_s) ] < ∞
  (3.2)  E_z[ inf_{s≥0} H(Z_s) ] > −∞

for all z = (t, x). If the function H is bounded from below, then condition (3.2) is trivially fulfilled. The following two families of functions are crucial in the sequel presentation of the general optimal stopping theory.

Definition 3.1 (Excessive functions). A function H ∈ L(Z) is called excessive for (Z_t) if

  E_z[ H(Z_s) ] ≤ H(z)

for all s ≥ 0 and all z = (t, x).

Definition 3.2 (Superharmonic functions). A function H ∈ L(Z) is called superharmonic for (Z_t) if

  E_z[ H(Z_τ) ] ≤ H(z)

for all stopping times τ for (Z_t) and all z = (t, x).

The basic and useful properties of excessive and superharmonic functions are stated in [29] and [17].
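The superharmonic property of Definition 3.2 can be checked by simulation in a toy case. The sketch below (an illustration added for this review, not part of the original text) uses a one-dimensional Brownian motion as the process and the concave function H(x) = min(x, 1); concave functions of Brownian motion are superharmonic, hence excessive, so the inequality of Definition 3.1 should hold for any fixed time s.

```python
import math, random

# Monte Carlo check of the excessive property (Definition 3.1) for a
# one-dimensional Brownian motion and the concave function H(x) = min(x, 1).
# Concave functions of Brownian motion are superharmonic (hence excessive),
# so E_x[H(B_s)] <= H(x) should hold.  Illustrative sketch only.

def mc_mean_H(x, s, n_paths=20000, seed=1):
    """Estimate E_x[min(B_s, 1)] for a Brownian motion started at x."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        b = x + math.sqrt(s) * rng.gauss(0.0, 1.0)
        total += min(b, 1.0)
    return total / n_paths

x, s = 0.5, 1.0
est = mc_mean_H(x, s)
# H(x) = min(0.5, 1) = 0.5; the exact expectation here is about 0.302.
print(est, "<=", min(x, 1.0))
```

The strict gap between the estimate and H(x) reflects that min(x, 1) is not harmonic for Brownian motion; a linear H would give equality.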
It is clear from the two definitions that a superharmonic function is excessive. Moreover, in some cases the converse also holds, which is not obvious. The result is stated in the next proposition.

Proposition 3.3. Let H ∈ L(Z) satisfy condition (3.2). Then H is excessive for (Z_t) if and only if H is superharmonic for (Z_t).

The above definitions play a definite role in describing the structure of the value function in (2.1). The following definition is important in this direction.
Definition 3.4 (The least superharmonic (excessive) majorant). Let G ∈ L(Z) be finite. A superharmonic (excessive) function H is called a superharmonic (excessive) majorant of G if H ≥ G. A function Ĝ is called the least superharmonic (excessive) majorant of G if
(i) Ĝ is a superharmonic (excessive) majorant of G;
(ii) if H is an arbitrary superharmonic (excessive) majorant of G, then Ĝ ≤ H.

To complete this section, a general iterative procedure is presented for constructing the least superharmonic majorant under condition (3.2).

Proposition 3.5. Let G ∈ L(Z) satisfy condition (3.2) and G < ∞. Define the operator

  Q_j[G](z) = max( G(z), E_z[ G(Z_{2^{-j}}) ] )

and set G_{j,n}(z) = Q_j^n[G](z), where Q_j^n is the n-th power of the operator Q_j. Then the function

  Ĝ(z) = lim_{j→∞} lim_{n→∞} G_{j,n}(z)

is the least superharmonic majorant of G.

There is a simpler iterative procedure for the construction of Ĝ when the Markov process and the gain function are nice.

Corollary 3.6. Let (Z_t) be a Feller process and let G ∈ L(Z) be continuous and bounded from below. Set

  G_j(z) = sup_{t≥0} E_z[ G_{j-1}(Z_t) ]

for j ≥ 1 and G_0 = G. Then

  Ĝ(z) = lim_{j→∞} G_j(z)

is the least superharmonic majorant of G.

Remark 3.7. Proposition 3.5 and Corollary 3.6 are both valid under condition (3.2), and excessive and superharmonic functions coincide in this case, according to Proposition 3.3. When condition (3.2) is violated, the least excessive majorant may differ from the least superharmonic majorant. In that case the least excessive majorant is smaller than the least superharmonic majorant, since there are more excessive functions than superharmonic functions. The construction of the least superharmonic majorant then follows a similar pattern but is generally more complicated (see [29]).

Remark 3.8. The iterative procedures for constructing the least superharmonic majorant are difficult to apply in concrete problems. This makes it necessary to search for explicit or numerical ways of computing the least superharmonic majorant.
4. Characterisation of the value function

The main theorem of the general optimal stopping theory of diffusion processes is contained in the next theorem. The result gives existence and uniqueness of an optimal stopping time in problem (2.1). The result could have been stated in a more general setting, but is stated here with a minimum of technical assumptions. For instance, the theorem also holds for a larger class of Markov processes, such as Lévy processes. For details on this and the main theorem, consult Shiryaev [29].
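Before stating the theorem, it may help to see the iterative construction of Proposition 3.5 / Corollary 3.6 carried out in a discrete toy setting (an illustration added for this review; the choice of a simple symmetric random walk as a stand-in for the diffusion is an assumption). For such a walk, superharmonic functions are exactly the concave ones, so the iteration converges to the least concave majorant of the gain function.

```python
# Discrete analogue of the iterative construction of the least superharmonic
# majorant (cf. Proposition 3.5 / Corollary 3.6).  For a simple symmetric
# random walk on {0, 1, ..., 4} (a stand-in for the diffusion), superharmonic
# functions are exactly the concave ones, so the iteration
#     V <- max(G, P V)
# (P = one-step averaging operator) converges to the least concave majorant
# of G.  Illustrative sketch, not part of the paper.

def least_superharmonic_majorant(G, n_iter=500):
    V = list(G)
    for _ in range(n_iter):
        W = list(V)
        for i in range(1, len(G) - 1):
            # one-step averaging operator of the random walk
            W[i] = max(G[i], 0.5 * (V[i - 1] + V[i + 1]))
        V = W
    return V

G = [0.0, 2.0, 1.0, 0.0, 0.0]
V = least_superharmonic_majorant(G)
# The least concave majorant interpolates (0,0), (1,2), (4,0) linearly,
# giving V = [0, 2, 4/3, 2/3, 0].
print(V)
```

The iteration increases monotonically from G, mirroring the monotone convergence G_j ↑ Ĝ in Corollary 3.6, and the resulting V is the discrete value function of the stopping problem with gain G.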
Theorem 4.1. Consider the optimal stopping problem (2.1), where the gain function G is lower semicontinuous and satisfies either (3.1) or (3.2).

(I) The value function V is the least superharmonic majorant of the gain function G with respect to the process (Z_t)_{t≥0}, that is, V(z) = Ĝ(z) for all z = (t, x).

(II) Define the domain of continued observation

  C = { z ∈ R_+ × R^n : G(z) < V(z) }

and let τ* be the first exit time of (Z_t) from C, that is, τ* = inf{ t > 0 : Z_t ∉ C }. If τ* < ∞ P_z-a.s. for all z, then τ* is an optimal stopping time for problem (2.1), at least when G is continuous and satisfies both (3.1) and (3.2).

(III) If there exists an optimal stopping time σ in problem (2.1), then τ* ≤ σ P_z-a.s. for all z, and τ* is also an optimal stopping time for problem (2.1).

Remark 4.2. Part (II) of the theorem gives the existence of an optimal stopping time. The conditions could have been stated with a little greater generality; again, for more details cf. [29]. Part (III) of the theorem says that if there exists an optimal stopping time σ, then τ* is also an optimal stopping time and is the smallest among all optimal stopping times for problem (2.1). This extreme property characterises the optimal stopping time τ* uniquely.

Remark 4.3. Sometimes it is convenient to consider approximately optimal stopping times. An example arises in the setting of Theorem 4.1(II) if the stopping time τ* does not satisfy τ* < ∞ P_z-a.s. Then the following approximate stopping times are available. For ε > 0 let

  C_ε = { z ∈ R_+ × R^n : G(z) < V(z) − ε }

and let τ_ε be the first exit time of (Z_t) from C_ε, that is, τ_ε = inf{ t > 0 : Z_t ∉ C_ε }. Then τ_ε < ∞ P_z-a.s., and τ_ε is approximately optimal in the following sense:

  lim_{ε↓0} E_z[ G(Z_{τ_ε}) ] = V(z)

for all z = (t, x). Furthermore, τ_ε ↑ τ* as ε ↓ 0.

At first glance, it seems that the initial setting of the optimal stopping problem (2.1) and Theorem 4.1 only cover the cases where the gain process is a function of time and the state of the process (X_t).
But the next two examples illustrate that Theorem 4.1 also covers some cases where the gain process contains a path-dependent functional of (X_t); it is then a matter of properly defining (Z_t). For simplicity, let n = 1 in the examples below and assume, moreover, that (X_t) solves the stochastic differential equation

  dX_t = μ(X_t) dt + σ(X_t) dB_t

where (B_t) is a standard Brownian motion.

Example 4.4 (Optimal stopping problems involving an integral). Let F : R_+ × R → R and c : R → R_+ be continuous functions. Consider the optimal stopping problem

  (4.1)  W(t, x) = sup_τ E_x[ F(t + τ, X_τ) − ∫_0^τ c(X_u) du ].

The integral term might be interpreted as an accumulated cost. This problem can be reformulated to fit the setting of problem (2.1) and Theorem 4.1 by the following simple observations.
Set A_t = ∫_0^t c(X_u) du and let Z_t = (t, X_t, A_t). Thus (Z_t) is a diffusion process in R³ associated with the infinitesimal generator

  L_Z = ∂/∂t + L_X + c(x) ∂/∂a

for z = (t, x, a). Let G(z) = F(t, x) − a be a gain function and consider the new optimal stopping problem

  V(z) = sup_τ E_z[ G(Z_τ) ].

This problem fits into the setting of Theorem 4.1 and it is clear that W(t, x) = V(t, x, 0). Note that the gain function G is linear in a.

Another approach is to reduce problem (4.1) to the setting of the initial problem (2.1) by Itô's formula. Assume that the function x ↦ D(x) is smooth and satisfies L_X D(x) = c(x). Itô's formula yields that

  D(X_t) = D(x) + ∫_0^t L_X D(X_u) du + M_t

where M_t = ∫_0^t D'(X_u) σ(X_u) dB_u is a continuous local martingale. Optional sampling implies that E_x[M_τ] = 0 (by localization and some uniform integrability conditions) and hence

  E_x[ D(X_τ) ] = D(x) + E_x[ ∫_0^τ c(X_u) du ].

Therefore, problem (4.1) is equivalent to solving the initial problem (2.1) with the gain function G(t, x) = F(t, x) − D(x).

Example 4.5 (Optimal stopping problems for the maximum process). Peskir [23] made the following observation. Denote by S_t = max_{u≤t} X_u the maximum process associated with (X_t). It can be verified that the two-dimensional process Z_t = (X_t, S_t), with state space { (x, s) ∈ R² : x ≤ s } (see Figure 1), is a continuous Markov process associated with the infinitesimal generator

  L_Z = L_X for x < s,  with ∂/∂s = 0 at x = s

with L_X given in Section 2. Hence the optimal stopping problem

  V(x, s) = sup_τ E_{x,s}[ G(X_τ, S_τ) ]

for x ≤ s fits in the setting of Theorem 4.1.

5. The free-boundary problem and the principle of smooth fit

For solving a specific optimal stopping problem, the superharmonic characterisation is not easy to apply. To carry out explicit computations of the value function, another methodology is therefore needed. This section considers the optimal stopping problem as a free-boundary (Stefan) problem. This is also important for computing the value function from a numerical point of view.
First, the notion of the characteristic generator (see [17]) is introduced; it is an extension of the infinitesimal generator. Let (Z_t) be the diffusion process given in Section 2. For any open set U ⊆ R_+ × R^n, associate with it

  τ_U = inf{ t > 0 : Z_t ∉ U },

the first exit time of (Z_t) from U.
Figure 1. A simulation of a path of the two-dimensional process (B_t(ω), max_{u≤t} B_u(ω)), where (B_t) is a Brownian motion.

Definition 5.1 (Characteristic generator). The characteristic generator A_Z of (Z_t) is defined by

  A_Z f(z) = lim_{U↓z} ( E_z[ f(Z_{τ_U}) ] − f(z) ) / E_z[ τ_U ]

where the limit is to be understood in the following sense: the open sets U_j decrease to the point z, that is, U_{j+1} ⊆ U_j and ∩_{j≥1} U_j = {z}. If E_z[τ_U] = ∞ for all open sets U ∋ z, then set A_Z f(z) = 0. Let D(A_Z) be the family of Borel functions f for which the limit exists.

Remark 5.2. As already mentioned above, the characteristic generator is an extension of the infinitesimal generator in the sense that D(L_Z) ⊆ D(A_Z) and L_Z f = A_Z f for any f ∈ D(L_Z).

Assume in the sequel that the value function V in (2.1) is finite. Let

  C = { z ∈ R_+ × R^n : V(z) > G(z) }

be the domain of continued observation (see Theorem 4.1). Then the following result gives an equation for the value function in the domain of continued observation.

Theorem 5.3. Let the gain function G be continuous and satisfy both conditions (3.1) and (3.2). Then the value function V, for z ∈ C, belongs to D(A_Z) and solves the equation

  (5.1)  A_Z V(z) = 0 for z ∈ C.

Remark 5.4. Since the gain function G is continuous and the value function V is lower semicontinuous, the domain of continued observation C is an open set in R_+ × R^n. If τ_C < ∞ P_z-a.s., then it follows from Theorem 4.1 that V(z) = E_z[ G(Z_{τ_C}) ]. General Markov process theory then yields that the value function solves equation (5.1), and Theorem 5.3 follows directly. In other words, one is led to formulate equation (5.1). If the value function is C² in the domain of continued observation, the characteristic generator can be replaced by the infinitesimal generator, according to Remark 5.2. This has the advantage that the infinitesimal generator is explicitly given.
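The quotient in Definition 5.1 can be evaluated in closed form in the simplest one-dimensional case, and this makes Remark 5.2 concrete. For a standard Brownian motion (an assumption made here for illustration) and the symmetric interval U = (x−h, x+h), one has E_x[f(B_{τ_U})] = (f(x−h)+f(x+h))/2 by symmetry and E_x[τ_U] = h², so the quotient tends to (1/2)f''(x) = L_X f(x) as h ↓ 0.

```python
import math

# Numerical illustration of Definition 5.1 for a one-dimensional Brownian
# motion (chosen for concreteness; the paper treats general diffusions).
# For the exit time tau_U of the interval U = (x-h, x+h):
#   E_x f(B_{tau_U}) = (f(x-h) + f(x+h)) / 2   (by symmetry)
#   E_x tau_U        = h**2
# so the quotient in Definition 5.1 tends to (1/2) f''(x) = L_X f(x),
# consistent with Remark 5.2 (A_Z extends L_Z).

def characteristic_quotient(f, x, h):
    expected_f_at_exit = 0.5 * (f(x - h) + f(x + h))
    expected_exit_time = h * h
    return (expected_f_at_exit - f(x)) / expected_exit_time

f = math.sin
x = 0.3
for h in (0.1, 0.01, 0.001):
    print(h, characteristic_quotient(f, x, h))
# As h -> 0 the quotient approaches 0.5 * f''(x) = -0.5*sin(0.3) ~ -0.1478
```

The same computation with an asymmetric interval would change the exit distribution and exit time but not the limit, which is the content of the shrinking-neighbourhood definition.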
Equation (5.1) is referred to as a free-boundary problem: the domain of continued observation C is not known a priori but must be found along with the unknown value function V. Usually a free-boundary problem has many solutions, and further conditions that the value function V satisfies must be added (e.g. the principle of smooth fit). These additional conditions are not always enough to determine V. In that case, one must either guess or find more sophisticated conditions (e.g. the maximality principle; see Example 6.1 in the next section). The famous principle of smooth fit is one of the most frequently used non-trivial boundary conditions in optimal stopping. The principle is often applied in the literature (see, among others, McKean [14], Jacka [12] and Dubins, Shepp & Shiryaev [4]).

The principle of smooth fit. If the gain function G is smooth, then a non-trivial boundary condition for the free-boundary problem might be the following: for i = 1, ..., n,

  ∂V/∂t (z) |_{z ∈ ∂C} = ∂G/∂t (z) |_{z ∈ ∂C}  and  ∂V/∂x_i (z) |_{z ∈ ∂C} = ∂G/∂x_i (z) |_{z ∈ ∂C}.

A result in Shiryaev [29] states that the principle of smooth fit holds under fairly general assumptions. The principle of smooth fit is a very fine condition in the sense that the value function is often precisely C¹ at the boundary of the domain of continued observation. This is demonstrated in the examples in the next section.

The above results can be used to formulate the following method for solving a particular stopping problem.

A recipe for solving optimal stopping problems.
Step 1. First one tries to guess the nature of the optimal stopping boundary and then, by using ad hoc arguments, to formulate a free-boundary problem with the infinitesimal generator and some boundary conditions. The boundary conditions can be trivial ones (e.g. the value function is continuous, odd/even, normal reflection, etc.) or non-trivial, such as the principle of smooth fit and the maximality principle.
Step 2.
One solves the formulated free-boundary system and maximises over the family of solutions if there is no unique solution.
Step 3. Finally, one must verify that the guessed candidates for the value function and the optimal stopping time are indeed correct (e.g. by using Itô's formula).

The methodology has been used in, among others, Dubins, Shepp & Shiryaev [4], Graversen & Peskir [8], Pedersen [18] and Shepp & Shiryaev [26]. It is generally difficult to find the appropriate solution of the (partial) differential equation L_Z V(z) = 0. It is therefore of great interest to formulate the free-boundary problem so that the dimension of the problem is as small as possible. The two examples below present cases where the dimension can be reduced. For simplicity, let n = 1 and assume, moreover, that (X_t) solves the stochastic differential equation

  dX_t = μ(X_t) dt + σ(X_t) dB_t

where (B_t) is a standard Brownian motion.

Example 5.5 (Integral and discounted problems). General Markov process theory states that the free-boundary problem is one-dimensional in some special cases.
1. Let F : R → R and c : R → R_+ be continuous functions, and let the gain function be given by G(x, a) = F(x) − a, which is linear in a (see Example 4.4). Let Z_t = (X_t, A_t), where A_t = ∫_0^t c(X_u) du, and consider the two-dimensional optimal stopping problem

  V(x) = sup_τ E_x[ F(X_τ) − ∫_0^τ c(X_u) du ].

At first glance this seems to be a two-dimensional problem, but Markov process theory yields that the free-boundary problem can be formulated as

  L_X V(x) = c(x)

for x in the domain of continued observation, which is also clear from the last part of Example 4.4. This is a one-dimensional problem.

2. Consider the gain function G(t, x) = e^{−λt} F(x), where λ > 0 is a constant. Let Z_t = (t, X_t) and consider the two-dimensional optimal stopping problem

  V(x) = sup_τ E_x[ e^{−λτ} F(X_τ) ].

In this case, the free-boundary problem can be formulated as

  L_X V(x) = λ V(x)

for x in the domain of continued observation. Again, this is a one-dimensional problem.

Example 5.6 (Deterministic time-change method). This example uses a deterministic time-change to reduce the problem. The method is described in Pedersen & Peskir [22]. Consider the optimal stopping problem

  V(t, x) = sup_τ E_x[ α(t + τ) X_τ ]

where α is a smooth non-linear function. Thus, the value function V might solve the following partial differential equation:

  ∂V/∂t (t, x) + L_X V(t, x) = 0

for (t, x) in the domain of continued observation. The time-change method transforms the original problem into a new optimal stopping problem, such that the new value function solves an ordinary differential equation. The problem is to find a deterministic time-change t ↦ σ_t which satisfies the following two conditions:
(i) t ↦ σ_t is continuous and strictly increasing.
(ii) There exists a one-dimensional time-homogeneous diffusion (Y_t) with infinitesimal generator L_Y such that α(σ_t) X_{σ_t} = e^{−λt} Y_t for some λ ∈ R.
Condition (i) ensures that τ is a stopping time for (Y_t) if and only if σ_τ is a stopping time for (X_t).
Substituting (ii) into the problem, the new (time-changed) value function becomes

  W(y) = sup_τ E_y[ e^{−λτ} Y_τ ].

As in Example 5.5, the new value function might solve the ordinary differential equation

  L_Y W(y) = λ W(y)
in the domain of continued observation. Given the diffusion (X_t), the crucial point is to find the process (Y_t) and the time-change σ_t fulfilling the two conditions above. By Itô calculus it can be shown that the time-change given by

  σ_t = inf{ r > 0 : ∫_0^r ρ(u) du > t },

where ρ(·) is chosen so that the two terms

  ( (α'(t)/α(t)) y + α(t) μ(y/α(t)) ) / ρ(t)  and  α(t)² σ²(y/α(t)) / ρ(t)

do not depend on t, will fulfil the above two conditions. This clearly imposes the following conditions on α(·) for the method to be applicable:

  μ(y/α(t)) = γ(t) G_1(y)  and  σ²(y/α(t)) = (γ(t)/α(t)) G_2(y)

where γ(t), G_1(y) and G_2(y) are functions required to exist. For more information and the remaining details of this method see [22] (see also [1]).

6. Examples and applications

This section presents the solutions of three examples of stopping problems, which illustrate the method established in the previous section, together with some applications. The focus is on optimal stopping problems for the maximum process associated with a one-dimensional diffusion. Let n = 1. Assume that (X_t) is a non-singular diffusion with state space R, that is, σ(x) > 0 for all x, and that (X_t) solves the stochastic differential equation

  dX_t = μ(X_t) dt + σ(X_t) dB_t

where (B_t) is a standard Brownian motion. The infinitesimal generator of (X_t) is given by

  (6.1)  L_X = μ(x) ∂/∂x + (1/2) σ²(x) ∂²/∂x².

Let S_t = ( max_{u≤t} X_u ) ∨ s denote the maximum process associated with (X_t), started at s ≥ x under P_{x,s}. The scale function and speed measure of (X_t) are given by

  S(x) = ∫^x exp( −2 ∫^u ( μ(r)/σ²(r) ) dr ) du  and  m(dx) = ( 2 / ( S'(x) σ²(x) ) ) dx

for x ∈ R. The first example is important from the point of view of general optimal stopping theory.

Example 6.1 (The maximality principle). The results of this example are found in Peskir [23]. Let x ↦ c(x) > 0 be a continuous (cost) function. Consider the optimal stopping problem with the value function

  (6.2)  V(x, s) = sup_τ E_{x,s}[ S_τ − ∫_0^τ c(X_u) du ]

where the supremum is taken over all stopping times τ for (X_t) satisfying

  (6.3)  E_{x,s}[ ∫_0^τ c(X_u) du ] < ∞

for all x ≤ s.
The recipe from the previous section is applied to solve the problem.
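A quick simulation of the driving pair (X_t, S_t) from Figure 1 helps to visualise the problem. The sketch below (an illustration added for this review) takes X to be a standard Brownian motion, a choice made purely for concreteness, and checks the known value E[S_1] = sqrt(2/π) for the running maximum of Brownian motion started at 0.

```python
import math, random

# Simulation of the pair (X_t, S_t) of Figure 1 with X a standard Brownian
# motion (an assumption for illustration).  The maximum S increases only
# when X is at its maximum, i.e. on the diagonal x = s, as described in the
# text.  Sanity check: E[S_1] = sqrt(2/pi) ~ 0.7979 for Brownian motion
# started at 0 (the Euler discretisation biases the estimate slightly low).

def simulate_max(T=1.0, n_steps=400, n_paths=4000, seed=7):
    rng = random.Random(seed)
    dt = T / n_steps
    sqdt = math.sqrt(dt)
    total_S = 0.0
    for _ in range(n_paths):
        x = 0.0
        s = 0.0
        for _ in range(n_steps):
            x += sqdt * rng.gauss(0.0, 1.0)
            if x > s:          # S moves only on the diagonal x = s
                s = x
        total_S += s
    return total_S / n_paths

est = simulate_max()
print(est, "vs", math.sqrt(2.0 / math.pi))
```

The fact that S only moves on the diagonal is exactly the observation used in item 1 below to argue that the process (X_t, S_t) is Markov and that the interesting behaviour happens off the diagonal.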
1. The process (X_t, S_t), with state space { (x, s) ∈ R² : x ≤ s }, changes only in the second coordinate when it hits the diagonal x = s in R² (see Figure 1). It can be shown that it is not optimal to stop on the diagonal. Due to the positive cost function c(·), the optimal stopping boundary should be a function which stays below the diagonal. Thus, the stopping time should be of the form

  τ* = inf{ t > 0 : X_t ≤ g*(S_t) }

for some function s ↦ g*(s) < s to be found. In other words, the domain of continued observation is of the form

  C = { (x, s) ∈ R² : g*(s) < x ≤ s }.

It is now natural to formulate the following free-boundary problem, of which the value function and the optimal stopping boundary should be a solution:

  (6.4)  L_X V(x, s) = c(x)  for g(s) < x < s with s fixed
  (6.5)  ∂V/∂s (x, s) |_{x=s−} = 0  (normal reflection)
  (6.6)  V(x, s) |_{x=g(s)+} = s  (instantaneous stopping)
  (6.7)  ∂V/∂x (x, s) |_{x=g(s)+} = 0  (smooth fit).

Note that (6.4) and (6.5) follow from Example 4.5 and Example 5.5. Condition (6.6) is clear, and since the setting is smooth, the principle of smooth fit should be satisfied, that is, condition (6.7) holds. (The theorem below shows that the guessed system is indeed correct.)

2. Define the function

  (6.8)  V_g(x, s) = s + ∫_{g(s)}^x ( S(x) − S(u) ) c(u) m(du)

for g(s) ≤ x ≤ s, and set V_g(x, s) = s for x ≤ g(s). Further, consider the first-order non-linear differential equation

  (6.9)  g'(s) = σ²(g(s)) S'(g(s)) / ( 2 c(g(s)) ( S(s) − S(g(s)) ) ).

For a solution s ↦ g(s) < s of equation (6.9), the corresponding function V_g(x, s) in (6.8) solves the free-boundary problem in the region g(s) < x < s. The problem now is to choose the right optimal stopping boundary s ↦ g*(s) < s. To do this, a new principle is needed, and it will be the maximality principle. The main observations in [23] are the following.
(i) g ↦ V_g(x, s) is increasing.
(ii) The function (x, s, a) ↦ V_g(x, s) − a is superharmonic for the Markov process Z_t = (X_t, S_t, A_t) (for stopping times satisfying (6.3)), where A_t = ∫_0^t c(X_u) du + a.
The superharmonic characterisation of the value function in Theorem 4.1 and the above two observations lead to the formulation of the following principle for determining the optimal stopping boundary.

The maximality principle. The optimal stopping boundary s ↦ g*(s) for problem (6.2) is the maximal solution of the differential equation (6.9) which stays strictly below the diagonal in R² (simply called the maximal solution in the sequel).

3. In [23] it was proved that this principle is equivalent to the superharmonic characterisation of the value function. The result is formulated in the next theorem and is motivated by Theorem 4.1.
Theorem 6.2. Consider the optimal stopping problem (6.2).

(I) Let s ↦ g*(s) denote the maximal solution of (6.9) which stays below the diagonal in R². Then the value function is given by

  V(x, s) = s + ∫_{g*(s)}^x ( S(x) − S(u) ) c(u) m(du)  for g*(s) < x ≤ s,
  V(x, s) = s  for x ≤ g*(s).

(II) The stopping time τ* = inf{ t > 0 : X_t ≤ g*(S_t) } is optimal whenever it satisfies condition (6.3).

(III) If there exists an optimal stopping time σ in (6.2) satisfying (6.3), then τ* ≤ σ P_{x,s}-a.s. for all (x, s), and τ* is also an optimal stopping time.

(IV) If there is no maximal solution of (6.9) which stays strictly below the diagonal in R², then V(x, s) = ∞ for all (x, s) and there is no optimal stopping time.

For more information and details see [23]. A similar approach was used in Pedersen & Peskir [21] to compute expectations of Azéma-Yor stopping times. The theorem extends to diffusions with other state spaces in R. The non-negative diffusion version of the theorem is particularly useful for deriving sharp maximal inequalities, and it will be applied in the next example. Peskir [23] conjectured that the maximality principle holds for the discounted version of problem (6.2). In Shepp & Shiryaev [26] and Pedersen [19] the principle is shown to hold in specific cases. A technical difficulty arises in verifying the conjecture because the corresponding free-boundary problem may have no simple solution, and the (optimal) boundary function is thus only implicitly defined.

Example 6.3 (Doob's inequality for Brownian motion). This example is an application of the previous example (see also Graversen & Peskir [7]). Consider the optimal stopping problem (6.2) with X_t = |B_t + x|^p and c(x) = c x^{(p−2)/p} for p > 1. Then (X_t) is a non-negative diffusion having 0 as an instantaneously reflecting boundary point, and the infinitesimal generator of (X_t) in (0, ∞) is given by (6.1) with

  μ(x) = (1/2) p (p−1) x^{1−2/p}  and  σ²(x) = p² x^{2−2/p}.
If $c > \tfrac{1}{2}\,p^{p+1}/(p-1)^{p-1}$, it follows from Theorem 6.2 that the value function is given by

$V(x,s) = s - \dfrac{2c}{p-1}\,g_*(s)^{1-1/p}\,x^{1/p} + \dfrac{2c}{p}\,g_*(s) + \dfrac{2c}{p(p-1)}\,x$

for $g_*(s) < x \le s$, where $s \mapsto g_*(s) < s$ is the maximal solution of the differential equation

(6.10) $g'(s) = \dfrac{p\,g(s)^{1/p}}{2c\,\big( s^{1/p} - g(s)^{1/p} \big)}$.

The maximal solution (see Figure 2) can be found to be $g_*(s) = \alpha_c\,s$, where $0 < \alpha_c < 1$ is the greater root of the equation (the maximality principle)

$\alpha - \alpha^{1-1/p} + \dfrac{p}{2c} = 0$.

The equation admits two roots if and only if $c > \tfrac{1}{2}\,p^{p+1}/(p-1)^{p-1}$. Further, the stopping time

$\tau(c) = \inf\{\, t > 0 : X_t \le \alpha_c S_t \,\}$

satisfies $E_{x,s}\big( \tau(c)^{p/2} \big) < \infty$ if and only if $c > \tfrac{1}{2}\,p^{p+1}/(p-1)^{p-1}$. By an extended version of Theorem 6.2 for non-negative diffusions and an observation in Example 5.5, it follows from the definition of the value function that for $c > \tfrac{1}{2}\,p^{p+1}/(p-1)^{p-1}$

$E_x\Big( \max_{t \le \tau} |B_t|^p \Big) \le \dfrac{2c}{p(p-1)}\,E_x\big( |B_\tau|^p \big) + V(x^p, x^p) - \dfrac{2c}{p(p-1)}\,x^p$
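Both the root $\alpha_c$ and the maximality property can be checked numerically. The sketch below (illustrative parameters $p = 2$, $c = 5$, my own choice) finds the greater root of $\alpha - \alpha^{1-1/p} + p/(2c) = 0$ and then integrates (6.10) in the scale-invariant variable $h = g(s)/s$, $\tau = \log s$, where it reads $dh/d\tau = G(h) - h$: solutions started above the ray $g = \alpha_c s$ run into the diagonal, while solutions started below fall away from it, so the ray is indeed the maximal solution staying strictly below the diagonal.

```python
from scipy.optimize import brentq

p, c = 2.0, 5.0

f = lambda a: a - a**(1 - 1/p) + p / (2*c)
a_min = ((p - 1) / p)**p              # minimiser of f; two roots iff f(a_min) < 0
alpha_c = brentq(f, a_min, 1.0)       # greater root, since f(a_min) < 0 < f(1)

def G(h):
    # equation (6.10) in the variable h = g/s
    return p * h**(1/p) / (2*c * (1 - h**(1/p)))

def evolve(h0, tau_max=20.0, dtau=1e-3):
    """Euler steps of dh/dtau = G(h) - h; reports whether the diagonal is hit."""
    h, tau = h0, 0.0
    while tau < tau_max:
        h += (G(h) - h) * dtau
        tau += dtau
        if h >= 0.999:                # (numerically) at the diagonal g = s
            return True, h
    return False, h

hit_above, _ = evolve(alpha_c + 0.05)     # started above the maximal ray
hit_below, h_end = evolve(alpha_c - 0.05) # started below it
print(round(alpha_c, 4), hit_above, hit_below, round(h_end, 3))
```

For these parameters $\alpha_c \approx 0.5236$; at the critical value $c = \tfrac{1}{2}\,p^{p+1}/(p-1)^{p-1} = 4$ the two roots merge at $((p-1)/p)^p = 1/4$.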
for all stopping times $\tau$ for $(B_t)$ satisfying $E(\tau^{p/2}) < \infty$. Letting $c \downarrow \tfrac{1}{2}\,p^{p+1}/(p-1)^{p-1}$, Doob's inequality follows.

Figure 2. A computer drawing of solutions of the differential equation (6.10). The bold line is the maximal solution, which stays below and never hits the diagonal in $\mathbf{R}^2$. By the maximality principle, this solution equals $g_*$.

Theorem 6.4. Let $(B_t)$ be a standard Brownian motion started at $x \ge 0$ under $P_x$, let $p > 1$ be given and fixed, and let $\tau$ be any stopping time for $(B_t)$ such that $E_x(\tau^{p/2}) < \infty$. Then the following inequality is sharp:

$E_x\Big( \max_{t \le \tau} |B_t|^p \Big) \le \Big( \dfrac{p}{p-1} \Big)^{p} E_x\big( |B_\tau|^p \big) - \dfrac{p}{p-1}\,x^p .$

The constants $(p/(p-1))^p$ and $p/(p-1)$ are the best possible, and equality is attained through the stopping times $\tau(c) = \inf\{\, t > 0 : X_t \le \alpha_c S_t \,\}$ as $c \downarrow \tfrac{1}{2}\,p^{p+1}/(p-1)^{p-1}$.

For details see Graversen & Peskir [7]. The results are extended to Bessel processes in Dubins, Shepp & Shiryaev [4] and Pedersen [18].

Example 6.5 (Russian option). This is an example of pricing an American option with infinite time horizon in the framework of the standard Black-Scholes model. The option under consideration is the Russian option (see Shepp & Shiryaev [26]). If $(X_t)$ is the price process of a stock, then the payment function of the Russian option is given by $f_t = \max_{u \le t} X_u$, where the expiration time is infinite. Thus, it is a perpetual Lookback option (see [2]). Assume a standard Black-Scholes model with a dividend-paying stock; under the equivalent martingale measure the price process is then the geometric Brownian motion

$dX_t = (r - \lambda)\,X_t\,dt + \sigma X_t\,dB_t$

with $\lambda > 0$ the dividend yield, $r > 0$ the interest rate and $\sigma > 0$ the volatility. The infinitesimal generator of $(X_t)$ on $(0,\infty)$ is given in (6.1) with $\mu(x) = (r - \lambda)x$ and $\sigma(x) = \sigma x$.
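The passage from the bound of Example 6.3 to the constants of Theorem 6.4 can be verified numerically: at the critical value $c = \tfrac{1}{2}\,p^{p+1}/(p-1)^{p-1}$ the coefficient $2c/(p(p-1))$ equals $(p/(p-1))^p$, the two roots of the $\alpha$-equation merge at $\alpha = ((p-1)/p)^p$, and the constant $V(x^p, x^p) - (2c/(p(p-1)))\,x^p$ collapses to $-\,(p/(p-1))\,x^p$. A minimal sketch (my own check, with arbitrary test values of $p$ and $x$):

```python
def check_limit_constants(p, x):
    c = 0.5 * p**(p + 1) / (p - 1)**(p - 1)   # critical value of c
    alpha = ((p - 1) / p)**p                   # double root of the alpha-equation
    s0 = x**p                                  # diffusion starts at X_0 = S_0 = x^p
    # V(s0, s0) from Example 6.3 with g_*(s) = alpha * s:
    V = (s0 - 2*c/(p - 1) * (alpha*s0)**(1 - 1/p) * s0**(1/p)
            + 2*c/p * (alpha*s0) + 2*c/(p*(p - 1)) * s0)
    lead = 2*c / (p*(p - 1))                   # coefficient of E|B_tau|^p
    tail = V - lead * x**p                     # remaining constant term
    # both residuals should vanish at the critical c
    return lead - (p/(p - 1))**p, tail + (p/(p - 1)) * x**p

d1, d2 = check_limit_constants(2.0, 1.3)
d3, d4 = check_limit_constants(3.0, 0.7)
print(d1, d2, d3, d4)
```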
Under these assumptions, according to the general pricing theory the fair price of the Russian option is the value of the optimal stopping problem

$C(x,s) = \sup_{\tau} E_{x,s}\big( e^{-r\tau} f_\tau \big) = \sup_{\tau} E_{x,s}\big( e^{-r\tau} S_\tau \big)$

where the supremum is taken over all stopping times $\tau$ for $(X_t)$. To solve this problem, the idea is to apply Example 5.5 and the maximality principle for this discounted optimal stopping problem. The recipe from the previous section is applied to solve the problem.

1. As in Example 6.1, and using an observation in Example 5.5, it is natural to formulate the following free-boundary problem, of which the value function and the optimal stopping boundary are a solution:

$L_X C(x,s) = r\,C(x,s)$ for $g(s) < x < s$
$C_s(x,s)\big|_{x=s-} = 0$ (normal reflection)
$C(x,s)\big|_{x=g(s)+} = s$ (instantaneous stopping)
$C_x(x,s)\big|_{x=g(s)+} = 0$ (smooth fit).

Since the setting is smooth, the principle of smooth fit should be satisfied. The theorem below shows that this system is indeed correct.

2. Let $\gamma_1 < 0$ and $\gamma_2 > 1$ be the two roots of the quadratic equation

$\tfrac{1}{2}\sigma^2 \gamma^2 + \big( r - \lambda - \tfrac{1}{2}\sigma^2 \big)\gamma - r = 0$

and set

$\beta_* = \Big( \dfrac{1 - 1/\gamma_2}{1 - 1/\gamma_1} \Big)^{1/(\gamma_2 - \gamma_1)}$

so that $0 < \beta_* < 1$. The solutions to the free-boundary problem are

$C(x,s) = \dfrac{s}{\gamma_2 - \gamma_1} \Big[ \gamma_2 \Big( \dfrac{x}{g(s)} \Big)^{\gamma_1} - \gamma_1 \Big( \dfrac{x}{g(s)} \Big)^{\gamma_2} \Big]$

where $s \mapsto g(s)$ satisfies the nonlinear differential equation

$g'(s) = \dfrac{g(s)}{\gamma_1 \gamma_2\, s} \cdot \dfrac{\gamma_2 \big( s/g(s) \big)^{\gamma_1} - \gamma_1 \big( s/g(s) \big)^{\gamma_2}}{\big( s/g(s) \big)^{\gamma_1} - \big( s/g(s) \big)^{\gamma_2}} .$

The maximality principle says that the maximal solution of the differential equation is the optimal stopping boundary. It can be shown that $g_*(s) = \beta_* s$ is this maximal solution.

3. The standard procedure of applying Itô's formula, Fatou's lemma etc. can be used to verify that the candidates above are indeed correct. The result on the fair price of the Russian option is stated below.

Theorem 6.6. The fair price of the Russian option is given by

$C(x,s) = \dfrac{s}{\gamma_2 - \gamma_1} \Big[ \gamma_2 \Big( \dfrac{x}{\beta_* s} \Big)^{\gamma_1} - \gamma_1 \Big( \dfrac{x}{\beta_* s} \Big)^{\gamma_2} \Big]$

and the optimal stopping time is given by $\tau_* = \inf\{\, t > 0 : X_t \le \beta_* S_t \,\}$.
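The closed-form objects in Theorem 6.6 are easy to evaluate. The sketch below (illustrative parameters $r = 0.05$, $\lambda = 0.02$, $\sigma = 0.3$, not from the text) computes $\gamma_1$, $\gamma_2$ and $\beta_*$ and checks the three boundary conditions of the free-boundary system by finite differences:

```python
from math import sqrt

r, lam, sig = 0.05, 0.02, 0.3    # illustrative model parameters

# roots gamma_1 < 0 and gamma_2 > 1 of (sigma^2/2) g^2 + (r - lam - sigma^2/2) g - r = 0
a, b, cc = sig**2 / 2, r - lam - sig**2 / 2, -r
disc = sqrt(b*b - 4*a*cc)
g1, g2 = (-b - disc) / (2*a), (-b + disc) / (2*a)

beta = ((1 - 1/g2) / (1 - 1/g1)) ** (1 / (g2 - g1))

def C(x, s):
    """Candidate value function with boundary g(s) = beta * s."""
    return s / (g2 - g1) * (g2 * (x/(beta*s))**g1 - g1 * (x/(beta*s))**g2)

s0, h = 1.0, 1e-6
stop_val = C(beta*s0, s0)                                     # instantaneous stopping: = s0
smooth = (C(beta*s0 + h, s0) - C(beta*s0 - h, s0)) / (2*h)    # smooth fit: = 0
reflect = (C(s0, s0 + h) - C(s0, s0 - h)) / (2*h)             # normal reflection: = 0
print(round(g1, 4), round(g2, 4), round(beta, 4))
```

With these parameters the stopping boundary is roughly a third of the running maximum ($\beta_* \approx 0.32$), illustrating how a positive dividend yield makes early exercise optimal.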
The fair price of the Russian option was calculated by Shepp & Shiryaev [26], which also should be consulted for more information and details. The result is extended in Pedersen [19] to Lookback options with fixed and floating strike.

Example 6.7 (Optimal prediction of the ultimate maximum of Brownian motion). This example presents solutions to the problem of stopping a Brownian path as close as possible to the unknown ultimate maximum of the path. The closeness is first measured by a mean-square distance and next by a probability distance. The optimal stopping strategies can also be viewed as selling strategies for stock trading in the idealised Bachelier model. These problems do not fall under the general optimal stopping theory, since the gain process is not adapted to the natural filtration of the process. In this example the diffusion is $X_t = B_t$. Let

$\Phi(x) = \int_{-\infty}^{x} \dfrac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du$ for $x \in \mathbf{R}$

denote the distribution function of a standard normal variable. Let $\mathcal{S}_T$ be the family of all stopping times $\tau$ for $(B_t)$ satisfying $0 \le \tau \le T$.

Mean-square distance. This problem was formulated and solved by Graversen, Peskir & Shiryaev [10], and in Pedersen [20] the problem is solved for all $L^q$-distances. Consider the optimal stopping problem with value function

(6.11) $V = \inf_{\tau \in \mathcal{S}_1} E\big( (S_1 - B_\tau)^2 \big)$

where $S_t = \max_{u \le t} B_u$. The idea is to transform problem (6.11) into an equivalent problem that can be solved by the recipe presented in the previous section. To follow this plan, note that $S_1$ is square integrable; then, in accordance with the Itô-Clark representation formula,

$S_1 = E(S_1) + \int_0^1 H_u\, dB_u$

where $(H_t)$ is a unique adapted process satisfying $E\big( \int_0^1 H_u^2\, du \big) < \infty$. Furthermore, it is known that

$H_t = 2\Big( 1 - \Phi\Big( \dfrac{S_t - B_t}{\sqrt{1-t}} \Big) \Big) .$

If $(M_t)$ denotes the square integrable martingale $M_t = \int_0^t H_u\, dB_u$, then martingale theory gives that

$E\big( (S_1 - B_\tau)^2 \big) = E\Big( \int_0^\tau (1 - 2H_u)\, du \Big) + 1$

for all $\tau \in \mathcal{S}_1$. Problem (6.11) can therefore be represented as

$V = \inf_{\tau \in \mathcal{S}_1} E\Big( \int_0^\tau c\Big( \dfrac{S_u - B_u}{\sqrt{1-u}} \Big)\, du \Big) + 1$

where $c(x) = 4\Phi(x) - 3$.
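The identity $E\big((S_1 - B_\tau)^2\big) = E\big(\int_0^\tau (1 - 2H_u)\,du\big) + 1$ can be sanity-checked by simulation. The sketch below (my own illustration, not from the paper) uses the test stopping time $\tau = 1$, for which both sides should be close to $1$; tolerances are loose because of discretisation and sampling error.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, m = 500, 4000                       # time steps on [0,1], number of paths
dt = 1.0 / n

dB = rng.normal(0.0, np.sqrt(dt), size=(m, n))
B = np.cumsum(dB, axis=1)              # B at times dt, 2dt, ..., 1
S = np.maximum.accumulate(B, axis=1)   # running maximum

# path values at the left grid points 0, dt, ..., 1 - dt (with B_0 = S_0 = 0)
B_left = np.hstack([np.zeros((m, 1)), B[:, :-1]])
S_left = np.hstack([np.zeros((m, 1)), S[:, :-1]])
t_left = np.arange(n) * dt

z = (S_left - B_left) / np.sqrt(1.0 - t_left)
c = 4.0 * norm.cdf(z) - 3.0            # c(x) = 4 Phi(x) - 3 = 1 - 2 H
integral = c.sum(axis=1) * dt          # left Riemann sum of int_0^1 (1 - 2H) du

lhs = np.mean((S[:, -1] - B[:, -1])**2)   # E (S_1 - B_tau)^2 with tau = 1
rhs = np.mean(integral) + 1.0
print(round(lhs, 3), round(rhs, 3))
```

Note that for $\tau = 1$ the exact value is $E\big((S_1 - B_1)^2\big) = E(|B_1|^2) = 1$ by Lévy's theorem, which is also what the representation predicts, since then $E\int_0^1 (1-2H_u)\,du = 0$.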
By Lévy's theorem and general optimal stopping theory, problem (6.11) is equivalent to

$V = \inf_{\tau \in \mathcal{S}_1} E\Big( \int_0^\tau c\Big( \dfrac{|B_u|}{\sqrt{1-u}} \Big)\, du \Big) + 1 .$

The form of the gain function indicates that the deterministic time-change method introduced in Example 5.6 can be applied successfully. Let $\sigma_t = 1 - e^{-2t}$ be the time-change and let $(Z_t)_{t \ge 0}$
be the time-changed process given by $Z_t = B_{\sigma_t}/\sqrt{1 - \sigma_t}$. It can be shown by Itô's formula that $(Z_t)$ solves the stochastic differential equation

$dZ_t = Z_t\, dt + \sqrt{2}\, d\beta_t$

where $(\beta_t)_{t \ge 0}$ is a Brownian motion. Hence $(Z_t)$ is a diffusion with the infinitesimal generator

$L_Z = z\, \dfrac{d}{dz} + \dfrac{d^2}{dz^2}$ for $z \in \mathbf{R}$.

Substituting the time-change yields that

$V = \inf_{\sigma} E\Big( \int_0^\sigma 2 e^{-2u}\, c(|Z_u|)\, du \Big) + 1 .$

Hence the initial problem (6.11) reduces to solving

(6.12) $W(z) = \inf_{\sigma} E_z\Big( \int_0^\sigma e^{-2u}\, c(|Z_u|)\, du \Big)$

where the infimum is taken over all stopping times $\sigma$ for $(Z_t)$, and $V = 2W(0) + 1$. This is a problem that can be solved with the recipe from the previous section.

1. The domain of continued observation is a symmetric interval around zero, that is $C = \{\, z \in \mathbf{R} : z \in (-z_*, z_*) \,\}$, and the value function is an even $C^1$-function, or equivalently $W'(0) = 0$. From the observation in Example 5.5 one is led to formulate the corresponding free-boundary system of the problem (6.12):

$L_Z W(z) - 2W(z) = -c(|z|)$ for $-z_* < z < z_*$
$W(\pm z_*) = 0$ (instantaneous stopping)
$W'(\pm z_*) = 0$ (smooth fit)
$W'(0) = 0$ (normal reflection).

2. The solution of the free-boundary problem is given by

$W(z) = \Phi(z_*)\big( 1 + z^2 \big) + \big( 1 - z^2 \big)\Phi(z) - z\,\varphi(z) - \tfrac{3}{2}$

for $z \in [0, z_*]$ (extended evenly to $[-z_*, 0]$), where $\varphi = \Phi'$ denotes the standard normal density and $z_*$ is the unique solution of equation (6.13) below.

3. By Itô's formula it can be proved that $W(z)$ is the value function and that $\sigma_* = \inf\{\, t > 0 : |Z_t| \ge z_* \,\}$ is an optimal stopping time.

Transforming the value function and the optimal strategy back to the initial problem (6.11), the following result ensues (for more details see [10]).

Theorem 6.8. Consider the optimal stopping problem (6.11). Then the value function $V$ is given by

$V = 2\Phi(z_*) - 1 \approx 0.73$

where $z_* \approx 1.12$ is the unique root of the equation

(6.13) $4\Phi(z_*) - 2 z_*\,\varphi(z_*) - 3 = 0 .$

The following stopping time is optimal (see Figure 3):

(6.14) $\tau_* = \inf\Big\{\, 0 < t \le 1 : \max_{u \le t} B_u - B_t \ge z_* \sqrt{1-t} \,\Big\} .$
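The constants of Theorem 6.8 and the solution $W$ of the free-boundary system can be reproduced numerically; a minimal sketch (the bracketing interval for the root is an arbitrary choice):

```python
from scipy.optimize import brentq
from scipy.stats import norm

# equation (6.13): 4 Phi(z) - 2 z phi(z) - 3 = 0
eq = lambda z: 4*norm.cdf(z) - 2*z*norm.pdf(z) - 3
z_star = brentq(eq, 0.5, 2.0)
V = 2*norm.cdf(z_star) - 1

# solution of the free-boundary problem on [0, z*]
W = lambda z: (norm.cdf(z_star)*(1 + z**2) + (1 - z**2)*norm.cdf(z)
               - z*norm.pdf(z) - 1.5)

fit = W(z_star)                  # instantaneous stopping: W(z*) = 0
consistency = 2*W(0.0) + 1 - V   # V = 2 W(0) + 1 should hold exactly
print(round(z_star, 4), round(V, 4))
```

The root is $z_* \approx 1.12$ and the optimal mean-square distance is $V = 2\Phi(z_*) - 1 \approx 0.73$, in agreement with the theorem.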
Figure 3. A computer drawing of the optimal stopping strategy (6.14).

Probability distance. The problem was formulated and solved in Pedersen [20]. Consider the optimal stopping problem with value function

(6.15) $W(\varepsilon) = \sup_{\tau \in \mathcal{S}_1} P\big( S_1 - B_\tau \le \varepsilon \big)$

for $\varepsilon > 0$. Furthermore, in this case the gain process is discontinuous. Using the stationary independent increments of $(B_t)$ yields that

$W(\varepsilon) = \sup_{\tau \in \mathcal{S}_1} E\Big( E\big( 1_{[0,\varepsilon]}(S_1 - B_\tau) \,\big|\, \mathcal{F}_\tau \big) \Big) = \sup_{\tau \in \mathcal{S}_1} E\big( F_{1-\tau}(\varepsilon)\,;\, S_\tau - B_\tau \le \varepsilon \big)$

where $F_t(\varepsilon) = 2\Phi(\varepsilon/\sqrt{t}) - 1$ is the distribution function of $S_t$. By Lévy's theorem and the general optimal stopping theory, the stopping problem (6.15) is equivalent to solving

$W^{(\varepsilon)}(t,x) = \sup_{\tau \in \mathcal{S}_{1-t}} E_x\big( F_{1-t-\tau}(\varepsilon)\,;\, |B_\tau| \le \varepsilon \big)$

for $0 \le t \le 1$ and $x \in \mathbf{R}$. It can be shown that it is only optimal to stop when $|B_\tau| = \varepsilon$ on the set $\{ \tau < 1-t \}$. This observation, together with the Brownian scaling property, indicates that the optimal stopping time is of the form

$\tau_\varepsilon(t) = \inf\{\, 0 < u \le 1-t : |B_u| = \varepsilon\, b(t+u) \,\}$ ($\inf \emptyset = 1-t$)

where the boundary function satisfies $b(t) = 0$ if $t < t_*$ and $b(t) = 1$ elsewhere, for some $0 \le t_* \le 1$ to be found. This shows that the principle of smooth fit is not satisfied, in the sense that the value function $W^{(\varepsilon)}$ is not $C^1$ at all points of the boundary of the domain of continued observation. More precisely, the smooth fit breaks down in the state variable $x$ because of the discontinuous gain function. However, due to the definition of the gain function, the smooth fit should still hold in the time variable $t$, and this implies, together with Itô's formula and the shape of the domain of continued observation, that the principle of smooth fit at a single point should hold. This approach provides a method to determine $t_*$.
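The ingredients of this problem can be cross-checked numerically. By Lévy's theorem, the first time $S - B$ reaches $\varepsilon$ is distributed as the exit time of $B$ from $(-\varepsilon, \varepsilon)$, whose density has a standard method-of-images series (an assumption of this sketch, quoted from classical theory rather than from the paper). The code checks that integrating this density over $[0,1]$ agrees with the corresponding tail series, and evaluates the success probability of the rule "stop when $S - B$ first equals $\varepsilon$, else at time 1":

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

eps, K = 1.0, 40                        # barrier and series truncation (illustrative)
a = (2*np.arange(K) + 1) * eps          # image points (2k+1) eps
sgn = (-1.0)**np.arange(K)

def exit_density(y):
    """Image series for the density of T = inf{u > 0 : |B_u| = eps}, B_0 = 0."""
    return (2.0 / y**1.5) * np.sum(sgn * a * norm.pdf(a / np.sqrt(y)))

p_le1_int = quad(exit_density, 0.0, 1.0, limit=200)[0]   # P(T <= 1) by integration
p_le1_ser = 4.0 * np.sum(sgn * (1.0 - norm.cdf(a)))      # ... and in series form

F = lambda t: 2.0*norm.cdf(eps/np.sqrt(t)) - 1.0         # F_t(eps) = P(S_t <= eps)
# success probability of the stop-at-first-hit rule (which for this eps
# need not be the optimal rule, since t_* may be positive)
w_hit = quad(lambda y: F(1.0 - y)*exit_density(y), 0.0, 1.0, limit=200)[0] \
        + (1.0 - p_le1_ser)
print(round(p_le1_int, 6), round(p_le1_ser, 6), round(w_hit, 4))
```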
Set

$W^{(\varepsilon)}(t,x) = E_x\big( F_{1-t-\tau_\varepsilon \wedge (1-t)}(\varepsilon)\,;\, |B_{\tau_\varepsilon \wedge (1-t)}| \le \varepsilon \big) = E_x\big( F_{1-t-\tau_\varepsilon}(\varepsilon)\,;\, \tau_\varepsilon \le 1-t \big) + P_x\big( \tau_\varepsilon > 1-t \big)\, 1_{[0,\varepsilon]}(|x|)$

where $\tau_\varepsilon = \inf\{\, u > 0 : |B_u| = \varepsilon \,\}$. For fixed $t < 1$, the map $x \mapsto W^{(\varepsilon)}(t,x)$ is in general only continuous at $x = \varepsilon$. Let $\varepsilon_* \approx 1.17$ be the point with the property that $x \mapsto W^{(\varepsilon_*)}(0,x)$ is differentiable at $x = \varepsilon_*$. The result is the following theorem.

Theorem 6.9. Consider the optimal stopping problem (6.15). Set $t_* = \big( 1 - (\varepsilon/\varepsilon_*)^2 \big)^+$.

(i) If $t_* = 0$, then the value function is given by (see Figure 5)

$W(\varepsilon) = W^{(\varepsilon)}(0,0) = 1 + 2\varepsilon \int_0^1 F_{1-y}(\varepsilon)\, \sum_{k=0}^{\infty} (-1)^k\, \dfrac{2k+1}{y^{3/2}}\, \varphi\Big( \dfrac{(2k+1)\varepsilon}{\sqrt{y}} \Big)\, dy \;-\; 4 \sum_{k=0}^{\infty} (-1)^k \Big( 1 - \Phi\big( (2k+1)\varepsilon \big) \Big) .$

(ii) If $t_* > 0$, then the value function is given by (see Figure 5)

$W(\varepsilon) = \dfrac{2}{\sqrt{t_*}} \int_0^{\infty} W^{(\varepsilon)}(t_*, x)\, \varphi\big( x/\sqrt{t_*} \big)\, dx$

where for $0 \le x < \varepsilon$

$W^{(\varepsilon)}(t,x) = 1 + \int_0^{1-t} F_{1-t-y}(\varepsilon)\, \sum_{k=-\infty}^{\infty} (-1)^k\, \dfrac{x + (2k+1)\varepsilon}{y^{3/2}}\, \varphi\Big( \dfrac{x + (2k+1)\varepsilon}{\sqrt{y}} \Big)\, dy \;-\; 2 \sum_{k=-\infty}^{\infty} (-1)^k\, \mathrm{sgn}\big( x + (2k+1)\varepsilon \big) \Big( 1 - \Phi\Big( \dfrac{|x + (2k+1)\varepsilon|}{\sqrt{1-t}} \Big) \Big)$

and for $x > \varepsilon$

$W^{(\varepsilon)}(t,x) = \int_0^{1-t} F_{1-t-y}(\varepsilon)\, \dfrac{x - \varepsilon}{y^{3/2}}\, \varphi\Big( \dfrac{x - \varepsilon}{\sqrt{y}} \Big)\, dy .$

In both cases, the optimal stopping time is given by (see Figure 4)

(6.16) $\tau_* = \inf\Big\{\, t_* < t \le 1 : \max_{u \le t} B_u - B_t = \varepsilon \,\Big\}$ ($\inf \emptyset = 1$).

References

[1] Chernoff, H. (1961). Sequential tests for the mean of a normal distribution. Proc. 4th Berkeley Sympos. Math. Statist. and Prob. 1, Univ. California Press (79-91).
[2] Conze, A. and Viswanathan, R. (1991). Path dependent options: The case of lookback options. J. Finance.
[3] Davis, B. (1976). On the $L^p$ norms of stochastic integrals and other martingales. Duke Math. J.
[4] Dubins, L.E., Shepp, L.A. and Shiryaev, A.N. (1993). Optimal stopping rules and maximal inequalities for Bessel processes. Theory Probab. Appl.
[5] Dynkin, E.B. (1963). Optimal choice of the stopping moment of a Markov process. Dokl. Akad. Nauk SSSR. (In Russian).
Figure 4. A computer drawing of the optimal stopping strategy (6.16) when $\varepsilon = 0.75$ and $\varepsilon = 1.12$; then $t_* \approx 0.59$ and $t_* \approx 0.08$, respectively.

Figure 5. A drawing of the value function $W(\varepsilon)$ as a function of $\varepsilon$.

[6] El Karoui, N. (1981). Les aspects probabilistes du contrôle stochastique. Lecture Notes in Math. 876, Springer. (In French).
[7] Graversen, S.E. and Peskir, G. (1997). On Doob's maximal inequality for Brownian motion. Stochastic Process. Appl.
[8] Graversen, S.E. and Peskir, G. (1998). Optimal stopping and maximal inequalities for geometric Brownian motion. J. Appl. Probab.
[9] Graversen, S.E. and Peskir, G. (1998). Optimal stopping in the L log L-inequality of Hardy and Littlewood. Bull. London Math. Soc.
[10] Graversen, S.E., Peskir, G. and Shiryaev, A.N. (2001). Stopping Brownian motion without anticipation as close as possible to its ultimate maximum. Theory Probab. Appl.
[11] Grigelionis, B.I. and Shiryaev, A.N. (1966). On the Stefan problem and optimal stopping rules for Markov processes. Theory Probab. Appl.
[12] Jacka, S.D. (1991). Optimal stopping and best constants for Doob-like inequalities I: The case p = 1. Ann. Probab.
[13] Lindley, D.V. (1961). Dynamic programming and decision theory. Appl. Statist.
[14] McKean, H.P. (1965). A free-boundary problem for the heat equation arising from a problem of mathematical economics. Industrial Managem. Review.
[15] Mikhalevich, V.S. (1958). Bayesian choice between two hypotheses for the mean value of a normal process. Visnik Kiiv. Univ. (In Ukrainian).
[16] Myneni, R. (1992). The pricing of the American option. Ann. Appl. Probab.
[17] Øksendal, B. (1998). Stochastic Differential Equations: An Introduction with Applications. (Fifth edition). Springer.
[18] Pedersen, J.L. (2000). Best bounds in Doob's maximal inequality for Bessel processes. J. Multivariate Anal.
[19] Pedersen, J.L. (2000). Discounted optimal stopping problems for the maximum process. J. Appl. Probab.
[20] Pedersen, J.L. (2003). Optimal prediction of the ultimate maximum of Brownian motion. Stoch. Stoch. Rep.
[21] Pedersen, J.L. and Peskir, G. (1998). Computing the expectation of the Azéma-Yor stopping time. Ann. Inst. H. Poincaré Probab. Statist.
[22] Pedersen, J.L. and Peskir, G. (2000). Solving non-linear optimal stopping problems by the method of time-change. Stochastic Anal. Appl.
[23] Peskir, G. (1998). Optimal stopping of the maximum process: The maximality principle. Ann. Probab.
[24] Peskir, G. and Shiryaev, A.N. (2000). Sequential testing problems for Poisson processes. Ann. Statist.
[25] Shepp, L.A. (1969). Explicit solutions to some problems of optimal stopping. Ann. Math. Statist.
[26] Shepp, L.A. and Shiryaev, A.N. (1993). The Russian option: Reduced regret. Ann. Appl. Probab.
[27] Shiryaev, A.N. (1961). The problem of quickest detection of a violation of stationary behavior. Dokl. Akad. Nauk SSSR. (In Russian).
[28] Shiryaev, A.N. (1969). Two problems of sequential analysis. Cybernetics.
[29] Shiryaev, A.N. (1978). Optimal Stopping Rules. Springer.
[30] Snell, J.L. (1952). Application of martingale system theorems. Trans. Amer. Math. Soc.
[31] Taylor, H.M. (1968).
Optimal stopping in a Markov process. Ann. Math. Statist.
[32] van Moerbeke, P. (1974). Optimal stopping and free boundary problems. Rocky Mountain J. Math.
[33] Wald, A. (1947). Sequential Analysis. John Wiley and Sons.

Jesper Lund Pedersen
RiskLab, Department of Mathematics, ETH-Zentrum
CH-8092 Zürich, Switzerland
pedersen@math.ethz.ch
A Clarification about Hitting Times Densities for OrnsteinUhlenbeck Processes Anja Göing-Jaeschke Λ Marc Yor y Let (U t ;t ) be an OrnsteinUhlenbeck process with parameter >, starting from a R, that is
More informationGeneralized Hypothesis Testing and Maximizing the Success Probability in Financial Markets
Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets Tim Leung 1, Qingshuo Song 2, and Jie Yang 3 1 Columbia University, New York, USA; leung@ieor.columbia.edu 2 City
More informationStochastic Calculus for Finance II - some Solutions to Chapter VII
Stochastic Calculus for Finance II - some Solutions to Chapter VII Matthias hul Last Update: June 9, 25 Exercise 7 Black-Scholes-Merton Equation for the up-and-out Call) i) We have ii) We first compute
More informationRichard F. Bass Krzysztof Burdzy University of Washington
ON DOMAIN MONOTONICITY OF THE NEUMANN HEAT KERNEL Richard F. Bass Krzysztof Burdzy University of Washington Abstract. Some examples are given of convex domains for which domain monotonicity of the Neumann
More informationThe Pedestrian s Guide to Local Time
The Pedestrian s Guide to Local Time Tomas Björk, Department of Finance, Stockholm School of Economics, Box 651, SE-113 83 Stockholm, SWEDEN tomas.bjork@hhs.se November 19, 213 Preliminary version Comments
More informationA Class of Fractional Stochastic Differential Equations
Vietnam Journal of Mathematics 36:38) 71 79 Vietnam Journal of MATHEMATICS VAST 8 A Class of Fractional Stochastic Differential Equations Nguyen Tien Dung Department of Mathematics, Vietnam National University,
More informationMAXIMUM PROCESS PROBLEMS IN OPTIMAL CONTROL THEORY
MAXIMUM PROCESS PROBLEMS IN OPTIMAL CONTROL THEORY GORAN PESKIR Receied 16 January 24 and in reised form 23 June 24 Gien a standard Brownian motion B t ) t and the equation of motion dx t = t dt+ 2dBt,wesetS
More informationOn a class of optimal stopping problems for diffusions with discontinuous coefficients
On a class of optimal stopping problems for diffusions with discontinuous coefficients Ludger Rüschendorf and Mikhail A. Urusov Abstract In this paper we introduce a modification of the free boundary problem
More informationERRATA: Probabilistic Techniques in Analysis
ERRATA: Probabilistic Techniques in Analysis ERRATA 1 Updated April 25, 26 Page 3, line 13. A 1,..., A n are independent if P(A i1 A ij ) = P(A 1 ) P(A ij ) for every subset {i 1,..., i j } of {1,...,
More informationContinuous Time Finance
Continuous Time Finance Lisbon 2013 Tomas Björk Stockholm School of Economics Tomas Björk, 2013 Contents Stochastic Calculus (Ch 4-5). Black-Scholes (Ch 6-7. Completeness and hedging (Ch 8-9. The martingale
More informationITÔ S ONE POINT EXTENSIONS OF MARKOV PROCESSES. Masatoshi Fukushima
ON ITÔ S ONE POINT EXTENSIONS OF MARKOV PROCESSES Masatoshi Fukushima Symposium in Honor of Kiyosi Itô: Stocastic Analysis and Its Impact in Mathematics and Science, IMS, NUS July 10, 2008 1 1. Itô s point
More informationA Concise Course on Stochastic Partial Differential Equations
A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original
More informationApplications of Ito s Formula
CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale
More information1 Brownian Local Time
1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =
More informationPROGRESSIVE ENLARGEMENTS OF FILTRATIONS AND SEMIMARTINGALE DECOMPOSITIONS
PROGRESSIVE ENLARGEMENTS OF FILTRATIONS AND SEMIMARTINGALE DECOMPOSITIONS Libo Li and Marek Rutkowski School of Mathematics and Statistics University of Sydney NSW 26, Australia July 1, 211 Abstract We
More informationPoint Process Control
Point Process Control The following note is based on Chapters I, II and VII in Brémaud s book Point Processes and Queues (1981). 1 Basic Definitions Consider some probability space (Ω, F, P). A real-valued
More informationfor all f satisfying E[ f(x) ] <.
. Let (Ω, F, P ) be a probability space and D be a sub-σ-algebra of F. An (H, H)-valued random variable X is independent of D if and only if P ({X Γ} D) = P {X Γ}P (D) for all Γ H and D D. Prove that if
More informationGoodness of fit test for ergodic diffusion processes
Ann Inst Stat Math (29) 6:99 928 DOI.7/s463-7-62- Goodness of fit test for ergodic diffusion processes Ilia Negri Yoichi Nishiyama Received: 22 December 26 / Revised: July 27 / Published online: 2 January
More informationOptimal Mean-Variance Selling Strategies
Math. Financ. Econ. Vol. 10, No. 2, 2016, (203 220) Research Report No. 12, 2012, Probab. Statist. Group Manchester (20 pp) Optimal Mean-Variance Selling Strategies J. L. Pedersen & G. Peskir Assuming
More informationPROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS
PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please
More informationTotal Expected Discounted Reward MDPs: Existence of Optimal Policies
Total Expected Discounted Reward MDPs: Existence of Optimal Policies Eugene A. Feinberg Department of Applied Mathematics and Statistics State University of New York at Stony Brook Stony Brook, NY 11794-3600
More informationSome Tools From Stochastic Analysis
W H I T E Some Tools From Stochastic Analysis J. Potthoff Lehrstuhl für Mathematik V Universität Mannheim email: potthoff@math.uni-mannheim.de url: http://ls5.math.uni-mannheim.de To close the file, click
More information(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS
(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS Svetlana Janković and Miljana Jovanović Faculty of Science, Department of Mathematics, University
More informationIntroduction to Algorithmic Trading Strategies Lecture 3
Introduction to Algorithmic Trading Strategies Lecture 3 Trend Following Haksun Li haksun.li@numericalmethod.com www.numericalmethod.com References Introduction to Stochastic Calculus with Applications.
More informationON THE OPTIMAL STOPPING PROBLEM FOR ONE DIMENSIONAL DIFFUSIONS
ON THE OPTIMAL STOPPING PROBLEM FOR ONE DIMENSIONAL DIFFUSIONS SAVAS DAYANIK Department of Operations Research and Financial Engineering and the Bendheim Center for Finance Princeton University, Princeton,
More informationFunctional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals
Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico
More informationRisk-Minimality and Orthogonality of Martingales
Risk-Minimality and Orthogonality of Martingales Martin Schweizer Universität Bonn Institut für Angewandte Mathematik Wegelerstraße 6 D 53 Bonn 1 (Stochastics and Stochastics Reports 3 (199, 123 131 2
More informationOptimal Control. Macroeconomics II SMU. Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112
Optimal Control Ömer Özak SMU Macroeconomics II Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112 Review of the Theory of Optimal Control Section 1 Review of the Theory of Optimal Control Ömer
More informationReflected Brownian Motion
Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide
More informationTAUBERIAN THEOREM FOR HARMONIC MEAN OF STIELTJES TRANSFORMS AND ITS APPLICATIONS TO LINEAR DIFFUSIONS
Kasahara, Y. and Kotani, S. Osaka J. Math. 53 (26), 22 249 TAUBERIAN THEOREM FOR HARMONIC MEAN OF STIELTJES TRANSFORMS AND ITS APPLICATIONS TO LINEAR DIFFUSIONS YUJI KASAHARA and SHIN ICHI KOTANI (Received
More informationStochastic Differential Equations.
Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)
More informationMultivariate Generalized Ornstein-Uhlenbeck Processes
Multivariate Generalized Ornstein-Uhlenbeck Processes Anita Behme TU München Alexander Lindner TU Braunschweig 7th International Conference on Lévy Processes: Theory and Applications Wroclaw, July 15 19,
More informationMinimal Sufficient Conditions for a Primal Optimizer in Nonsmooth Utility Maximization
Finance and Stochastics manuscript No. (will be inserted by the editor) Minimal Sufficient Conditions for a Primal Optimizer in Nonsmooth Utility Maximization Nicholas Westray Harry Zheng. Received: date
More informationBrownian Motion. 1 Definition Brownian Motion Wiener measure... 3
Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................
More information