Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form


Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form

Yipeng Yang
Under the supervision of Dr. Michael Taksar
Department of Mathematics, University of Missouri-Columbia
Oct 11, 2012

Y.Yang (MU) Oct 11, 2012 1 / 35

Outline
- Problem Formulation
- Related Literature
- Dirichlet Form and Dynkin Game
- Dynkin Game and Free Boundary Problem
- The Multi-dimensional Stochastic Singular Control Problem
- Concluding Remarks and Future Research
- References

Problem Formulation

Given a filtered probability space (Ω, F, F_t, X_t, θ_t, P^x), we are concerned with a multi-dimensional diffusion on R^n:

dX_t = µ(X_t)dt + σ(X_t)dB_t,  X_0 = x,  (1)

where

X_t = (X_1t, ..., X_nt)^T,  µ(X_t) = (µ_1, ..., µ_n)^T,  σ(X_t) = (σ_ij), 1 ≤ i ≤ n, 1 ≤ j ≤ m,  B_t = (B_1t, ..., B_mt)^T,  (2)

in which µ_i, σ_ij (1 ≤ i ≤ n, 1 ≤ j ≤ m) are functions of X_1t, ..., X_(n−1)t satisfying the usual conditions, and B_t is an m-dimensional Brownian motion with m ≥ n.
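The dynamics (1) can be simulated directly. A minimal Euler-Maruyama sketch; the drift, diffusion matrix, and dimensions in the example are hypothetical placeholders, not taken from the talk:

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T=1.0, n_steps=1000, seed=0):
    """Simulate dX_t = mu(X_t) dt + sigma(X_t) dB_t by Euler-Maruyama.

    mu:    function R^n -> R^n (drift)
    sigma: function R^n -> R^(n x m) (diffusion matrix)
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        s = sigma(x)
        dB = rng.normal(0.0, np.sqrt(dt), size=s.shape[1])  # m-dim Brownian increment
        x = x + mu(x) * dt + s @ dB
        path.append(x.copy())
    return np.array(path)

# Hypothetical 2-d example (n = m = 2); coefficients are illustrative only.
mu_f = lambda x: np.array([-0.5 * x[0], 0.2])
sigma_f = lambda x: np.array([[0.3, 0.0], [0.1, 0.4]])
path = euler_maruyama(mu_f, sigma_f, x0=[1.0, 0.0])
```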

There is a cost function associated with this process:

k_S(x) = E^x [ ∫_0^∞ e^{−αt} h(X_t)dt + ∫_0^∞ e^{−αt} ( f_1(X_t)dA_t^(1) + f_2(X_t)dA_t^(2) ) ],  f_1(x), f_2(x) > 0, x ∈ R^n.  (3)

And there is control on the underlying process, acting on the last coordinate only:

dX_1t = µ_1 dt + σ_11 dB_1t + ... + σ_1m dB_mt,
...
dX_nt = µ_n dt + σ_n1 dB_1t + ... + σ_nm dB_mt + dA_t^(1) − dA_t^(2),  X_0 = x.

A control policy is a pair S = (A_t^(1), A_t^(2)) of F_t-adapted processes which are right continuous and nondecreasing in t, where A_t^(1) − A_t^(2) is the minimal decomposition of a bounded variation process into the difference of two nondecreasing processes.

Problem Formulation: one looks for a control policy S that minimizes the cost function k_S(x),

W(x) = min_{S ∈ 𝒮} k_S(x),  (4)

where 𝒮 is the set of admissible policies.

Applications?
- A decision maker observes the expenses of a company under a multi-factor situation and wants to minimize the total expense by adjusting one factor.
- An investor observes the prices of several assets in a portfolio and wants to maximize the total wealth by adjusting the investment in one asset.

Related Literature

This is a free boundary multi-dimensional singular control problem.
- The classical approach is to use the dynamic programming principle to derive the Hamilton-Jacobi-Bellman (HJB) equation and solve the PDE, if one can, e.g., [Pham (2009), Ma and Yong (1999)].
- Viscosity solution techniques, e.g., [Fleming and Soner (2006), Crandall, Ishii and Lions (1992)].
- Existence, uniqueness and regularity of the solution to the HJB equation are hard to analyze [Soner and Shreve (1989)].

- The value function of the stochastic singular control problem is closely related to the value of a zero-sum stopping game, called a Dynkin game, e.g., [Fukushima and Taksar (2002), Taksar (1985), Guo and Tomecek (2008)].
- The value of the Dynkin game coincides with the solution of a variational inequality problem involving Dirichlet forms, e.g., [Nagai (1978), Zabczyk (1984), Karatzas (2005)].
- Using an approach via Dynkin game and Dirichlet form, [Fukushima and Taksar (2002)] proved the existence of a classical solution to a one-dimensional stochastic singular control problem.

Dirichlet Form and Dynkin Game

Variational Inequality Problem Involving a Dirichlet Form

Let f_1 ∈ F. Nagai [Nagai (1978)] showed that there exist a quasi continuous function w ∈ F which solves the variational inequality problem

w ≥ −f_1,  E_α(w, u − w) ≥ 0 for all u ∈ F with u ≥ −f_1,

and a properly exceptional set N such that for all x ∈ R^n \ N,

w(x) = sup_σ E^x ( e^{−ασ} [−f_1(X_σ)] ) = E^x ( e^{−αˆσ} [−f_1(X_ˆσ)] ),

where ˆσ = inf{t ≥ 0 : w(X_t) = −f_1(X_t)}. Moreover, w is the smallest α-potential dominating the function −f_1 m-a.e.

[Zabczyk (1984)] then extended this result to the zero-sum stopping game (Dynkin game) by showing that there exist a quasi continuous function V(x) ∈ K which solves the variational inequality

E_α(V, u − V) ≥ 0,  for all u ∈ K,  (5)

where

K = {u ∈ F : −f_1 ≤ u ≤ f_2 m-a.e.},  f_1, f_2 ∈ F,

and a properly exceptional set N such that for all x ∈ R^n \ N,

V(x) = sup_σ inf_τ J_x(τ, σ) = inf_τ sup_σ J_x(τ, σ),  (6)

the supremum and infimum being over stopping times τ and σ, where

J_x(τ, σ) = E^x ( e^{−α(τ∧σ)} ( −I_{σ≤τ} f_1(X_σ) + I_{τ<σ} f_2(X_τ) ) ).  (7)

What is more, if we define

E_1 = {x ∈ R^n \ N : V(x) = −f_1(x)},  E_2 = {x ∈ R^n \ N : V(x) = f_2(x)},

then the hitting times ˆτ = τ_{E_2}, ˆσ = τ_{E_1} form a saddle point of the game:

J_x(ˆτ, σ) ≤ J_x(ˆτ, ˆσ) ≤ J_x(τ, ˆσ)  (8)

for any x ∈ R^n \ N and any stopping times τ, σ, where J_x is given in (7). In particular,

V(x) = J_x(ˆτ, ˆσ),  x ∈ R^n \ N.  (9)

[Fukushima and Menda (2006)] further showed that if the transition probability function of the underlying process satisfies the absolute continuity condition

p_t(x, ·) ≪ m(·),  (10)

and f_1, f_2 are finite finely continuous functions satisfying the following separability condition:

Assumption. There exist finite α-excessive functions v_1, v_2 ∈ F such that, for all x ∈ R^n,

−f_1(x) ≤ v_1(x) − v_2(x) ≤ f_2(x),  (11)

then the exceptional set N can be dispensed with, and the solution V(x) is finite and finely continuous.

Dynkin Game and Free Boundary Problem

We are concerned with a multi-dimensional Dynkin game over a region D = R^{n−1} × (A(x̄), B(x̄)), where x̄ = (x_1, ..., x_{n−1}) denotes the first n − 1 coordinates and A, B are two bounded, smooth and uniformly Lipschitz functions. The associated cost function is

J_x(τ, σ) = E^x ( ∫_0^{τ∧σ} e^{−αt} H(X_t)dt ) + E^x ( e^{−α(τ∧σ)} ( −I_{σ≤τ} f_1(X_σ) + I_{τ<σ} f_2(X_τ) ) ),  (12)

where H is the holding cost, and f_1, f_2 are boundary penalty costs.

Two players P_1 and P_2 observe the underlying process X_t in (1), whose accumulated income up to a stopping time σ, discounted to the present time, equals ∫_0^σ e^{−αt} H(X_t)dt.

If P_1 stops the game at time τ, he pays P_2 the accumulated income plus the amount f_2(X_τ), which after discounting equals e^{−ατ} f_2(X_τ). If the process is stopped by P_2 at time σ, he receives from P_1 the accumulated income less the amount f_1(X_σ), which after discounting equals e^{−ασ} f_1(X_σ). P_1 tries to minimize his payment while P_2 tries to maximize his income. Thus the value of this Dynkin game is given by

V(x) = inf_τ sup_σ J_x(τ, σ),  x ∈ R^n.  (13)

Define the following Dirichlet form on D:

E(u, v) = ∫_D ∇u(x)^T A(x) ∇v(x) m(dx),  u, v ∈ F,  (14)

where

F = {u ∈ L²(D) : u is continuous, ∫_D ∇u(x)^T ∇u(x) m(dx) < ∞},

A(x) = (1/2) σσ^T is assumed to be uniformly elliptic, and m(dx) = e^{b·x} dx, in which b = A^{−1}µ. Consider the solution V ∈ F, −f_1 ≤ V ≤ f_2, of

E_α(V, u − V) ≥ (H, u − V),  for all u ∈ F with −f_1 ≤ u ≤ f_2,  (15)

where E_α(u, v) = E(u, v) + α ∫ u(x)v(x) m(dx).
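Numerically, the variational inequality (15) is a double-obstacle problem, and a standard way to approximate it is projected Gauss-Seidel on a grid. A one-dimensional toy analogue (the coefficients, obstacles, and holding cost below are illustrative assumptions, not the talk's data):

```python
import numpy as np

# 1-d toy analogue of (15): find V with  -f1 <= V <= f2  and
# (alpha - L)V = H wherever -f1 < V < f2, L = (sig^2/2) d^2/dy^2 + mu d/dy,
# by projected Gauss-Seidel on a truncated grid.
alpha, sig, mu = 1.0, 1.0, 0.0
Y, N = 3.0, 121
y = np.linspace(-Y, Y, N)
h = y[1] - y[0]
H = y.copy()                        # holding cost, negative below 0
lo, hi = -np.ones(N), np.ones(N)    # obstacles -f1 and f2

V = np.clip(y, lo, hi)              # initial guess between the obstacles
V[0], V[-1] = lo[0], hi[-1]         # truncation boundary conditions
diag = alpha + sig**2 / h**2
up = sig**2 / (2 * h**2) + mu / (2 * h)
dn = sig**2 / (2 * h**2) - mu / (2 * h)
for _ in range(2000):               # projected Gauss-Seidel sweeps
    for i in range(1, N - 1):
        v = (H[i] + up * V[i + 1] + dn * V[i - 1]) / diag
        V[i] = min(max(v, lo[i]), hi[i])   # project back between obstacles
# V stays between the obstacles, touches them away from 0, and solves the
# discretized equation in the continuation region in between.
```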

Theorem. Assume the usual conditions on H, f_1, f_2 ∈ F and the separability condition, and put

J_x(τ, σ) = E^x ( ∫_0^{τ∧σ} e^{−αt} H(X_t)dt ) + E^x ( e^{−α(τ∧σ)} ( −I_{σ≤τ} f_1(X_σ) + I_{τ<σ} f_2(X_τ) ) )  (16)

for finite stopping times τ, σ. Then the solution of (15) is the finite and continuous value function of the game:

V(x) = inf_τ sup_σ J_x(τ, σ) = sup_σ inf_τ J_x(τ, σ),  x ∈ R^n.  (17)

Theorem. Furthermore, if we let

E_1 = {x ∈ R^n : V(x) = −f_1(x)},  E_2 = {x ∈ R^n : V(x) = f_2(x)},  (18)

then the hitting times ˆτ = τ_{E_2}, ˆσ = τ_{E_1} form a saddle point of the game:

J_x(ˆτ, σ) ≤ J_x(ˆτ, ˆσ) ≤ J_x(τ, ˆσ)  (19)

for any x ∈ R^n and any stopping times τ, σ. In particular, ˆτ, ˆσ are finite a.s. and

V(x) = J_x(ˆτ, ˆσ),  x ∈ R^n.  (20)

Regularities? Optimal control policies?

Assumption. There exist smooth and uniformly Lipschitz continuous functions a(x̄), b(x̄), x̄ ∈ R^{n−1}, such that

(α − L)f_1(x̄, a(x̄)) + H(x̄, a(x̄)) = 0,
(α − L)f_2(x̄, b(x̄)) − H(x̄, b(x̄)) = 0,

and A(x̄) < a(x̄) < 0 < b(x̄) < B(x̄), x̄ ∈ R^{n−1}.
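The Assumption pins the free boundaries down as roots of (α − L)f_1 + H = 0 and (α − L)f_2 − H = 0 in the last coordinate, so in concrete cases they can be located by a one-dimensional root search. A sketch with hypothetical choices of f_1, f_2, H (not from the talk), using bisection and a finite-difference generator:

```python
# Locate the free boundaries a and b as roots of the Assumption's equations
# in a 1-d illustration.  All functions and coefficients are hypothetical.
alpha, sig, mu = 1.0, 1.0, 0.0

def Lf(f, y, h=1e-4):
    """Generator L f = (sig^2/2) f'' + mu f', by central differences."""
    fpp = (f(y + h) - 2.0 * f(y) + f(y - h)) / h**2
    fp = (f(y + h) - f(y - h)) / (2.0 * h)
    return 0.5 * sig**2 * fpp + mu * fp

f1 = lambda y: 1.0 + 0.1 * y**2      # boundary penalty costs, positive
f2 = lambda y: 1.0 + 0.1 * y**2
H = lambda y: y                      # holding cost, changes sign at 0

def bisect(g, lo, hi, tol=1e-10):
    """Root of g on [lo, hi], assuming a sign change on the bracket."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

a = bisect(lambda y: alpha * f1(y) - Lf(f1, y) + H(y), -3.0, 0.0)
b = bisect(lambda y: alpha * f2(y) - Lf(f2, y) - H(y), 0.0, 3.0)
# Here the equations reduce to 0.9 + 0.1 y^2 +/- y = 0, giving a = -1, b = 1,
# so a < 0 < b as the Assumption requires.
```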

Main Result

Theorem. Let V(x) be the solution of the multi-dimensional Dynkin game. Then there exist unique smooth functions a(x̄), b(x̄) such that A(x̄) < a(x̄) < 0 < b(x̄) < B(x̄) and

−f_1(x) < V(x) < f_2(x),  x ∈ R^{n−1} × (a, b),  (21)

V(x) = −f_1(x),  x ∈ R^{n−1} × (−∞, a];  V(x) = f_2(x),  x ∈ R^{n−1} × [b, ∞),  (22)

V_u(x̄, a(x̄)) = −f_{1,u}(x̄, a(x̄)),  V_u(x̄, b(x̄)) = f_{2,u}(x̄, b(x̄)),  x̄ ∈ R^{n−1},  (23)

where V_u represents the directional derivative along the direction u.

Theorem (Continued). Furthermore, V is C^{1,...,1,1} on R^n, C^{2,...,2} on R^{n−1} × (a, b) ∪ R^{n−1} × (−∞, a) ∪ R^{n−1} × (b, ∞), and

αV(x) − LV(x) = H(x),  x ∈ R^{n−1} × (a, b),
αV(x) − LV(x) > H(x),  x ∈ R^{n−1} × (−∞, a),
αV(x) − LV(x) < H(x),  x ∈ R^{n−1} × (b, ∞),  (24)

where L is the infinitesimal generator.

Sketch of Proof

Proposition. For any (x̄, x_n) with x_n < a(x̄),

(α − L)f_1(x̄, x_n) + H(x̄, x_n) < 0,

and for any (x̄, x_n) with x_n > a(x̄),

(α − L)f_1(x̄, x_n) + H(x̄, x_n) > 0.

Similarly, for any (x̄, x_n) with x_n < b(x̄),

(α − L)f_2(x̄, x_n) − H(x̄, x_n) > 0,

and for any (x̄, x_n) with x_n > b(x̄),

(α − L)f_2(x̄, x_n) − H(x̄, x_n) < 0.

Proposition. There exist A(x̄) < 0 < B(x̄), x̄ ∈ R^{n−1}, such that the diffusion M = (X_t, P_x) on D associated with the Dirichlet form (14) satisfies

E^{ξ_1} ( ∫_0^{τ_0∧τ_A} e^{−αt} H(X_t)dt ) < −2M,
E^{ξ_2} ( ∫_0^{τ_0∧τ_B} e^{−αt} H(X_t)dt ) > 2M,  (25)

for some ξ_1 ∈ R^{n−1} × (A(x̄), 0) and ξ_2 ∈ R^{n−1} × (0, B(x̄)), where τ_0, τ_A, τ_B denote the hitting times of the graphs of x_n = 0, A(x̄), B(x̄) respectively.

Proposition.
- It is not optimal for P_1 to stop the game when X_nt < 0 (or equivalently H(X_t) < 0), and it is not optimal for P_2 to stop the game when X_nt > 0 (or equivalently H(X_t) > 0).
- For any starting point x_0 = (x̄_0, x_n) ∈ R^n of the game, if x_n ≤ A(x̄_0), it is optimal for P_2 to stop the game immediately; and if x_n ≥ B(x̄_0), it is optimal for P_1 to stop the game immediately.
- Let (ˆτ, ˆσ) be the saddle point of the game (16); then ˆτ, ˆσ are finite a.s., hence V(x) = J_x(ˆτ, ˆσ).

Proposition.
- −f_1(x̄, 0) < V(x̄, 0) < f_2(x̄, 0), x̄ ∈ R^{n−1}.
- V(x) > −f_1(x) for x ∈ R^{n−1} × (0, B) and V(x) < f_2(x) for x ∈ R^{n−1} × (A, 0).
- For each x ∈ E_1, (α − L)f_1(x) + H(x) ≤ 0, and for each x ∈ E_2, (α − L)f_2(x) − H(x) ≥ 0, where E_1, E_2 were given in (18).

Proposition.
- If x = (x̄, x_n) ∈ E_1, then for any point (x̄, y) with y < x_n, (α − L)f_1(x̄, y) + H(x̄, y) < 0. If x = (x̄, x_n) ∈ E_2, then for any point (x̄, y) with y > x_n, (α − L)f_2(x̄, y) − H(x̄, y) < 0.
- If (α − L)f_1(x̄, x_n) + H(x̄, x_n) < 0, x̄ ∈ R^{n−1}, then it is optimal for P_2 to stop the game immediately. If (α − L)f_2(x̄, x_n) − H(x̄, x_n) > 0, x̄ ∈ R^{n−1}, then it is optimal for P_1 to stop the game immediately.

Conclusion: R^{n−1} × (−∞, a] = E_1, R^{n−1} × [b, ∞) = E_2, and R^{n−1} × (a, b) = E, the continuation region.

Remark: The optimal control is given by the two curves a(x̄) and b(x̄), x̄ ∈ R^{n−1}.

The Multi-dimensional Stochastic Singular Control Problem

Define h(x), W(x), x ∈ R^n, as

h(x̄, y) = ∫_0^y H(x̄, u)du + C(x̄),  (26)

W(x̄, y) = ∫_{a(x̄)}^y V(x̄, u)du,  x̄ ∈ R^{n−1}, y ∈ R,  (27)

where C(x̄) is a function of x̄ such that

lim_{y→a(x̄)+} [ αW(x̄, y) − LW(x̄, y) − h(x̄, y) ] = 0.

Then h(x̄, y) and W(x̄, y) satisfy the following:
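The definition (27) is what ties the two problems together; a short computation (consistent with the gradient conditions in the theorem that follows) shows that the game value is the x_n-derivative of the control value:

```latex
% Differentiating (27) in the last coordinate:
\frac{\partial W}{\partial x_n}(\bar x, y)
  \;=\; \frac{\partial}{\partial y}\int_{a(\bar x)}^{y} V(\bar x, u)\,du
  \;=\; V(\bar x, y).
% Hence the obstacle bounds  -f_1 \le V \le f_2  on the game value become
% gradient constraints on W:
-f_1(\bar x, y) \;\le\; \frac{\partial W}{\partial x_n}(\bar x, y) \;\le\; f_2(\bar x, y),
% with equality below a(\bar x) (where V = -f_1) and above b(\bar x)
% (where V = f_2): the stopping regions of the game are exactly the
% action regions of the singular control.
```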

Theorem. W is C^{2,...,2} on R^n and there exist unique smooth and uniformly Lipschitz functions a(x̄), b(x̄) such that A(x̄) < a(x̄) < 0 < b(x̄) < B(x̄) and

αW(x) − LW(x) = h(x),  x ∈ R^{n−1} × (a, b),
αW(x) − LW(x) < h(x),  x ∈ R^{n−1} × (−∞, a) ∪ R^{n−1} × (b, ∞),

−f_1(x) < ∂W/∂x_n (x) < f_2(x),  x ∈ R^{n−1} × (a, b),
∂W/∂x_n (x) = −f_1(x),  x ∈ R^{n−1} × (−∞, a],
∂W/∂x_n (x) = f_2(x),  x ∈ R^{n−1} × [b, ∞),

and for x̄ ∈ R^{n−1}, 1 ≤ k ≤ n,

∂²W/∂x_n∂x_k (x̄, a(x̄)) = −∂f_1/∂x_k (x̄, a(x̄)),  ∂²W/∂x_n∂x_k (x̄, b(x̄)) = ∂f_2/∂x_k (x̄, b(x̄)).

Sketch of Proof

The function αW(x) − LW(x) is continuous, and so is C(x̄). For fixed x̄, consider the function

U(y) = αW(x̄, y) − LW(x̄, y) − h(x̄, y),

whose derivative is U′(y) = αV(x̄, y) − LV(x̄, y) − H(x̄, y) (differentiation in y passes through L because µ, σ do not depend on the last coordinate), and we know U(a(x̄)) = 0. Notice that U′(y) = 0 for a(x̄) < y < b(x̄), U′(y) > 0 for y < a(x̄), and U′(y) < 0 for y > b(x̄). Since the function U(y) is continuous, it can be seen that

αW(x̄, y) − LW(x̄, y) < h(x̄, y)  for y < a(x̄) or y > b(x̄).

Define the following notation:

ΔA_t^(i) = A_t^(i) − A_{t−}^(i),  t ≥ 0, i = 1, 2;  ΔX_t = X_t − X_{t−};  ΔW(X_t) = W(X_t) − W(X_{t−}),  t ≥ 0.

Let γ = (0, 0, ..., 0, 1)^T; then the reflected diffusion can be written as

dX_t = µ(X_t)dt + σ(X_t)dB_t + γdA_t^(1) − γdA_t^(2),  (28)

where A_t^(1) increases only at the boundary a and A_t^(2) increases only at the boundary b. We call a quadruplet (S, X_t, A_t^(1), A_t^(2)) (written S = (A_t^(1), A_t^(2)) for simplicity) an admissible policy if it satisfies the usual conditions.
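The reflected dynamics (28) can be approximated by an Euler scheme with a projection step: after each unconstrained step, push the last coordinate back onto [a(x̄), b(x̄)] and book the push as an increment of A^(1) or A^(2). The coefficients and boundary curves below are hypothetical illustrations, not taken from the talk:

```python
import numpy as np

# Euler scheme with Skorohod projection for the reflected diffusion (28).
rng = np.random.default_rng(1)
a = lambda xbar: -1.0 - 0.1 * np.tanh(xbar)   # lower boundary, a(xbar) < 0
b = lambda xbar: 1.0 + 0.1 * np.tanh(xbar)    # upper boundary, b(xbar) > 0

T, n_steps = 5.0, 5000
dt = T / n_steps
x = np.array([0.0, 0.0])     # state (xbar, x_n), here n = 2
A1 = A2 = 0.0                # cumulative singular controls A^(1), A^(2)
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), size=2)
    x = x + np.array([0.0, 0.1]) * dt + 0.5 * dB  # drift mu, sigma = 0.5 I
    lo_, hi_ = a(x[0]), b(x[0])
    if x[1] < lo_:
        A1 += lo_ - x[1]     # dA^(1) > 0: push up at the lower boundary
        x[1] = lo_
    elif x[1] > hi_:
        A2 += x[1] - hi_     # dA^(2) > 0: push down at the upper boundary
        x[1] = hi_
# x stays in the band between the two curves, and A1, A2 grow only when
# the corresponding boundary is hit.
```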

Let k_S(x) be the cost function given by the following:

k_S(x) = E^x ( ∫_0^∞ e^{−αt} h(X_t)dt ) + E^x ( ∫_0^∞ e^{−αt} ( f_1(X_t)dA_t^(1),c + f_2(X_t)dA_t^(2),c ) )
  + E^x ( ∑_{0≤t<∞} e^{−αt} [ ∫_{X_{nt−}}^{X_{nt−}+ΔA_t^(1)} f_1(x̄_t, y)dy + ∫_{X_{nt−}−ΔA_t^(2)}^{X_{nt−}} f_2(x̄_t, y)dy ] ),

then:

Theorem.
1. For any admissible policy S, W(x) ≤ k_S(x), x ∈ R^n.
2. W(x) = k_S(x), x ∈ R^n, if and only if S = R^{n−1} × [a, b] and the process X_t is the reflecting diffusion on S, i.e., the optimal policy is such that A_t^(1) increases only when X_t is on the boundary (x̄, a(x̄)) and A_t^(2) increases only when X_t is on the boundary (x̄, b(x̄)), x̄ ∈ R^{n−1}.

Proved using a verification theorem and the conditions on W in Theorem 12.

Corollary. Under the given conditions on H, f_1, f_2 and the definition (26) of the function h, the solution W ∈ C^{2,...,2}(R^n) and the functions a(x̄), b(x̄) in Theorem 12 are uniquely determined. The function W(x), x ∈ R^n, coincides with the optimal value function.
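The verification step can be sketched by applying Itô's formula for semimartingales to e^{−αt}W(X_t) and invoking the PDE and gradient bounds from the preceding theorem (a heuristic outline, suppressing integrability and localization details):

```latex
e^{-\alpha T} W(X_T) - W(x)
  = \int_0^T e^{-\alpha t}(L-\alpha)W(X_t)\,dt + M_T
    + \int_0^T e^{-\alpha t}\,\frac{\partial W}{\partial x_n}(X_t)
      \,\bigl(dA^{(1),c}_t - dA^{(2),c}_t\bigr)
    + \sum_{0\le t\le T} e^{-\alpha t}\,\Delta W(X_t).
% M_T is the stochastic-integral (martingale) term.
% Since (\alpha - L)W \le h, the dt-term is bounded below by
% -\int_0^T e^{-\alpha t} h(X_t)\,dt; since -f_1 \le \partial W/\partial x_n \le f_2,
% the dA-terms and jump sums are bounded below by minus the singular costs
% appearing in k_S.  Taking E^x and letting T -> \infty gives W(x) \le k_S(x),
% with equalities exactly for the reflecting policy on R^{n-1} x [a, b].
```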

Skorohod Problem

The two curves a and b are smooth and uniformly Lipschitz. Let n(x) be the inward normal at a boundary point x; then it can be shown that there exist positive constants ν_1, ν_2 such that

⟨γ, n(x)⟩ ≥ ν_1 for x = (x̄, a(x̄)),  ⟨−γ, n(x)⟩ ≥ ν_2 for x = (x̄, b(x̄)).

Using a localization technique and Theorem 4.3 in [Lions and Sznitman (1984)], we can show that there exists a solution (X_t, A_t^(1), A_t^(2)) to the reflected diffusion (28).

Concluding Remarks

1. The value function V of the multi-dimensional Dynkin game is characterized as the solution of a variational inequality problem involving a Dirichlet form.
2. The integrated form of V is shown to be the optimal value function W of the multi-dimensional singular control problem.
3. Regularity of V implies the smoothness of W, hence the existence of a classical solution to the HJB equation.
4. The optimal control policy is shown to be given by two curves, and the controlled process is the reflected diffusion between these two curves.

Future Research

1. Time inhomogeneous stochastic singular control via game theory and Dirichlet forms.
2. Finite horizon stochastic singular control problems.
3. Option pricing via game theoretical models.

Thank you!

References

M.G. Crandall, H. Ishii and P.L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. (N.S.), 27(1), pp. 1-67, 1992.
W.H. Fleming and H.M. Soner, Controlled Markov Processes and Viscosity Solutions, Springer, 2nd edition, 2006.
M. Fukushima, Y. Oshima and M. Takeda, Dirichlet Forms and Symmetric Markov Processes, Walter de Gruyter, Berlin, New York, 1994.
M. Fukushima and M. Taksar, Dynkin games via Dirichlet forms and singular control of one-dimensional diffusions, SIAM J. Control Optim., 41(3), pp. 682-699, 2002.
M. Fukushima and K. Menda, Refined solutions of optimal stopping games for symmetric Markov processes, Technology Reports of Kansai University, 48, pp. 101-110, 2006.
X. Guo and P. Tomecek, Solving singular control from optimal switching, special issue for Asian Pacific Financial Market, 2008.
I. Karatzas and I.M. Zamfirescu, Game approach to the optimal stopping problem, Stochastics, 77(5), pp. 401-435, 2005.
P.L. Lions and A.S. Sznitman, Stochastic differential equations with reflecting boundary conditions, Comm. Pure Appl. Math., XXXVII, pp. 511-537, 1984.
J. Ma and J. Yong, Dynamic programming for multidimensional stochastic control problems, Acta Mathematica Sinica, 15(4), pp. 485-506, 1999.
H. Nagai, On an optimal stopping problem and a variational inequality, J. Math. Soc. Japan, 30, pp. 303-312, 1978.
H. Pham, Continuous-time Stochastic Control and Optimization with Financial Applications, Springer-Verlag, Berlin Heidelberg, 2009.
H.M. Soner and S.E. Shreve, Regularity of the value function for a two-dimensional singular stochastic control problem, SIAM J. Control Optim., 27(4), pp. 876-907, 1989.
M. Taksar, Average optimal singular control and a related stopping problem, Math. Oper. Res., 10, pp. 63-81, 1985.
J. Zabczyk, Stopping games for symmetric Markov processes, Probab. Math. Statist., 4(2), pp. 185-196, 1984.