
Ulam Quarterly, Volume 2, Number 4, 1994

Identification of Parameters with Convergence Rate for Bilinear Stochastic Differential Equations

Omar Zane
University of Kansas, Department of Mathematics, Lawrence, Kansas

Abstract

We consider the parameter identification problem for both the drift and the diffusion coefficients of bilinear stochastic differential equations. A strongly consistent estimator for the drift coefficient is known for the case in which we have continuous observation of the state. The values of the parameters of the diffusion coefficient are known after an arbitrarily small positive interval of time: formulas for the actual computation of their values are given. For the case in which we observe the state only at discrete moments of time, we discretize these formulas. It is shown that the discretized estimators do not converge to the parameters but to quantities that depend both on the values of the parameters and on the discretization step. The expressions for these quantities are given explicitly, and results on the rate of convergence of the discretized estimators are provided. Finally, we give strongly consistent estimators for the discrete-observation case.

1 Introduction

We consider the bilinear stochastic differential equations, for i = 1,...,d,

    dX_i(t) = b_i X_i(t) dt + \sum_{j=1}^d \sigma_{ij} X_i(t) dW_j(t),    X(0) = x_0,    (1.1)

where b = (b_1,...,b_d) is a constant vector, \sigma = (\sigma_{ij})_{i,j=1}^d is a constant matrix, and W is a d-dimensional Brownian motion. Equations of this kind have found applications in financial mathematics as models for the evolution of prices in the stock market. In particular, an application to problems of optimal investment and consumption (see [K]) yields optimal policies that depend explicitly on the values of b and A := \sigma \sigma^T. These parameters (the average rates of return of the stocks and the volatility of the market) are not known to the investor, who needs to estimate their values in such a way that he/she can define the policy of investment and consumption.
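The model (1.1) is a system of correlated geometric Brownian motions and, as recalled in (3.1) below, it admits an explicit log-normal solution, so sample paths can be generated exactly on a time grid. The following Python sketch is an editorial illustration, not part of the original paper; the function name simulate_bilinear_sde and the parameter values are assumptions chosen only for the demonstration, and the resulting path X is reused in the later sketches.

    import numpy as np

    def simulate_bilinear_sde(b, sigma, x0, T, dt, rng=None):
        """Sample X on a grid of step dt for dX_i = b_i X_i dt + sum_j sigma_ij X_i dW_j,
        using the explicit log-normal solution (cf. (3.1) below)."""
        rng = np.random.default_rng(0) if rng is None else rng
        b, sigma, x0 = (np.asarray(v, dtype=float) for v in (b, sigma, x0))
        d, n = b.shape[0], int(round(T / dt))
        A_diag = np.sum(sigma ** 2, axis=1)              # A_ii = sum_j sigma_ij^2
        dW = rng.normal(scale=np.sqrt(dt), size=(n, d))  # Brownian increments
        dlogX = (b - 0.5 * A_diag) * dt + dW @ sigma.T   # increments of log X_i
        logX = np.vstack([np.zeros(d), np.cumsum(dlogX, axis=0)]) + np.log(x0)
        return np.exp(logX)                              # shape (n+1, d)

    # Illustrative choice of parameters with d = 2 (placeholders, not prescribed here):
    X = simulate_bilinear_sde(b=[0.6, 0.5], sigma=[[0.7, 0.7], [1.0, 0.5]],
                              x0=[1.0, 1.0], T=250.0, dt=0.01)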

In this paper strongly consistent estimators for b and A are given both for the case in which we observe X(t) = (X_1(t),...,X_d(t)) for every t and for the case in which the observations are made only at the discrete moments t \in {t_k := k\delta}_{k=0}^\infty, where \delta is a positive constant. For the case in which we have continuous observations a strongly consistent estimator of b is known, and so is its rate of convergence (see [DPD1], [DPD2]). We give a discretized version of this estimator and show that the discretization introduces a bias that depends on the discretization step and that can be found explicitly. This fact allows us to define a strongly consistent estimator for the discrete-observation case. The value of A is known after an arbitrarily small positive period of time in the case of continuous observations; a formula for its computation is given. Just as in the previous case, the discretization of this formula gives a biased estimator for A; the value of the bias is found and a strongly consistent estimator for the discrete-observation case is given. Finally, results on the rate of convergence of the discretized estimators are given.

The paper is organized as follows: in Section 2 estimators for both b and A are given for the case with continuous observations. The convergence of their discretizations, with results on the rate of convergence, is studied in Section 3. The results of that section are then used in Section 4 to prove the strong consistency of the estimators given there for both b and A in the case of discrete observations. The results are then tested through simulation in Section 5.

2 Continuous observations

Let us consider the case in which we want to estimate the parameters b and A in (1.1) and we have continuous observations of the state X(t). The estimator of b, given for the case d = 1 in [BF], [DPD1], [K] and [S], can be used also for the case d > 1. In fact, we have the following result.

Proposition 2.1 For every i = 1,...,d, let \bar b_i(t) be defined for t > 0 by

    \bar b_i(t) = (1/t) \int_0^t dX_i(s)/X_i(s).    (2.1)

These estimators are strongly consistent, i.e.

    lim_{t \to \infty} \bar b_i(t) = b_i   a.s.    (2.2)

Proof. Using (1.1) we get

    \bar b_i(t) = (1/t) \int_0^t dX_i(s)/X_i(s) = (1/t) \int_0^t b_i ds + (1/t) \sum_{j=1}^d \int_0^t \sigma_{ij} dW_j(s) = b_i + \sum_{j=1}^d \sigma_{ij} W_j(t)/t,    (2.3)

and the result follows from the strong law of large numbers for Brownian motion.
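As a numerical illustration (an editorial sketch, not part of the paper), the integral in (2.1) can be approximated on a finely sampled path by a left-point, Ito-type Riemann sum. The helper below assumes the path X and step dt = 0.01 produced by the sketch in Section 1.

    import numpy as np

    def drift_estimator_continuous(X, dt):
        """Left-point Riemann-sum approximation of bar_b(t) = (1/t) int_0^t dX(s)/X(s)
        in (2.1), evaluated at every grid time; X has shape (n+1, d)."""
        increments = np.diff(X, axis=0) / X[:-1]             # dX_i(t_l) / X_i(t_{l-1})
        t = dt * np.arange(1, X.shape[0])                    # t_1, ..., t_n
        return np.cumsum(increments, axis=0) / t[:, None]    # bar_b(t_l), shape (n, d)

    bar_b = drift_estimator_continuous(X, dt=0.01)
    print(bar_b[-1])   # close to b for a long horizon, as predicted by Proposition 2.1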

Remark 2.1 The rate of convergence of this estimator is discussed in [DPD2] using the law of the iterated logarithm.

We can now turn our attention to A. We observe that there is no need to estimate the entries of A: in fact, they are known after an arbitrarily small period of time of positive length, provided that we have continuous observations of the state. It is enough to evaluate a stochastic integral to get the exact values.

Proposition 2.2 For every i, j = 1,...,d and t > 0,

    A_{ij} = (1/t) \int_0^t d(X_i(s)X_j(s)) / (X_i(s)X_j(s)) - \bar b_i(t) - \bar b_j(t).    (2.4)

Proof. Applying Ito's differential rule we get

    d(X_i(s)X_j(s)) = X_i(s) dX_j(s) + X_j(s) dX_i(s) + (1/2) tr( [0 1; 1 0] G(s) G(s)^T ) ds,
                      where G(s) := [X_i(s)\sigma_{i1} ... X_i(s)\sigma_{id}; X_j(s)\sigma_{j1} ... X_j(s)\sigma_{jd}],
                    = X_i(s) dX_j(s) + X_j(s) dX_i(s) + \sum_{k=1}^d \sigma_{ik}\sigma_{jk} X_i(s)X_j(s) ds;    (2.5)

dividing both sides by X_i(s)X_j(s) and integrating from 0 to t we get the result.

3 Discrete observations I

In this section we define estimators for b and A for the case in which we have observations only at the discrete moments {t_k := k\delta}_{k=0}^\infty, \delta > 0. We exploit the fact that explicit solutions of equations (1.1) are known for i = 1,...,d, and are given by ([KS])

    X_i(t) = x_{0,i} \exp{ (b_i - (1/2) \sum_{j=1}^d \sigma_{ij}^2) t + \sum_{j=1}^d \sigma_{ij} W_j(t) }.    (3.1)

We proceed by giving a discretization of the estimators in (2.1) and (2.4); let

    \hat b_i(k) := (1/(k\delta)) \sum_{l=1}^k [X_i(t_l) - X_i(t_{l-1})] / X_i(t_{l-1}),    (3.2)

    \hat A_{ij}(k) := (1/(k\delta)) \sum_{l=1}^k { [X_i(t_l)X_j(t_l) - X_i(t_{l-1})X_j(t_{l-1})] / [X_i(t_{l-1})X_j(t_{l-1})]
                      - [X_i(t_l) - X_i(t_{l-1})] / X_i(t_{l-1}) - [X_j(t_l) - X_j(t_{l-1})] / X_j(t_{l-1}) }.    (3.3)

Remark 3.1 If we define \theta_i^\delta(k) by

    \theta_i^\delta(k) := [X_i(t_k) - X_i(t_{k-1})] / X_i(t_{k-1}),    (3.4)

then the definitions given in (3.2) and (3.3) are equivalent to the following:

    \hat b_i(k) := (1/(k\delta)) \sum_{l=1}^k \theta_i^\delta(l),    (3.5)

    \hat A_{ij}(k) := (1/(k\delta)) \sum_{l=1}^k \theta_i^\delta(l) \theta_j^\delta(l).    (3.6)

Remark 3.2 By the definition of the Ito stochastic integral, as \delta \to 0 we have that \hat b_i(k) and \hat A_{ij}(k) converge to \bar b_i(t_k) and A_{ij}, respectively.

The estimators defined in (3.5) and (3.6) converge almost surely, as k approaches infinity, to quantities that depend not only on b and A but also on \delta.

Theorem 3.1 For every \delta > 0 and i = 1,...,d,

    lim_{k \to \infty} \hat b_i(k) = (exp{\delta b_i} - 1)/\delta   a.s.,    (3.7)

and for every \delta > 0 and i, j = 1,...,d,

    lim_{k \to \infty} \hat A_{ij}(k) = [exp{(b_i + b_j)\delta + A_{ij}\delta} - exp{\delta b_i} - exp{\delta b_j} + 1]/\delta   a.s.    (3.8)

Proof. Let Y_i(t) := log(X_i(t)); it follows from (3.1) that at time t_{k+1} the distribution of Y_i is given by

    Y_i(t_{k+1}) ~ Y_i(t_k) + (b_i - (1/2) A_{ii})\delta + \sqrt{\delta} \sum_{m=1}^d \sigma_{im} w_m(k+1),    (3.9)

where {w(k+1) := (w_1(k+1),...,w_d(k+1))}_{k=0}^\infty is a sequence of independent standard Gaussian random vectors and the symbol ~ indicates that the random variables on the two sides have the same distribution. From this it follows that \theta_i^\delta(l), defined in (3.4), satisfies

    \theta_i^\delta(l) ~ exp{ (b_i - (1/2) A_{ii})\delta + \sqrt{\delta} \sum_{m=1}^d \sigma_{im} w_m(l) } - 1.    (3.10)

If we fix i and \delta, the sequence {\theta_i^\delta(l)}_{l=1}^\infty is a sequence of independent, identically distributed random variables.

Applying the strong law of large numbers (see [Sh]) together with (3.5) we get

    lim_{k \to \infty} \hat b_i(k) = lim_{k \to \infty} (1/(k\delta)) \sum_{l=1}^k \theta_i^\delta(l) = (exp{\delta b_i} - 1)/\delta,    (3.11)

which proves (3.7). For the proof of (3.8) observe that

    \theta_i^\delta(l) \theta_j^\delta(l) = exp{ \delta(b_i + b_j - (1/2)(A_{ii} + A_{jj})) + \sqrt{\delta} \sum_{m=1}^d (\sigma_{im} + \sigma_{jm}) w_m(l) } - \theta_i^\delta(l) - \theta_j^\delta(l) - 1,    (3.12)

and, similarly to the previous case, using (3.10), (3.12) and the strong law of large numbers we obtain the result.

In order to get results on the rate of convergence of these estimators, we use a version of the law of the iterated logarithm proved by Hartman and Wintner (see [Sh], pages 372-374).

Theorem 3.2 Let {\xi_i}_{i=1}^\infty be a sequence of independent, identically distributed random variables with E{\xi_i} = 0 and E{\xi_i^2} = \sigma^2 > 0; let S_n = \sum_{i=1}^n \xi_i. Then

    lim sup_{n \to \infty} |S_n| / \sqrt{2 \sigma^2 n log log n} = 1   a.s.    (3.13)

Using the previous theorem we can prove the following results.

Proposition 3.3 For every i = 1,...,d and \delta > 0 we have that

    lim sup_{k \to \infty} |\hat b_i(k) - b^1_{i,\delta}| \sqrt{k} / \sqrt{2 \sigma^2_{b,i,\delta} log log k} = 1   a.s.,    (3.14)

where

    b^1_{i,\delta} = (exp{\delta b_i} - 1)/\delta    (3.15)

and

    \sigma^2_{b,i,\delta} = E{(\theta_i^\delta)^2}/\delta^2 - (b^1_{i,\delta})^2.    (3.16)

Moreover, for every i, j = 1,...,d and \delta > 0,

    lim sup_{k \to \infty} |\hat A_{ij}(k) - A^1_{ij,\delta}| \sqrt{k} / \sqrt{2 \sigma^2_{A,ij,\delta} log log k} = 1   a.s.,    (3.17)

where

    A^1_{ij,\delta} = [exp{(b_i + b_j)\delta + A_{ij}\delta} - exp{\delta b_i} - exp{\delta b_j} + 1]/\delta    (3.18)

and

    \sigma^2_{A,ij,\delta} = E{(\theta_i^\delta \theta_j^\delta)^2}/\delta^2 - (A^1_{ij,\delta})^2.    (3.19)

Proof. We have

    P{ lim sup_{k \to \infty} |\hat b_i(k) - b^1_{i,\delta}| \sqrt{k} / \sqrt{2 \sigma^2_{b,i,\delta} log log k} = 1 }
      = P{ lim sup_{k \to \infty} |(1/(k\delta)) \sum_{l=1}^k \theta_i^\delta(l) - b^1_{i,\delta}| \sqrt{k} / \sqrt{2 \sigma^2_{b,i,\delta} log log k} = 1 }
      = P{ lim sup_{k \to \infty} |\sum_{l=1}^k (\theta_i^\delta(l)/\delta - b^1_{i,\delta})| / \sqrt{2 \sigma^2_{b,i,\delta} k log log k} = 1 } = 1

by Theorem 3.2, applied to the centered i.i.d. variables \xi_l = \theta_i^\delta(l)/\delta - b^1_{i,\delta}. The proof of the second part of the statement is analogous.

Remark 3.3 The expected values in (3.16) and (3.19) can easily be evaluated using the fact that if X ~ N(a, \sigma^2) then E{exp{tX}} = exp{at + \sigma^2 t^2 / 2}.

[Figure 1: paths of the estimates of A_{11}, A_{12}, A_{22} plotted against t(k), for delta = 0.1.]
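The biased limits in Theorem 3.1 are easy to verify numerically. The sketch below is an editorial illustration, not the paper's code: it implements the discretized estimators (3.5)-(3.6) and the limits (3.7)-(3.8), reusing the simulated path X from the sketch in Section 1 and subsampling it with spacing delta.

    import numpy as np

    def discretized_estimators(X_obs, delta):
        """hat_b_i(k) and hat_A_ij(k) of (3.5)-(3.6) from discrete observations
        X_obs of shape (k+1, d) taken delta time units apart."""
        theta = np.diff(X_obs, axis=0) / X_obs[:-1]     # theta_i(l) of (3.4)
        k = theta.shape[0]
        b_hat = theta.mean(axis=0) / delta              # (1/(k delta)) sum_l theta_i(l)
        A_hat = (theta.T @ theta) / (k * delta)         # (1/(k delta)) sum_l theta_i(l) theta_j(l)
        return b_hat, A_hat

    def biased_limits(b, A, delta):
        """Almost-sure limits b^1_{i,delta} and A^1_{ij,delta} of Theorem 3.1."""
        b, A = np.asarray(b, dtype=float), np.asarray(A, dtype=float)
        b1 = (np.exp(delta * b) - 1.0) / delta
        eb = np.exp(delta * b)
        A1 = (np.exp(delta * (b[:, None] + b[None, :] + A))
              - eb[:, None] - eb[None, :] + 1.0) / delta
        return b1, A1

    # Observe the simulated path every delta = 0.1 time units (every 10th point of the dt = 0.01 grid):
    b_hat, A_hat = discretized_estimators(X[::10], delta=0.1)
    b1, A1 = biased_limits(b=[0.6, 0.5], A=[[0.98, 1.05], [1.05, 1.25]], delta=0.1)
    # b_hat is close to b1 (not to b), and A_hat is close to A1 (not to A), as Theorem 3.1 predicts.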

[Figure 2: paths of the estimates of b_1, b_2 plotted against t(k), for delta = 0.1.]

4 Discrete observations II

Let us now define strongly consistent estimators for b and A in the case in which we have discrete observations. For i, j = 1,...,d, let \tilde b_i(0) and \tilde A_{ij}(0) be set equal to arbitrary constants. Let Q and R denote

    Q(\delta, i, k) = \delta \hat b_i(k) + 1,    (4.1)

    R(\delta, i, j, k) = \delta \hat A_{ij}(k) - 1 + exp{\delta \tilde b_i(k)} + exp{\delta \tilde b_j(k)},    (4.2)

and define recursively, for every i = 1,...,d and k > 0,

    \tilde b_i(k) := (1/\delta) log{Q(\delta, i, k)}   if Q(\delta, i, k) > 0,
    \tilde b_i(k) := \tilde b_i(k-1)                    otherwise,    (4.3)

and, for i, j = 1,...,d and k > 0,

    \tilde A_{ij}(k) := (1/\delta) log{R(\delta, i, j, k)} - \tilde b_i(k) - \tilde b_j(k)   if R(\delta, i, j, k) > 0,
    \tilde A_{ij}(k) := \tilde A_{ij}(k-1)                                                    otherwise.    (4.4)
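A direct way to read (4.1)-(4.4) is as an inversion of the limits in Theorem 3.1: Q recovers exp{\delta b_i}, R recovers exp{(b_i + b_j + A_ij)\delta}, and taking logarithms undoes the discretization bias. The sketch below is an editorial illustration under the notation above, not the author's code; it performs one correction step on the discretized estimates of the previous sketch, with the fallback arguments playing the role of the "otherwise" branches in (4.3)-(4.4).

    import numpy as np

    def corrected_estimators(b_hat, A_hat, delta, prev_b=None, prev_A=None):
        """One step of the bias correction (4.1)-(4.4); prev_b, prev_A are the corrected
        estimates kept from stage k-1, used when the logarithms are undefined."""
        b_hat, A_hat = np.asarray(b_hat, dtype=float), np.asarray(A_hat, dtype=float)
        Q = delta * b_hat + 1.0                                        # (4.1)
        b_tilde = np.where(Q > 0, np.log(np.where(Q > 0, Q, 1.0)) / delta,
                           prev_b if prev_b is not None else 0.0)      # (4.3)
        R = (delta * A_hat - 1.0
             + np.exp(delta * b_tilde)[:, None]
             + np.exp(delta * b_tilde)[None, :])                       # (4.2)
        A_tilde = np.where(R > 0,
                           np.log(np.where(R > 0, R, 1.0)) / delta
                           - b_tilde[:, None] - b_tilde[None, :],
                           prev_A if prev_A is not None else 0.0)      # (4.4)
        return b_tilde, A_tilde

    # Undo the delta = 0.1 bias of the estimates computed in the previous sketch:
    b_tilde, A_tilde = corrected_estimators(b_hat, A_hat, delta=0.1)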

[Figure 3: paths of the estimates of A_{11}, A_{12}, A_{22} plotted against t(k), for delta = 0.01.]

We have the following result.

Theorem 4.1 For every \delta > 0 and i = 1,...,d,

    lim_{k \to \infty} \tilde b_i(k) = b_i   a.s.,    (4.5)

and for every \delta > 0 and i, j = 1,...,d,

    lim_{k \to \infty} \tilde A_{ij}(k) = A_{ij}   a.s.,    (4.6)

i.e., the estimators defined above are strongly consistent.

Proof. The strong consistency of the estimators follows straightforwardly from Theorem 3.1.

5 Simulation

In this section we analyze, through a simulation, the effect that different choices of \delta (the length of the interval between observations) have on the discretized estimators, thereby illustrating the results of Theorem 3.1. We generate a sample path of the solution of (1.1) using the property mentioned

[Figure 4: paths of the estimates of b_1, b_2 plotted against t(k), for delta = 0.01.]

in (3.9). We simulate the system for the case in which d = 2; the simulation step is \Delta t = 0.01 and the time horizon is T = 250. Using the sample paths for X_1(t) and X_2(t) obtained from this simulation, we compute the values of the estimators for the cases in which we have

1) 10 observations per unit of time (i.e. \delta = 0.1),
2) 100 observations per unit of time (i.e. \delta = 0.01).

The values of the parameters that have been used are

    b := (0.6, 0.5)^T,    \sigma = [0.7  0.7; 1  0.5],    A = \sigma \sigma^T = [0.98  1.05; 1.05  1.25].

The theoretical values, obtained in Theorem 3.1, for these two cases are given in Table 1.
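For reference, the entries of Table 1 below can be recomputed directly from Theorem 3.1 with these parameter values. The short check below is an editorial illustration; it reuses the biased_limits helper defined in the sketch of Section 3.

    import numpy as np

    b = np.array([0.6, 0.5])
    sigma = np.array([[0.7, 0.7], [1.0, 0.5]])
    A = sigma @ sigma.T                        # [[0.98, 1.05], [1.05, 1.25]]

    for delta in (0.01, 0.1):
        b1, A1 = biased_limits(b, A, delta)    # limits of Theorem 3.1
        print(delta, np.round(b1, 4), np.round([A1[0, 0], A1[0, 1], A1[1, 1]], 4))
    # delta = 0.01: b1 = (0.6018, 0.5013), (A1_11, A1_12, A1_22) = (1.0003, 1.0702, 1.2730)
    # delta = 0.1 : b1 = (0.6184, 0.5127), (A1_11, A1_12, A1_22) = (1.1991, 1.2675, 1.4978)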

    \delta    b^1_{1,\delta}   b^1_{2,\delta}   A^1_{11,\delta}   A^1_{12,\delta}   A^1_{22,\delta}
    0.01      0.6018           0.5013           1.0003            1.0702            1.2730
    0.1       0.6184           0.5127           1.1991            1.2675            1.4978

    Table 1

Figures 1-4 show that the results obtained are exactly what we expect from the theory.

Acknowledgment

I wish to thank Professor T. E. Duncan and Professor B. Pasik-Duncan for the stimulating conversations on this topic.

References

[BF] Bielecki, T. and Frei, M. G., Identification and Control in the Partially Known Merton Portfolio Selection Model, Journal of Optimization Theory and Applications, 77, 399-420, (1993).

[DPD1] Duncan, T. E. and Pasik-Duncan, B., Adaptive Control of a Continuous-Time Investment and Consumption Model, Journal of Optimization Theory and Applications, 61, 47-52, (1989).

[DPD2] Duncan, T. E. and Pasik-Duncan, B., Rate of Convergence for an Estimator in a Portfolio and Consumption Model, Journal of Optimization Theory and Applications, 61, 53-59, (1989).

[K] Karatzas, I., Optimization Problems in the Theory of Continuous Trading, SIAM Journal on Control and Optimization, 27, 1221-1259, (1989).

[KS] Karatzas, I. and Shreve, S. E., Brownian Motion and Stochastic Calculus, Springer-Verlag, (1988).

[Sh] Shiryayev, A. N., Probability, Springer-Verlag, (1984).

[S] Stoyanov, J. M., Problems of Estimation in Continuous-Discrete Stochastic Models, Proceedings of the Seventh Conference on Probability Theory (ed. M. Iosifescu), E.A.R.S.R., (1984).

This electronic publication and its contents are copyright 1994 by Ulam Quarterly. Permission is hereby granted to give away the journal and its contents, but no one may "own" it. Any and all financial interest is hereby assigned to the acknowledged authors of the individual texts. This notification must accompany all distribution of Ulam Quarterly.