Mutual Information for Stochastic Differential Equations*


INFORMATION AND CONTROL 19, 265-271 (1971)

TYRONE E. DUNCAN

Department of Computer, Information and Control Engineering, College of Engineering, University of Michigan, Ann Arbor, Michigan 48104

Mutual information is calculated for processes described by stochastic differential equations. The expression for the mutual information has an interpretation in filtering theory.

1. INTRODUCTION

In many problems in information theory it is necessary to calculate the mutual information between two processes. For some processes described by stochastic differential equations, we shall calculate the mutual information. The expression for the mutual information is obtained as a function of certain filtering errors which are related to some filtering problems for the original processes.

2. PRELIMINARIES

The processes that we shall consider will be described by the following stochastic differential equations:

$$dX_t = a(t, X_t, Y_t)\,dt + dB_t, \tag{1}$$

$$dY_t = c(t, X_t, Y_t)\,dt + d\tilde{B}_t, \tag{2}$$

where the functions $a$ and $c$ are continuous in $t$ and globally Lipschitz continuous in the remaining variables, $t \in [0, 1]$, $X_0 = Y_0 = 0$, and the processes $B$ and $\tilde{B}$ are independent standard Brownian motions. With the above assumptions on (1) and (2), the solutions exist, are unique, and are functionals of the past Brownian motions (K. Itô, 1961).

* Research supported by the Office of Naval Research under Contract 67-A-0181-0021, NR 041-376 and by the United States Air Force Grant AFOSR 69-1767. Reproduction in whole or in part is permitted for any purpose of the United States Government.
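The coupled system (1)-(2) can be simulated with an Euler-Maruyama discretization. A minimal sketch follows; the particular drifts `a` and `c` below are hypothetical choices made only for illustration (both satisfy the Lipschitz assumption), not functions from the paper.

```python
import numpy as np

def simulate_pair(a, c, n_steps=1000, T=1.0, seed=None):
    """Euler-Maruyama simulation of the coupled SDEs (1)-(2):
    dX = a(t, X, Y) dt + dB,  dY = c(t, X, Y) dt + dB~,
    with X_0 = Y_0 = 0 and B, B~ independent Brownian motions."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.zeros(n_steps + 1)
    Y = np.zeros(n_steps + 1)
    dB = rng.normal(0.0, np.sqrt(dt), n_steps)    # increments of B
    dBt = rng.normal(0.0, np.sqrt(dt), n_steps)   # increments of B~, independent of B
    for k in range(n_steps):
        t = k * dt
        X[k + 1] = X[k] + a(t, X[k], Y[k]) * dt + dB[k]
        Y[k + 1] = Y[k] + c(t, X[k], Y[k]) * dt + dBt[k]
    return X, Y

# Hypothetical globally Lipschitz drifts, chosen only for illustration.
a = lambda t, x, y: -x
c = lambda t, x, y: np.tanh(x)
X, Y = simulate_pair(a, c, seed=0)
```

The Lipschitz and growth conditions quoted above are what guarantee that this discretization converges to the unique strong solution as the step size shrinks.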

For subsequent discussion it will be convenient to denote the measures for the processes $B$, $\tilde{B}$, $X$, $Y$, and the pair $(X, Y)$ described by (1) and (2) as $\mu_B$, $\mu_{\tilde{B}}$, $\mu_X$, $\mu_Y$, and $\mu_{XY}$, respectively. It is known (Girsanov, 1960) that the measures $\mu_{XY}$ and $\mu_B\mu_{\tilde{B}}$ are mutually absolutely continuous, i.e., $\mu_{XY} \sim \mu_B\mu_{\tilde{B}}$. We shall prove an additional result for the absolute continuity of certain measures which will be useful for calculating the mutual information between the processes $X$ and $Y$.

LEMMA. Let $\mu_{\tilde{B}}$ and $\mu_Y$ be the measures for the processes $\tilde{B}$ and $Y$, respectively, given in (2). Then $\mu_Y \sim \mu_{\tilde{B}}$ and

$$\frac{d\mu_Y}{d\mu_{\tilde{B}}} = \exp\left[\int \hat{c}_s\,dY_s - \tfrac{1}{2}\int \hat{c}_s^2\,ds\right], \tag{3}$$

where

$$\hat{c}_s = E[c(s, X_s, Y_s) \mid Y_u;\ 0 \le u \le s] \tag{4}$$

and $Y$ in (3) has the $\mu_{\tilde{B}}$ measure.

Proof. Using a result of Girsanov (1960) we know that $\mu_{XY} \sim \mu_B\mu_{\tilde{B}}$ and

$$\frac{d\mu_{XY}}{d\mu_B\,d\mu_{\tilde{B}}} = \exp\left[\int a_s\,dX_s - \tfrac{1}{2}\int a_s^2\,ds + \int c_s\,dY_s - \tfrac{1}{2}\int c_s^2\,ds\right], \tag{5}$$

where $a_s \equiv a(s, X_s, Y_s)$, $c_s \equiv c(s, X_s, Y_s)$, and $X$ and $Y$ have the $\mu_B$ and $\mu_{\tilde{B}}$ measures, respectively. We only need to show that when we reduce the Radon-Nikodym derivative (5) to $d\mu_Y/d\mu_{\tilde{B}}$, we have (3). Let

$$\Lambda = \frac{d\mu_{XY}}{d\mu_B\,d\mu_{\tilde{B}}} \tag{6}$$

and

$$\Psi_s = E[\Lambda_s \mid Y_u;\ 0 \le u \le s]. \tag{7}$$

The functional $\Psi$ has the following representation (Kunita and S. Watanabe, 1967; Hitsuda, 1968; Duncan, 1970a):

$$\Psi = \exp\left[\int \phi_s\,dY_s - \tfrac{1}{2}\int \phi_s^2\,ds\right], \tag{8}$$

where $\phi_s$ is measurable with respect to $\sigma(Y_u;\ 0 \le u \le s)$ (the sub-$\sigma$-field generated by $\{Y_u;\ 0 \le u \le s\}$, a standard Brownian motion since the measure is $\mu_{\tilde{B}}$) and

$$\int \phi_s^2\,ds < \infty \quad \text{a.s. } \mu_{\tilde{B}}. \tag{9}$$
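A Girsanov density such as (5) integrates to one under the reference Wiener measure, and this is easy to check numerically in the simplest case. The sketch below makes a hypothetical simplifying assumption not in the paper: a constant drift $c_s \equiv \theta$, for which the exponent reduces to $\theta Y_1 - \theta^2/2$ and involves only the endpoint $Y_1 \sim N(0, 1)$.

```python
import numpy as np

# Monte Carlo check that a Girsanov likelihood ratio of the form in (3)/(5)
# has mean one under the reference Wiener measure.  The constant drift
# theta is a hypothetical choice for illustration: with c_s = theta the
# exponent exp[∫ c dY - ½ ∫ c² ds] becomes exp(theta*Y_1 - theta²/2).
rng = np.random.default_rng(0)
theta = 0.7
Y1 = rng.normal(0.0, 1.0, 200_000)              # Y_1 under the Wiener measure
Lambda = np.exp(theta * Y1 - 0.5 * theta ** 2)  # likelihood ratio samples
print(Lambda.mean())                            # ≈ 1
```

That the sample mean is close to one reflects exactly the normalization property that makes (5) a Radon-Nikodym derivative.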

Applying the formula for stochastic differentials (K. Itô, 1951) to $\Psi$, we have

$$\Psi_t = 1 + \int_0^t \phi_s\Psi_s\,dY_s. \tag{10}$$

We shall now relate $\phi$ to the terms in the exponential for $\Lambda$. Using the formula for stochastic differentials, the functional $\Lambda$ can be written as

$$\Lambda_t = 1 + \int_0^t a_s\Lambda_s\,dX_s + \int_0^t c_s\Lambda_s\,dY_s. \tag{11}$$

Since $\Lambda$ is a Radon-Nikodym derivative, the stochastic integrals in (11) are martingales, and since $X$ and $Y$ in (11) are independent Brownian motions, we have

$$E\left[\int_0^t a_s\Lambda_s\,dX_s \,\Big|\, Y_u;\ 0 \le u \le t\right] = 0. \tag{12}$$

Consider now the second stochastic integral in (11). Clearly $\int (c_s\Lambda_s)^2\,ds < \infty$ a.s. Thus we can define the stochastic integral as a limit in $L^1$ of integrals of uniformly stepwise functions (K. Itô, 1951). Since the product $c\Lambda$ is almost surely sample function continuous, we can select an arbitrary sequence of partitions of $[0, 1]$ that become dense such that the integrals of the uniformly stepwise functions $\{\alpha_n\}$ defined as

$$\alpha_n(s) = c_{t_i^{(n)}}\Lambda_{t_i^{(n)}}, \qquad t_i^{(n)} < s \le t_{i+1}^{(n)},$$

where $\{t_i^{(n)}\}$ is a partition of $[0, 1]$ with $0 = t_0^{(n)} < t_1^{(n)} < \cdots < t_{p_n}^{(n)} = 1$, converge in $L^1$ to $\int c_s\Lambda_s\,dY_s$. Fix $s \in [0, 1]$. By properties of the Radon-Nikodym derivative and conditional expectation we have

$$E[c_s\Lambda_s \mid Y_u;\ 0 \le u \le s] = \hat{c}_s\Psi_s.$$
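The proof's device of approximating the stochastic integral by integrals of uniformly stepwise functions is the usual left-endpoint (Itô) construction, and it can be illustrated numerically. The sketch below uses the Brownian path itself as integrand, an assumption made only so the limit has the closed form from Itô's formula, $\int_0^1 W\,dW = \tfrac{1}{2}(W_1^2 - 1)$.

```python
import numpy as np

# The stochastic integral as a limit of left-endpoint (stepwise) sums, in the
# spirit of the approximants alpha_n in the proof.  The integrand here is the
# Brownian path W itself, so the limiting integral is (W_1^2 - 1)/2.
rng = np.random.default_rng(1)
n = 100_000
dW = rng.normal(0.0, np.sqrt(1.0 / n), n)        # Brownian increments on [0, 1]
W = np.concatenate([[0.0], np.cumsum(dW)])       # the Brownian path
ito_sum = np.sum(W[:-1] * dW)                    # sum W(t_i)(W(t_{i+1}) - W(t_i))
exact = 0.5 * (W[-1] ** 2 - 1.0)                 # Ito's formula for ∫ W dW
print(abs(ito_sum - exact))                      # small; shrinks as n grows
```

Evaluating the integrand at the left endpoint of each partition interval is what makes the approximating sums martingales, the property the proof relies on in (12) and (15).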

Since the sub-$\sigma$-fields $\sigma(Y_u;\ 0 \le u \le s)$ are continuous and $\int \phi_s^2\,ds < \infty$ a.s., we have for all $t \in (0, 1)$

$$\lim_{n\to\infty} E\left[\int_0^1 \alpha_n(s)\,dY_s \,\Big|\, Y_u;\ 0 \le u \le t\right] = \int_0^t \hat{c}_s\Psi_s\,dY_s, \tag{13}$$

where the limit is taken with respect to the topology induced by convergence in probability. Since conditional expectation and $L^1$ limits commute, we also have

$$\lim_{n\to\infty} E\left[\int_0^1 \alpha_n(s)\,dY_s \,\Big|\, Y_u;\ 0 \le u \le t\right] = E\left[\int_0^t c_s\Lambda_s\,dY_s \,\Big|\, Y_u;\ 0 \le u \le t\right]. \tag{14}$$

This limit uses the topology of $L^1$ convergence. Therefore, we have for all $t$

$$E\left[\int_0^t c_s\Lambda_s\,dY_s \,\Big|\, Y_u;\ 0 \le u \le t\right] = \int_0^t \hat{c}_s\Psi_s\,dY_s \quad \text{a.s. } \mu_{\tilde{B}}, \tag{15}$$

so that $\phi$ is a member of the same equivalence class as $\hat{c}$. Thus

$$\Psi = \exp\left[\int \hat{c}_s\,dY_s - \tfrac{1}{2}\int \hat{c}_s^2\,ds\right]. \tag{16}$$

To show $\mu_{\tilde{B}} \ll \mu_Y$ the arguments proceed as above.

To compute the mutual information we shall use the following result obtained by Gelfand and Yaglom (1957), Chiang (1958), and Perez (1957).

THEOREM 1. Let $\xi$ and $\eta$ be two random vectors with joint probability measure $P_{\xi\eta}$ and marginal probability measures $P_\xi$ and $P_\eta$, respectively. Assume that $P_{\xi\eta} \ll P_\xi P_\eta$. Then the mutual information between $\xi$ and $\eta$ is

$$J(\xi, \eta) = \int \alpha(x, y)\log\alpha(x, y)\,dP_\xi(x)\,dP_\eta(y), \tag{17}$$

where

$$\alpha(x, y) = \frac{dP_{\xi\eta}(x, y)}{dP_\xi(x)\,dP_\eta(y)}. \tag{18}$$
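In the discrete case the density $\alpha$ in (18) reduces to $p(x,y)/(p(x)p(y))$ and (17) becomes the familiar double sum for mutual information. A small sketch, where the joint table is a hypothetical example chosen only for illustration (natural logarithm, so the answer is in nats):

```python
import numpy as np

# Theorem 1 in the discrete case: alpha = p(x,y) / (p(x) p(y)) and
# J(xi, eta) = sum over (x, y) of p(x) p(y) * alpha * log(alpha).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])          # hypothetical joint distribution
p_x = p_xy.sum(axis=1)                 # marginal of xi
p_y = p_xy.sum(axis=0)                 # marginal of eta
alpha = p_xy / np.outer(p_x, p_y)      # discrete analogue of (18)
J = np.sum(np.outer(p_x, p_y) * alpha * np.log(alpha))   # formula (17)
print(J)                               # ≈ 0.1927 nats
```

Note that $p(x)p(y)\,\alpha\log\alpha = p(x,y)\log[p(x,y)/(p(x)p(y))]$, so (17) is the standard definition of mutual information written against the product measure.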

3. MAIN RESULT

We shall now obtain an expression for the mutual information between the processes $X$ and $Y$.

THEOREM 2. The mutual information $J(X, Y)$ between the processes $\{X_u;\ 0 \le u \le 1\}$ and $\{Y_u;\ 0 \le u \le 1\}$ described by (1) and (2) is

$$J(X, Y) = \tfrac{1}{2}E\left[\int_0^1 \big(a(s, X_s, Y_s) - \hat{a}(s, X_s, Y_s)\big)^2\,ds + \int_0^1 \big(c(s, X_s, Y_s) - \hat{c}(s, X_s, Y_s)\big)^2\,ds\right], \tag{19}$$

where

$$\hat{a}(s, X_s, Y_s) = E[a(s, X_s, Y_s) \mid X_u;\ 0 \le u \le s], \qquad \hat{c}(s, X_s, Y_s) = E[c(s, X_s, Y_s) \mid Y_u;\ 0 \le u \le s]. \tag{20}$$

Proof. To compute the mutual information $J(X, Y)$, using Theorem 1, we must determine the Radon-Nikodym derivative

$$\frac{d\mu_{XY}(X, Y)}{d\mu_X(X)\,d\mu_Y(Y)}. \tag{21}$$

Using Girsanov's result (1960) for absolute continuity for diffusion processes, we have

$$\frac{d\mu_{XY}}{d\mu_B\,d\mu_{\tilde{B}}} = \exp\left[\int a_s\,dX_s - \tfrac{1}{2}\int a_s^2\,ds + \int c_s\,dY_s - \tfrac{1}{2}\int c_s^2\,ds\right], \tag{22}$$

where, as in (5), $X$ and $Y$ are Brownian motions with measures $\mu_B$ and $\mu_{\tilde{B}}$, respectively. From the Lemma, we have

$$\frac{d\mu_X}{d\mu_B} = \exp\left[\int \hat{a}_s\,dX_s - \tfrac{1}{2}\int \hat{a}_s^2\,ds\right] \tag{23}$$

and

$$\frac{d\mu_Y}{d\mu_{\tilde{B}}} = \exp\left[\int \hat{c}_s\,dY_s - \tfrac{1}{2}\int \hat{c}_s^2\,ds\right], \tag{24}$$

where

$$\hat{a}_s = E[a(s, X_s, Y_s) \mid X_u;\ 0 \le u \le s]. \tag{25}$$

Let

$$\Phi = \frac{d\mu_{XY}}{d\mu_X\,d\mu_Y}. \tag{26}$$

We have

$$J = \int \Phi\log\Phi\,d\mu_X\,d\mu_Y. \tag{27}$$

Using the chain rule for Radon-Nikodym derivatives, we have

$$\Phi = \frac{d\mu_{XY}}{d\mu_B\,d\mu_{\tilde{B}}}\,\frac{d\mu_B}{d\mu_X}\,\frac{d\mu_{\tilde{B}}}{d\mu_Y}. \tag{28}$$

By the transformation of measures, we have

$$\log\Phi = \int (a_s - \hat{a}_s)\,dX_s + \int (c_s - \hat{c}_s)\,dY_s - \tfrac{1}{2}\int (a_s^2 + c_s^2)\,ds + \tfrac{1}{2}\int (\hat{a}_s^2 + \hat{c}_s^2)\,ds. \tag{29}$$

Since we now have the measure $\mu_{XY}$, we use (1) and (2) to rewrite $\log\Phi$ as

$$\log\Phi = \int (a_s - \hat{a}_s)(a_s\,ds + dB_s) + \int (c_s - \hat{c}_s)(c_s\,ds + d\tilde{B}_s) - \tfrac{1}{2}\int (a_s^2 + c_s^2)\,ds + \tfrac{1}{2}\int (\hat{a}_s^2 + \hat{c}_s^2)\,ds. \tag{30}$$

Since the stochastic integrals are martingales on $\mu_{XY}$, their expectations vanish, and the remaining integrands combine as $(a_s - \hat{a}_s)a_s - \tfrac{1}{2}a_s^2 + \tfrac{1}{2}\hat{a}_s^2 = \tfrac{1}{2}(a_s - \hat{a}_s)^2$ (and likewise for $c$), so we obtain

$$J(X, Y) = E_{\mu_{XY}}[\log\Phi] = \tfrac{1}{2}E\left[\int_0^1 (a_s - \hat{a}_s)^2\,ds + \int_0^1 (c_s - \hat{c}_s)^2\,ds\right]. \tag{31}$$

Remark 1. In many applications the stochastic differential equation for $X$, the signal, does not depend on $Y$, the observations, and Theorem 2 reduces to various known results (Gelfand and Yaglom, 1957; Van Trees, 1968; Baker, 1969; Duncan, 1970b).

Remark 2. We can calculate mutual information when the diffusion coefficient for (1) is a function of $t$ and $X$ and the diffusion coefficient for (2) is a function of $t$ and $Y$. If the former diffusion coefficient is a function of $Y$
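Formula (19) can be evaluated in closed form in the linear-Gaussian situation of Remark 1. The sketch below makes hypothetical choices not in the paper: signal $dX = -X\,dt + dB$ (so $a$ does not depend on $Y$ and the first term of (19) vanishes) and observations $dY = X\,dt + d\tilde{B}$ (so $\hat{c}_s = E[X_s \mid Y_u;\ 0 \le u \le s]$ is the Kalman-Bucy filter estimate). In that standard linear filtering setting the error variance $E(X_s - \hat{X}_s)^2 = P(s)$ solves a Riccati equation, and (19) reduces to $\tfrac{1}{2}\int_0^1 P(s)\,ds$.

```python
import numpy as np

# Theorem 2 in a hypothetical linear-Gaussian special case (Remark 1):
#   dX = -X dt + dB,   dY = X dt + dB~,   X_0 = 0 known.
# Kalman-Bucy error variance: P' = 1 - 2P - P^2, P(0) = 0, and
#   J(X, Y) = (1/2) * integral_0^1 P(s) ds   (in nats).
n = 100_000
dt = 1.0 / n
P = np.empty(n + 1)
P[0] = 0.0
for k in range(n):
    P[k + 1] = P[k] + (1.0 - 2.0 * P[k] - P[k] ** 2) * dt  # forward Euler
# trapezoidal rule for (1/2) * integral of P over [0, 1]
J = 0.5 * dt * (P.sum() - 0.5 * (P[0] + P[-1]))
print(J)  # ≈ 0.133 nats
```

As $t$ grows, $P(t)$ approaches the steady-state error variance $\sqrt{2} - 1$, the positive root of $1 - 2P - P^2 = 0$; the mutual information over $[0, 1]$ is half the area under the error-variance curve, which is exactly the filtering-theory interpretation stated in the abstract.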

or the latter is a function of $X$, then we have a singular situation which causes the mutual information to be infinite (Gelfand and Yaglom, 1957; Pinsker, 1964).

RECEIVED: July 29, 1969; REVISED: May 12, 1971

REFERENCES

BAKER, C. R. (1969), Mutual information for Gaussian processes, to appear.

CHIANG, TSE-PEI (1958), Remark on the definition of the quantity of information, Teor. Veroyatnost. i Primenen. 3, 99-103; English translation in Amer. Math. Soc. Transl. Ser. 2, 12 (1959), 247-250.

DUNCAN, T. E. (1970a), On the absolute continuity of measures, Ann. Math. Stat. 41, 30-38.

DUNCAN, T. E. (1970b), On the calculation of mutual information, SIAM J. Appl. Math. 19, 215-220.

GELFAND, I. M. AND YAGLOM, A. M. (1957), Calculation of the amount of information about a random function contained in another such function, Usp. Mat. Nauk 12, 3-52; English translation in Amer. Math. Soc. Transl. Ser. 2, 12 (1959), 199-246.

GIRSANOV, I. V. (1960), On transforming a certain class of stochastic processes by absolutely continuous substitution of measures, Theor. Probability Appl. 5, 285-301.

HITSUDA, M. (1968), Representation of Gaussian processes equivalent to Wiener process, Osaka J. Math. 5, 299-312.

ITÔ, K. (1951), On a formula concerning stochastic differentials, Nagoya Math. J. 3, 55-65.

ITÔ, K. (1961), "Lectures on Stochastic Processes," Tata Institute of Fundamental Research, Bombay, India.

KUNITA, H. AND WATANABE, S. (1967), On square integrable martingales, Nagoya Math. J. 30, 209-245.

PEREZ, A. (1957), Notions généralisées d'incertitude, d'entropie et d'information du point de vue de la théorie des martingales, in "Transactions of the First Prague Conference on Information Theory, Statistical Decision Functions, Random Processes," pp. 183-208, Czechoslovak Academy of Sciences, Prague.

PINSKER, M. S. (1964), "Information and Information Stability of Random Variables and Processes," Holden-Day, San Francisco.

VAN TREES, H. L. (1968), "Detection, Estimation, and Modulation Theory, Part 1," Wiley, New York.