Lecture 17: Stochastic Processes II


1 Continuous-time stochastic process

So far we have studied discrete-time stochastic processes. We studied the concept of Markov chains and martingales, time series analysis, and regression analysis on discrete-time stochastic processes. We now turn our focus to the study of continuous-time stochastic processes.

In most cases, it is difficult to exactly describe the probability distribution for continuous-time stochastic processes. This was also difficult for discrete-time stochastic processes, but for them we described the distribution in terms of the increments $X_{k+1} - X_k$ instead; this is impossible for continuous-time stochastic processes. An alternate way, which is commonly used, is to first describe the properties satisfied by the probability distribution, and then to show that there exists a probability distribution satisfying the given properties. Unfortunately, the second part above, the actual construction, requires a non-trivial amount of work and is beyond the scope of this class. Hence here we provide a brief introduction to the framework, and mostly just state the properties of the stochastic processes of interest. Interested readers can take more advanced probability courses for a deeper understanding.

To formally define a stochastic process, there needs to be an underlying probability space $(\Omega, P)$. A stochastic process $X$ is then a map from the universe $\Omega$ to the space of real functions defined over $[0, \infty)$. Hence the probability of the stochastic process taking a particular path in some set $A$ can be computed as the probability $P(X^{-1}(A))$. When there is a single stochastic process, it is more convenient to just consider $\Omega$ as the space of all possible paths. Then $P$ directly describes the probability distribution of the stochastic process. The more abstract view of taking an underlying abstract universe $\Omega$ is useful when there are several stochastic processes under consideration (for example, when changing measure). We use the letter $\omega$ to denote an element of $\Omega$, or one possible path of the process (in most cases, the two describe the same object).

2 Standard Brownian motion

We first introduce a continuous-time analogue of the simple random walk, known as the standard Brownian motion. It is also referred to as the Wiener process, named after Norbert Wiener, who was a professor at MIT. The first person who actually considered this process is Bachelier, who used Brownian motion to evaluate stocks and options in his Ph.D. thesis written in 1900 (see [3]).

Theorem 2.1. There exists a probability distribution over the set of continuous functions $B : [0, \infty) \to \mathbb{R}$ satisfying the following conditions:
(i) $B(0) = 0$,
(ii) (stationary) for all $0 \le s < t$, the distribution of $B(t) - B(s)$ is the normal distribution with mean 0 and variance $t - s$, and
(iii) (independent increments) the random variables $B(t_i) - B(s_i)$ are mutually independent if the intervals $[s_i, t_i]$ are non-overlapping.

We refer to a particular instance of a path chosen according to the Brownian motion as a sample Brownian path.

One way to think of standard Brownian motion is as a limit of simple random walks. To make this more precise, consider a simple random walk $Y_0, Y_1, \ldots$ whose increments are of mean 0 and variance 1. Let $Z$ be a piecewise linear function from $[0, 1]$ to $\mathbb{R}$ defined as
$$Z\left(\frac{t}{n}\right) = \frac{Y_t}{\sqrt{n}}, \qquad t = 0, \ldots, n,$$
and linear at other points. As we take larger values of $n$, the distribution of the path $Z$ will get closer to that of the standard Brownian motion. Indeed, we can check that the distribution of $Z(1)$ converges to the distribution of $N(0, 1)$, by the central limit theorem. More generally, the distribution of $Z(t)$ converges to $N(0, t)$.

Example 2.2. (i) [From Wikipedia] In 1827, the botanist Robert Brown, looking through a microscope at particles found in pollen grains in water, noted that the particles moved through the water but was not able to determine the mechanisms that caused this motion. Atoms and molecules had long been theorized as the constituents of matter, and many decades later, Albert Einstein published a paper in 1905 that explained in precise detail how the motion that Brown had observed was a result of the pollen being moved by individual water molecules.
(ii) Stock prices can also be modelled using standard Brownian motions.
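To make the random-walk approximation concrete, here is a minimal Python sketch (an illustrative simulation; the sample sizes are arbitrary) that draws many scaled walks $Z$ and checks that $Z(1)$ is approximately $N(0, 1)$ and $Z(t)$ approximately $N(0, t)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1_000, 10_000

# Simple random walk increments: +/-1, so mean 0 and variance 1.
steps = rng.choice([-1.0, 1.0], size=(trials, n))

# Z(1) = Y_n / sqrt(n); by the central limit theorem this is approximately N(0, 1).
z1 = steps.sum(axis=1) / np.sqrt(n)
print(f"Z(1): mean ~ {z1.mean():.3f}, variance ~ {z1.var():.3f}")   # ~0 and ~1

# Z(t) = Y_{nt} / sqrt(n) is approximately N(0, t); check the variance at t = 1/4.
zt = steps[:, : n // 4].sum(axis=1) / np.sqrt(n)
print(f"Z(1/4): variance ~ {zt.var():.3f}")                         # ~0.25
```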

Here are some facts about the Brownian motion:

1. It crosses the x-axis infinitely often.
2. It has a very close relation with the curve $x = y^2$ (it does not deviate from this curve too much).
3. It is nowhere differentiable.

Note that in real life we can only observe the value of a stochastic process up to some time resolution (in other words, we can only take finitely many sample points). The fact above implies that standard Brownian motion is a reasonable model, at least in this sense, since the real-life observation will converge to the underlying theoretical stochastic process as we take smaller time intervals, as long as the discrete-time observations behave like a simple random walk.

Suppose we use the Brownian motion as a model for the daily price of a stock. What is the distribution of the day's range (the maximum and minimum values over a day)? Define
$$M(t) = \max_{0 \le s \le t} B(s),$$
and note that $M(t)$ is well defined since $B$ is continuous and $[0, t]$ is compact. ($\Phi$ below denotes the cumulative distribution function of the standard normal random variable.)

Proposition 2.3. The following holds:
$$P(M(t) \ge a) = 2P(B(t) > a) = 2 - 2\Phi\left(\frac{a}{\sqrt{t}}\right).$$

Proof. Let $\tau_a = \min\{s : B(s) = a\}$ and note that $\tau_a$ is a stopping time. Note that for all $0 \le s < t$, we have
$$P(B(t) - B(s) > 0) = P(B(t) - B(s) < 0).$$
Hence we see that
$$P(B(t) - B(\tau_a) > 0 \mid \tau_a < t) = P(B(t) - B(\tau_a) < 0 \mid \tau_a < t).$$
Here we assumed that the distribution of $B(t) - B(\tau_a)$ is not affected by the fact that we conditioned on $\tau_a < t$; this is called the strong Markov property of the Brownian motion. The equality can be rewritten as
$$P(B(t) > a \mid \tau_a < t) = P(B(t) < a \mid \tau_a < t),$$
and is also known as the reflection principle. Now observe that
$$P(M(t) \ge a) = P(\tau_a < t) = P(B(t) > a, \tau_a < t) + P(B(t) < a, \tau_a < t) = 2P(B(t) > a, \tau_a < t),$$
where the last equality uses the reflection principle and the fact that the event $B(t) = a$ has probability zero. Since $\{B(t) > a\} \subseteq \{\tau_a < t\}$ by continuity, we have $P(B(t) > a, \tau_a < t) = P(B(t) > a)$, and our claim follows.
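Proposition 2.3 is easy to check numerically. The sketch below (illustrative parameters; a discretized path slightly undershoots the true running maximum, so the empirical frequency is biased slightly downward) compares the simulated frequency of $\{M(t) \ge a\}$ with the closed form:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
t, a = 1.0, 0.8
n_steps, n_paths = 1_000, 10_000

# Discretized Brownian paths on [0, t]: independent N(0, dt) increments.
dt = t / n_steps
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

# Empirical P(M(t) >= a) versus the closed form 2 - 2*Phi(a / sqrt(t)).
empirical = (paths.max(axis=1) >= a).mean()
exact = 2.0 - 2.0 * norm.cdf(a / np.sqrt(t))
print(f"empirical ~ {empirical:.3f}, closed form = {exact:.3f}")
```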

The proposition above also has a very interesting theoretical implication. Using it, we can prove the following result.

Proposition 2.4. For each $t_0$, the Brownian motion is almost surely not differentiable at $t_0$.

Proof. Fix a real $t_0$ and suppose that the Brownian motion $B$ is differentiable at $t_0$. Then there exist constants $A$ and $\varepsilon_0$ such that for all $0 < \varepsilon < \varepsilon_0$,
$$|B(t) - B(t_0)| \le A\varepsilon \quad \text{for all } 0 < |t - t_0| \le \varepsilon.$$
Let $E_{\varepsilon,A}$ denote this event, and let $E_A = \bigcap_{\varepsilon} E_{\varepsilon,A}$. Taking $t_0 = 0$ for simplicity (the general case follows by shift invariance), note that
$$P(E_{\varepsilon,A}) = P(|B(t) - B(t_0)| \le A\varepsilon \text{ for all } 0 < |t - t_0| \le \varepsilon) \le P(M(\varepsilon) \le A\varepsilon) = 1 - 2\left(1 - \Phi(A\sqrt{\varepsilon})\right),$$
where the right-hand side tends to zero as $\varepsilon$ goes to zero. Therefore, $P(E_A) = 0$. By countable additivity, we see that there can be no constant $A$ satisfying the above (it suffices to consider integer values of $A$).

Dvoretsky, Erdős, and Kakutani in fact proved a stronger statement asserting that the Brownian motion $B$ is nowhere differentiable with probability 1. Hence a sample Brownian path is continuous but nowhere differentiable! The proof is slightly more involved and requires a lemma from probability theory (the Borel-Cantelli lemma).
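A quick numerical illustration of Proposition 2.4 (an added sketch with arbitrary parameters): since $B(t_0 + \varepsilon) - B(t_0) \sim N(0, \varepsilon)$, the expected difference quotient equals $\sqrt{2/\pi}\,\varepsilon^{-1/2}$ and blows up as $\varepsilon \to 0$:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths = 100_000

# E|B(t0 + eps) - B(t0)| / eps = sqrt(2/pi) / sqrt(eps), which diverges as eps -> 0:
# the difference quotients have no finite limit, so no derivative can exist.
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    increments = rng.normal(0.0, np.sqrt(eps), size=n_paths)   # B(t0+eps) - B(t0)
    simulated = np.abs(increments).mean() / eps
    theory = np.sqrt(2.0 / np.pi) / np.sqrt(eps)
    print(f"eps = {eps:.0e}: simulated ~ {simulated:8.1f}, theory {theory:8.1f}")
```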

Theorem 2.5 (Quadratic variation). For a partition $\Pi = \{t_0, t_1, \ldots, t_j\}$ of an interval $[0, T]$, let $|\Pi| = \max_i (t_{i+1} - t_i)$. A Brownian motion $B_t$ satisfies the following equation with probability 1:
$$\lim_{|\Pi| \to 0} \sum_i \left(B_{t_{i+1}} - B_{t_i}\right)^2 = T.$$

Proof. For simplicity, here we only consider partitions where the gaps $t_{i+1} - t_i$ are uniform. In this case, the sum $\sum_i (B_{t_{i+1}} - B_{t_i})^2$ is a sum of i.i.d. random variables with mean $t_{i+1} - t_i$ and finite second moment. Therefore, by the law of large numbers, as $\max_i\{t_{i+1} - t_i\} \to 0$, we have $\sum_i (B_{t_{i+1}} - B_{t_i})^2 \to T$ with probability 1.

Why is this theorem interesting? Suppose that instead of a Brownian motion we took a function $f$ that is continuously differentiable. Then, by the mean value theorem, there exist points $s_i \in [t_i, t_{i+1}]$ such that
$$\sum_i \left(f(t_{i+1}) - f(t_i)\right)^2 = \sum_i (t_{i+1} - t_i)^2 f'(s_i)^2 \le \max_{s \in [0,T]} f'(s)^2 \sum_i (t_{i+1} - t_i)^2 \le \max_{s \in [0,T]} f'(s)^2 \cdot \max_i\{t_{i+1} - t_i\} \cdot T.$$
As $\max_i\{t_{i+1} - t_i\} \to 0$, the above tends to zero. Hence this shows that Brownian motion fluctuates a lot. The above can be summarized by the differential equation
$$(dB)^2 = dt.$$
As we will see in the next lecture, this fact will have very interesting implications.
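A small simulation makes this contrast concrete (an illustrative sketch; the smooth test function $f(t) = \sin t$ is an arbitrary choice): as the mesh shrinks, the sum of squared increments of a Brownian path approaches $T$, while that of a smooth function vanishes:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2.0

for n in [10**2, 10**4, 10**6]:
    dt = T / n
    db = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments over [0, T]
    qv_brownian = np.sum(db**2)                  # tends to T as the mesh shrinks
    t = np.linspace(0.0, T, n + 1)
    df = np.diff(np.sin(t))                      # increments of the smooth f(t) = sin(t)
    qv_smooth = np.sum(df**2)                    # tends to 0, at rate O(1/n)
    print(f"n = {n:>7}: Brownian QV ~ {qv_brownian:.4f}, smooth QV ~ {qv_smooth:.2e}")
```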

Example 2.6 (Brownian motion with drift). Let $B(t)$ be a Brownian motion, and let $\mu$ be a fixed real. The process $X(t) = B(t) + \mu t$ is called a Brownian motion with drift $\mu$. By definition, it follows that $E[X(t)] = \mu t$.

Question: as time passes, which term will dominate, $B(t)$ or $\mu t$? It can be shown that $\mu t$ dominates the behavior of $X(t)$. For example, for all fixed $\varepsilon > 0$, after a long enough time, the Brownian motion with drift will always stay between the lines $y = (\mu - \varepsilon)t$ and $y = (\mu + \varepsilon)t$.

What is the main advantage of the continuous world over the discrete world? The beauty, of course, is one advantage. A more practical advantage is the powerful toolbox of calculus. Unfortunately, we saw that it is impossible to differentiate Brownian motion. Surprisingly, there exists a theory of generalized calculus that can handle Brownian motions and other continuous-time stochastic processes. This will be the topic of the remaining lectures.

Suppose we want to go further. As discussed in the previous lecture, when modelling the price of a stock, it is more reasonable to assume that the percentage change follows a normal distribution. This can be written as the following differential equation:
$$dS_t = \sigma S_t \, dB_t.$$
Can we write the distribution of $S_t$ in terms of the distribution of $B_t$? Is it $S_t = e^{\sigma B_t}$? Surprisingly, the answer is no.
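Here is a numerical hint at why the guess fails (an added sketch anticipating the next lecture; parameters are arbitrary): the equation $dS_t = \sigma S_t \, dB_t$ contains no $dt$ term, yet $E[e^{\sigma B_t}] = e^{\sigma^2 t / 2}$ grows with $t$, so the guessed solution carries a hidden upward drift:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, t, n_paths = 0.5, 1.0, 1_000_000

# Sample B(t) ~ N(0, t) and evaluate the guessed solution S_t = exp(sigma * B(t)).
bt = rng.normal(0.0, np.sqrt(t), size=n_paths)
st = np.exp(sigma * bt)

# The mean is exp(sigma^2 * t / 2) > 1 = S_0, even though the equation has no
# drift term; the missing correction is a preview of the generalized (Ito)
# calculus developed in the remaining lectures.
print(f"E[S_t] ~ {st.mean():.4f}, exp(sigma^2 t / 2) = {np.exp(sigma**2 * t / 2):.4f}")
```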

References

[1] S. Ross, A First Course in Probability.
[2] D. Bertsekas and J. Tsitsiklis, Introduction to Probability.
[3] L. Bachelier, Théorie de la spéculation, Annales Scientifiques de l'École Normale Supérieure, 3, 21-86.
[4] R. Durrett, Probability: Theory and Examples, 3rd edition.
[5] S. R. Srinivasa Varadhan, Lecture Notes (http://www.math.nyu.edu/faculty/varadhan/spring06/spring06.1.pdf).

MIT OpenCourseWare
http://ocw.mit.edu

18.S096 Topics in Mathematics with Applications in Finance
Fall 2013

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.