From Random Variables to Random Processes


Random Processes

In probability theory we study spaces (Ω, F, P), where Ω is the sample space, F is the collection of all the sets to which we can assign a probability, and P is the probability measure.

Example: Toss a die twice. Then

Ω = {(1, 1), (1, 2), ..., (6, 6)}
F = all the subsets of Ω
P assigns the number 1/36 to each pair.

We can do things like computing the probability of getting a 1 in the first trial, or the probability of the sum of the two being larger than 3. Each possible result of the experiment is denoted with the letter ω.
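The die-toss example can be checked by brute-force enumeration of Ω; a minimal sketch in Python (the event and variable names are mine, not from the lecture):

```python
from itertools import product
from fractions import Fraction

# Sample space: all ordered pairs of die results.
omega = list(product(range(1, 7), repeat=2))
assert len(omega) == 36

# P assigns 1/36 to each pair; the probability of an event A subset of Omega
# is just the number of outcomes in A times 1/36.
def prob(event):
    return Fraction(sum(1 for w in omega if event(w)), 36)

p_first_is_1 = prob(lambda w: w[0] == 1)       # getting a 1 in the first trial
p_sum_gt_3 = prob(lambda w: w[0] + w[1] > 3)   # sum of the two larger than 3

print(p_first_is_1)  # 1/6
print(p_sum_gt_3)    # 11/12
```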

Random Processes

Now, we can also define a function X : Ω → R, for example X = first coordinate, or X = sum of the two numbers. X induces a probability on the real line by setting

P_X(A) = P(X ∈ A).

P_X is what is usually called the distribution of X. From X we can compute the expected value E(X), the variance Var(X), the standard deviation, etc. The numeric result of an experiment can be denoted by X(ω).
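For instance, with X = sum of the two numbers, E(X) and Var(X) can be computed directly from (Ω, P); a quick sketch, not part of the lecture:

```python
from itertools import product
from fractions import Fraction

omega = list(product(range(1, 7), repeat=2))

# The random variable X : Omega -> R, here the sum of the two numbers.
X = lambda w: w[0] + w[1]

# Expected value and variance computed straight from the definition,
# with P(omega) = 1/36 for every outcome.
E_X = sum(Fraction(X(w), 36) for w in omega)
E_X2 = sum(Fraction(X(w) ** 2, 36) for w in omega)
Var_X = E_X2 - E_X ** 2

print(E_X)    # 7
print(Var_X)  # 35/6
```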

Random Processes

Now, suppose that you were to repeat the experiment every day. Instead of X we would have X_t or X(t). After 1 year of doing so we would have created a whole function, X(t, ω_1). If we were to do it again we would obtain X(t, ω_2). X(t) is a stochastic process. The result of an experiment, X(t, ω), is a random function.

Random Processes

From this X we can compute:
the expected value at a given time, E(X(t_0))
the expected value of the difference at two different times, E(X(t_1) - X(t_0))
the variance at a given time, Var(X(t_0))
etc.
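These statistics can be estimated by simulating many realizations X(t, ω). A sketch using a simple ±1 random walk as the illustrative process (my choice, not the lecture's):

```python
import random

random.seed(0)

# Illustrative process: X(t) = sum of t i.i.d. +/-1 steps (simple random walk).
# Each call to path() produces one realization X(., omega).
def path(n_steps):
    x, out = 0, [0]
    for _ in range(n_steps):
        x += random.choice((-1, 1))
        out.append(x)
    return out

t0, t1 = 100, 400
samples = [path(t1) for _ in range(5000)]

# Estimate E(X(t0)), Var(X(t0)) and E(X(t1) - X(t0)) across realizations.
mean_t0 = sum(p[t0] for p in samples) / len(samples)
var_t0 = sum((p[t0] - mean_t0) ** 2 for p in samples) / len(samples)
mean_diff = sum(p[t1] - p[t0] for p in samples) / len(samples)

print(mean_t0, var_t0, mean_diff)  # near 0, near t0 = 100, near 0
```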

Random Processes

In many examples Ω is not explicitly known. What is Ω in finance?

B(t) is a Brownian motion if:
1) B(t) is continuous in t.
2) B(t + h) - B(t) ~ N(0, h).
3) Independent increments: B(t_n) - B(t_{n-1}), ..., B(t_1) - B(t_0) are independent.

Notice that, in particular, Var(B(t_j) - B(t_{j-1})) = t_j - t_{j-1}.
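A Brownian path can be simulated directly from properties 2) and 3): independent N(0, h) increments on a grid. A minimal sketch (grid size and seed are arbitrary):

```python
import math
import random

random.seed(1)

# Build B on the grid t_j = j*h by accumulating independent N(0, h) increments
# (property 2 gives the distribution, property 3 the independence).
def brownian_path(T, n):
    h = T / n
    B = [0.0]
    for _ in range(n):
        B.append(B[-1] + random.gauss(0.0, math.sqrt(h)))  # gauss takes std dev
    return B

# Empirical check that Var(B(t_j) - B(t_{j-1})) = t_j - t_{j-1}.
T, n, reps = 1.0, 4, 20000
incs = []
for _ in range(reps):
    B = brownian_path(T, n)
    incs.append(B[2] - B[1])       # increment over [t_1, t_2], of length T/n = 0.25
var_inc = sum(x * x for x in incs) / reps

print(var_inc)  # near t_2 - t_1 = 0.25
```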

Now, consider an interval I = [0, T] and a partition of I,

Π = {t_0, t_1, ..., t_n} with 0 = t_0 < t_1 < ... < t_n = T.

The mesh of the partition is defined as ‖Π‖ = max_j (t_j - t_{j-1}). Given a function f we define the first variation and the quadratic variation as:

FV(f)(T) = lim_{‖Π‖→0} Σ_{j=1}^{n} |f(t_j) - f(t_{j-1})|

<f>(T) = lim_{‖Π‖→0} Σ_{j=1}^{n} (f(t_j) - f(t_{j-1}))^2
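For a smooth function these two limits behave very differently: FV(f)(T) converges to a finite number while <f>(T) goes to 0. A numerical sketch with f(t) = sin t on [0, π] (my example), where FV should approach ∫_0^π |cos t| dt = 2:

```python
import math

# First and quadratic variation sums of f over [0, T] along a uniform
# partition with mesh T/n (the limits in the definitions are mesh -> 0).
def variations(f, T, n):
    ts = [T * j / n for j in range(n + 1)]
    diffs = [f(ts[j]) - f(ts[j - 1]) for j in range(1, n + 1)]
    FV = sum(abs(d) for d in diffs)
    QV = sum(d * d for d in diffs)
    return FV, QV

# For the smooth f(t) = sin t on [0, pi]: FV stays near 2, QV shrinks like 1/n.
for n in (10, 100, 1000):
    FV, QV = variations(math.sin, math.pi, n)
    print(n, FV, QV)
```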

If f is differentiable, then f(t_j) - f(t_{j-1}) = f'(ξ_j)(t_j - t_{j-1}) for some ξ_j in (t_{j-1}, t_j), and therefore

FV(f)(T) = ∫_0^T |f'(t)| dt

and

<f>(T) = lim_{‖Π‖→0} Σ_{j=1}^{n} f'(ξ_j)^2 (t_j - t_{j-1})^2

But

lim_{‖Π‖→0} ‖Π‖ Σ_{j=1}^{n} f'(ξ_j)^2 (t_j - t_{j-1}) = 0,

so <f>(T) = 0 whenever f is differentiable.

Fact: <B>(T) = T. (This would say, in particular, that B(t) is not differentiable.)

Define D_k = B(t_{k+1}) - B(t_k). The quadratic variation is the limit of expressions like:

Q_Π = Σ_{k=0}^{n-1} D_k^2

We want to prove that lim_{‖Π‖→0} (Q_Π - T) = 0. Consider

Q_Π - T = Σ_{k=0}^{n-1} (D_k^2 - (t_{k+1} - t_k)).

It is easy to see that its expected value is 0.

Because of independent increments,

Var(Q_Π - T) = Σ_{k=0}^{n-1} Var(D_k^2 - (t_{k+1} - t_k)) = Σ_{k=0}^{n-1} E((D_k^2 - (t_{k+1} - t_k))^2)
= Σ_{k=0}^{n-1} E(D_k^4 - 2 D_k^2 (t_{k+1} - t_k) + (t_{k+1} - t_k)^2)
= Σ_{k=0}^{n-1} (3(t_{k+1} - t_k)^2 - 2(t_{k+1} - t_k)^2 + (t_{k+1} - t_k)^2)
= Σ_{k=0}^{n-1} 2(t_{k+1} - t_k)^2 ≤ 2‖Π‖ T → 0,

which is what we wanted.
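On a uniform grid the computation above gives Var(Q_Π - T) = 2T^2/n, so Q_Π concentrates at T as the mesh shrinks. A quick simulation sketch (grid sizes and seed are arbitrary):

```python
import math
import random

random.seed(2)

# Quadratic variation Q_Pi of one simulated Brownian path on a uniform grid
# over [0, T]: Q_Pi = sum of D_k^2 with D_k ~ N(0, h) independent.
def qv_of_brownian(T, n):
    h = T / n
    return sum(random.gauss(0.0, math.sqrt(h)) ** 2 for _ in range(n))

T = 2.0
for n in (10, 1000, 100000):
    print(n, qv_of_brownian(T, n))  # values settle near T = 2 as n grows
```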

Now, notice that

E((B(t_{k+1}) - B(t_k))^2 - (t_{k+1} - t_k)) = 0
Var((B(t_{k+1}) - B(t_k))^2 - (t_{k+1} - t_k)) = 2(t_{k+1} - t_k)^2

(To see this, consider X ~ N(0, σ^2). Then E((X^2 - σ^2)^2) = E(X^4) - σ^4, but E(X^4) = 3σ^4.)

Or, which is the same:

E((B(t_{k+1}) - B(t_k))^2) = t_{k+1} - t_k
E(((B(t_{k+1}) - B(t_k))^2 - (t_{k+1} - t_k))^2) = 2(t_{k+1} - t_k)^2

(To see this, consider X ~ N(0, σ^2). Then E((X^2 - σ^2)^2) = E(X^4) - σ^4, but E(X^4) = 3σ^4.)

Therefore, since (t_{k+1} - t_k)^2 is very small when (t_{k+1} - t_k) is small:

(B(t_{k+1}) - B(t_k))^2 ≈ t_{k+1} - t_k.

In differential notation: dB(t) dB(t) = dt.

Recall: Riemann-Stieltjes

In calculus we study Riemann integrals:

∫_a^b f(x) dx = lim Σ f(x_i)(x_i - x_{i-1}),

where the x_i's form a partition of the interval [a, b] and the limit is taken as the norm of the partition goes to zero. In that sum, (x_i - x_{i-1}) represents the weight (or measure) we give to the interval [x_{i-1}, x_i]. There is a generalization of that concept, called the Riemann-Stieltjes integral, in which the weight assigned to each interval is given by a transformation g(x):

∫_a^b f(x) dg(x) = lim Σ f(x_i)(g(x_i) - g(x_{i-1}))
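A Riemann-Stieltjes sum is easy to evaluate numerically. A sketch with the increasing integrator g(x) = x^2 on [0, 1] (my example, using left endpoints; either endpoint gives the same limit here), for which ∫_0^1 x d(x^2) = ∫_0^1 2x^2 dx = 2/3:

```python
# Riemann-Stieltjes sum for the integral of f dg over [a, b] on a uniform
# partition, weighting each interval by g(x_i) - g(x_{i-1}).
def rs_integral(f, g, a, b, n):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(f(xs[i - 1]) * (g(xs[i]) - g(xs[i - 1])) for i in range(1, n + 1))

# Increasing integrator g(x) = x^2 deforms the homogeneous measure dx into 2x dx.
approx = rs_integral(lambda x: x, lambda x: x * x, 0.0, 1.0, 100000)
print(round(approx, 4))  # 0.6667
```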

Recall: Riemann-Stieltjes

For us to be able to do that, g(x) has to satisfy some conditions. For example, if g is an increasing function we can do it; in that case g represents a deformation of the original homogeneous measure. A general class of functions for which this can be done is formed by the functions that have finite first variation.

Back to Brownian Motion

Given a function f, how do we compute df(B(t))? In calculus we do

d/dt f(B(t)) = f'(B(t)) B'(t),

or, in differential notation,

df(B(t)) = f'(B(t)) B'(t) dt = f'(B(t)) dB(t).

But now B(t) is not differentiable; in particular, B(t + h) - B(t) is too big. However, we know that (B(t + h) - B(t))^2 ≈ h, so we try adding an extra term to the Taylor expansion:

So, let us think about the chain rule in both cases. Let us take a smooth function f which we are going to compose with:
1 a function g, also smooth.
2 a Brownian motion B.

By the mean value theorem,

f(g(t_k)) - f(g(t_{k-1})) = f'(ξ)(g(t_k) - g(t_{k-1})),

where ξ is between g(t_{k-1}) and g(t_k). Let us now divide both sides by (t_k - t_{k-1}) and let t_k and t_{k-1} be very close to each other to obtain:

(f ∘ g)'(t_k) = f'(g(t_k)) g'(t_k)

If, instead of developing up to order 1, we had developed up to order 2:

f(g(t_k)) - f(g(t_{k-1})) = f'(g(t_{k-1}))(g(t_k) - g(t_{k-1})) + (1/2) f''(ξ)(g(t_k) - g(t_{k-1}))^2

Again, dividing both sides by (t_k - t_{k-1}) and letting t_k and t_{k-1} get close to each other, we see that the second term vanishes (why?). So, the result is the same.

Now, what happens if instead of g we have B? Suppose that we stop at order 1:

f(B(t_k)) - f(B(t_{k-1})) = f'(ξ)(B(t_k) - B(t_{k-1}))

We can't divide by (t_k - t_{k-1}) as before ((B(t_k) - B(t_{k-1}))/(t_k - t_{k-1}) blows up in the limit).

If we try going up to order 2:

f(B(t_k)) - f(B(t_{k-1})) = f'(B(t_{k-1}))(B(t_k) - B(t_{k-1})) + (1/2) f''(ξ)(B(t_k) - B(t_{k-1}))^2

We still can't divide. But what we can do is make the t_k's very close to each other, sum over all the t_k's, and see if the two terms on the right make sense. It turns out that this can be done (we will do this soon).

So, in the end:

df(B(t)) = f'(B(t)) dB(t) + (1/2) f''(B(t)) (dB(t))^2 = f'(B(t)) dB(t) + (1/2) f''(B(t)) dt

In integral form:

f(B(T)) - f(B(0)) = ∫_0^T f'(B(t)) dB(t) + (1/2) ∫_0^T f''(B(t)) dt

Example: f(x) = (1/2) x^2, so f'(x) = x, f''(x) = 1, and

B^2(T)/2 = ∫_0^T B(t) dB(t) + (1/2) T.

We should compare this to ∫_0^T x dx = T^2/2 ... so, in stochastic calculus, we have an extra term, T/2.
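This identity can be seen numerically: the Itô sum Σ B(t_k)(B(t_{k+1}) - B(t_k)) along a simulated path lands close to B^2(T)/2 - T/2, not B^2(T)/2. A sketch (step count and seed are arbitrary):

```python
import math
import random

random.seed(3)

# One Brownian path on [0, T], and the left-point (Ito) sum of B dB along it.
T, n = 1.0, 100000
std = math.sqrt(T / n)
B, ito_sum = 0.0, 0.0
for _ in range(n):
    d = random.gauss(0.0, std)   # increment B(t_{k+1}) - B(t_k)
    ito_sum += B * d             # left endpoint B(t_k): this makes it an Ito sum
    B += d

lhs = B * B / 2 - T / 2          # B^2(T)/2 - T/2
print(ito_sum, lhs)              # the two numbers nearly coincide
```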

Remark: When one defines the stochastic integral, one finds that

E(∫_0^T B(s) dB(s)) = 0.

On the other hand, we know that E(B^2(T)/2) = T/2. So... if we did not have the extra term we would be in trouble.

Itô

Let us assume that the return of stocks is governed by

(S_{t+1} - S_t)/S_t = µ∆t + φ, where φ ~ N(0, σ).

In continuous time:

dS/S = µ dt + σ dB.

How do I solve that? (How do I find S_t?)

Itô

If we were talking regular calculus, the solution would be log(S). So, let's try the same solution. Using Taylor:

log(S_t) = log(S_0) + (1/S_0) dS_0 - (1/2)(1/S_0^2)(dS_0)^2

I can replace dS_0 = S_0 µ dt + S_0 σ dB. Also,

(dS_0)^2 = S_0^2 µ^2 dt^2 + S_0^2 σ^2 dB^2 + 2 µ σ S_0^2 dt dB.

However, I know that dB^2 ≈ dt. So, the term containing dB^2 is the biggest of the three.

Itô

If I now discard all the terms smaller than dt, we end up with

log(S_t) = log(S_0) + (1/S_0)(S_0 µ dt + S_0 σ dB) - (1/2) σ^2 dt

or

log(S_t) = log(S_0) + (µ dt + σ dB) - (1/2) σ^2 dt,

so that

S_t = S_0 exp((µ - σ^2/2) t + σ B(t))

Itô

In general, from time t to time t + h the solution evolves as

S_{t+h} = S_t exp((µ - σ^2/2) h + σ (B(t + h) - B(t))).

But we know that B(t + h) - B(t) ~ N(0, h). Then we can rewrite this as

S_{t+h} = S_t exp((µ - σ^2/2) h + σ √h X), where X ~ N(0, 1).
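The one-step update above is exact, so it can be iterated to simulate S on a grid. A sketch checking E(S_T) = S_0 e^{µT} by Monte Carlo (the parameter values µ = 0.05, σ = 0.2, S_0 = 100 are illustrative, not from the lecture):

```python
import math
import random

random.seed(4)

# Illustrative parameters, not from the lecture.
mu, sigma, S0, T, n = 0.05, 0.2, 100.0, 1.0, 64

# Exact one-step update: S_{t+h} = S_t * exp((mu - sigma^2/2) h + sigma sqrt(h) X),
# with X ~ N(0, 1), iterated n times to reach S_T.
def terminal_price():
    h = T / n
    S = S0
    for _ in range(n):
        X = random.gauss(0.0, 1.0)
        S *= math.exp((mu - 0.5 * sigma ** 2) * h + sigma * math.sqrt(h) * X)
    return S

reps = 20000
mean_ST = sum(terminal_price() for _ in range(reps)) / reps
print(mean_ST)  # near S0 * exp(mu * T) ≈ 105.13
```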