M5A44 COMPUTATIONAL STOCHASTIC PROCESSES


1 M5A44 COMPUTATIONAL STOCHASTIC PROCESSES Professor G.A. Pavliotis Department of Mathematics Imperial College London, UK WINTER TERM, January 2015 Pavliotis (IC) CompStochProc 1 / 298

2 Lectures: Mondays, 11:00-13:00, Wednesdays 12:00-13:00, Huxley 658. Office Hours: Mondays 14:00-15:00, Wednesdays 14:00-15:00 or by appointment. Huxley 621. Course webpage: Text: Lecture notes, available from the course webpage. Also, recommended reading from various textbooks/review articles. Pavliotis (IC) CompStochProc 2 / 298

3 This is an introductory course on computational stochastic processes, aimed at 4th year, MSc and MRes students in applied mathematics and theoretical physics. Prior knowledge of basic stochastic processes in continuous time, scientific computing and a programming language such as Matlab or C is assumed. Pavliotis (IC) CompStochProc 3 / 298

4 1 PART I: INTRODUCTION Random number generators. Random variables, probability distribution functions. Introduction to Monte Carlo techniques. Variance reduction techniques. 2 SIMULATION OF STOCHASTIC PROCESSES Introduction to stochastic processes. Brownian motion and related stochastic processes and their simulation. Gaussian stochastic processes. Karhunen-Loeve expansion and simulation of Gaussian random fields. Numerical solution of ODEs and PDEs with random coefficients. Pavliotis (IC) CompStochProc 4 / 298

5 3. NUMERICAL SOLUTION OF STOCHASTIC DIFFERENTIAL EQUATIONS. Basic properties of stochastic differential equations. Itô's formula and stochastic Taylor expansions. Examples of numerical methods: the Euler-Maruyama and Milstein schemes. Theoretical issues: convergence, consistency, stability. Weak versus strong convergence. Implicit schemes and stiff SDEs. Pavliotis (IC) CompStochProc 5 / 298

6 4. ERGODIC DIFFUSION PROCESSES Deterministic methods: numerical solution of the Fokker-Planck equation. Numerical calculation of the invariant measure/steady state simulation. Calculation of expectation values, the Feynman-Kac formula. 5. MARKOV CHAIN MONTE CARLO (MCMC). The Metropolis-Hastings and MALA algorithms. Diffusion limits for MCMC algorithms. Peskun and Tierney orderings and Markov Chain comparison. Optimization of MCMC algorithms. Bias correction and variance reduction techniques. Pavliotis (IC) CompStochProc 6 / 298

7 6. STATISTICAL INFERENCE FOR DIFFUSION PROCESSES. Estimation of the diffusion coefficient (volatility). Maximum likelihood estimation of drift coefficients in SDEs. Nonparametric and Bayesian techniques. 7. APPLICATIONS Probabilistic methods for the numerical solution of partial differential equations. Exit time problems. Molecular dynamics/computational statistical mechanics. Molecular motors, stochastic resonance. Pavliotis (IC) CompStochProc 7 / 298

8 Prerequisites Basic knowledge of ODEs and PDEs. Familiarity with the theory of stochastic processes. Stochastic differential equations. Basic knowledge of functional analysis and stochastic analysis. Numerical methods and scientific computing. Familiarity with a programming language: Matlab, C, Fortran... Pavliotis (IC) CompStochProc 8 / 298

9 Assessment Based on 1 project (25%) and a final exam (75%). Mastery exam: additional project/paper to study and write a report. The project will be (mostly) computational. The exam will be theoretical (e.g. analysis of algorithms). Pavliotis (IC) CompStochProc 9 / 298

10 Bibliography Lecture notes will be provided for all the material that we will cover in this course. They will be posted on the course webpage. Books that cover (parts of) the contents of this course are: G.A. Pavliotis, Stochastic Processes and Applications, Springer (2014). Kloeden and Platen, Numerical Solution of Stochastic Differential Equations, Springer (1992). S.M. Iacus, Simulation and Inference for Stochastic Differential Equations, Springer (2008). Asmussen and Glynn, Stochastic Simulation, Springer (2007). Huynh, Lai and Soumare, Stochastic Simulation and Applications in Finance with Matlab Programs, Wiley (2008). Pavliotis (IC) CompStochProc 10 / 298

11 Lectures Slides and whiteboard. Computer experiments in Matlab. Pavliotis (IC) CompStochProc 11 / 298

12 Why Computational Stochastic Processes? Many models in science, engineering and economics are probabilistic in nature and we have to deal with uncertainty. Many deterministic problems can be solved more efficiently using probabilistic techniques. Pavliotis (IC) CompStochProc 12 / 298

13 Deriving dynamical models from paleoclimatic records. F. Kwasniok and G. Lohmann, Phys. Rev. E 80, 6 (2009). Pavliotis (IC) CompStochProc 13 / 298

14 Fit this data to a bistable SDE
$dx = -V'(x; a)\, dt + \sigma\, dW, \quad V(x; a) = \sum_{j=1}^{4} a_j x^j.$ (1)
Estimate the coefficients in the drift from the paleoclimatic data using the unscented Kalman filter. The resulting potential is highly asymmetric. Pavliotis (IC) CompStochProc 14 / 298

15 Pavliotis (IC) CompStochProc 15 / 298

16 Computational Statistical Physics Sample from the Boltzmann distribution:
$\pi(x) = \frac{1}{Z} e^{-\beta U(x)},$ (2)
where $\beta = (k_B T)^{-1}$ is the inverse temperature and the normalization constant $Z$ is the partition function
$Z = \int_{\mathbb{R}^{3k}} e^{-\beta U(x)}\, dx.$ (3)
$x$ denotes the configuration of the system, $x = \{x_i : i = 1, \dots, k\}$, $x_i = (x_{i1}, x_{i2}, x_{i3})$. The potential $U(x)$ is a sum of pairwise interactions
$U(x) = \sum_{i,j} \Phi(|x_i - x_j|), \quad \Phi(r) = 4\varepsilon \left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} \right].$
Pavliotis (IC) CompStochProc 16 / 298

17 We need to be able to calculate $Z(\beta)$, which is a very high dimensional integral. We want to be able to calculate expectations with respect to $\pi(x)$:
$\mathbb{E}_\pi f(X) = \int_{\mathbb{R}^{3k}} f(x) \pi(x)\, dx.$
One method for doing this is by simulating a diffusion process (MCMC):
$dX_t = -\nabla U(X_t)\, dt + \sqrt{2\beta^{-1}}\, dW_t.$
Pavliotis (IC) CompStochProc 17 / 298
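As a hedged illustration of this sampling idea, the following minimal Matlab sketch discretizes the diffusion above with the Euler-Maruyama scheme (introduced later in the course) for an illustrative one-dimensional double-well potential; the potential, step size and sample size are assumptions, not part of the slides.

function xs = langevinsampler(beta,dt,N)
% Sketch: sample from pi(x) ~ exp(-beta*U(x)) by Euler-Maruyama
% discretization of dX_t = -U'(X_t) dt + sqrt(2/beta) dW_t.
% Illustrative double-well potential U(x) = x^4/4 - x^2/2 (an assumption).
gradU = @(x) x.^3 - x;
xs = zeros(1,N); x = 0;
for n = 1:N
    x = x - gradU(x)*dt + sqrt(2*dt/beta)*randn;
    xs(n) = x;
end
end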

18 Some interesting questions How many experiments should we do? How do we compute expectations with respect to stationary distributions? Can we exploit the problem structure to speed up the computation? How do we efficiently compute probabilities of rare events? How do we estimate the sensitivity of a stochastic model to changes in a parameter? How do we use stochastic simulation to optimize our choice of decision parameters? Pavliotis (IC) CompStochProc 18 / 298

19 Example: Brownian motion in a periodic potential Consider the Langevin dynamics in a smooth periodic potential
$\ddot{q} = -\nabla V(q) - \gamma \dot{q} + \sqrt{2\gamma\beta^{-1}}\, \dot{W},$ (4)
where $W(t)$ denotes standard $d$-dimensional Brownian motion, $\gamma > 0$ the friction coefficient and $\beta$ the inverse temperature (strength of the noise). Write as a first order system:
$dq_t = p_t\, dt,$ (5a)
$dp_t = -\nabla_q V(q_t)\, dt - \gamma p_t\, dt + \sqrt{2\gamma\beta^{-1}}\, dW_t.$ (5b)
Pavliotis (IC) CompStochProc 19 / 298

20 At long times the solution $q_t$ performs an effective Brownian motion with diffusion coefficient $D$, $\mathbb{E} q_t^2 \approx 2Dt$, $t \gg 1$. We compute the diffusion coefficient for $V(q) = \cos(q)$ using the formula
$D = \lim_{t \to \infty} \frac{\mathrm{Var}(q_t)}{2t},$
as a function of $\gamma$, at a fixed temperature $\beta^{-1}$. The SDE (5) has to be solved numerically using an appropriate scheme. The numerical simulations suggest that
$D \approx \frac{1}{\gamma}$, for both $\gamma \ll 1$ and $\gamma \gg 1$. (6)
Pavliotis (IC) CompStochProc 20 / 298
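A minimal sketch of how such an estimate might be produced: integrate (5) with $V(q) = \cos(q)$ by Euler-Maruyama over an ensemble of paths and read off $\mathrm{Var}(q_t)/2t$ at the final time. Step size, horizon and ensemble size are illustrative assumptions, not the values behind the figure.

function D = estimatediffusion(gamma,beta,dt,T,M)
% Sketch: Euler-Maruyama for the Langevin system (5) with V(q) = cos(q),
% so -V'(q) = sin(q); estimate D = Var(q_T)/(2T) from M independent paths.
N = floor(T/dt);
q = zeros(M,1); p = zeros(M,1);
for n = 1:N
    qnew = q + p*dt;
    p = p + (sin(q) - gamma*p)*dt + sqrt(2*gamma*dt/beta)*randn(M,1);
    q = qnew;
end
D = var(q)/(2*T);
end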

21 Figure: (a) $\mathrm{Var}(q_t)/2t$ vs $t$, for various values of $\gamma$ (e.g. $\gamma = 0.01$); (b) the effective diffusion coefficient $D$ vs $\gamma$, compared with $1/\gamma$. Pavliotis (IC) CompStochProc 21 / 298

22 The One-Dimensional Random Walk We let time be discrete, i.e. $t = 0, 1, \dots$. Consider the following stochastic process $S_n$: $S_0 = 0$; at each time step it moves by $\pm 1$ with equal probability $\frac{1}{2}$. In other words, at each time step we flip a fair coin. If the outcome is heads, we move one unit to the right. If the outcome is tails, we move one unit to the left. Alternatively, we can think of the random walk as a sum of independent random variables:
$S_n = \sum_{j=1}^{n} X_j,$
where $X_j \in \{-1, 1\}$ with $P(X_j = \pm 1) = \frac{1}{2}$. Pavliotis (IC) CompStochProc 22 / 298

23 We can simulate the random walk on a computer: We need a (pseudo)random number generator to generate $n$ independent random variables which are uniformly distributed in the interval $[0,1]$. If the value of the random variable is less than $\frac{1}{2}$ then the particle moves to the left, otherwise it moves to the right. We then take the sum of all these random moves. The sequence $\{S_n\}_{n=1}^{N}$ indexed by the discrete time $T = \{1, 2, \dots, N\}$ is the path of the random walk. We use a linear interpolation (i.e. connect the points $\{n, S_n\}$ by straight lines) to generate a continuous path. Pavliotis (IC) CompStochProc 23 / 298
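A minimal Matlab sketch of this recipe (the number of steps and paths are illustrative):

% Sketch: simulate and plot paths of the random walk from uniform
% random numbers, moving left when the uniform is below 1/2.
N = 50; M = 3;                  % illustrative: 50 steps, 3 paths
X = 2*(rand(N,M) >= 0.5) - 1;   % steps are -1 or +1 with probability 1/2
S = [zeros(1,M); cumsum(X)];    % S_0 = 0, S_n = X_1 + ... + X_n
plot(0:N, S, 'LineWidth', 1.0); % linear interpolation between the points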

24 50 step random walk Figure: Three paths of the random walk of length N = 50. Pavliotis (IC) CompStochProc 24 / 298

25 1000 step random walk Figure: Three paths of the random walk of length N = 1000. Pavliotis (IC) CompStochProc 25 / 298

26 Every path of the random walk is different: it depends on the outcome of a sequence of independent random experiments. We can compute statistics by generating a large number of paths and computing averages. For example, $\mathbb{E}(S_n) = 0$, $\mathbb{E}(S_n^2) = n$. The paths of the random walk (without the linear interpolation) are not continuous: the random walk has a jump of size 1 at each time step. This is an example of a discrete time, discrete space stochastic process. The random walk is a time-homogeneous (the probabilistic law of evolution is independent of time) Markov (the future depends only on the present and not on the past) process. If we take a large number of steps, the random walk starts looking like a continuous time process with continuous paths. Pavliotis (IC) CompStochProc 26 / 298

27 Consider the sequence of continuous time stochastic processes
$Z_t^n := \frac{1}{\sqrt{n}} S_{\lfloor nt \rfloor}.$
In the limit as $n \to \infty$, the sequence $\{Z_t^n\}$ converges (in some appropriate sense) to a Brownian motion with diffusion coefficient $D = \frac{\Delta x^2}{2 \Delta t} = \frac{1}{2}$. Pavliotis (IC) CompStochProc 27 / 298

28 Figure: Sample Brownian paths: the mean of 1000 paths and 5 individual paths of $U(t)$. Pavliotis (IC) CompStochProc 28 / 298

29 Brownian motion $W(t)$ is a continuous time stochastic process with continuous paths that starts at 0 ($W(0) = 0$) and has independent, normally distributed (Gaussian) increments. We can simulate Brownian motion on a computer using a random number generator that generates normally distributed, independent random variables. Pavliotis (IC) CompStochProc 29 / 298

30 We can write an equation for the evolution of the paths of a Brownian motion $X_t$ with diffusion coefficient $D$ starting at $x$:
$dX_t = \sqrt{2D}\, dW_t, \quad X_0 = x.$
This is an example of a stochastic differential equation. The probability of finding $X_t$ at $y$ at time $t$, given that it was at $x$ at time $t = 0$, the transition probability density $\rho(y,t)$, satisfies the PDE
$\frac{\partial \rho}{\partial t} = D \frac{\partial^2 \rho}{\partial y^2}, \quad \rho(y,0) = \delta(y - x).$
This is an example of the Fokker-Planck equation. The connection between Brownian motion and the diffusion equation was made by Einstein in 1905. Pavliotis (IC) CompStochProc 30 / 298

31 ELEMENTS OF PROBABILITY THEORY Pavliotis (IC) CompStochProc 31 / 298

32 A collection of subsets of a set $\Omega$ is called a $\sigma$-algebra if it contains $\Omega$ and is closed under the operations of taking complements and countable unions of its elements. A sub-$\sigma$-algebra is a collection of subsets of a $\sigma$-algebra which satisfies the axioms of a $\sigma$-algebra. A measurable space is a pair $(\Omega, \mathcal{F})$ where $\Omega$ is a set and $\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$. Let $(\Omega, \mathcal{F})$ and $(E, \mathcal{G})$ be two measurable spaces. A function $X : \Omega \to E$ such that the event $\{\omega \in \Omega : X(\omega) \in A\} =: \{X \in A\}$ belongs to $\mathcal{F}$ for arbitrary $A \in \mathcal{G}$ is called a measurable function or random variable. Pavliotis (IC) CompStochProc 32 / 298

33 Let $(\Omega, \mathcal{F})$ be a measurable space. A function $\mu : \mathcal{F} \to [0,1]$ is called a probability measure if $\mu(\emptyset) = 0$, $\mu(\Omega) = 1$ and $\mu(\cup_{k=1}^{\infty} A_k) = \sum_{k=1}^{\infty} \mu(A_k)$ for all sequences of pairwise disjoint sets $\{A_k\}_{k=1}^{\infty} \subset \mathcal{F}$. The triplet $(\Omega, \mathcal{F}, \mu)$ is called a probability space. Let $X$ be a random variable (measurable function) from $(\Omega, \mathcal{F}, \mu)$ to $(E, \mathcal{G})$. If $E$ is a metric space then we may define expectation with respect to the measure $\mu$ by
$\mathbb{E}[X] = \int_\Omega X(\omega)\, d\mu(\omega).$
More generally, let $f : E \to \mathbb{R}$ be $\mathcal{G}$-measurable. Then,
$\mathbb{E}[f(X)] = \int_\Omega f(X(\omega))\, d\mu(\omega).$
Pavliotis (IC) CompStochProc 33 / 298

34 Let $U$ be a topological space. We will use the notation $\mathcal{B}(U)$ to denote the Borel $\sigma$-algebra of $U$: the smallest $\sigma$-algebra containing all open sets of $U$. Every random variable from a probability space $(\Omega, \mathcal{F}, \mu)$ to a measurable space $(E, \mathcal{B}(E))$ induces a probability measure on $E$:
$\mu_X(B) = \mu(X^{-1}(B)) = \mu(\omega \in \Omega : X(\omega) \in B), \quad B \in \mathcal{B}(E).$
The measure $\mu_X$ is called the distribution (or sometimes the law) of $X$.
Example Let $I$ denote a subset of the positive integers. A vector $\rho_0 = \{\rho_{0,i}, i \in I\}$ is a distribution on $I$ if it has nonnegative entries and its total mass equals 1: $\sum_{i \in I} \rho_{0,i} = 1$. Pavliotis (IC) CompStochProc 34 / 298

35 We can use the distribution of a random variable to compute expectations and probabilities:
$\mathbb{E}[f(X)] = \int_E f(x)\, d\mu_X(x) \quad \text{and} \quad P[X \in G] = \int_G d\mu_X(x), \quad G \in \mathcal{B}(E).$
When $E = \mathbb{R}^d$ and we can write $d\mu_X(x) = \rho(x)\, dx$, then we refer to $\rho(x)$ as the probability density function (pdf), or density with respect to Lebesgue measure, for $X$. When $E = \mathbb{R}^d$ then by $L^p(\Omega; \mathbb{R}^d)$, or sometimes $L^p(\Omega; \mu)$ or even simply $L^p(\mu)$, we mean the Banach space of measurable functions on $\Omega$ with norm
$\|X\|_{L^p} = \left( \mathbb{E}|X|^p \right)^{1/p}.$
Pavliotis (IC) CompStochProc 35 / 298

36 Example Consider the random variable $X : \Omega \to \mathbb{R}$ with pdf
$\gamma_{\sigma,m}(x) := (2\pi\sigma)^{-\frac{1}{2}} \exp\left( -\frac{(x-m)^2}{2\sigma} \right).$
Such an $X$ is termed a Gaussian or normal random variable. The mean is
$\mathbb{E}X = \int_{\mathbb{R}} x\, \gamma_{\sigma,m}(x)\, dx = m$
and the variance is
$\mathbb{E}(X - m)^2 = \int_{\mathbb{R}} (x - m)^2 \gamma_{\sigma,m}(x)\, dx = \sigma.$
Pavliotis (IC) CompStochProc 36 / 298

37 Example (Continued) Let $m \in \mathbb{R}^d$ and $\Sigma \in \mathbb{R}^{d \times d}$ be symmetric and positive definite. The random variable $X : \Omega \to \mathbb{R}^d$ with pdf
$\gamma_{\Sigma,m}(x) := \left( (2\pi)^d \det\Sigma \right)^{-\frac{1}{2}} \exp\left( -\frac{1}{2} \left\langle \Sigma^{-1}(x - m), (x - m) \right\rangle \right)$
is termed a multivariate Gaussian or normal random variable. The mean is
$\mathbb{E}(X) = m$ (7)
and the covariance matrix is
$\mathbb{E}\left( (X - m) \otimes (X - m) \right) = \Sigma.$ (8)
Pavliotis (IC) CompStochProc 37 / 298

38 Since the mean and variance completely specify a Gaussian random variable on $\mathbb{R}$, the Gaussian is commonly denoted by $N(m, \sigma)$. The standard normal random variable is $N(0, 1)$. Since the mean and covariance matrix completely specify a Gaussian random variable on $\mathbb{R}^d$, the Gaussian is commonly denoted by $N(m, \Sigma)$. Pavliotis (IC) CompStochProc 38 / 298

39 The Characteristic Function Many of the properties of (sums of) random variables can be studied using the Fourier transform of the distribution function. The characteristic function of the random variable $X$ is defined to be the Fourier transform of the distribution function
$\phi(t) = \int_{\mathbb{R}} e^{it\lambda}\, d\mu_X(\lambda) = \mathbb{E}(e^{itX}).$ (9)
The characteristic function determines uniquely the distribution function of the random variable, in the sense that there is a one-to-one correspondence between $F(\lambda)$ and $\phi(t)$. The characteristic function of a $N(m, \Sigma)$ is
$\phi(t) = e^{i\langle m, t \rangle - \frac{1}{2}\langle t, \Sigma t \rangle}.$
Pavliotis (IC) CompStochProc 39 / 298

40 Lemma Let $\{X_1, X_2, \dots, X_n\}$ be independent random variables with characteristic functions $\phi_j(t)$, $j = 1, \dots, n$, and let $Y = \sum_{j=1}^{n} X_j$ with characteristic function $\phi_Y(t)$. Then
$\phi_Y(t) = \prod_{j=1}^{n} \phi_j(t).$
Lemma Let $X$ be a random variable with characteristic function $\phi(t)$ and assume that it has finite moments. Then
$\mathbb{E}(X^k) = \frac{1}{i^k} \phi^{(k)}(0).$
Pavliotis (IC) CompStochProc 40 / 298

41 Types of Convergence and Limit Theorems One of the most important aspects of the theory of random variables is the study of limit theorems for sums of random variables. The most well known limit theorems in probability theory are the law of large numbers and the central limit theorem. There are various different types of convergence for sequences of random variables. Pavliotis (IC) CompStochProc 41 / 298

42 Definition Let $\{Z_n\}_{n=1}^{\infty}$ be a sequence of random variables. We will say that (a) $Z_n$ converges to $Z$ with probability one (almost surely) if
$P\left( \lim_{n \to +\infty} Z_n = Z \right) = 1.$
(b) $Z_n$ converges to $Z$ in probability if for every $\varepsilon > 0$
$\lim_{n \to +\infty} P\left( |Z_n - Z| > \varepsilon \right) = 0.$
(c) $Z_n$ converges to $Z$ in $L^p$ if
$\lim_{n \to +\infty} \mathbb{E}\left[ |Z_n - Z|^p \right] = 0.$
(d) Let $F_n(\lambda)$, $n = 1, \dots, +\infty$, and $F(\lambda)$ be the distribution functions of $Z_n$, $n = 1, \dots, +\infty$, and $Z$, respectively. Then $Z_n$ converges to $Z$ in distribution if
$\lim_{n \to +\infty} F_n(\lambda) = F(\lambda)$
for all $\lambda$ at which $F$ is continuous. Pavliotis (IC) CompStochProc 42 / 298

43 Let $\{X_n\}_{n=1}^{\infty}$ be iid random variables with $\mathbb{E}X_n = V$. Then, the strong law of large numbers states that the average of the sum of the iid random variables converges to $V$ with probability one:
$P\left( \lim_{N \to +\infty} \frac{1}{N} \sum_{n=1}^{N} X_n = V \right) = 1.$
The strong law of large numbers provides us with information about the behavior of a sum of random variables (or, a large number of repetitions of the same experiment) on average. We can also study fluctuations around the average behavior. Indeed, let $\mathbb{E}(X_n - V)^2 = \sigma^2$. Define the centered iid random variables $Y_n = X_n - V$. Then, the sequence of random variables $\frac{1}{\sigma\sqrt{N}} \sum_{n=1}^{N} Y_n$ converges in distribution to a $N(0,1)$ random variable:
$\lim_{N \to +\infty} P\left( \frac{1}{\sigma\sqrt{N}} \sum_{n=1}^{N} Y_n \leq a \right) = \int_{-\infty}^{a} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2}\, dx.$
Pavliotis (IC) CompStochProc 43 / 298
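A quick hedged numerical illustration of the central limit theorem in Matlab (uniform summands and the sample sizes are illustrative choices; histogram with the 'pdf' normalization requires a reasonably recent Matlab):

% Sketch: normalized sums of centered U(0,1) variables vs the N(0,1) pdf.
N = 1000; M = 5000;                 % terms per sum, number of sums
Y = rand(N,M) - 0.5;                % centered uniforms, variance 1/12
Z = sum(Y,1)/(sqrt(1/12)*sqrt(N));  % approximately N(0,1) by the CLT
histogram(Z, 'Normalization', 'pdf'); hold on;
xx = linspace(-4,4,200);
plot(xx, exp(-xx.^2/2)/sqrt(2*pi), 'LineWidth', 2);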

44 Assume that $\mathbb{E}|X| < \infty$ and let $\mathcal{G}$ be a sub-$\sigma$-algebra of $\mathcal{F}$. The conditional expectation of $X$ with respect to $\mathcal{G}$ is defined to be the function $\mathbb{E}[X|\mathcal{G}] : \Omega \to E$ which is $\mathcal{G}$-measurable and satisfies
$\int_G \mathbb{E}[X|\mathcal{G}]\, d\mu = \int_G X\, d\mu \quad \forall\, G \in \mathcal{G}.$
We can define $\mathbb{E}[f(X)|\mathcal{G}]$ and the conditional probability $P[X \in F \,|\, \mathcal{G}] = \mathbb{E}[I_F(X)|\mathcal{G}]$, where $I_F$ is the indicator function of $F$, in a similar manner. Pavliotis (IC) CompStochProc 44 / 298

45 Random Number Generators How can a deterministic machine produce random numbers? We can use a chaotic dynamical system:
$x_{n+1} = f(x_n), \quad x_0 = x, \quad n = 1, 2, \dots$
Provided that this dynamical system is sufficiently chaotic, we can hope that for $n$ sufficiently large the sequence of numbers $x_n, x_{n+1}, x_{n+2}, \dots$ is random with nice statistical properties. Statistical tests can be used in order to determine whether this sequence of pseudo-random numbers can be used in order to perform Monte Carlo simulations, simulate stochastic processes etc. Modern programming languages have well tested built-in RNGs. In Matlab: rand $\sim U(0,1)$, randn $\sim N(0,1)$. Pavliotis (IC) CompStochProc 45 / 298
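As a hedged illustration of such a deterministic recursion, here is the classical Park-Miller ("minimal standard") linear congruential generator; it is shown only to make the idea concrete and is not the generator behind Matlab's rand.

function u = lcg(seed,N)
% Sketch: Park-Miller linear congruential generator
% x_{n+1} = a*x_n mod m, returned as uniforms u_n = x_n/m.
a = 16807; m = 2^31 - 1;
u = zeros(1,N); x = seed;
for n = 1:N
    x = mod(a*x, m);   % exact in double precision since a*m < 2^53
    u(n) = x/m;
end
end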

46 Definition A uniform pseudo-number generator is an algorithm which, starting from an initial value $x_0$ (the seed), produces a sequence $u_i = D^i u_0$ of values in $[0,1]$.
Lemma Suppose $U \sim U(0,1)$ and let $F$ be a 1d cumulative distribution function. Then $X = F^{-1}(U)$ with $F^{-1}(u) = \inf\{x : F(x) \geq u\}$ has the distribution $F$.
We can use the Box-Muller algorithm to generate Gaussian random variables, given a pseudonumber generator of uniform r.v.: given $U_1, U_2 \sim U(0,1)$ independent, then
$Z_0 = \sqrt{-2 \ln U_1} \cos(2\pi U_2), \quad Z_1 = \sqrt{-2 \ln U_1} \sin(2\pi U_2)$
are independent $N(0,1)$ r.v. Pavliotis (IC) CompStochProc 46 / 298
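A minimal vectorized Matlab sketch of the Box-Muller transform stated above (the function name is illustrative):

function Z = boxmuller(N)
% Sketch: map N pairs of independent U(0,1) variables to
% 2*N independent N(0,1) variables via Box-Muller.
U1 = rand(1,N); U2 = rand(1,N);
R = sqrt(-2*log(U1));
Z = [R.*cos(2*pi*U2); R.*sin(2*pi*U2)];  % 2-by-N matrix of N(0,1) samples
end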

47 Fast Chaotic Noise Fast/slow system:
$\frac{dx}{dt} = x - x^3 + \frac{\lambda}{\varepsilon} y_2,$
$\frac{dy_1}{dt} = \frac{10}{\varepsilon^2}(y_2 - y_1), \quad \frac{dy_2}{dt} = \frac{1}{\varepsilon^2}(28 y_1 - y_2 - y_1 y_3), \quad \frac{dy_3}{dt} = \frac{1}{\varepsilon^2}\left( y_1 y_2 - \frac{8}{3} y_3 \right).$
Effective dynamics [Melbourne, Stuart '11]:
$dX_t = A\left( X_t - X_t^3 \right) dt + \sigma\, dW_t,$
true values: $A = 1$, $\lambda = \frac{2}{45}$,
$\sigma^2 = 2\lambda^2 \int_0^\infty \lim_{T \to \infty} \frac{1}{T} \int_0^T \psi_s(y)\psi_{s+t}(y)\, ds\, dt.$
Pavliotis (IC) CompStochProc 47 / 298

48 Fast Chaotic Noise: Estimators Values for $\sigma$ reported in the literature ($\varepsilon = 10^{-3/2}$): $0.126 \pm \dots$ via Gaussian moment approximation; $0.13 \pm 0.01$ via HMM. Here: $\hat\sigma$ for $\varepsilon = 10^{-1}$ and for $\varepsilon = 10^{-3/2}$. But we also estimate $\hat{A}$. Pavliotis (IC) CompStochProc 48 / 298

49 Generation of Multidimensional Gaussian Random Variables We are given $Z \sim N(0, I)$, where $I \in \mathbb{R}^{d \times d}$ denotes the identity matrix. We want to generate a Gaussian random vector $Y \sim N(b, \Sigma)$, $\mathbb{E}Y = b$, $\mathbb{E}((Y - b)(Y - b)^T) = \Sigma$. $X = Y - b$ is mean zero, so it is sufficient to consider this case. We can use the Cholesky decomposition: assuming that $\Sigma > 0$, there exists a unique lower triangular matrix $\Lambda$ such that
$\Sigma = \Lambda \Lambda^T.$ (10)
Then $X = \Lambda Z$ satisfies $X \sim N(0, \Sigma)$. The computational cost is $O(d^3)$. In Matlab: chol($\Sigma$). Pavliotis (IC) CompStochProc 49 / 298
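A minimal sketch of this recipe (the mean and covariance are illustrative; note that Matlab's chol returns an upper triangular factor by default, so we request the lower one):

% Sketch: M samples from N(b, Sigma) via the Cholesky factor.
b = [1; 2]; Sigma = [2, 0.5; 0.5, 1];  % illustrative mean and covariance
M = 1000;
L = chol(Sigma, 'lower');              % lower triangular, L*L' = Sigma
Y = repmat(b, 1, M) + L*randn(2, M);   % columns are N(b, Sigma) samples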

50 Eigenvalue decomposition of Cov(X) The Cholesky decomposition is not unique for covariance matrices that are not strictly positive definite. The eigenvalues and eigenvectors of the covariance matrix provide us with complete information about the Gaussian random vector: Principal Component Analysis. Let $X \sim N(0, \Sigma)$, $\Sigma \geq 0$. Then
$\Sigma = PDP^T, \quad PP^T = I, \quad D = \mathrm{diag}(d_1, \dots, d_d).$ (11)
Set $\Lambda = P\sqrt{D}$ and use it in place of the Cholesky factor. In Matlab: [P,D] = eig($\Sigma$). Pavliotis (IC) CompStochProc 50 / 298

51 Other techniques for generating random numbers with a given distribution: Acceptance-rejection algorithm. Markov Chain Monte Carlo. Pavliotis (IC) CompStochProc 51 / 298

52 Monte Carlo Techniques Example:
$I = \int_D f(x)\, dx = \mathbb{E}f(X), \quad X \sim U(D),$
for $D$ of unit volume (e.g. the unit cube in $\mathbb{R}^d$), with the Monte Carlo estimator
$I_N := \frac{1}{N} \sum_{i=1}^{N} f(x_i), \quad x_i \text{ iid } U(D).$
To prove convergence use the (strong) law of large numbers:
$\lim_{N \to +\infty} \frac{1}{N} \sum_{i=1}^{N} f(x_i) = I \quad \text{a.s.}$
Pavliotis (IC) CompStochProc 52 / 298
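A minimal sketch for $D = [0,1]^2$ with an illustrative integrand:

% Sketch: Monte Carlo estimate of I = int_{[0,1]^2} f(x) dx.
f = @(x) exp(-(x(:,1).^2 + x(:,2).^2));  % illustrative integrand
N = 1e5;
x = rand(N, 2);                          % iid U([0,1]^2) samples
IN = mean(f(x))                          % MC estimator I_N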

53 To obtain an estimate on the rate of convergence use the central limit theorem:
$\sqrt{N}\left( I_N - I \right) \to N(0, \sigma^2)$, in distribution,
where $\sigma^2 = \mathrm{Var}_\pi(f(X))$. The error of the Monte Carlo approximation is $O(N^{-1/2})$, regardless of the dimensionality of $x$. For smooth functions, deterministic approximations (Riemann sum, trapezoidal rule, ...) can have a better convergence rate, but they don't scale well as the number of dimensions increases. Pavliotis (IC) CompStochProc 53 / 298

54 More generally, consider the integral
$I = \int_{\mathbb{R}^d} f(x)\pi(x)\, dx,$ (12)
where $\pi(x)$ is a probability distribution. We can always write integrals in this form (multiply and divide by $\pi(x)$). Monte Carlo estimator:
$I_N = \frac{1}{N} \sum_{i=1}^{N} f(X_i), \quad X_i \sim \pi(x) \text{ iid}.$ (13)
The MC estimator is unbiased: $\mathbb{E}_\pi I_N = I$. Variance of the estimator:
$\sigma_I^2 := \mathrm{var}_\pi(I_N) = \frac{1}{N} \mathrm{var}_\pi(f(X)) = \frac{1}{N} \int_{\mathbb{R}^d} (f(x) - I)^2 \pi(x)\, dx.$
Pavliotis (IC) CompStochProc 54 / 298

55 From the strong law of large numbers:
$\lim_{N \to +\infty} I_N = I \quad \text{a.s.}$
From the weak LLN and Chebyshev's inequality:
$P(|I_N - I| \leq \varepsilon) \geq 1 - \frac{\sigma_I^2}{\varepsilon^2} = 1 - \frac{\sigma_f^2}{N\varepsilon^2}.$ (14)
Accuracy of the MC estimator depends on $N$ and on the variance $\sigma_I^2$. In order for $P(I_N \in [I - \varepsilon, I + \varepsilon]) = 1 - \alpha$, for a confidence level $\alpha \in (0,1)$, we need $N > \frac{\sigma_f^2}{\alpha\varepsilon^2}$. The MC error scales like $1/\sqrt{N}$, independently of the number of dimensions. It is expected that $\sigma_f^2$ increases as the number of dimensions increases. Pavliotis (IC) CompStochProc 55 / 298

56 Example (Huynh et al, Stochastic Simulation and Applications in Finance, ch. 5)
function [Call, VarCall] = CalculateCall2(N)
% Price a call option
% N: number of generated points to compute the expectation
x = exp(sqrt(0.1)*randn(1,N) + 5);
Call = mean(max(x - 110, 0));
VarCall = var(max(x - 110, 0))/N;
end
Pavliotis (IC) CompStochProc 56 / 298

57 VARIANCE REDUCTION TECHNIQUES To improve the accuracy of the MC estimator and to reduce the computational cost we can use variance reduction techniques. Examples of such techniques are: Antithetic variables. Control variates. Importance sampling. To implement these techniques we need to be able to estimate the mean and variance "on the fly":
$I_{N+1} = \frac{1}{N+1}\left( N I_N + f(x_{N+1}) \right)$
and
$\mathrm{var}(I_N) = \frac{1}{N}\left( \frac{1}{N-1} \sum_{i=1}^{N} f(x_i)^2 - \frac{N}{N-1} I_N^2 \right).$
Pavliotis (IC) CompStochProc 57 / 298
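A minimal sketch of these running updates (the integrand and sampler below are illustrative assumptions):

% Sketch: update the MC mean and variance estimate "on the fly".
f = @(x) max(x - 110, 0);                % illustrative integrand
samplex = @() exp(sqrt(0.1)*randn + 5);  % illustrative sampler
N = 0; IN = 0; S = 0;                    % S accumulates the sum of f(x_i)^2
for k = 1:1e4
    fx = f(samplex());
    IN = (N*IN + fx)/(N + 1);            % running mean I_{N+1}
    N = N + 1; S = S + fx^2;
end
varIN = (S/(N-1) - N/(N-1)*IN^2)/N       % running variance of I_N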

58 Antithetic Variables: generate additional random variables that have the same distribution as $X_i$, $i = 1, \dots, N$, but are negatively correlated to $X_i$. Example: suppose we want to estimate the mean $\mu_X$ of iid random variables $X_i$, $i = 1, \dots, N$. We introduce the auxiliary r.v. $X_i^a$, $i = 1, \dots, N$, with
$\mathbb{E}[(X_i^a - \mu_X)(X_j - \mu_X)] = -\frac{1}{2}\delta_{ij}\sigma_X^2.$
The variance of the sample mean $\hat\mu^a = \frac{1}{2N} \sum_{i=1}^{N} (X_i + X_i^a)$ is
$\mathrm{Var}(\hat\mu^a) = \frac{1}{2N}\sigma_X^2 + \frac{1}{2N^2} \sum_{k,j=1}^{N} \mathbb{E}[(X_k^a - \mu_X)(X_j - \mu_X)] = \frac{1}{4N}\sigma_X^2.$
Not possible in general to find perfectly anti-correlated random variables. Pavliotis (IC) CompStochProc 58 / 298

59 Example Let $X \sim U(0,1)$. Then $X^a = 1 - X$ is an antithetic variable of $X$.
Example Let $X \sim N(\mu, \sigma^2)$. Then $X^a = 2\mu - X$ is an antithetic variable of $X$.
Example Let $X \sim N(\mu, \Sigma)$ be a 2d Gaussian random vector. We use antithetic variables to reduce the variance and increase the accuracy of the estimation of $\mathbb{E}f(X_1, X_2)$. We will estimate the expectation of $Y_j = e^{X_j}$, $j = 1, 2$. These integrals can be calculated analytically. Pavliotis (IC) CompStochProc 59 / 298

60 (Huynh et al, Stochastic Simulation and Applications in Finance, ch. 5)
function Rep = SimulationsAnti
% Calculate the mean of an exponential of a Gaussian using
% antithetic variables.
NbTraj = 100;
% Construct the covariance matrix
sigma1 = 0.2; sigma2 = 0.5; rho = -0.3;
MoyTheo = [10; 15];
CovTheo = [sigma1^2, rho*sigma1*sigma2; rho*sigma1*sigma2, sigma2^2];
L = chol(CovTheo)';
% Simulation of the random variables
Sample = randn(2, NbTraj);
SampleSimple = repmat(MoyTheo, 1, NbTraj) + L*Sample;
% Introduction of the antithetic variables
XAnti = cat(2, SampleSimple, 2*repmat(MoyTheo, 1, NbTraj) - SampleSimple);
% Transformation of Xi into Zi
Z1Anti = exp(XAnti(1,:));
Z2Anti = exp(XAnti(2,:));
Rep = mean(Z1Anti, 2);
end
Pavliotis (IC) CompStochProc 60 / 298

61 CONTROL VARIATES Let $Z$ be a random variable. We want to estimate $\mathbb{E}Z$. We look for a r.v. $W$ that is strongly correlated with $Z$ and with known $\mathbb{E}W$. Consider the new r.v.
$X = Z + \alpha(W - \mathbb{E}W).$ (15)
We have
$\mathrm{Var}(X) = \mathrm{Var}(Z) + \alpha^2 \mathrm{Var}(W) + 2\alpha\, \mathrm{Cov}(Z, W).$
We minimize the variance with respect to $\alpha$:
$\alpha^* = -\frac{\mathrm{Cov}(Z, W)}{\mathrm{Var}(W)} \quad \text{and} \quad \mathrm{Var}_{\min}(X) = \mathrm{Var}(Z)(1 - \rho^2).$
Pavliotis (IC) CompStochProc 61 / 298

62 CONTROL VARIATES The estimator (15) is unbiased and the variance is reduced when the correlation coefficient $\rho^2 = \frac{\mathrm{Cov}(Z,W)^2}{\mathrm{Var}(Z)\mathrm{Var}(W)}$ is close to 1. The optimal value of $\alpha$ and $\mathrm{Var}(X)$ can be estimated using the empirical means. We can also construct confidence intervals using the empirical values. We can add $R$ control variates:
$X = Z + \sum_{j=1}^{R} \alpha_j (W_j - \mathbb{E}W_j).$ (16)
A similar idea can be used for vector valued r.v. $X$, or even in infinite dimensions. Pavliotis (IC) CompStochProc 62 / 298
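A minimal hedged sketch of a control variate in Matlab (illustrative setup: estimate $\mathbb{E}e^U$ for $U \sim U(0,1)$, exact value $e - 1$, using $W = U$ with known $\mathbb{E}W = 1/2$; $\alpha$ is estimated empirically):

% Sketch: control variate for E[exp(U)], U ~ U(0,1).
N = 1e5;
U = rand(N,1);
Z = exp(U);                     % plain MC samples
W = U; EW = 0.5;                % control variate with known mean
C = cov(Z, W);                  % 2x2 empirical covariance matrix
alpha = -C(1,2)/C(2,2);         % empirical estimate of the optimal alpha
X = Z + alpha*(W - EW);         % controlled samples
[mean(Z), var(Z)/N; mean(X), var(X)/N]  % estimators and their variances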

63 Importance sampling: choose a good probability distribution from which to simulate random variables:
$\int_{\mathbb{R}^d} f(x)\pi(x)\, dx = \int_{\mathbb{R}^d} \frac{f(x)\pi(x)}{\psi(x)} \psi(x)\, dx = \mathbb{E}_\psi\left( \frac{f(X)\pi(X)}{\psi(X)} \right),$
where $\psi(x)$ is a probability distribution. Consider the MC estimator
$I_N^\psi = \frac{1}{N} \sum_{j=1}^{N} \frac{f(X_j)\pi(X_j)}{\psi(X_j)}, \quad X_j \sim \psi(x).$
The estimator is unbiased. The variance is
$\mathrm{var}(I_N^\psi) = \frac{1}{N} \mathrm{var}\left( \frac{f(X)\pi(X)}{\psi(X)} \right).$
Pavliotis (IC) CompStochProc 63 / 298

64 We want to choose $\psi(x)$ so that it has a similar shape to $f(x)\pi(x)$. The closer $\frac{f(x)\pi(x)}{\psi(x)}$ is to a constant, the smaller the variance is.
Example We use importance sampling to estimate the integral
$I = \int_0^1 c \exp(-a(x-b)^4)\, dx$
with $c = 100$, $a = 10000$, $b = 0.5$. We choose $\psi = \gamma(0.5, 0.05)$. For $N = \dots$ we have $I_N = \dots$, $\mathrm{var}(I_N) = \dots$, $\mathrm{err}_N = \dots$, $I_{IC} = \dots$, $\mathrm{var}(I_{IC}) = \dots$, $\mathrm{err}(I_{IC}) = \dots$ Pavliotis (IC) CompStochProc 64 / 298

65 Figure: $c \exp(-a(x-b)^4)$ and the density $\gamma(0.5, 0.05)$. Pavliotis (IC) CompStochProc 65 / 298

66
function [yun,varyun,errun,yic,varyic,erric] = quarticis(N)
% Use importance sampling to calculate the mean
% of exp(-V) where V is a quartic polynomial
a = 10000; b = 0.5; c = 100;
% The constant was truncated in the source; it is replaced here by the
% analytic value int_0^1 c*exp(-a*(x-b)^4) dx = c*2*gamma(5/4)*a^(-1/4),
% up to a negligible tail correction outside [0,1].
intexact = c*2*gamma(5/4)*a^(-1/4);
x = rand(N,1);
yun = mean(c*exp(-a*(x-b).^4));
varyun = var(c*exp(-a*(x-b).^4))/N;
errun = abs(intexact-yun);
mu = 0.5; sigma = 0.05;
xx = mu + sigma*randn(N,1);
yic = mean((c*exp(-a*(xx-b).^4))./normpdf(xx,mu,sigma));
varyic = var((c*exp(-a*(xx-b).^4))./normpdf(xx,mu,sigma))/N;
erric = abs(intexact-yic);
end
Pavliotis (IC) CompStochProc 66 / 298

67 ELEMENTS OF THE THEORY OF STOCHASTIC PROCESSES Pavliotis (IC) CompStochProc 67 / 298

68 Let $T$ be an ordered set. A stochastic process is a collection of random variables $X = \{X_t; t \in T\}$ where, for each fixed $t \in T$, $X_t$ is a random variable from $(\Omega, \mathcal{F})$ to $(E, \mathcal{G})$. The measurable space $(\Omega, \mathcal{F})$ is called the sample space. The space $(E, \mathcal{G})$ is called the state space. In this course we will take the set $T$ to be $[0, +\infty)$. The state space $E$ will usually be $\mathbb{R}^d$ equipped with the $\sigma$-algebra of Borel sets. A stochastic process $X$ may be viewed as a function of both $t \in T$ and $\omega \in \Omega$. We will sometimes write $X(t)$, $X(t,\omega)$ or $X_t(\omega)$ instead of $X_t$. For a fixed sample point $\omega \in \Omega$, the function $X_t(\omega) : T \to E$ is called a sample path (realization, trajectory) of the process $X$. Pavliotis (IC) CompStochProc 68 / 298

69 The finite dimensional distributions (fdd) of a stochastic process are the distributions of the $E^k$-valued random variables $(X(t_1), X(t_2), \dots, X(t_k))$ for arbitrary positive integer $k$ and arbitrary times $t_i \in T$, $i \in \{1, \dots, k\}$:
$F(x) = P(X(t_i) \leq x_i, \; i = 1, \dots, k),$
with $x = (x_1, \dots, x_k)$. We will say that two processes $X_t$ and $Y_t$ are equivalent if they have the same finite dimensional distributions. From experiments or numerical simulations we can only obtain information about the fdd of a process. Pavliotis (IC) CompStochProc 69 / 298

70 Definition A Gaussian process is a stochastic process for which $E = \mathbb{R}^d$ and all the finite dimensional distributions are Gaussian, i.e. every fdd has density
$(2\pi)^{-n/2} (\det K_k)^{-1/2} \exp\left[ -\frac{1}{2} \left\langle K_k^{-1}(x - \mu_k), x - \mu_k \right\rangle \right],$
for some vector $\mu_k$ and a symmetric positive definite matrix $K_k$. A Gaussian process $x(t)$ is characterized by its mean
$m(t) := \mathbb{E}x(t)$
and the covariance function
$C(t, s) = \mathbb{E}\left( (x(t) - m(t))(x(s) - m(s)) \right).$
Thus, the first two moments of a Gaussian process are sufficient for a complete characterization of the process. Pavliotis (IC) CompStochProc 70 / 298

71 Examples of Gaussian stochastic processes Random Fourier series: let $\xi_i, \zeta_i \sim N(0,1)$, $i = 1, \dots, N$, and define
$X(t) = \sum_{j=1}^{N} \left( \xi_j \cos(2\pi j t) + \zeta_j \sin(2\pi j t) \right).$
Brownian motion is a Gaussian process with $m(t) = 0$, $C(t,s) = \min(t,s)$. The Brownian bridge is a Gaussian process with $m(t) = 0$, $C(t,s) = \min(t,s) - ts$. The Ornstein-Uhlenbeck process is a Gaussian process with $m(t) = 0$, $C(t,s) = \lambda e^{-\alpha|t-s|}$ with $\alpha, \lambda > 0$. Pavliotis (IC) CompStochProc 71 / 298

72 To simulate a Gaussian stochastic process: Fix $\Delta t$ and define $t_j = (j-1)\Delta t$, $j = 1, \dots, N$. Set $X_j := X(t_j)$ and define the Gaussian random vector $X^N = \{X_j\}_{j=1}^{N}$. Then $X^N \sim N(\mu^N, \Gamma^N)$ with $\mu^N = (\mu(t_1), \dots, \mu(t_N))$ and $\Gamma^N_{ij} = C(t_i, t_j)$. Then $X^N = \mu^N + \Lambda N(0, I)$ with $\Gamma^N = \Lambda\Lambda^T$. We can calculate the square root of the covariance matrix either using the Cholesky factorization or by calculating its eigenvalue decomposition (PCA). $\Gamma$ might be a full matrix and the calculation of $\Lambda$ can be computationally expensive. Pavliotis (IC) CompStochProc 72 / 298

73
function [t,x,gamma,mx,varx] = gaussiansimulation(T,dt,M)
% simulate a mean zero gaussian process
% use the eigenvalue decomposition to calculate sqrt(gamma)
% Input: T : length of path
%        dt: timestep
%        M : number of paths simulated
% gamma: covariance matrix; mx, varx: first and second empirical moments
t = 0:dt:T;
N = length(t);
for i = 1:N,
    for j = 1:N,
        % gamma(i,j) = covbm(t(i),t(j));
        % gamma(i,j) = covou(t(i),t(j),1,1);
        % gamma(i,j) = covbb(t(i),t(j));
        gamma(i,j) = covfbm(t(i),t(j),0.2);
    end
end
[vv,dd] = eig(gamma); lambda = vv*sqrt(dd)*vv';
x = lambda*randn(N,M);
mx = mean(x'); varx = mean(x'.*x');
figure; plot(t,x(:,1:10),'LineWidth',1.0);
Pavliotis (IC) CompStochProc 73 / 298

74
function rr = covbm(x,y)
% covariance function of Brownian motion
rr = min(x,y);
end

function rr = covbb(x,y)
% covariance function of the Brownian bridge
rr = min(x,y) - x*y;
end

function rr = covou(x,y,dd,aa)
% covariance function of the Ornstein-Uhlenbeck process
% aa: inverse correlation time
% dd: diffusion coefficient, rr(0) = dd/aa
rr = (dd/aa)*exp(-aa*abs(x-y));
end
Pavliotis (IC) CompStochProc 74 / 298

75
function rr = covfbm(x,y,H)
% covariance function of fractional Brownian motion
% H: Hurst exponent
rr = (1/2)*(abs(x)^(2*H) + abs(y)^(2*H) - abs(x-y)^(2*H));
end
Pavliotis (IC) CompStochProc 75 / 298

76 Let $(\Omega, \mathcal{F}, P)$ be a probability space. Let $X_t$, $t \in T$ (with $T = \mathbb{R}$ or $\mathbb{Z}$), be a real-valued random process on this probability space with finite second moment, $\mathbb{E}|X_t|^2 < +\infty$ (i.e. $X_t \in L^2$). Definition A stochastic process $X_t \in L^2$ is called second-order stationary or wide-sense stationary if the first moment $\mathbb{E}X_t$ is a constant and the second moment $\mathbb{E}(X_t X_s)$ depends only on the difference $t - s$:
$\mathbb{E}X_t = \mu, \quad \mathbb{E}(X_t X_s) = C(t - s).$
Pavliotis (IC) CompStochProc 76 / 298

77 The constant $\mu$ is called the expectation of the process $X_t$. We will set $\mu = 0$. The function $C(t)$ is called the covariance or the autocorrelation function of $X_t$. Notice that $C(t) = \mathbb{E}(X_t X_0)$, whereas $C(0) = \mathbb{E}(X_t^2)$, which is finite by assumption. Since we have assumed that $X_t$ is a real valued process, we have that $C(t) = C(-t)$, $t \in \mathbb{R}$. Pavliotis (IC) CompStochProc 77 / 298

78 The correlation function of a second order stationary process enables us to associate a time scale to $X_t$, the correlation time $\tau_{cor}$:
$\tau_{cor} = \frac{1}{C(0)} \int_0^\infty C(\tau)\, d\tau = \int_0^\infty \mathbb{E}(X_\tau X_0)/\mathbb{E}(X_0^2)\, d\tau.$
The slower the decay of the correlation function, the larger the correlation time is. We have to assume sufficiently fast decay of correlations so that the correlation time is finite. Pavliotis (IC) CompStochProc 78 / 298

79 Example Consider the mean zero, second order stationary process with covariance function
$R(t) = \frac{D}{\alpha} e^{-\alpha|t|}.$ (17)
The spectral density of this process is:
$f(x) = \frac{1}{2\pi}\frac{D}{\alpha} \int_{-\infty}^{+\infty} e^{-ixt} e^{-\alpha|t|}\, dt = \frac{1}{2\pi}\frac{D}{\alpha} \left( \int_{-\infty}^{0} e^{-ixt} e^{\alpha t}\, dt + \int_{0}^{+\infty} e^{-ixt} e^{-\alpha t}\, dt \right) = \frac{1}{2\pi}\frac{D}{\alpha} \left( \frac{1}{-ix + \alpha} + \frac{1}{ix + \alpha} \right) = \frac{D}{\pi}\frac{1}{x^2 + \alpha^2}.$
Pavliotis (IC) CompStochProc 79 / 298

80 Example (Continued) This function is called the Cauchy or the Lorentz distribution. The Gaussian stochastic process with covariance function (17) is called the stationary Ornstein-Uhlenbeck process. The correlation time is (we have that $C(0) = D/\alpha$)
$\tau_{cor} = \int_0^\infty e^{-\alpha t}\, dt = \alpha^{-1}.$
Pavliotis (IC) CompStochProc 80 / 298

81 Second order stationary processes are ergodic: time averages equal phase space (ensemble) averages. An example of an ergodic theorem for stationary processes is the following $L^2$ (mean-square) ergodic theorem.
Theorem Let $\{X_t\}_{t \geq 0}$ be a second order stationary process on a probability space $(\Omega, \mathcal{F}, P)$ with mean $\mu$ and covariance $R(t)$, and assume that $R(t) \in L^1(0, +\infty)$. Then
$\lim_{T \to +\infty} \mathbb{E}\left| \frac{1}{T} \int_0^T X(s)\, ds - \mu \right|^2 = 0.$ (18)
Pavliotis (IC) CompStochProc 81 / 298

82 Proof. We have
$\mathbb{E}\left| \frac{1}{T} \int_0^T X(s)\, ds - \mu \right|^2 = \frac{1}{T^2} \int_0^T \int_0^T R(t - s)\, dt\, ds = \frac{2}{T^2} \int_0^T \int_0^t R(t - s)\, ds\, dt = \frac{2}{T^2} \int_0^T (T - u) R(u)\, du \to 0,$
using the dominated convergence theorem and the assumption $R(\cdot) \in L^1$. In the above we used the fact that $R$ is a symmetric function, together with the change of variables $u = t - s$, $v = t$ and an integration over $v$. Pavliotis (IC) CompStochProc 82 / 298

83 Definition A stochastic process is called (strictly) stationary if all finite dimensional distributions are invariant under time translation: for any integer $k$ and times $t_i \in T$, the distribution of $(X(t_1), X(t_2), \dots, X(t_k))$ is equal to that of $(X(s+t_1), X(s+t_2), \dots, X(s+t_k))$ for any $s$ such that $s + t_i \in T$ for all $i \in \{1, \dots, k\}$. In other words,
$P(X_{t_1+t} \in A_1, X_{t_2+t} \in A_2, \dots, X_{t_k+t} \in A_k) = P(X_{t_1} \in A_1, X_{t_2} \in A_2, \dots, X_{t_k} \in A_k), \quad \forall t \in T.$
Let $X_t$ be a strictly stationary stochastic process with finite second moment (i.e. $X_t \in L^2$). The definition of strict stationarity implies that $\mathbb{E}X_t = \mu$, a constant, and $\mathbb{E}((X_t - \mu)(X_s - \mu)) = C(t - s)$. Hence, a strictly stationary process with finite second moment is also stationary in the wide sense. The converse is not true. Pavliotis (IC) CompStochProc 83 / 298

84 Remarks 1. A sequence $Y_0, Y_1, \dots$ of independent, identically distributed random variables is a stationary process with $R(k) = \sigma^2 \delta_{k0}$, $\sigma^2 = \mathbb{E}(Y_k)^2$. 2. Let $Z$ be a single random variable with known distribution and set $Z_j = Z$, $j = 0, 1, 2, \dots$. Then the sequence $Z_0, Z_1, Z_2, \dots$ is a stationary sequence with $R(k) = \sigma^2$. 3. The first two moments of a Gaussian process are sufficient for a complete characterization of the process. A corollary of this is that a second order stationary Gaussian process is also a (strictly) stationary process. Pavliotis (IC) CompStochProc 84 / 298

85 The most important continuous time stochastic process is Brownian motion. Brownian motion is a mean zero, continuous (i.e. it has continuous sample paths: for a.e. $\omega \in \Omega$ the function $X_t$ is a continuous function of time) process with independent Gaussian increments. A process $X_t$ has independent increments if for every sequence $t_0 < t_1 < \dots < t_n$ the random variables
$X_{t_1} - X_{t_0}, \; X_{t_2} - X_{t_1}, \; \dots, \; X_{t_n} - X_{t_{n-1}}$
are independent. If, furthermore, for any $t_1, t_2$ and Borel set $B \subset \mathbb{R}$, $P(X_{t_2+s} - X_{t_1+s} \in B)$ is independent of $s$, then the process $X_t$ has stationary independent increments. Pavliotis (IC) CompStochProc 85 / 298

86 Definition A one dimensional standard Brownian motion $W(t) : \mathbb{R}^+ \to \mathbb{R}$ is a real valued stochastic process with the following properties: 1. $W(0) = 0$; 2. $W(t)$ is continuous; 3. $W(t)$ has independent increments; 4. for every $t > s \geq 0$, $W(t) - W(s)$ has a Gaussian distribution with mean 0 and variance $t - s$. That is, the density of the random variable $W(t) - W(s)$ is
$g(x; t, s) = \left( 2\pi(t - s) \right)^{-\frac{1}{2}} \exp\left( -\frac{x^2}{2(t - s)} \right).$ (19)
Pavliotis (IC) CompStochProc 86 / 298

87 Definition (Continued) A $d$-dimensional standard Brownian motion $W(t) : \mathbb{R}^+ \to \mathbb{R}^d$ is a collection of $d$ independent one dimensional Brownian motions:
$W(t) = (W_1(t), \dots, W_d(t)),$
where $W_i(t)$, $i = 1, \dots, d$, are independent one dimensional Brownian motions. The density of the Gaussian random vector $W(t) - W(s)$ is thus
$g(x; t, s) = \left( 2\pi(t - s) \right)^{-d/2} \exp\left( -\frac{|x|^2}{2(t - s)} \right).$
Brownian motion is sometimes referred to as the Wiener process. Pavliotis (IC) CompStochProc 87 / 298

88 To generate Brownian paths we can use the general algorithm for simulating Gaussian processes with $\mu^N = 0$ and $\Gamma = \{\min(t_i, t_j)\}_{i,j=1}^{N}$. It is easier to use the fact that Brownian motion has independent increments with $dW \sim N(0, dt)$:
$W_j = W_{j-1} + dW_j, \quad j = 1, 2, \dots, N,$
where $dW_j = \sqrt{\Delta t}\, N(0,1)$. After having generated Brownian paths we can compute statistics using Monte Carlo:
$\mathbb{E}f(W_t) \approx \frac{1}{N} \sum_{j=1}^{N} f(W_t^j).$
We can also simulate processes related to Brownian motion, such as the geometric Brownian motion:
$S_t = S_0 e^{at + \sigma W_t}.$ (20)
Pavliotis (IC) CompStochProc 88 / 298
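A minimal Matlab sketch of this increment recipe (time step and the geometric Brownian motion parameters are illustrative assumptions):

% Sketch: Brownian paths from independent increments, plus geometric BM.
T = 1; dt = 1e-3; N = round(T/dt); M = 1000;  % illustrative choices
dW = sqrt(dt)*randn(N, M);                    % increments ~ N(0, dt)
W = [zeros(1,M); cumsum(dW)];                 % W_0 = 0, W_j = W_{j-1} + dW_j
t = (0:N)'*dt;
S = exp(0.5*t*ones(1,M) + 0.3*W);             % geometric BM: S_0 = 1, a = 0.5, sigma = 0.3
plot(t, W(:,1:5)); hold on;
plot(t, mean(W,2), 'k', 'LineWidth', 2);      % ensemble mean, close to zero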

89 Figure: Brownian sample paths: the mean of 1000 paths and 5 individual paths of $U(t)$. Pavliotis (IC) CompStochProc 89 / 298

90 It is possible to prove rigorously the existence of the Wiener process (Brownian motion):
Theorem (Wiener) There exists an almost-surely continuous process $W_t$ with independent increments such that $W_0 = 0$ and, for each $t$, the random variable $W_t$ is $N(0, t)$. Furthermore, $W_t$ is almost surely locally Hölder continuous with exponent $\alpha$ for any $\alpha \in (0, \frac{1}{2})$.
Notice that Brownian paths are not differentiable. Pavliotis (IC) CompStochProc 90 / 298

91 Brownian motion is a Gaussian process. For the $d$-dimensional Brownian motion, and for $I$ the $d \times d$ identity matrix, we have (see (7) and (8))
$\mathbb{E}W(t) = 0 \quad \forall\, t \geq 0$
and
$\mathbb{E}\left( (W(t) - W(s)) \otimes (W(t) - W(s)) \right) = (t - s)I.$ (21)
Moreover,
$\mathbb{E}\left( W(t) \otimes W(s) \right) = \min(t, s)I.$ (22)
Pavliotis (IC) CompStochProc 91 / 298

92 From the formula for the Gaussian density $g(x, t - s)$, eqn. (19), we immediately conclude that $W(t) - W(s)$ and $W(t+u) - W(s+u)$ have the same pdf. Consequently, Brownian motion has stationary increments. Notice, however, that Brownian motion itself is not a stationary process. Since $W(t) = W(t) - W(0)$, the pdf of $W(t)$ is
$g(x, t) = \frac{1}{\sqrt{2\pi t}} e^{-x^2/2t}.$
We can easily calculate all moments of the Brownian motion:
$\mathbb{E}(x^n(t)) = \frac{1}{\sqrt{2\pi t}} \int_{-\infty}^{+\infty} x^n e^{-x^2/2t}\, dx = \begin{cases} (n-1)!!\, t^{n/2}, & n \text{ even}, \\ 0, & n \text{ odd}. \end{cases}$
Pavliotis (IC) CompStochProc 92 / 298

93 We can define the OU process through Brownian motion via a time change.
Lemma Let $W(t)$ be a standard Brownian motion and consider the process
$V(t) = e^{-t} W(e^{2t}).$
Then $V(t)$ is a Gaussian second order stationary process with mean 0 and covariance
$K(s, t) = e^{-|t - s|}.$
Pavliotis (IC) CompStochProc 93 / 298

94 Definition A (normalized) fractional Brownian motion $W_t^H$, $t \geq 0$, with Hurst parameter $H \in (0,1)$ is a centered Gaussian process with continuous sample paths whose covariance is given by
$\mathbb{E}(W_t^H W_s^H) = \frac{1}{2}\left( s^{2H} + t^{2H} - |t - s|^{2H} \right).$ (23)
Fractional Brownian motion has the following properties. 1. When $H = \frac{1}{2}$, $W_t^{1/2}$ becomes the standard Brownian motion. 2. $W_0^H = 0$, $\mathbb{E}W_t^H = 0$, $\mathbb{E}(W_t^H)^2 = t^{2H}$, $t \geq 0$. 3. It has stationary increments, $\mathbb{E}(W_t^H - W_s^H)^2 = |t - s|^{2H}$. 4. It has the following self similarity property:
$(W_{\alpha t}^H, t \geq 0) = (\alpha^H W_t^H, t \geq 0), \quad \alpha > 0,$
where the equivalence is in law. Pavliotis (IC) CompStochProc 94 / 298

95 A very useful result is that we can expand every centered ($\mathbb{E}X_t = 0$) stochastic process with continuous covariance (and hence, every $L^2$-continuous centered stochastic process) into a random Fourier series. Let $(\Omega, \mathcal{F}, P)$ be a probability space and $\{X_t\}_{t \in T}$ a centered process which is mean square continuous. Then, the Karhunen-Loéve theorem states that $X_t$ can be expanded in the form
$X_t = \sum_{n=1}^{\infty} \xi_n \phi_n(t),$ (24)
where the $\xi_n$ are an orthogonal sequence of random variables with $\mathbb{E}|\xi_k|^2 = \lambda_k$, and where $\{\lambda_k, \phi_k\}_{k=1}^{\infty}$ are the eigenvalues and eigenfunctions of the integral operator whose kernel is the covariance of the process $X_t$. The convergence is in $L^2(P)$ for every $t \in T$. If $X_t$ is a Gaussian process then the $\xi_k$ are independent Gaussian random variables. Pavliotis (IC) CompStochProc 95 / 298

96 Set $T = [0,1]$. Let $K : L^2(T) \to L^2(T)$ be defined by
$K\psi(t) = \int_0^1 K(s,t)\psi(s)\, ds.$
The kernel of this integral operator is a continuous (in both $s$ and $t$), symmetric, nonnegative function. Hence, the corresponding integral operator has eigenvalues and eigenfunctions
$K\phi_n = \lambda_n \phi_n, \quad \lambda_n \geq 0, \quad (\phi_n, \phi_m)_{L^2} = \delta_{nm},$
such that the covariance operator can be expanded in a uniformly convergent series
$B(t, s) = \sum_{n=1}^{\infty} \lambda_n \phi_n(t)\phi_n(s).$
Pavliotis (IC) CompStochProc 96 / 298

97 The random variables $\xi_n$ are defined as
$\xi_n = \int_0^1 X(t)\phi_n(t)\, dt.$
The orthogonality of the eigenfunctions of the covariance operator implies that
$\mathbb{E}(\xi_n \xi_m) = \lambda_n \delta_{nm}.$
Thus the random variables are orthogonal. When $X(t)$ is Gaussian, the $\xi_k$ are Gaussian random variables. Furthermore, since they are also orthogonal, they are independent Gaussian random variables. Pavliotis (IC) CompStochProc 97 / 298

98 Example The Karhunen-Loéve Expansion for Brownian Motion We set $T = [0,1]$. The covariance function of Brownian motion is $C(t,s) = \min(t,s)$. The eigenvalue problem $C\psi_n = \lambda_n \psi_n$ becomes
$\int_0^1 \min(t,s)\psi_n(s)\, ds = \lambda_n \psi_n(t).$
Or,
$\int_0^t s\psi_n(s)\, ds + t\int_t^1 \psi_n(s)\, ds = \lambda_n \psi_n(t).$
We differentiate this equation twice:
$\int_t^1 \psi_n(s)\, ds = \lambda_n \psi_n'(t) \quad \text{and} \quad -\psi_n(t) = \lambda_n \psi_n''(t),$
where primes denote differentiation with respect to $t$. Pavliotis (IC) CompStochProc 98 / 298

99 Example From the above equations we immediately see that the right boundary conditions are $\psi(0) = \psi'(1) = 0$. The eigenvalues and eigenfunctions are
$\psi_n(t) = \sqrt{2}\sin\left( \frac{1}{2}(2n-1)\pi t \right), \quad \lambda_n = \left( \frac{2}{(2n-1)\pi} \right)^2.$
Thus, the Karhunen-Loéve expansion of Brownian motion on $[0,1]$ is
$W_t = \sqrt{2} \sum_{n=1}^{\infty} \xi_n \frac{\sin\left( \left( n - \frac{1}{2} \right)\pi t \right)}{\left( n - \frac{1}{2} \right)\pi}.$ (25)
Pavliotis (IC) CompStochProc 99 / 298

100 Example KL expansion for Brownian motion
function [t,bb] = KLBmotion(dt)
% generate paths of Brownian motion on [0,1]
% using the KL expansion
t = 0:dt:1;
N = length(t);
bb = zeros(1,N);
for i = 1:N,
    xi = sqrt(2)*randn/((i-0.5)*pi);
    bb = bb + xi*sin((i-0.5)*pi*t);
end
figure; plot(t,bb,'LineWidth',2);
end
Pavliotis (IC) CompStochProc 100 / 298

101 Example The Karhunen-Loéve expansion of the Brownian bridge $B_t = W_t - tW_1$ on $[0,1]$ is
$B_t = \sqrt{2} \sum_{n=1}^{\infty} \xi_n \frac{\sin(n\pi t)}{n\pi}.$ (26)
Pavliotis (IC) CompStochProc 101 / 298

102 Example KL expansion for Brownian bridge
function [t,bb] = KLBBridge(dt)
% generate paths of the Brownian bridge on [0,1]
% using the KL expansion
t = 0:dt:1;
N = length(t);
bb = zeros(1,N);
for i = 1:N,
    xi = sqrt(2)*randn/(i*pi);
    bb = bb + xi*sin(i*pi*t);
end
figure; plot(t,bb,'LineWidth',2);
end
Pavliotis (IC) CompStochProc 102 / 298

103 STOCHASTIC DIFFERENTIAL EQUATIONS (SDEs) Pavliotis (IC) CompStochProc 103 / 298

104 In this part of the course we will study stochastic differential equations (SDEs): ODEs driven by Gaussian white noise. Let $W(t)$ denote a standard $m$-dimensional Brownian motion, $h : \mathcal{Z} \to \mathbb{R}^d$ a smooth vector-valued function and $\gamma : \mathcal{Z} \to \mathbb{R}^{d \times m}$ a smooth matrix valued function (in this course we will take $\mathcal{Z} = \mathbb{T}^d$, $\mathbb{R}^d$ or $\mathbb{R}^l \times \mathbb{T}^{d-l}$). Consider the SDE
$\frac{dz}{dt} = h(z) + \gamma(z)\frac{dW}{dt}, \quad z(0) = z_0.$ (27)
We think of the term $\frac{dW}{dt}$ as representing Gaussian white noise: a mean-zero Gaussian process with correlation $\delta(t - s)I$. The function $h$ in (27) is sometimes referred to as the drift and $\gamma$ as the diffusion coefficient. Pavliotis (IC) CompStochProc 104 / 298

105 Such a process exists only as a distribution. The precise interpretation of (27) is as an integral equation for $z(t) \in C(\mathbb{R}^+, \mathcal{Z})$:
$z(t) = z_0 + \int_0^t h(z(s))\, ds + \int_0^t \gamma(z(s))\, dW(s).$ (28)
In order to make sense of this equation we need to define the stochastic integral against $W(s)$. Pavliotis (IC) CompStochProc 105 / 298

106 The Itô Stochastic Integral For the rigorous analysis of stochastic differential equations it is necessary to define stochastic integrals of the form
$I(t) = \int_0^t f(s)\, dW(s),$ (29)
where $W(t)$ is a standard one dimensional Brownian motion. This is not straightforward because $W(t)$ does not have bounded variation. In order to define the stochastic integral we assume that $f(t)$ is a random process, adapted to the filtration $\mathcal{F}_t$ generated by the process $W(t)$, and such that
$\mathbb{E}\left( \int_0^T f(s)^2\, ds \right) < \infty.$
Pavliotis (IC) CompStochProc 106 / 298

107 The Itô stochastic integral $I(t)$ is defined as the $L^2$ limit of the Riemann sum approximation of (29):
$I(t) := \lim_{K \to \infty} \sum_{k=1}^{K-1} f(t_{k-1})\left( W(t_k) - W(t_{k-1}) \right),$ (30)
where $t_k = k\Delta t$ and $K\Delta t = t$. Notice that the function $f(t)$ is evaluated at the left end of each interval $[t_{n-1}, t_n]$ in (30). The resulting Itô stochastic integral $I(t)$ is a.s. continuous in $t$. These ideas are readily generalized to the case where $W(s)$ is a standard $d$-dimensional Brownian motion and $f(s) \in \mathbb{R}^{m \times d}$ for each $s$. Pavliotis (IC) CompStochProc 107 / 298

108 The resulting integral satisfies the Itô isometry
$\mathbb{E}|I(t)|^2 = \int_0^t \mathbb{E}|f(s)|_F^2\, ds,$ (31)
where $|\cdot|_F$ denotes the Frobenius norm $|A|_F = \sqrt{\mathrm{tr}(A^T A)}$. The Itô stochastic integral is a martingale:
$\mathbb{E}I(t) = 0$
and
$\mathbb{E}[I(t)\,|\,\mathcal{F}_s] = I(s) \quad \forall\, t \geq s,$
where $\mathcal{F}_s$ denotes the filtration generated by $W(s)$. Pavliotis (IC) CompStochProc 108 / 298

109 Example Consider the Itô stochastic integral
$I(t) = \int_0^t f(s)\, dW(s),$
where $f, W$ are scalar valued. This is a martingale with quadratic variation
$\langle I \rangle_t = \int_0^t (f(s))^2\, ds.$
More generally, for $f, W$ in arbitrary finite dimensions, the integral $I(t)$ is a martingale with quadratic variation
$\langle I \rangle_t = \int_0^t (f(s) \otimes f(s))\, ds.$
Pavliotis (IC) CompStochProc 109 / 298

110 The Stratonovich Stochastic Integral In addition to the Itô stochastic integral, we can also define the Stratonovich stochastic integral. It is defined as the $L^2$ limit of a different Riemann sum approximation of (29), namely
$I_{strat}(t) := \lim_{K \to \infty} \sum_{k=1}^{K-1} \frac{1}{2}\left( f(t_{k-1}) + f(t_k) \right)\left( W(t_k) - W(t_{k-1}) \right),$ (32)
where $t_k = k\Delta t$ and $K\Delta t = t$. Notice that the function $f(t)$ is evaluated at both endpoints of each interval $[t_{n-1}, t_n]$ in (32). The multidimensional Stratonovich integral is defined in a similar way. The resulting integral is written as
$I_{strat}(t) = \int_0^t f(s) \circ dW(s).$
Pavliotis (IC) CompStochProc 110 / 298

111 The limit in (32) gives rise to an integral which differs from the Itô integral. The situation is more complex than that arising in the standard theory of Riemann integration for functions of bounded variation: in that case the points in $[t_{k-1}, t_k]$ where the integrand is evaluated do not affect the definition of the integral, via a limiting process. In the case of integration against Brownian motion, which does not have bounded variation, the limits differ. When $f$ and $W$ are correlated through an SDE, a formula exists to convert between them. Pavliotis (IC) CompStochProc 111 / 298

112 The Itô and Stratonovich stochastic integrals coincide provided that the integrand is sufficiently regular. Suppose that there exist $C < \infty$, $\varepsilon > 0$ such that
$\mathbb{E}|f(t,\omega) - f(s,\omega)|^2 \leq C|t - s|^{1+\varepsilon},$
for all $s, t \in [0, T]$. Then
$\int_0^T f(t,\omega)\, dW_t = \int_0^T f(t,\omega) \circ dW_t.$
This is more regularity than the solution to an SDE possesses. On the other hand:
$\int_0^T W_t\, dW_t = \frac{1}{2}W_T^2 - \frac{1}{2}T, \quad \int_0^T W_t \circ dW_t = \frac{1}{2}W_T^2.$
Pavliotis (IC) CompStochProc 112 / 298

113 Itô and Stratonovich stochastic integrals (Higham 2001)
function [itoerr, straterr] = stint(T,N)
%
% Ito and Stratonovich integrals of W dW
%
dt = T/N;
dW = sqrt(dt)*randn(1,N);    % increments
W = cumsum(dW);              % cumulative sum
ito = sum([0,W(1:end-1)].*dW)
% midpoint value of W simulated as the average of the endpoints plus an
% independent N(0,dt/4) correction (term restored from Higham's stint.m)
strat = sum((0.5*([0,W(1:end-1)]+W) + 0.5*sqrt(dt)*randn(1,N)).*dW)
itoerr = abs(ito - 0.5*(W(end)^2-T))
straterr = abs(strat - 0.5*W(end)^2)
end
Pavliotis (IC) CompStochProc 113 / 298

114 We can associate a second order differential operator with an SDE, the generator of the process. Consider the Itô SDE
$dX_t = b(X_t)\, dt + \sigma(X_t)\, dW(t).$ (33)
We define
$\Sigma(x) = \sigma(x)\sigma(x)^T.$ (34)
The generator of $X_t$ is
$\mathcal{L} = b \cdot \nabla + \frac{1}{2} \sum_{i,j=1}^{d} \Sigma_{ij}(x) \frac{\partial^2}{\partial x_i \partial x_j}.$ (35)
Pavliotis (IC) CompStochProc 114 / 298

115 Using the generator we can write Itô's formula, which enables us to calculate the rate of change in time of functions $V(t, X_t)$. In the absence of noise the rate of change of $V$ can be written as
$\frac{d}{dt} V(t, x_t) = \frac{\partial V}{\partial t}(t, x_t) + \mathcal{A}V(t, x_t),$ (36)
where $x_t$ is the solution of the ODE $\dot{x}_t = b(x_t)$ and $\mathcal{A}$ is
$\mathcal{A} = b \cdot \nabla.$ (37)
For the SDE the chain rule (36) has to be modified by the addition of a term due to noise:
$\frac{d}{dt} V(t, X_t) = \frac{\partial V}{\partial t}(t, X_t) + \mathcal{L}V(t, X_t) + \left\langle \nabla V(t, X_t), \sigma(X_t)\frac{dW}{dt}(t) \right\rangle.$ (38)
Pavliotis (IC) CompStochProc 115 / 298

116 The additional term in $\mathcal{L}V$ is proportional to $\Sigma$ and it arises from the lack of smoothness of Brownian motion. The precise interpretation of the expression for the rate of change of $V$ is in integrated form:
$V(t, X_t) = V(0, X_0) + \int_0^t \frac{\partial V}{\partial s}(s, X_s)\, ds + \int_0^t \mathcal{L}V(s, X_s)\, ds + \int_0^t \left\langle \nabla V(s, X_s), \sigma(X_s)\, dW(s) \right\rangle.$ (39)
The Brownian differential scales like the square root of the differential in time: $\mathbb{E}(dW_t)^2 = dt$. We can write Itô's formula in the form
$dV(t, x(t)) = \frac{\partial V}{\partial t}\, dt + \sum_{i=1}^{d} \frac{\partial V}{\partial x_i}\, dx_i + \frac{1}{2} \sum_{i,j=1}^{d} \frac{\partial^2 V}{\partial x_i \partial x_j}\, dx_i\, dx_j.$ (40)
We are using the convention $dW_i(t)\, dW_j(t) = \delta_{ij}\, dt$, $dW_i(t)\, dt = 0$, $i, j = 1, \dots, d$. Thus, we can think of (40) as a generalization of the Leibniz rule (38) where second order differentials are kept. Pavliotis (IC) CompStochProc 116 / 298
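As a quick worked instance of (40): for the scalar Itô SDE $dX_t = b(X_t)\, dt + \sigma(X_t)\, dW_t$ and $V(x) = x^2$, keeping second order differentials with $(dX_t)^2 = \sigma^2(X_t)\, dt$ gives
$dV(X_t) = 2X_t\, dX_t + (dX_t)^2 = \left( 2X_t b(X_t) + \sigma^2(X_t) \right) dt + 2X_t \sigma(X_t)\, dW_t.$
In particular, for $b = 0$, $\sigma = 1$ (so $X_t = W_t$) this gives $d(W_t^2) = dt + 2W_t\, dW_t$, consistent with $\int_0^T W_t\, dW_t = \frac{1}{2}W_T^2 - \frac{1}{2}T$ above.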

117 Taking the expectation of (40) and using the fact that the stochastic integral is a martingale we obtain (we assume that $V$ is independent of time and that the initial conditions are deterministic)
$\mathbb{E}V(X_t) = V(X_0) + \mathbb{E}\int_0^t \mathcal{L}V(X_s)\, ds.$
We can use Itô's formula to calculate the expectation value of functionals of the solution of an SDE. For a Stratonovich SDE the rules of standard calculus apply; let $X_t$ denote the solution of the Stratonovich SDE
$dX_t = b(X_t)\, dt + \sigma(X_t) \circ dW_t,$ (41)
where $b : \mathbb{R}^d \to \mathbb{R}^d$ and $\sigma : \mathbb{R}^d \to \mathbb{R}^{d \times d}$. Then the generator of $X_t$ is
$\mathcal{L} = b \cdot \nabla + \frac{1}{2}\sigma : \nabla\left( \sigma^T \nabla \right).$ (42)
Furthermore, the Newton-Leibniz chain rule applies: let $h \in C^2(\mathbb{R}^d)$ and define $Y_t = h(X_t)$. Then
$dY_t = \nabla h(X_t) \cdot \left( b(X_t)\, dt + \sigma(X_t) \circ dW_t \right).$ (43)
Pavliotis (IC) CompStochProc 117 / 298


More information

Course: ESO-209 Home Work: 1 Instructor: Debasis Kundu

Course: ESO-209 Home Work: 1 Instructor: Debasis Kundu Home Work: 1 1. Describe the sample space when a coin is tossed (a) once, (b) three times, (c) n times, (d) an infinite number of times. 2. A coin is tossed until for the first time the same result appear

More information

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample

More information

Spectral representations and ergodic theorems for stationary stochastic processes

Spectral representations and ergodic theorems for stationary stochastic processes AMS 263 Stochastic Processes (Fall 2005) Instructor: Athanasios Kottas Spectral representations and ergodic theorems for stationary stochastic processes Stationary stochastic processes Theory and methods

More information

Lecture 4: Stochastic Processes (I)

Lecture 4: Stochastic Processes (I) Miranda Holmes-Cerfon Applied Stochastic Analysis, Spring 215 Lecture 4: Stochastic Processes (I) Readings Recommended: Pavliotis [214], sections 1.1, 1.2 Grimmett and Stirzaker [21], section 9.4 (spectral

More information

If we want to analyze experimental or simulated data we might encounter the following tasks:

If we want to analyze experimental or simulated data we might encounter the following tasks: Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction

More information

3. Probability and Statistics

3. Probability and Statistics FE661 - Statistical Methods for Financial Engineering 3. Probability and Statistics Jitkomut Songsiri definitions, probability measures conditional expectations correlation and covariance some important

More information

STA205 Probability: Week 8 R. Wolpert

STA205 Probability: Week 8 R. Wolpert INFINITE COIN-TOSS AND THE LAWS OF LARGE NUMBERS The traditional interpretation of the probability of an event E is its asymptotic frequency: the limit as n of the fraction of n repeated, similar, and

More information

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have 362 Problem Hints and Solutions sup g n (ω, t) g(ω, t) sup g(ω, s) g(ω, t) µ n (ω). t T s,t: s t 1/n By the uniform continuity of t g(ω, t) on [, T], one has for each ω that µ n (ω) as n. Two applications

More information

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion Brownian Motion An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Background We have already seen that the limiting behavior of a discrete random walk yields a derivation of

More information

1.1 Review of Probability Theory

1.1 Review of Probability Theory 1.1 Review of Probability Theory Angela Peace Biomathemtics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,

More information

The Wiener Itô Chaos Expansion

The Wiener Itô Chaos Expansion 1 The Wiener Itô Chaos Expansion The celebrated Wiener Itô chaos expansion is fundamental in stochastic analysis. In particular, it plays a crucial role in the Malliavin calculus as it is presented in

More information

The Multivariate Normal Distribution. In this case according to our theorem

The Multivariate Normal Distribution. In this case according to our theorem The Multivariate Normal Distribution Defn: Z R 1 N(0, 1) iff f Z (z) = 1 2π e z2 /2. Defn: Z R p MV N p (0, I) if and only if Z = (Z 1,..., Z p ) T with the Z i independent and each Z i N(0, 1). In this

More information

Chapter 4: Monte-Carlo Methods

Chapter 4: Monte-Carlo Methods Chapter 4: Monte-Carlo Methods A Monte-Carlo method is a technique for the numerical realization of a stochastic process by means of normally distributed random variables. In financial mathematics, it

More information

Handbook of Stochastic Methods

Handbook of Stochastic Methods C. W. Gardiner Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences Third Edition With 30 Figures Springer Contents 1. A Historical Introduction 1 1.1 Motivation I 1.2 Some Historical

More information

Nested Uncertain Differential Equations and Its Application to Multi-factor Term Structure Model

Nested Uncertain Differential Equations and Its Application to Multi-factor Term Structure Model Nested Uncertain Differential Equations and Its Application to Multi-factor Term Structure Model Xiaowei Chen International Business School, Nankai University, Tianjin 371, China School of Finance, Nankai

More information

6.1 Moment Generating and Characteristic Functions

6.1 Moment Generating and Characteristic Functions Chapter 6 Limit Theorems The power statistics can mostly be seen when there is a large collection of data points and we are interested in understanding the macro state of the system, e.g., the average,

More information

Karhunen-Loève Expansions of Lévy Processes

Karhunen-Loève Expansions of Lévy Processes Karhunen-Loève Expansions of Lévy Processes Daniel Hackmann June 2016, Barcelona supported by the Austrian Science Fund (FWF), Project F5509-N26 Daniel Hackmann (JKU Linz) KLE s of Lévy Processes 0 / 40

More information

conditional cdf, conditional pdf, total probability theorem?

conditional cdf, conditional pdf, total probability theorem? 6 Multiple Random Variables 6.0 INTRODUCTION scalar vs. random variable cdf, pdf transformation of a random variable conditional cdf, conditional pdf, total probability theorem expectation of a random

More information

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p Doob s inequality Let X(t) be a right continuous submartingale with respect to F(t), t 1 P(sup s t X(s) λ) 1 λ {sup s t X(s) λ} X + (t)dp 2 For 1 < p

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Expectation. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda

Expectation. DS GA 1002 Statistical and Mathematical Models.   Carlos Fernandez-Granda Expectation DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Aim Describe random variables with a few numbers: mean, variance,

More information

Fast-slow systems with chaotic noise

Fast-slow systems with chaotic noise Fast-slow systems with chaotic noise David Kelly Ian Melbourne Courant Institute New York University New York NY www.dtbkelly.com May 1, 216 Statistical properties of dynamical systems, ESI Vienna. David

More information

Monte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan

Monte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan Monte-Carlo MMD-MA, Université Paris-Dauphine Xiaolu Tan tan@ceremade.dauphine.fr Septembre 2015 Contents 1 Introduction 1 1.1 The principle.................................. 1 1.2 The error analysis

More information

. Find E(V ) and var(v ).

. Find E(V ) and var(v ). Math 6382/6383: Probability Models and Mathematical Statistics Sample Preliminary Exam Questions 1. A person tosses a fair coin until she obtains 2 heads in a row. She then tosses a fair die the same number

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

Summary of basic probability theory Math 218, Mathematical Statistics D Joyce, Spring 2016

Summary of basic probability theory Math 218, Mathematical Statistics D Joyce, Spring 2016 8. For any two events E and F, P (E) = P (E F ) + P (E F c ). Summary of basic probability theory Math 218, Mathematical Statistics D Joyce, Spring 2016 Sample space. A sample space consists of a underlying

More information

LAN property for sde s with additive fractional noise and continuous time observation

LAN property for sde s with additive fractional noise and continuous time observation LAN property for sde s with additive fractional noise and continuous time observation Eulalia Nualart (Universitat Pompeu Fabra, Barcelona) joint work with Samy Tindel (Purdue University) Vlad s 6th birthday,

More information

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Theorems Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

1. Stochastic Process

1. Stochastic Process HETERGENEITY IN QUANTITATIVE MACROECONOMICS @ TSE OCTOBER 17, 216 STOCHASTIC CALCULUS BASICS SANG YOON (TIM) LEE Very simple notes (need to add references). It is NOT meant to be a substitute for a real

More information

Part IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Definitions Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet 1

Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet 1 Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet. For ξ, ξ 2, i.i.d. with P(ξ i = ± = /2 define the discrete-time random walk W =, W n = ξ +... + ξ n. (i Formulate and prove the property

More information

01 Probability Theory and Statistics Review

01 Probability Theory and Statistics Review NAVARCH/EECS 568, ROB 530 - Winter 2018 01 Probability Theory and Statistics Review Maani Ghaffari January 08, 2018 Last Time: Bayes Filters Given: Stream of observations z 1:t and action data u 1:t Sensor/measurement

More information

Stochastic Differential Equations. Introduction to Stochastic Models for Pollutants Dispersion, Epidemic and Finance

Stochastic Differential Equations. Introduction to Stochastic Models for Pollutants Dispersion, Epidemic and Finance Stochastic Differential Equations. Introduction to Stochastic Models for Pollutants Dispersion, Epidemic and Finance 15th March-April 19th, 211 at Lappeenranta University of Technology(LUT)-Finland By

More information

Lecture 2: Repetition of probability theory and statistics

Lecture 2: Repetition of probability theory and statistics Algorithms for Uncertainty Quantification SS8, IN2345 Tobias Neckel Scientific Computing in Computer Science TUM Lecture 2: Repetition of probability theory and statistics Concept of Building Block: Prerequisites:

More information

P (A G) dp G P (A G)

P (A G) dp G P (A G) First homework assignment. Due at 12:15 on 22 September 2016. Homework 1. We roll two dices. X is the result of one of them and Z the sum of the results. Find E [X Z. Homework 2. Let X be a r.v.. Assume

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

FE 5204 Stochastic Differential Equations

FE 5204 Stochastic Differential Equations Instructor: Jim Zhu e-mail:zhu@wmich.edu http://homepages.wmich.edu/ zhu/ January 20, 2009 Preliminaries for dealing with continuous random processes. Brownian motions. Our main reference for this lecture

More information

Hochdimensionale Integration

Hochdimensionale Integration Oliver Ernst Institut für Numerische Mathematik und Optimierung Hochdimensionale Integration 14-tägige Vorlesung im Wintersemester 2010/11 im Rahmen des Moduls Ausgewählte Kapitel der Numerik Contents

More information

Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Process Approximations

Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Process Approximations Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Process Approximations Simo Särkkä Aalto University Tampere University of Technology Lappeenranta University of Technology Finland November

More information

Bernardo D Auria Stochastic Processes /10. Notes. Abril 13 th, 2010

Bernardo D Auria Stochastic Processes /10. Notes. Abril 13 th, 2010 1 Stochastic Calculus Notes Abril 13 th, 1 As we have seen in previous lessons, the stochastic integral with respect to the Brownian motion shows a behavior different from the classical Riemann-Stieltjes

More information

MFE6516 Stochastic Calculus for Finance

MFE6516 Stochastic Calculus for Finance MFE6516 Stochastic Calculus for Finance William C. H. Leon Nanyang Business School December 11, 2017 1 / 51 William C. H. Leon MFE6516 Stochastic Calculus for Finance 1 Symmetric Random Walks Scaled Symmetric

More information

Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics

Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Data from one or a series of random experiments are collected. Planning experiments and collecting data (not discussed here). Analysis:

More information

6. Brownian Motion. Q(A) = P [ ω : x(, ω) A )

6. Brownian Motion. Q(A) = P [ ω : x(, ω) A ) 6. Brownian Motion. stochastic process can be thought of in one of many equivalent ways. We can begin with an underlying probability space (Ω, Σ, P) and a real valued stochastic process can be defined

More information

Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Approximations

Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Approximations Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Approximations Simo Särkkä Aalto University, Finland November 18, 2014 Simo Särkkä (Aalto) Lecture 4: Numerical Solution of SDEs November

More information

Discretization of SDEs: Euler Methods and Beyond

Discretization of SDEs: Euler Methods and Beyond Discretization of SDEs: Euler Methods and Beyond 09-26-2006 / PRisMa 2006 Workshop Outline Introduction 1 Introduction Motivation Stochastic Differential Equations 2 The Time Discretization of SDEs Monte-Carlo

More information

A numerical method for solving uncertain differential equations

A numerical method for solving uncertain differential equations Journal of Intelligent & Fuzzy Systems 25 (213 825 832 DOI:1.3233/IFS-12688 IOS Press 825 A numerical method for solving uncertain differential equations Kai Yao a and Xiaowei Chen b, a Department of Mathematical

More information

For a stochastic process {Y t : t = 0, ±1, ±2, ±3, }, the mean function is defined by (2.2.1) ± 2..., γ t,

For a stochastic process {Y t : t = 0, ±1, ±2, ±3, }, the mean function is defined by (2.2.1) ± 2..., γ t, CHAPTER 2 FUNDAMENTAL CONCEPTS This chapter describes the fundamental concepts in the theory of time series models. In particular, we introduce the concepts of stochastic processes, mean and covariance

More information

Expectation. DS GA 1002 Probability and Statistics for Data Science. Carlos Fernandez-Granda

Expectation. DS GA 1002 Probability and Statistics for Data Science.   Carlos Fernandez-Granda Expectation DS GA 1002 Probability and Statistics for Data Science http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall17 Carlos Fernandez-Granda Aim Describe random variables with a few numbers: mean,

More information

where r n = dn+1 x(t)

where r n = dn+1 x(t) Random Variables Overview Probability Random variables Transforms of pdfs Moments and cumulants Useful distributions Random vectors Linear transformations of random vectors The multivariate normal distribution

More information

Random Process Lecture 1. Fundamentals of Probability

Random Process Lecture 1. Fundamentals of Probability Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus

More information

Introduction. Stochastic Processes. Will Penny. Stochastic Differential Equations. Stochastic Chain Rule. Expectations.

Introduction. Stochastic Processes. Will Penny. Stochastic Differential Equations. Stochastic Chain Rule. Expectations. 19th May 2011 Chain Introduction We will Show the relation between stochastic differential equations, Gaussian processes and methods This gives us a formal way of deriving equations for the activity of

More information

ECE534, Spring 2018: Solutions for Problem Set #4 Due Friday April 6, 2018

ECE534, Spring 2018: Solutions for Problem Set #4 Due Friday April 6, 2018 ECE534, Spring 2018: s for Problem Set #4 Due Friday April 6, 2018 1. MMSE Estimation, Data Processing and Innovations The random variables X, Y, Z on a common probability space (Ω, F, P ) are said to

More information

Multivariate Distributions

Multivariate Distributions IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate

More information

Local vs. Nonlocal Diffusions A Tale of Two Laplacians

Local vs. Nonlocal Diffusions A Tale of Two Laplacians Local vs. Nonlocal Diffusions A Tale of Two Laplacians Jinqiao Duan Dept of Applied Mathematics Illinois Institute of Technology Chicago duan@iit.edu Outline 1 Einstein & Wiener: The Local diffusion 2

More information

Preliminary statistics

Preliminary statistics 1 Preliminary statistics The solution of a geophysical inverse problem can be obtained by a combination of information from observed data, the theoretical relation between data and earth parameters (models),

More information

Stochastic Differential Equations

Stochastic Differential Equations CHAPTER 1 Stochastic Differential Equations Consider a stochastic process X t satisfying dx t = bt, X t,w t dt + σt, X t,w t dw t. 1.1 Question. 1 Can we obtain the existence and uniqueness theorem for

More information

Sampling from complex probability distributions

Sampling from complex probability distributions Sampling from complex probability distributions Louis J. M. Aslett (louis.aslett@durham.ac.uk) Department of Mathematical Sciences Durham University UTOPIAE Training School II 4 July 2017 1/37 Motivation

More information

Scientific Computing: Monte Carlo

Scientific Computing: Monte Carlo Scientific Computing: Monte Carlo Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 April 5th and 12th, 2012 A. Donev (Courant Institute)

More information

Introduction to Diffusion Processes.

Introduction to Diffusion Processes. Introduction to Diffusion Processes. Arka P. Ghosh Department of Statistics Iowa State University Ames, IA 511-121 apghosh@iastate.edu (515) 294-7851. February 1, 21 Abstract In this section we describe

More information

Handbook of Stochastic Methods

Handbook of Stochastic Methods Springer Series in Synergetics 13 Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences von Crispin W Gardiner Neuausgabe Handbook of Stochastic Methods Gardiner schnell und portofrei

More information

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico

More information

Lecture 4: Ito s Stochastic Calculus and SDE. Seung Yeal Ha Dept of Mathematical Sciences Seoul National University

Lecture 4: Ito s Stochastic Calculus and SDE. Seung Yeal Ha Dept of Mathematical Sciences Seoul National University Lecture 4: Ito s Stochastic Calculus and SDE Seung Yeal Ha Dept of Mathematical Sciences Seoul National University 1 Preliminaries What is Calculus? Integral, Differentiation. Differentiation 2 Integral

More information

1 Introduction. 2 Measure theoretic definitions

1 Introduction. 2 Measure theoretic definitions 1 Introduction These notes aim to recall some basic definitions needed for dealing with random variables. Sections to 5 follow mostly the presentation given in chapter two of [1]. Measure theoretic definitions

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

MATH4210 Financial Mathematics ( ) Tutorial 7

MATH4210 Financial Mathematics ( ) Tutorial 7 MATH40 Financial Mathematics (05-06) Tutorial 7 Review of some basic Probability: The triple (Ω, F, P) is called a probability space, where Ω denotes the sample space and F is the set of event (σ algebra

More information

Stochastic Processes. M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno

Stochastic Processes. M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno Stochastic Processes M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno 1 Outline Stochastic (random) processes. Autocorrelation. Crosscorrelation. Spectral density function.

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

MA50251: Applied SDEs

MA50251: Applied SDEs MA5251: Applied SDEs Alexander Cox February 12, 218 Preliminaries These notes constitute the core theoretical content of the unit MA5251, which is a course on Applied SDEs. Roughly speaking, the aim of

More information

2 n k In particular, using Stirling formula, we can calculate the asymptotic of obtaining heads exactly half of the time:

2 n k In particular, using Stirling formula, we can calculate the asymptotic of obtaining heads exactly half of the time: Chapter 1 Random Variables 1.1 Elementary Examples We will start with elementary and intuitive examples of probability. The most well-known example is that of a fair coin: if flipped, the probability of

More information

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Department of Electrical Engineering University of Arkansas ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Dr. Jingxian Wu wuj@uark.edu OUTLINE 2 Definition of stochastic process (random

More information