AMS 216 Stochastic Differential Equations, Lecture #2

Gambler's Ruin (continued)

Question #1: How long can you play?
Question #2: What is the chance that you break the bank?

Note that unlike in the case of deterministic equations, for stochastic differential equations it is not enough just to calculate X(t). For stochastic differential equations, we also want to know quantities that are not readily determined from X(t), such as the answers to the questions above.

We answer question #2 first. Let

    u(x) = Pr{ X(t) hits C before 0 | X(0) = x }

We have u(C) = 1 and u(0) = 0.

    X(dt) = x + dW

For fixed 0 < x < C, when dt is small enough, the probability that X(t) hits 0 or C in the time interval [0, dt] is exponentially small.

    u(x) = E{ u(x + dW) } + o(dt)
         = E{ u(x) + u'(x) dW + (1/2) u''(x) (dW)^2 } + o(dt)
         = u(x) + (1/2) u''(x) dt + o(dt)

    ==> u''(x) = 0                  differential equation
        u(0) = 0, u(C) = 1          boundary conditions

    ==> u(x) = x/C

The probability of breaking the bank is proportional to your initial cash and inversely proportional to the total cash.

Now we answer question #1. Let

    T(x) = E{ time until X(t) = C or X(t) = 0 | X(0) = x }.

We have T(0) = 0 and T(C) = 0.

    X(dt) = x + dW

    ==> T(x) = dt + E{ T(x + dW) } + o(dt)
             = dt + E{ T(x) + T'(x) dW + (1/2) T''(x) (dW)^2 } + o(dt)
             = dt + T(x) + (1/2) T''(x) dt + o(dt)

    ==> (1/2) T''(x) = -1           differential equation
        T(0) = 0, T(C) = 0          boundary conditions

    ==> T(x) = x (C - x)

But the average time does not give us the full picture! T(x) is the average of the time until going bankrupt or breaking the bank. However, this average does not give us a full picture of how long we can play with initial cash x. Notice that T(x) = x(C - x) increases with C. In particular, when C = ∞, we have T(x) = ∞ for x > 0. This certainly does not mean we can play forever with initial cash x. To get a more detailed picture of how long we can play when C is very large, we look at the probability that we can play longer than a certain time. Assume C = ∞. Consider

    P(x, t) = Pr{ X(s) > 0 for s in [0, t] | X(0) = x }

P(x, t) is the probability of surviving (at least) to time t, given that X(0) = x. We have P(x, 0) = 1 and P(0, t) = 0.

    X(dt) = x + dW

    ==> P(x, t) = E{ P(x + dW, t - dt) } + o(dt)
                = E{ P(x, t) + P_t (-dt) + P_x dW + (1/2) P_xx (dW)^2 } + o(dt)
                = P(x, t) - P_t dt + (1/2) P_xx dt + o(dt)

P(x, t) satisfies the initial boundary value problem (IBVP)

    P_t = (1/2) P_xx
    P(x, 0) = 1,  P(0, t) = 0

Converting it to an initial value problem (IVP) by odd extension

    P(-x, t) = -P(x, t)

The extended function P(x, t) satisfies the IVP

    P_t = (1/2) P_xx
    P(x, 0) = 1 for x > 0,  P(x, 0) = -1 for x < 0

The solution of the IVP

    u_t = a u_xx,  u(x, 0) = f(x)

is given by

    u(x, t) = (1/√(4πat)) ∫_{-∞}^{∞} exp( -(x - y)^2 / (4at) ) f(y) dy

Using this formula (here a = 1/2) to calculate P(x, t), we obtain

    f(y) = 1 for y > 0,  f(y) = -1 for y < 0

    P(x, t) = (1/√(2πt)) ∫_0^∞ exp( -(x - y)^2/(2t) ) dy
            - (1/√(2πt)) ∫_{-∞}^0 exp( -(x - y)^2/(2t) ) dy
            = (2/√π) ∫_0^{x/√(2t)} exp(-s^2) ds,   with s = (y - x)/√(2t)
            = erf( x/√(2t) )

where the error function is defined as

    erf(z) = (2/√π) ∫_0^z exp(-s^2) ds

We can use P(x, t) = erf( x/√(2t) ) to calculate the probability.

P(1, 2.2) ≈ 0.5 means that with initial cash x = 1, the probability that we can play longer than t = 2.2 is 50%.
P(1, 63) ≈ 0.1 means that with initial cash x = 1, the probability that we can play longer than t = 63 is 10%.
P(1, 255) ≈ 0.05 means that with initial cash x = 1, the probability that we can play longer than t = 255 is 5%.
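The survival probability is easy to evaluate because Python's standard library provides the error function. The sketch below checks times at which P(1, t) drops to roughly 50%, 10%, and 5%; the specific t values are back-solved from the formula, not independent data.

```python
import math

def survive(x, t):
    """P(x, t) = erf(x / sqrt(2 t)): probability that a fair-game player
    starting with cash x is still playing at time t (total cash C = infinity)."""
    return math.erf(x / math.sqrt(2.0 * t))

# With initial cash x = 1:
for t in (2.2, 63.0, 255.0):
    print(f"P(1, {t}) = {survive(1.0, t):.3f}")
# The three values come out near 0.5, 0.1, and 0.05 respectively.
```

Note how slowly the survival probability decays: it behaves like x/√(2t) for large t, so even a 5% chance of still playing requires waiting hundreds of time units.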

Note that when the game is fair and there are many players, the casino's cash is almost unchanged with respect to time. In other words, the casino cannot make money with a fair game.

Now we consider a biased game:

    dX = -m dt + dW,  m > 0

It is a biased game because E{ dX } = -m dt is not zero.

Question #1: How long can you play?
Question #2: What is the chance that you break the bank?

Let

    u(x) = Pr{ X(t) hits C before 0 | X(0) = x }

We have u(C) = 1 and u(0) = 0. Again, for fixed 0 < x < C, when dt is small enough, the probability that X(t) hits 0 or C in the time interval [0, dt] is exponentially small.

    X(dt) = x + dX,  where dX = -m dt + dW satisfies

    dX = O(√dt)
    E{ dX } = -m dt
    E{ (dX)^2 } = E{ o(dt) + (dW)^2 } = dt + o(dt)

The function u(x) satisfies

    u(x) = E{ u(x + dX) } + o(dt)
         = E{ u(x) + u'(x) dX + (1/2) u''(x) (dX)^2 } + o(dt)
         = u(x) - u'(x) m dt + (1/2) u''(x) dt + o(dt)

    ==> (1/2) u''(x) - m u'(x) = 0      differential equation
        u(0) = 0, u(C) = 1              boundary conditions

The characteristic equation of the ODE is (1/2) λ^2 - m λ = 0.

The two roots are λ_1 = 2m and λ_2 = 0. A general solution has the form

    u(x) = c_1 e^{2mx} + c_2

Using the boundary conditions, we obtain

    ==> u(x) = (e^{2mx} - 1) / (e^{2mC} - 1)

Suppose 2mC is moderately large (for example, 2mC = 10). We have

    u(x) ≈ (e^{2mx} - 1) e^{-2mC} < e^{-2m(C - x)}

The probability of breaking the bank is exponentially small.

Let us compare the fair game vs the biased game:

    Fair game:    u(x) = x/C  ==>  u(C/2) = 1/2
    Biased game:  u(C/2) ≈ (e^{mC} - 1) e^{-2mC} ≈ e^{-mC}

Here u(C/2) is the probability of winning all the cash of the other player when the two players start with equal amounts of cash.

Now we study the average time to the end of the game. Let

    T(x) = E{ time until X(t) = C or X(t) = 0 | X(0) = x }.

We have T(0) = 0 and T(C) = 0.

Exercise #1: Derive an ODE for T(x). Solve the boundary value problem to get

    T(x) = x/m - (C/m) (e^{2mx} - 1)/(e^{2mC} - 1)

Suppose 2mC is moderately large and x is not close to C. Then we have

    T(x) = x/m - (C/m) (e^{2mx} - 1)/(e^{2mC} - 1) ≈ x/m
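Both the hitting probability u(x) and the mean exit time T(x) for the biased game can be sanity-checked by simulation. The following is a minimal Euler-Maruyama sketch, not part of the lecture; the function name and the parameters m = 0.5, C = 4, x = 2, dt = 0.01 are illustrative choices.

```python
import math
import random

def ruin_mc(x0, C, m, dt=0.01, n_paths=4000, seed=42):
    """Simulate dX = -m dt + dW with Euler-Maruyama until X hits 0 or C.
    Returns (fraction of paths hitting C first, mean exit time)."""
    random.seed(seed)
    sq = math.sqrt(dt)                 # dW ~ sqrt(dt) * N(0, 1)
    wins, total_t = 0, 0.0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while 0.0 < x < C:
            x += -m * dt + sq * random.gauss(0.0, 1.0)
            t += dt
        wins += (x >= C)               # broke the bank rather than went bankrupt
        total_t += t
    return wins / n_paths, total_t / n_paths

m, C, x0 = 0.5, 4.0, 2.0
u_exact = (math.exp(2*m*x0) - 1.0) / (math.exp(2*m*C) - 1.0)   # about 0.10
T_exact = x0/m - (C/m) * u_exact                                # about 3.2
u_mc, T_mc = ruin_mc(x0, C, m)
print(u_mc, u_exact)    # simulated vs exact hitting probability
print(T_mc, T_exact)    # simulated vs exact mean exit time
```

With these parameters the simulated values typically land within a few percent of the exact ones; the O(√dt) barrier overshoot of the Euler scheme introduces a small additional bias.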

This is consistent with the intuitive deterministic picture: if your cash decreases with speed m, then your initial cash x will last a time x/m.

Now consider a discrete version of the gambler's ruin problem.

    t = 0, 1, 2, 3, ...   (discrete time)
    X = 0, 1, 2, 3, ...   (discrete state)

    X(t + 1) = X(t) + dX,
    dX = +1  with prob = 1/2 - μ
    dX = -1  with prob = 1/2 + μ

    N = sum of your initial cash and the casino's cash
    n = your initial cash

    u(n) = Pr{ X(t) hits N before 0 | X(0) = n }
    T(n) = E{ time until X(t) = N or X(t) = 0 | X(0) = n }

Exercise P1: For μ = 0 (fair game), derive equations for u(n) and T(n). Solve the equations for u(n) and T(n).

Exercise P2: For μ > 0 (biased game), derive equations for u(n) and T(n). Solve the equations for u(n) and T(n).

White noise

A short story:

    1) Z(t) = dW(t)/dt
    ==> 2) E{ Z(t) Z(s) } = δ(t - s)
    ==> 3) ∫ e^{-iωt} E{ Z(t) Z(0) } dt = 1
    ==> 4) Z(t) is white noise.

First, we point out that Z(t) = dW(t)/dt is not a regular function.

    dW = O(√dt)
    ==> dW/dt = O(√dt)/dt = O(1/√dt)
    ==> lim_{dt→0} |dW/dt| = ∞

We need to fill in some details to explain each step, especially from line 3) to line 4).

A long story

We start with some mathematical preparations.

Delta function (Dirac's delta function): We give two equivalent definitions of the delta function. Each definition is mathematically more convenient in some situations.

Definition 1 (limit of Gaussian distribution):

    δ(x) = lim_{n→∞} (1/(√(2π) σ_n)) exp( -x^2/(2 σ_n^2) )

where lim_{n→∞} σ_n = 0; for example, σ_n = 1/n.

Definition 2 (limit of impulse function):

    δ(x) = lim_{n→∞} δ_n(x),  where

    δ_n(x) = n/2   for |x| ≤ 1/n
           = 0     otherwise
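The defining property of either limit is that ∫ δ_n(x) f(x) dx → f(0) for a smooth test function f. A small numerical sketch (the test function cos and the grid resolution are arbitrary choices, not from the lecture):

```python
import math

def gaussian_delta(x, sigma):
    # Definition 1: narrow Gaussian with unit total area
    return math.exp(-x*x / (2.0*sigma*sigma)) / (math.sqrt(2.0*math.pi) * sigma)

def impulse_delta(x, n):
    # Definition 2: height n/2 on |x| <= 1/n, zero elsewhere (unit total area)
    return n/2.0 if abs(x) <= 1.0/n else 0.0

def integrate(g, f, a=-1.0, b=1.0, steps=200001):
    # midpoint-rule approximation of the integral of g(x) * f(x) over [a, b]
    h = (b - a) / steps
    return sum(g(a + (k + 0.5)*h) * f(a + (k + 0.5)*h) for k in range(steps)) * h

f = math.cos            # smooth test function with f(0) = 1
for n in (10, 100, 1000):
    g1 = integrate(lambda x: gaussian_delta(x, 1.0/n), f)
    g2 = integrate(lambda x: impulse_delta(x, n), f)
    print(n, g1, g2)    # both columns approach f(0) = 1 as n grows
```

Both sequences concentrate their unit mass at x = 0, which is why they pick out f(0) in the limit; the same calculation with any other smooth f gives f(0).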