Normal Random Variables and Probability


An Undergraduate Introduction to Financial Mathematics
J. Robert Buchanan, 2015

Discrete vs. Continuous Random Variables

Think about the probability of selecting X from the interval [0, 1] when

- $X \in \{0, 1\}$
- $X \in \{k/10 : k = 0, 1, \dots, 10\}$
- $X \in \{k/n : k = 0, 1, \dots, n\}$ for $n \in \mathbb{N}$

Question: what happens to the last probability as $n \to \infty$?

Continuous Random Variables

Definition: A random variable X has a continuous distribution (with probability density function, or PDF) if there exists a non-negative function $f : \mathbb{R} \to \mathbb{R}$ such that for any interval $[a, b]$,
$$P(a \le X \le b) = \int_a^b f(x)\,dx.$$
In addition to satisfying $f(x) \ge 0$, the function $f$ must have the property
$$\int_{-\infty}^{\infty} f(x)\,dx = 1.$$
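As a quick numerical illustration (our addition, not part of the original slides), the sketch below checks both defining properties for the density $f(x) = e^{-x}$ on $[0, \infty)$; SciPy's `quad` and the test interval $[1, 2]$ are our own choices.

```python
# A minimal sketch (not from the slides): verify the two PDF conditions
# numerically for f(x) = e^{-x} on [0, oo) using scipy.integrate.quad.
import numpy as np
from scipy.integrate import quad

def f(x):
    """Candidate density: e^{-x} for x >= 0, zero otherwise."""
    return np.exp(-x) if x >= 0 else 0.0

total, _ = quad(f, 0, np.inf)   # f vanishes for x < 0; should be ~1
prob, _ = quad(f, 1, 2)         # P(1 <= X <= 2) = e^{-1} - e^{-2}
print(total, prob)              # ~1.0, ~0.23254
```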

Area Under the PDF

[Figure: the shaded area under the graph of $f$ between $a$ and $b$.]

Remark: the area under the curve may be interpreted as the probability $P(a \le X \le b)$.

Uniformly Distributed Continuous Random Variables

Definition: A continuous random variable X is uniformly distributed on the interval $[a, b]$ (with $b > a$) if the probability that X belongs to any subinterval of $[a, b]$ equals the length of the subinterval divided by $b - a$.

Question: assuming the PDF vanishes outside of $[a, b]$ and is constant on $[a, b]$, what is the PDF?

Answer: $f(x) = \begin{cases} \frac{1}{b-a} & \text{if } a \le x \le b, \\ 0 & \text{otherwise.} \end{cases}$

Example (1 of 2)

Random variable X is uniformly distributed on the interval $[-5, 5]$. Find the probability that $-1 \le X \le 2$.

$$P(-1 \le X \le 2) = \frac{2 - (-1)}{5 - (-5)} = \frac{3}{10}$$

Example (2 of 2)

Random variable X is uniformly distributed on the interval $[-10, 10]$. Find the probability that $-3 \le X \le 1$ or $X > 7$.

$$P((-3 \le X \le 1) \cup (X > 7)) = P(-3 \le X \le 1) + P(X > 7) = \frac{1 - (-3)}{10 - (-10)} + \frac{10 - 7}{10 - (-10)} = \frac{4}{20} + \frac{3}{20} = \frac{7}{20}$$
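If SciPy is available, both uniform examples can be checked with `scipy.stats.uniform`; in this sketch (our addition), `loc` is the left endpoint and `scale` the interval length.

```python
# Hedged check of the two uniform examples (our addition, not the book's).
from scipy.stats import uniform

X1 = uniform(loc=-5, scale=10)        # uniform on [-5, 5]
print(X1.cdf(2) - X1.cdf(-1))         # P(-1 <= X <= 2) = 0.3

X2 = uniform(loc=-10, scale=20)       # uniform on [-10, 10]
print((X2.cdf(1) - X2.cdf(-3)) + (1 - X2.cdf(7)))   # 4/20 + 3/20 = 0.35
```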

Expected Value

Definition: The expected value or mean of a continuous random variable X with probability density function $f(x)$ is
$$E[X] = \int_{-\infty}^{\infty} x\,f(x)\,dx.$$

Example

Find the expected value of X, if X is a uniformly distributed continuous random variable on the interval $[-10, 80]$.

$$E[X] = \int_{-\infty}^{\infty} x\,f(x)\,dx = \int_{-10}^{80} \frac{x}{90}\,dx = \left[\frac{x^2}{180}\right]_{-10}^{80} = \frac{6400}{180} - \frac{100}{180} = 35$$

Question: if X is a uniformly distributed but integer-valued RV, what is its expected value?

Expected Value of a Function

Definition: The expected value of a function g of a continuously distributed random variable X with probability density function f is defined as
$$E[g(X)] = \int_{-\infty}^{\infty} g(x)\,f(x)\,dx,$$
provided the improper integral converges absolutely, i.e., $E[g(X)]$ is defined if and only if $\int_{-\infty}^{\infty} |g(x)|\,f(x)\,dx < \infty$.

Example

Find the expected value of $X^2$ if X is continuously distributed on $[0, \infty)$ with probability density function $f(x) = e^{-x}$.

$$E[X^2] = \int_0^\infty x^2 e^{-x}\,dx = \lim_{M\to\infty} \int_0^M x^2 e^{-x}\,dx = \lim_{M\to\infty} \left[-(x^2 + 2x + 2)e^{-x}\right]_0^M = \lim_{M\to\infty} \left[2 - (M^2 + 2M + 2)e^{-M}\right] = 2$$
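The same integral can be verified symbolically; here is a short SymPy sketch (our addition):

```python
# Symbolic check (our addition): E[X^2] for f(x) = e^{-x} on [0, oo).
import sympy as sp

x = sp.symbols('x')
EX2 = sp.integrate(x**2 * sp.exp(-x), (x, 0, sp.oo))
print(EX2)  # 2, matching the limit above
```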

Joint and Marginal Distributions

Definition: A joint probability density for a pair of random variables X and Y is a non-negative function $f(x, y)$ for which
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\,dx\,dy = 1.$$

Definition: If X and Y are continuous random variables with joint density $f(x, y)$, then the marginal density for X is defined as the function
$$f_X(x) = \int_{-\infty}^{\infty} f(x, y)\,dy.$$

Remark: a similar definition may be stated for the marginal density for Y.

Example

If the joint probability density of X and Y is given by
$$f(x, y) = \begin{cases} 1/\pi & \text{if } x^2 + y^2 \le 1, \\ 0 & \text{otherwise,} \end{cases}$$
find the marginal probability density of X.

$$f_X(x) = \int_{-\infty}^{\infty} f(x, y)\,dy = \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} \frac{1}{\pi}\,dy = \begin{cases} \frac{2}{\pi}\sqrt{1-x^2} & \text{if } |x| \le 1, \\ 0 & \text{otherwise.} \end{cases}$$
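A numerical cross-check of this marginal (our own sketch): integrate the constant joint density $1/\pi$ over the disk's vertical cross-section at a fixed x.

```python
# Numeric sketch (our addition): recover f_X(x) for the uniform density
# on the unit disk by integrating 1/pi over the cross-section at x.
import numpy as np
from scipy.integrate import quad

def f_X(x):
    if abs(x) > 1:
        return 0.0
    half = np.sqrt(1 - x**2)            # cross-section is [-half, half]
    val, _ = quad(lambda y: 1 / np.pi, -half, half)
    return val

print(f_X(0.5), (2 / np.pi) * np.sqrt(1 - 0.5**2))  # both ~0.55133
```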

Independence of Jointly Distributed RVs

Definition: Two continuous random variables are independent if and only if the joint probability density function factors into the product of the marginal densities of X and Y. In other words, X and Y are independent if and only if
$$f(x, y) = f_X(x)\,f_Y(y)$$
for all real numbers x and y.

Example

The joint probability density function of X and Y is
$$f(x, y) = \begin{cases} \frac{1}{2}xy & \text{if } 0 \le x \le y \text{ and } 0 \le y \le 2, \\ 0 & \text{otherwise.} \end{cases}$$
Are X and Y independent?

No, since $f_X(x) = x - x^3/4$ for $0 \le x \le 2$ and $f_Y(y) = y^3/4$ for $0 \le y \le 2$, and $f_X(x)\,f_Y(y) \ne f(x, y)$.

Example (1 of 2)

Consider the jointly distributed random variables $(X, Y) \in [0, \infty) \times [-2, 2]$ whose density is the function $f(x, y) = \frac{1}{4e^x}$. Find the mean of $X + Y$.

Example (2 of 2)

$$E[X + Y] = \int_0^\infty \int_{-2}^{2} (x + y)\,\frac{1}{4e^x}\,dy\,dx = \int_0^\infty \frac{1}{4e^x}\left[\frac{(x+y)^2}{2}\right]_{y=-2}^{y=2} dx = \int_0^\infty \frac{1}{4e^x}(4x)\,dx = \int_0^\infty x e^{-x}\,dx = \lim_{M\to\infty} \int_0^M x e^{-x}\,dx = \lim_{M\to\infty} \left(1 - Me^{-M} - e^{-M}\right) = 1$$

Properties of the Expected Value

Theorem: If $X_1, X_2, \dots, X_k$ are continuous random variables with joint probability density $f(x_1, x_2, \dots, x_k)$, then
$$E[X_1 + X_2 + \cdots + X_k] = E[X_1] + E[X_2] + \cdots + E[X_k].$$

Theorem: Let $X_1, X_2, \dots, X_k$ be pairwise independent random variables with joint density $f(x_1, x_2, \dots, x_k)$; then
$$E[X_1 X_2 \cdots X_k] = E[X_1]\,E[X_2] \cdots E[X_k].$$

Variance and Standard Deviation

Definition: If X is a continuously distributed random variable with probability density function $f(x)$, the variance of X is defined as
$$V(X) = E\left[(X - \mu)^2\right] = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\,dx,$$
where $\mu = E[X]$. The standard deviation of X is $\sigma(X) = \sqrt{V(X)}$.

Theorem: Let X be a random variable with probability density f and mean $\mu$; then $V(X) = E[X^2] - \mu^2$.

Example

Suppose X is continuously distributed on $[0, \infty)$ with probability density function $f(x) = e^{-x}$. Find $V(X)$.

$$V(X) = E[X^2] - (E[X])^2 = \int_0^\infty x^2 e^{-x}\,dx - \left(\int_0^\infty x e^{-x}\,dx\right)^2 = 2 - (1)^2 = 1$$
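Since $f(x) = e^{-x}$ is the standard exponential density, `scipy.stats.expon` gives the same mean and variance; a quick check we add:

```python
# Cross-check (our addition): scipy.stats.expon has PDF e^{-x} on [0, oo).
from scipy.stats import expon

X = expon()
print(X.mean(), X.var())   # 1.0, 1.0 as derived above
```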

Properties of Variance

Theorem: Let X be a continuous random variable with probability density $f(x)$ and let $a, b \in \mathbb{R}$; then
$$V(aX + b) = a^2 V(X).$$

Theorem: Let $X_1, X_2, \dots, X_k$ be pairwise independent continuous random variables with joint probability density $f(x_1, x_2, \dots, x_k)$; then
$$V(X_1 + X_2 + \cdots + X_k) = V(X_1) + V(X_2) + \cdots + V(X_k).$$

Normal Random Variable

Assumption: any characteristic of an object subject to a large number of independently acting forces typically takes on a normal distribution.

We will develop the normal probability density function from the probability function for the binomial random variable. Recall that if X is a binomial random variable with n trials and probability p of success on a single trial, then for $x \in \{0, 1, \dots, n\}$:
$$P(X = x) = \frac{n!}{x!(n-x)!}\,p^x (1-p)^{n-x}$$

Overview of Derivation

Thought experiment: imagine standing at the origin of the number line and, for each tick of a clock, taking a step to the left or the right. In the long run, where will you stand?

Assumptions:
1. there are n steps/ticks,
2. the random walk takes place during the time interval $[0, t]$, which implies a tick lasts $\Delta t = t/n$,
3. on each tick you move a distance $\Delta x > 0$,
4. $n(\Delta x)^2 = 2kt$, or equivalently $(\Delta x)^2 = 2k(\Delta t)$, for some positive constant k,
5. the probability of moving left/right is 1/2 each,
6. all steps are independent.

Simulation

Simulate taking 400 steps, 10 different trials.

[Figure: ten simulated random-walk paths plotted against the step number n = 1, ..., 400.]

Average position on the last step: $\bar{x} = 1.4$ with $\sigma(x) \approx 13.5$.
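A minimal sketch of this simulation (our own code; the seed and unit step size $\Delta x = 1$ are arbitrary choices). With only 10 trials, the sample standard deviation wanders around the theoretical value $\sqrt{400} = 20$, consistent with the $\sigma(x) \approx 13.5$ observed above.

```python
# Minimal random-walk simulation (our addition): 10 trials of 400
# symmetric steps of size 1.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_trials = 400, 10
steps = rng.choice([-1, 1], size=(n_trials, n_steps))
paths = np.cumsum(steps, axis=1)      # position after each step
final = paths[:, -1]
print(final.mean(), final.std())      # mean near 0; std near sqrt(400) = 20
```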

Take a Few Steps

Suppose r out of n steps ($0 \le r \le n$) have been to the right.

Question: Where are you?
$$(r - (n - r))\,\Delta x = (2r - n)\,\Delta x = m\,\Delta x$$

Question: What is the probability of standing there? Since $r = \frac{1}{2}(n + m)$ and $n - r = \frac{1}{2}(n - m)$,
$$P(X = m\,\Delta x) = P(X = (2r - n)\,\Delta x) = \binom{n}{r}\left(\frac{1}{2}\right)^r \left(\frac{1}{2}\right)^{n-r} = \frac{n!}{r!(n-r)!}\left(\frac{1}{2}\right)^n = \frac{n!}{\left(\frac{1}{2}(n+m)\right)!\,\left(\frac{1}{2}(n-m)\right)!}\left(\frac{1}{2}\right)^n$$

Bernoulli Steps

Each step is a Bernoulli experiment with outcomes $\Delta x$ and $-\Delta x$.

Questions: What is the expected value of a single step? $E[X] = 0$. What is the variance in the outcomes? $V(X) = (\Delta x)^2$.

The Sum of Bernoulli Steps

Questions: after n steps, what is the expected value of where you stand?
$$E\left[\sum_{i=1}^n X_i\right] = n\,E[X] = 0$$

What is the variance in the final position?
$$V\left(\sum_{i=1}^n X_i\right) = n\,V(X) = n(\Delta x)^2 = 2kt$$

Stirling's Formula (1 of 3)

Stirling's formula approximates n! for large n: $n! \approx \sqrt{2\pi}\,e^{-n}\,n^{n+1/2}$.

n  | n!                     | $\sqrt{2\pi}\,e^{-n}\,n^{n+1/2}$
5  | 120                    | 118.019
10 | $3.6288 \times 10^6$   | $3.5987 \times 10^6$
15 | $1.30767 \times 10^{12}$ | $1.30043 \times 10^{12}$
20 | $2.4329 \times 10^{18}$  | $2.42279 \times 10^{18}$
30 | $2.65253 \times 10^{32}$ | $2.64517 \times 10^{32}$
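The table is easy to reproduce; the following sketch (our addition) compares n! with the approximation:

```python
# Reproducing the Stirling table (our addition).
import math

for n in (5, 10, 15, 20, 30):
    exact = math.factorial(n)
    approx = math.sqrt(2 * math.pi) * math.exp(-n) * n**(n + 0.5)
    print(n, exact, f"{approx:.6g}", f"rel. error {(exact - approx) / exact:.4%}")
```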

Stirling's Formula (2 of 3)

[Figure: relative error of Stirling's approximation, decreasing from about 0.02 toward 0 as n runs from 1 to 50.]

Stirling's Formula (3 of 3)

$$n! \approx \sqrt{2\pi}\,e^{-n}\,n^{n+1/2}$$

Replace all the factorials with Stirling's formula:

$$P(X = m\,\Delta x) = \frac{n!}{\left(\frac{1}{2}(n+m)\right)!\,\left(\frac{1}{2}(n-m)\right)!}\left(\frac{1}{2}\right)^n \approx \frac{\sqrt{2\pi}\,e^{-n}\,n^{n+1/2}\left(\frac{1}{2}\right)^n}{\sqrt{2\pi}\,e^{-\frac{n+m}{2}}\left(\frac{n+m}{2}\right)^{\frac{n+m+1}{2}}\,\sqrt{2\pi}\,e^{-\frac{n-m}{2}}\left(\frac{n-m}{2}\right)^{\frac{n-m+1}{2}}} = \sqrt{\frac{2}{n\pi}}\left(1 + \frac{m}{n}\right)^{-m/2}\left(1 - \frac{m}{n}\right)^{m/2}\left(1 - \frac{m^2}{n^2}\right)^{-(n+1)/2}$$

Further Simplification

Since $m = x/\Delta x$ and $n = t/\Delta t$,

$$P(X = m\,\Delta x) = \sqrt{\frac{2}{n\pi}}\left(1 + \frac{m}{n}\right)^{-m/2}\left(1 - \frac{m}{n}\right)^{m/2}\left(1 - \frac{m^2}{n^2}\right)^{-(n+1)/2} = \frac{\Delta x}{\sqrt{k\pi t}}\left[1 + \frac{x\,\Delta x}{2kt}\right]^{-\frac{x}{2\Delta x}}\left[1 - \frac{x\,\Delta x}{2kt}\right]^{\frac{x}{2\Delta x}}\left[1 - \left(\frac{x\,\Delta x}{2kt}\right)^2\right]^{-\frac{kt}{(\Delta x)^2} - \frac{1}{2}}$$

since $(\Delta x)^2 = 2k\,\Delta t$.

Passing to the Limit

As $\Delta x \to 0$, the probability of standing at exactly one specific location becomes 0. Instead we must change our thinking and ask for
$$P((m-1)\Delta x < X < (m+1)\Delta x) \approx 2(\Delta x)\,f(x, t).$$

$$f(x, t) = \frac{1}{2\sqrt{k\pi t}}\lim_{\Delta x\to 0}\left[1 + \frac{x\,\Delta x}{2kt}\right]^{-\frac{x}{2\Delta x}}\left[1 - \frac{x\,\Delta x}{2kt}\right]^{\frac{x}{2\Delta x}}\left[1 - \left(\frac{x\,\Delta x}{2kt}\right)^2\right]^{-\frac{kt}{(\Delta x)^2} - \frac{1}{2}} = \frac{1}{2\sqrt{k\pi t}}\left(e^{-\frac{x^2}{4kt}}\right)\left(e^{-\frac{x^2}{4kt}}\right)\left(e^{\frac{x^2}{4k^2t^2}\,kt}\right) = \frac{1}{2\sqrt{k\pi t}}\,e^{-\frac{x^2}{4kt}}$$

Is f(x, t) a PDF?

Suppose $\int_{-\infty}^{\infty} \frac{1}{2\sqrt{k\pi t}}\,e^{-\frac{x^2}{4kt}}\,dx = S$; then

$$S^2 = \int_{-\infty}^{\infty} \frac{1}{2\sqrt{k\pi t}}\,e^{-\frac{x^2}{4kt}}\,dx \int_{-\infty}^{\infty} \frac{1}{2\sqrt{k\pi t}}\,e^{-\frac{y^2}{4kt}}\,dy = \frac{1}{4k\pi t}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)/4kt}\,dx\,dy = \frac{1}{4k\pi t}\int_0^{2\pi}\int_0^{\infty} r\,e^{-r^2/4kt}\,dr\,d\theta = \frac{1}{4k\pi t}(2\pi)(2kt) = 1,$$

after switching to polar coordinates; hence $S = 1$ and $f(x, t)$ is indeed a PDF.

Surface Plot

[Figure: surface plot of f(x, t) as a function of x and t.]

The Bell Curve

For a fixed value of t, the graph of the PDF resembles the familiar bell-shaped curve. [Figure: bell curve plotted against x.]

Expected Value and Variance

If X is a continuously distributed random variable with PDF
$$f(x, t) = \frac{1}{2\sqrt{k\pi t}}\,e^{-\frac{x^2}{4kt}}$$
then
$$E[X] = \int_{-\infty}^{\infty} \frac{x}{2\sqrt{k\pi t}}\,e^{-\frac{x^2}{4kt}}\,dx = 0$$
and
$$V(X) = \int_{-\infty}^{\infty} \frac{x^2}{2\sqrt{k\pi t}}\,e^{-\frac{x^2}{4kt}}\,dx - (E[X])^2 = 2kt,$$
and thus $2kt = \sigma^2$ and we express the PDF for a normally distributed random variable with mean $\mu$ and variance $\sigma^2$ as
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$$

Standard Normal Distribution

When $\mu = 0$ and $\sigma = 1$, the PDF
$$\phi(x) = \frac{1}{\sqrt{2\pi}}\,e^{-\frac{x^2}{2}}$$
is called the standard normal density.

The cumulative distribution function (CDF) $\Phi(x)$ is defined as
$$\Phi(x) = P(X < x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\,e^{-\frac{u^2}{2}}\,du.$$

Remarks: Pay close attention to the case of the symbol used for the probability density ($\phi$) and the cumulative distribution ($\Phi$) of X. The values of $\Phi(x)$ can be produced by many scientific calculators or by looking them up in printed tables.
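In practice $\Phi$ is also available in software; for instance, with SciPy (our addition):

```python
# Evaluating the standard normal CDF with SciPy (our addition).
from scipy.stats import norm

print(norm.cdf(0.0))       # 0.5 by symmetry
print(norm.cdf(1.96))      # ~0.9750
print(norm.cdf(0.176777))  # ~0.5702; used in the snowfall example below
```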

Change of Variable

Theorem: If X is a normally distributed random variable with expected value $\mu$ and variance $\sigma^2$, then $Z = \frac{X - \mu}{\sigma}$ is normally distributed with an expected value of zero and a variance of one.

Central Limit Theorem (1 of 2)

Suppose the random variables $X_1, X_2, \dots, X_n$:
1. are pairwise independent but not necessarily identically distributed,
2. have means $\mu_1, \mu_2, \dots, \mu_n$ and variances $\sigma_1^2, \sigma_2^2, \dots, \sigma_n^2$,

and we define a new random variable $Y_n$ as
$$Y_n = \frac{\sum_{i=1}^n (X_i - \mu_i)}{\sqrt{\sum_{i=1}^n \sigma_i^2}}.$$

A Central Limit Theorem due to Liapounov implies that $Y_n$ has, in the limit as $n \to \infty$, the standard normal distribution.

Central Limit Theorem (2 of 2)

Theorem: Suppose that the infinite collection $\{X_i\}_{i=1}^{\infty}$ of random variables is pairwise independent and that for each $i \in \mathbb{N}$ we have $E\left[|X_i - \mu_i|^3\right] < \infty$. If in addition
$$\lim_{n\to\infty} \frac{\sum_{i=1}^n E\left[|X_i - \mu_i|^3\right]}{\left(\sum_{i=1}^n \sigma_i^2\right)^{3/2}} = 0,$$
then for any $x \in \mathbb{R}$,
$$\lim_{n\to\infty} P(Y_n \le x) = \Phi(x),$$
where the random variable $Y_n$ is defined as above.
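A small simulation (our own sketch, with arbitrarily chosen summand distributions) illustrates the theorem: $Y_n$ built from a mix of uniform and exponential summands has sample mean near 0 and sample standard deviation near 1.

```python
# CLT illustration (our addition): independent but non-identical summands.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 200, 100_000
U = rng.uniform(0, 1, size=(reps, n // 2))      # mean 1/2, variance 1/12
E = rng.exponential(1.0, size=(reps, n // 2))   # mean 1,   variance 1
num = (U - 0.5).sum(axis=1) + (E - 1.0).sum(axis=1)
Yn = num / np.sqrt((n // 2) / 12 + (n // 2))
print(Yn.mean(), Yn.std())   # ~0 and ~1, as the theorem predicts
```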

Example (1 of 2)

Suppose the annual snowfall in Millersville, PA is normally distributed with mean 14.6 inches and standard deviation 3.2 inches, and snowfall amounts in different years are independent. What is the probability that the sum of the snowfall amounts in the next two years will exceed 30 inches?

Example (2 of 2)

Solution: If $X_1$ and $X_2$ are independent random variables standing for the snowfall received in Millersville, PA in each of two successive years, then $X_1 + X_2$ is the snowfall of two years. The random variable $X_1 + X_2$ has mean $\mu = 2(14.6) = 29.2$ inches and variance $\sigma^2 = (3.2)^2 + (3.2)^2 = 20.48$.

$$P(X_1 + X_2 > 30) = P\left(Z > \frac{30 - 29.2}{\sqrt{20.48}}\right) = 1 - P(Z \le 0.176777) = 1 - \Phi(0.176777) = 0.429842$$
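The same number drops out of a short SciPy computation (our addition):

```python
# Snowfall example in SciPy (our addition).
import math
from scipy.stats import norm

mu = 2 * 14.6                          # mean of the two-year total
sigma = math.sqrt(3.2**2 + 3.2**2)     # std dev of the two-year total
print(1 - norm.cdf(30, loc=mu, scale=sigma))   # ~0.429842
```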

Lognormal Random Variables

Definition: A random variable X is a lognormal random variable with parameters $\mu$ and $\sigma$ if $\ln X$ is a normally distributed random variable with mean $\mu$ and variance $\sigma^2$.

Remarks: The parameter $\mu$ is sometimes called the drift. The parameter $\sigma$ is sometimes called the volatility.

Lognormal PDF (1 of 2)

Suppose X is lognormal; then $Y = \ln X$ is normal and
$$P(X < x) = P(Y < \ln x) = \frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\ln x} e^{-(t-\mu)^2/2\sigma^2}\,dt.$$
If we let $u = e^t$ and $du = e^t\,dt$, then
$$P(X < x) = \frac{1}{\sigma\sqrt{2\pi}}\int_0^x \frac{1}{u}\,e^{-(\ln u - \mu)^2/2\sigma^2}\,du,$$
the cumulative distribution function for the lognormally distributed random variable X.

Lognormal PDF (2 of 2)

The probability density function for lognormal X is
$$f(x) = \frac{1}{\left(\sigma\sqrt{2\pi}\right)x}\,e^{-(\ln x - \mu)^2/2\sigma^2}.$$

[Figure: graph of the lognormal density, y against x.]

Mean and Variance of a Lognormal RV

Lemma: If X is a lognormal random variable with parameters $\mu$ and $\sigma$, then
$$E[X] = e^{\mu+\sigma^2/2}, \qquad V(X) = e^{2\mu+\sigma^2}\left(e^{\sigma^2} - 1\right).$$

Proof (1 of 2)

Let X be lognormally distributed with parameters $\mu$ and $\sigma$; then
$$E[X] = \frac{1}{\sigma\sqrt{2\pi}}\int_0^\infty x\left(\frac{1}{x}\right) e^{-(\ln x - \mu)^2/2\sigma^2}\,dx = \frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty} e^t\,e^{-(t-\mu)^2/2\sigma^2}\,dt = e^{\mu+\sigma^2/2}\left[\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-(t-(\mu+\sigma^2))^2/2\sigma^2}\,dt\right] = e^{\mu+\sigma^2/2}.$$

Proof (2 of 2)

Let X be lognormally distributed with parameters $\mu$ and $\sigma$; then
$$V(X) = E[X^2] - (E[X])^2 = \frac{1}{\sigma\sqrt{2\pi}}\int_0^\infty x^2\left(\frac{1}{x}\right) e^{-(\ln x - \mu)^2/2\sigma^2}\,dx - \left(e^{\mu+\sigma^2/2}\right)^2 = \frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{2t}\,e^{-(t-\mu)^2/2\sigma^2}\,dt - e^{2\mu+\sigma^2} = e^{2(\mu+\sigma^2)}\left[\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-(t-(\mu+2\sigma^2))^2/2\sigma^2}\,dt\right] - e^{2\mu+\sigma^2} = e^{2(\mu+\sigma^2)} - e^{2\mu+\sigma^2} = e^{2\mu+\sigma^2}\left(e^{\sigma^2} - 1\right).$$
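A Monte Carlo sanity check of the lemma (our own sketch, with arbitrary parameters $\mu = 0.1$, $\sigma = 0.4$):

```python
# Monte Carlo check of the lognormal mean/variance lemma (our addition).
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.1, 0.4
X = np.exp(rng.normal(mu, sigma, size=1_000_000))   # lognormal samples
print(X.mean(), np.exp(mu + sigma**2 / 2))                          # ~1.1972
print(X.var(), np.exp(2*mu + sigma**2) * (np.exp(sigma**2) - 1))    # ~0.2487
```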

Lognormal RVs and Security Prices

Observation: Let S(0) denote the price of a security at some starting time arbitrarily chosen to be t = 0. For $n \ge 1$, let S(n) denote the price of the security on day n. The random variable
$$X(n) = \frac{S(n)}{S(n-1)} \quad\text{for } n \ge 1$$
is lognormally distributed, i.e., $\ln X(n) = \ln S(n) - \ln S(n-1)$ is normally distributed.

Closing Prices of Sony (SNE) Stock

[Figure: daily closing prices of Sony Corporation stock, 09/20/2010 to 09/19/2011.]

Lognormal Behavior of Sony (SNE) Stock

[Figure: histogram of the daily log-returns $\ln(S(n)/S(n-1))$ with a fitted normal density.]

Fitted parameters: $\mu = -0.00177279$, $\sigma = 0.0181285$.

Example (1 of 2)

What is the probability that the closing price of Sony Corporation stock will be higher today than yesterday?

Since $S(n)/S(n-1)$ is lognormal, its logarithm is normal, so

$$P\left(\frac{S(n)}{S(n-1)} > 1\right) = P\left(\ln\frac{S(n)}{S(n-1)} > \ln 1\right) = P(X > 0) = P\left(Z > \frac{0 - (-0.00177279)}{0.0181285}\right) = 1 - P(Z \le 0.09779) = 1 - \Phi(0.09779) = 0.46105$$

Example (2 of 2)

What is the probability that tomorrow's closing price will be higher than yesterday's closing price?

Writing $X_1$ and $X_2$ for the two independent daily log-returns,

$$P\left(\frac{S(n+1)}{S(n-1)} > 1\right) = P\left(\frac{S(n+1)}{S(n)}\cdot\frac{S(n)}{S(n-1)} > 1\right) = P\left(\ln\frac{S(n+1)}{S(n)} + \ln\frac{S(n)}{S(n-1)} > 0\right) = P(X_1 + X_2 > 0) = P\left(Z > \frac{0 - 2(-0.00177279)}{\sqrt{2(0.0181285)^2}}\right) = 1 - P(Z \le 0.138296) = 1 - \Phi(0.138296) = 0.445003$$
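Both probabilities follow from the fitted parameters in a few lines (our addition):

```python
# The two Sony stock probabilities via SciPy (our addition).
import math
from scipy.stats import norm

mu, sigma = -0.00177279, 0.0181285     # fitted daily log-return parameters
print(1 - norm.cdf((0 - mu) / sigma))                      # ~0.46105
print(1 - norm.cdf((0 - 2*mu) / (math.sqrt(2) * sigma)))   # ~0.445003
```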

Properties of Expected Value and Variance

If an item is worth K but can only be sold for X, a rational investor would sell only if $X \ge K$. The net payoff of the sale can be expressed as
$$(X - K)^+ = \begin{cases} X - K & \text{if } X \ge K, \\ 0 & \text{if } X < K. \end{cases}$$

Payoff When X is Normal

Corollary: If X is a normal random variable with mean $\mu$ and variance $\sigma^2$ and K is a constant, then
$$E\left[(X - K)^+\right] = \frac{\sigma}{\sqrt{2\pi}}\,e^{-(\mu-K)^2/2\sigma^2} + (\mu - K)\,\Phi\left(\frac{\mu - K}{\sigma}\right),$$
$$V\left((X - K)^+\right) = \left((\mu - K)^2 + \sigma^2\right)\Phi\left(\frac{\mu - K}{\sigma}\right) + \frac{(\mu - K)\sigma}{\sqrt{2\pi}}\,e^{-(\mu-K)^2/2\sigma^2} - \left(\frac{\sigma}{\sqrt{2\pi}}\,e^{-(\mu-K)^2/2\sigma^2} + (\mu - K)\,\Phi\left(\frac{\mu - K}{\sigma}\right)\right)^2.$$
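The expected-payoff formula can be spot-checked by simulation; a sketch with arbitrary sample values $\mu = 1$, $\sigma = 2$, $K = 0.5$ (our addition):

```python
# Monte Carlo spot-check of E[(X-K)^+] for normal X (our addition).
import math
import numpy as np
from scipy.stats import norm

mu, sigma, K = 1.0, 2.0, 0.5           # arbitrary sample values
d = (mu - K) / sigma
formula = sigma / math.sqrt(2 * math.pi) * math.exp(-d**2 / 2) \
          + (mu - K) * norm.cdf(d)

rng = np.random.default_rng(3)
payoff = np.maximum(rng.normal(mu, sigma, size=1_000_000) - K, 0.0)
print(formula, payoff.mean())          # both ~1.0727
```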

Payoff When X is Lognormal

Corollary: If X is a lognormally distributed random variable with parameters $\mu$ and $\sigma$ and $K > 0$ is a constant, then
$$E\left[(X - K)^+\right] = e^{\mu+\sigma^2/2}\,\Phi\left(\frac{\mu - \ln K}{\sigma} + \sigma\right) - K\,\Phi\left(\frac{\mu - \ln K}{\sigma}\right),$$
$$V\left((X - K)^+\right) = e^{2(\mu+\sigma^2)}\,\Phi(w + 2\sigma) + K^2\,\Phi(w) - 2Ke^{\mu+\sigma^2/2}\,\Phi(w + \sigma) - \left(e^{\mu+\sigma^2/2}\,\Phi(w + \sigma) - K\,\Phi(w)\right)^2,$$
where $w = (\mu - \ln K)/\sigma$.

Credits

These slides are adapted from the textbook An Undergraduate Introduction to Financial Mathematics, 3rd edition (2012).

Author: J. Robert Buchanan
Publisher: World Scientific Publishing Co. Pte. Ltd.
Address: 27 Warren St., Suite 401-402, Hackensack, NJ 07601
ISBN: 978-9814407441