Chapter 5 continued: Chapter 5 sections (STA 611, Lecture 09)
Chapter 5 sections

Discrete univariate distributions:
- 5.2 Bernoulli and Binomial distributions
- 5.3 Hypergeometric distributions (just skim)
- 5.4 Poisson distributions
- 5.5 Negative Binomial distributions (just skim)

Continuous univariate distributions:
- 5.6 Normal distributions
- 5.7 Gamma distributions
- 5.8 Beta distributions (just skim)

Multivariate distributions:
- 5.9 Multinomial distributions (just skim)
- 5.10 Bivariate normal distributions

5.6 Normal distributions

Why Normal?
- Works well in practice: many physical experiments have distributions that are approximately normal.
- Central Limit Theorem: a sum of many i.i.d. random variables is approximately normally distributed.
- Mathematically convenient, especially the multivariate normal distribution: we can explicitly obtain the distribution of many functions of a normally distributed random variable, and marginal and conditional distributions of a multivariate normal are also normal (multivariate or univariate).
- Developed by Gauss and then Laplace in the early 1800s; also known as the Gaussian distribution.

Def: Normal distributions N(µ, σ²)
A continuous r.v. X has the normal distribution with mean µ and variance σ² if it has the pdf

$$ f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), \qquad -\infty < x < \infty $$

Parameter space: µ ∈ R and σ² > 0.

Show: ψ(t) = exp(µt + ½σ²t²), E(X) = µ, and Var(X) = σ².
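As a quick sanity check of these facts, here is a minimal Python sketch comparing the pdf formula, the moments, and the mgf against scipy.stats.norm; the values of µ, σ, x, and t are arbitrary illustrative choices, and note that scipy parameterizes the normal by its standard deviation, not its variance.

```python
import numpy as np
from scipy import stats, integrate

mu, sigma = 2.0, 1.5
X = stats.norm(loc=mu, scale=sigma)  # scipy uses the sd, not the variance

# pdf matches f(x | mu, sigma^2) above
x = 0.7
manual = np.exp(-(x - mu)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
assert np.isclose(X.pdf(x), manual)

# E(X) = mu and Var(X) = sigma^2
assert np.isclose(X.mean(), mu) and np.isclose(X.var(), sigma**2)

# mgf psi(t) = exp(mu*t + sigma^2*t^2/2), checked by numerical integration
t = 0.3
psi = np.exp(mu * t + 0.5 * sigma**2 * t**2)
psi_num, _ = integrate.quad(lambda u: np.exp(t * u) * X.pdf(u), -np.inf, np.inf)
assert np.isclose(psi, psi_num)
```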

The Bell curve

[Figure: the bell-shaped normal pdf; not reproduced in the transcription.]

Standard normal

Standard normal distribution: N(0, 1)
The normal distribution with µ = 0 and σ² = 1 is called the standard normal distribution, and its pdf and cdf are denoted by φ(x) and Φ(x).

The cdf of a normal distribution cannot be expressed in closed form and is evaluated using numerical approximations. Φ(x) is tabulated in the back of the book, and many calculators and programs such as R, Matlab, and Excel can calculate Φ(x).

Φ(−x) = 1 − Φ(x)  and  Φ⁻¹(p) = −Φ⁻¹(1 − p)
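These symmetry identities are easy to confirm numerically; a minimal sketch with scipy.stats.norm, where x and p are arbitrary test values:

```python
from scipy.stats import norm

x, p = 1.3, 0.975
assert abs(norm.cdf(-x) - (1 - norm.cdf(x))) < 1e-12  # Phi(-x) = 1 - Phi(x)
assert abs(norm.ppf(p) + norm.ppf(1 - p)) < 1e-12     # Phi^{-1}(p) = -Phi^{-1}(1-p)
print(norm.cdf(1.96))  # ~0.9750, matching the table in the back of the book
```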

Properties of the normal distributions

Theorem 5.6.4: A linear transformation of a normal is still normal
If X ~ N(µ, σ²) and Y = aX + b, where a and b are constants with a ≠ 0, then Y ~ N(aµ + b, a²σ²).

Let F be the cdf of X, where X ~ N(µ, σ²). Then

$$ F(x) = \Phi\left(\frac{x - \mu}{\sigma}\right) \qquad \text{and} \qquad F^{-1}(p) = \mu + \sigma\,\Phi^{-1}(p) $$

Example: Measured voltage

Suppose the measured voltage X in a certain electric circuit has the normal distribution with mean 120 and standard deviation 2.
1. What is the probability that the measured voltage is between 118 and 122?
2. Below what value will 95% of the measurements be?
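A sketch of the solution: standardizing gives P(118 < X < 122) = Φ(1) − Φ(−1), and the 95th percentile is µ + σΦ⁻¹(0.95).

```python
from scipy.stats import norm

X = norm(loc=120, scale=2)

# 1) P(118 < X < 122) = Phi(1) - Phi(-1)
print(X.cdf(122) - X.cdf(118))  # ~0.6827

# 2) 95% of measurements fall below mu + sigma * Phi^{-1}(0.95)
print(X.ppf(0.95))  # ~123.29
```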

Theorem 5.6.7: A linear combination of independent normals is normal
Let X₁, ..., X_k be independent r.v. with X_i ~ N(µ_i, σ_i²) for i = 1, ..., k. Then

$$ X_1 + \cdots + X_k \sim N\left(\mu_1 + \cdots + \mu_k,\ \sigma_1^2 + \cdots + \sigma_k^2\right) $$

Also, if a₁, ..., a_k and b are constants where at least one a_i is not zero:

$$ a_1 X_1 + \cdots + a_k X_k + b \sim N\left(b + \sum_{i=1}^k a_i \mu_i,\ \sum_{i=1}^k a_i^2 \sigma_i^2\right) $$

In particular, consider the sample mean X̄ₙ = (1/n) Σᵢ₌₁ⁿ Xᵢ. If X₁, ..., Xₙ are a random sample from N(µ, σ²), what is the distribution of the sample mean? (Taking a_i = 1/n and b = 0 gives X̄ₙ ~ N(µ, σ²/n).)

Example: Measured voltage, continued

Suppose the measured voltage X in a certain electric circuit has the normal distribution with mean 120 and standard deviation 2. If three independent measurements of the voltage are made, what is the probability that the sample mean X̄₃ will lie between 118 and 120?

Find x that satisfies P(|X̄₃ − 120| ≤ x) = 0.95.
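Since X̄₃ ~ N(120, 4/3) by Theorem 5.6.7, both questions reduce to standard normal calculations; a short sketch (reading the second question as a two-sided interval around 120):

```python
from math import sqrt
from scipy.stats import norm

xbar = norm(loc=120, scale=2 / sqrt(3))  # Xbar_3 ~ N(120, 4/3)

# P(118 <= Xbar_3 <= 120) = Phi(0) - Phi(-sqrt(3))
print(xbar.cdf(120) - xbar.cdf(118))  # ~0.4584

# x with P(|Xbar_3 - 120| <= x) = 0.95, i.e. x = Phi^{-1}(0.975) * 2/sqrt(3)
print(norm.ppf(0.975) * 2 / sqrt(3))  # ~2.26
```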

Area under the curve

[Figure: areas under the normal curve; not reproduced in the transcription.]

Lognormal distributions

Def: Lognormal distributions
If log(X) ~ N(µ, σ²), then we say that X has the lognormal distribution with parameters µ and σ².
The support of the lognormal distribution is (0, ∞). It is often used to model time before failure.

Example: Let X and Y be independent random variables such that log(X) ~ N(1.6, 4.5) and log(Y) ~ N(3, 6). What is the distribution of the product XY?
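Since log(XY) = log(X) + log(Y) ~ N(1.6 + 3, 4.5 + 6) by Theorem 5.6.7, the product XY is Lognormal(4.6, 10.5). A simulation sketch confirming the log-scale moments:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
X = np.exp(rng.normal(1.6, np.sqrt(4.5), n))  # log(X) ~ N(1.6, 4.5)
Y = np.exp(rng.normal(3.0, np.sqrt(6.0), n))  # log(Y) ~ N(3, 6)

logXY = np.log(X * Y)  # ~ N(4.6, 10.5), so XY ~ Lognormal(4.6, 10.5)
print(logXY.mean(), logXY.var())  # ~4.6, ~10.5
```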

5.10 Bivariate normal distributions

Def: Bivariate normal
Two continuous r.v. X₁ and X₂ have the bivariate normal distribution with means µ₁ and µ₂, variances σ₁² and σ₂², and correlation ρ if they have the joint pdf

$$ f(x_1, x_2) = \frac{1}{2\pi (1-\rho^2)^{1/2} \sigma_1 \sigma_2} \exp\left(-\frac{1}{2(1-\rho^2)} \left[\frac{(x_1-\mu_1)^2}{\sigma_1^2} - 2\rho \left(\frac{x_1-\mu_1}{\sigma_1}\right)\left(\frac{x_2-\mu_2}{\sigma_2}\right) + \frac{(x_2-\mu_2)^2}{\sigma_2^2}\right]\right) \tag{1} $$

Parameter space: µ_i ∈ R and σ_i² > 0 for i = 1, 2, and −1 < ρ < 1.
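To guard against transcription slips in a formula this dense, here is a minimal check of (1) against scipy.stats.multivariate_normal; the parameter values and the evaluation point are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import multivariate_normal

mu1, mu2, s1, s2, rho = 0.0, 1.0, 1.0, 2.0, 0.5  # arbitrary illustrative values
x1, x2 = 0.3, 1.4

# the bracketed quadratic form in (1)
q = ((x1 - mu1)**2 / s1**2
     - 2 * rho * (x1 - mu1) * (x2 - mu2) / (s1 * s2)
     + (x2 - mu2)**2 / s2**2)
manual = np.exp(-q / (2 * (1 - rho**2))) / (2 * np.pi * np.sqrt(1 - rho**2) * s1 * s2)

cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
print(manual, multivariate_normal([mu1, mu2], cov).pdf([x1, x2]))  # agree
```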

Bivariate normal pdf

[Figures: bivariate normal pdfs and their contours for different values of ρ; not reproduced in the transcription.]

Bivariate normal as a linear combination

Theorem 5.10.1: Bivariate normal from two independent standard normals
Let Z₁ ~ N(0, 1) and Z₂ ~ N(0, 1) be independent. Let µ_i ∈ R and σ_i² > 0 for i = 1, 2, and −1 < ρ < 1, and let

$$ X_1 = \sigma_1 Z_1 + \mu_1, \qquad X_2 = \sigma_2\left(\rho Z_1 + \sqrt{1-\rho^2}\, Z_2\right) + \mu_2 \tag{2} $$

Then the joint distribution of X₁ and X₂ is bivariate normal with parameters µ₁, µ₂, σ₁², σ₂², and ρ.

Theorem 5.10.2 (part 1), the other way around
Let X₁ and X₂ have the pdf in (1). Then there exist independent standard normal r.v. Z₁ and Z₂ such that (2) holds.
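Theorem 5.10.1 doubles as a recipe for simulating bivariate normal pairs. A sketch, with the parameter values chosen to match the example later in this lecture:

```python
import numpy as np

rng = np.random.default_rng(1)
mu1, mu2, s1, s2, rho = 3.0, 5.0, 2.0, 3.0, 0.6
n = 10**6

Z1, Z2 = rng.standard_normal(n), rng.standard_normal(n)
X1 = s1 * Z1 + mu1                                     # equation (2)
X2 = s2 * (rho * Z1 + np.sqrt(1 - rho**2) * Z2) + mu2

print(X1.mean(), X1.std(), X2.mean(), X2.std())  # ~3, 2, 5, 3
print(np.corrcoef(X1, X2)[0, 1])                 # ~0.6
```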

Properties of a bivariate normal

Theorem 5.10.2 (part 2)
Let X₁ and X₂ have the pdf in (1). Then the marginal distributions are X₁ ~ N(µ₁, σ₁²) and X₂ ~ N(µ₂, σ₂²), and the correlation between X₁ and X₂ is ρ.

Theorem 5.10.4: The conditional is normal
Let X₁ and X₂ have the pdf in (1). Then the conditional distribution of X₂ given X₁ = x₁ is (univariate) normal with

$$ E(X_2 \mid X_1 = x_1) = \mu_2 + \frac{\rho \sigma_2 (x_1 - \mu_1)}{\sigma_1} \qquad \text{and} \qquad \operatorname{Var}(X_2 \mid X_1 = x_1) = (1-\rho^2)\,\sigma_2^2 $$

Properties of a bivariate normal

Theorem 5.10.3: Uncorrelated ⇔ independent
Let X₁ and X₂ have the bivariate normal distribution. Then X₁ and X₂ are independent if and only if they are uncorrelated.
- This equivalence only holds for the (multivariate) normal distribution; in general, uncorrelated random variables need not be independent.
- One of the very convenient properties of the normal distribution.

Theorem 5.10.5: Linear combinations are normal
Let X₁ and X₂ have the pdf in (1) and let a₁, a₂, and b be constants. Then Y = a₁X₁ + a₂X₂ + b is normally distributed with

$$ E(Y) = a_1\mu_1 + a_2\mu_2 + b \qquad \text{and} \qquad \operatorname{Var}(Y) = a_1^2\sigma_1^2 + a_2^2\sigma_2^2 + 2 a_1 a_2 \rho \sigma_1 \sigma_2 $$

This extends what we already had for independent normals.

Example

Let X₁ and X₂ have the bivariate normal distribution with means µ₁ = 3 and µ₂ = 5, variances σ₁² = 4 and σ₂² = 9, and correlation ρ = 0.6.
a) Find the distribution of X₂ − 2X₁.
b) What is the expected value of X₂, given that we observed X₁ = 2?
c) What is the probability that X₁ > X₂?
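A sketch of the solutions: a) uses Theorem 5.10.5, b) uses Theorem 5.10.4, and c) applies Theorem 5.10.5 to X₁ − X₂.

```python
from math import sqrt
from scipy.stats import norm

mu1, mu2, s1, s2, rho = 3, 5, 2, 3, 0.6

# a) Y = -2*X1 + X2 is normal (Theorem 5.10.5 with a1 = -2, a2 = 1, b = 0)
m = -2 * mu1 + mu2                                      # -1
v = 4 * s1**2 + s2**2 + 2 * (-2) * 1 * rho * s1 * s2    # 10.6
print(f"X2 - 2*X1 ~ N({m}, {v})")

# b) E(X2 | X1 = 2) by Theorem 5.10.4
print(mu2 + rho * s2 * (2 - mu1) / s1)                  # 4.1

# c) X1 - X2 ~ N(-2, 5.8), so P(X1 > X2) = 1 - Phi(2 / sqrt(5.8))
v_diff = s1**2 + s2**2 - 2 * rho * s1 * s2              # 5.8
print(1 - norm.cdf((0 - (mu1 - mu2)) / sqrt(v_diff)))   # ~0.2033
```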

Multivariate normal: matrix notation

The pdf of an n-dimensional normal distribution X ~ N(µ, Σ) is

$$ f(\mathbf{x}) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\left\{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})' \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu})\right\} $$

where

$$ \boldsymbol{\mu} = \begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_n \end{pmatrix}, \qquad \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} \sigma_1^2 & \sigma_{1,2} & \sigma_{1,3} & \cdots & \sigma_{1,n} \\ \sigma_{2,1} & \sigma_2^2 & \sigma_{2,3} & \cdots & \sigma_{2,n} \\ \sigma_{3,1} & \sigma_{3,2} & \sigma_3^2 & \cdots & \sigma_{3,n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \sigma_{n,1} & \sigma_{n,2} & \sigma_{n,3} & \cdots & \sigma_n^2 \end{pmatrix} $$

µ is the mean vector and Σ is called the variance-covariance matrix.
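The matrix form is convenient computationally as well. A sketch that evaluates the density directly with numpy linear algebra and compares it to scipy; µ, Σ, and x are arbitrary illustrative values, with Σ positive definite.

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, 0.5],
                  [0.0, 0.5, 1.5]])
x = np.array([0.2, 0.8, -0.5])

n = len(mu)
d = x - mu
quad = d @ np.linalg.solve(Sigma, d)  # (x - mu)' Sigma^{-1} (x - mu)
manual = np.exp(-0.5 * quad) / np.sqrt((2 * np.pi)**n * np.linalg.det(Sigma))
print(manual, multivariate_normal(mu, Sigma).pdf(x))  # agree
```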

Multivariate normal: matrix notation, continued

The same things hold for the multivariate normal distribution as for the bivariate. Let X ~ N(µ, Σ). Then (see the simulation sketch after this list):
- Linear combinations of X are normal: AX + b is (multivariate) normal for a fixed matrix A and vector b.
- The marginal distribution of X_i is normal with mean µ_i and variance σ_i².
- The off-diagonal elements of Σ are the covariances between individual elements of X, i.e. Cov(X_i, X_j) = σ_{i,j}.
- The joint marginal distributions are also normal; their mean and covariance matrix are found by picking the corresponding elements from µ and the corresponding rows and columns from Σ.
- The conditional distributions are also normal (multivariate or univariate).
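A simulation sketch of these properties, reusing the illustrative µ and Σ from above: the sample moments recover µ and Σ, and a linear map AX + b has mean Aµ + b and covariance AΣA′.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, 0.5],
                  [0.0, 0.5, 1.5]])

X = rng.multivariate_normal(mu, Sigma, size=10**6)

# marginals: X_i ~ N(mu_i, sigma_i^2); off-diagonals: Cov(X_i, X_j) = sigma_ij
print(X.mean(axis=0))  # ~mu
print(np.cov(X.T))     # ~Sigma

# AX + b is again (multivariate) normal, with mean A mu + b and covariance A Sigma A'
A = np.array([[1.0, -1.0, 0.0],
              [0.5, 0.5, 1.0]])
b = np.array([1.0, 0.0])
Y = X @ A.T + b
print(Y.mean(axis=0), A @ mu + b)    # agree
print(np.cov(Y.T), A @ Sigma @ A.T)  # agree
```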

5.7 Gamma distributions

The Gamma function:

$$ \Gamma(\alpha) = \int_0^\infty x^{\alpha-1} e^{-x}\, dx $$

- Γ(1) = 1 and Γ(0.5) = √π
- Γ(α) = (α − 1)Γ(α − 1) if α > 1

Def: Gamma distributions Gamma(α, β)
A continuous r.v. X has the gamma distribution with parameters α and β if it has the pdf

$$ f(x \mid \alpha, \beta) = \begin{cases} \dfrac{\beta^\alpha}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\beta x} & \text{for } x > 0 \\ 0 & \text{otherwise} \end{cases} $$

Parameter space: α > 0 and β > 0.
Gamma(1, β) is the same as the exponential distribution with parameter β, Expo(β).
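The Gamma-function facts and the Gamma(1, β) = Expo(β) identity can be checked in a few lines; note that scipy parameterizes these distributions by shape and scale = 1/β, not by the rate β used here.

```python
import math
from scipy import stats

print(math.gamma(1.0))                             # 1.0
print(math.gamma(0.5), math.sqrt(math.pi))         # equal
a = 3.7                                            # arbitrary test value
print(math.gamma(a), (a - 1) * math.gamma(a - 1))  # recurrence holds

# Gamma(1, beta) = Expo(beta); scipy uses scale = 1/beta for both
beta, x = 2.0, 0.8
print(stats.gamma(1, scale=1/beta).pdf(x), stats.expon(scale=1/beta).pdf(x))
```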

Properties of the gamma distributions

If X ~ Gamma(α, β):

$$ \psi(t) = \left(\frac{\beta}{\beta - t}\right)^{\alpha} \text{ for } t < \beta, \qquad E(X) = \frac{\alpha}{\beta}, \qquad \operatorname{Var}(X) = \frac{\alpha}{\beta^2} $$

If X₁, ..., X_k are independent Gamma(α_i, β) r.v., then

$$ X_1 + \cdots + X_k \sim \text{Gamma}\left(\sum_{i=1}^k \alpha_i,\ \beta\right) $$
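A simulation sketch of the additivity property, with arbitrary shapes α_i and a common rate β (numpy's gamma sampler also takes scale = 1/β):

```python
import numpy as np

rng = np.random.default_rng(3)
beta = 2.0               # common rate; numpy's sampler takes scale = 1/beta
alphas = [0.5, 1.0, 2.5]
n = 10**6

S = sum(rng.gamma(a, 1/beta, size=n) for a in alphas)

# S should be Gamma(sum(alphas), beta) = Gamma(4, 2): mean 2, variance 1
print(S.mean(), sum(alphas) / beta)     # ~2.0
print(S.var(), sum(alphas) / beta**2)   # ~1.0
```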

Properties of the gamma distributions, continued

Theorem 5.7.9: The exponential distribution is memoryless
Let X ~ Expo(β) and let t > 0. Then for any h > 0,

$$ P(X \ge t + h \mid X \ge t) = P(X \ge h) $$

Theorem 5.7.12: Times between arrivals in a Poisson process
Let Z_k be the time until the k-th arrival in a Poisson process with rate β. Let Y₁ = Z₁ and Y_k = Z_k − Z_{k−1} for k ≥ 2. Then Y₁, Y₂, Y₃, ... are i.i.d. with the exponential distribution with parameter β.
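The memoryless property is easy to see in simulation; a sketch with arbitrary illustrative values of β, t, and h:

```python
import numpy as np

rng = np.random.default_rng(4)
beta, t, h = 0.5, 1.0, 2.0
X = rng.exponential(1/beta, size=10**6)  # Expo(beta) has mean 1/beta

lhs = (X[X >= t] >= t + h).mean()   # P(X >= t+h | X >= t)
rhs = (X >= h).mean()               # P(X >= h)
print(lhs, rhs, np.exp(-beta * h))  # all ~0.368
```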

5.8 Beta distributions

Def: Beta distributions Beta(α, β)
A continuous r.v. X has the beta distribution with parameters α and β if it has the pdf

$$ f(x \mid \alpha, \beta) = \begin{cases} \dfrac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1} (1-x)^{\beta-1} & \text{for } 0 < x < 1 \\ 0 & \text{otherwise} \end{cases} $$

Parameter space: α > 0 and β > 0.
- Beta(1, 1) = Uniform(0, 1).
- Used to model a random variable that takes values between 0 and 1.
- The Beta distributions are often used as prior distributions for probability parameters, e.g. the p in the Binomial distribution (see the sketch below).
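A short sketch of these facts: Beta(1, 1) has constant pdf 1 on (0, 1), and a Beta prior for a binomial p has mean α/(α + β); the Beta(2, 5) here is an arbitrary illustrative choice.

```python
import numpy as np
from scipy import stats

# Beta(1, 1) = Uniform(0, 1): the pdf is identically 1 on (0, 1)
x = np.linspace(0.01, 0.99, 5)
print(stats.beta(1, 1).pdf(x))  # all ones

# e.g. a Beta(2, 5) prior for the p in a Binomial model
prior = stats.beta(2, 5)
print(prior.mean())             # alpha/(alpha+beta) = 2/7
```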

Beta distributions

[Figure: beta pdfs for various values of (α, β); not reproduced in the transcription.]

END OF CHAPTER 5