PROBABILITY DISTRIBUTIONS: (continued)

The change of variables technique. Let $x \sim f(x)$ and let $y = y(x)$ be a monotonic transformation of $x$ such that the inverse function $x = x(y)$ exists. Let $A$ be an event defined in terms of $x$, and let $B$ be the equivalent event defined in terms of $y$, such that if $x \in A$, then $y = y(x) \in B$ and vice versa. Then $P(A) = P(B)$, and we can find the p.d.f. of $y$, denoted by $g(y)$.

The continuous case. If $x$, $y$ are continuous random variables, then
$$\int_{y \in B} g(y)\,dy = \int_{x \in A} f(x)\,dx.$$
If we write $x = x(y)$ in the second integral, then the change of variable technique gives
$$\int_{y \in B} g(y)\,dy = \int_{y \in B} f\{x(y)\}\,\frac{dx}{dy}\,dy.$$
If $y = y(x)$ is a monotonically decreasing transformation, then $dx/dy < 0$ and $f\{x(y)\} > 0$, and so $f\{x(y)\}\,dx/dy < 0$ cannot represent a p.d.f., since $g(y) \geq 0$ is necessary. The recourse is to change the sign on the $y$-axis. When $dx/dy < 0$, it is replaced by its modulus $|dx/dy| > 0$. In general,
$$g(y) = f\{x(y)\}\left|\frac{dx}{dy}\right|.$$
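As an illustration added to these notes, the following Python sketch checks the formula in the monotonically decreasing case, where the modulus is essential. With $x \sim f(x) = e^{-x}$ and the decreasing transformation $y = e^{-x}$, the inverse is $x(y) = -\ln y$ with $dx/dy = -1/y$, so $g(y) = f\{x(y)\}\,|dx/dy| = y \cdot (1/y) = 1$ on $(0, 1)$; the simulation compares this with an empirical histogram.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.exponential(scale=1.0, size=200_000)   # draws from f(x) = exp(-x)
    y = np.exp(-x)                                 # a decreasing transformation of x

    # Empirical density of y versus the derived g(y) = 1 on (0, 1).
    hist, edges = np.histogram(y, bins=10, range=(0.0, 1.0), density=True)
    for h, lo, hi in zip(hist, edges[:-1], edges[1:]):
        print(f"[{lo:.1f}, {hi:.1f}): empirical {h:.3f}  vs  g(y) = 1.000")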

The Standard Normal Distribution. Consider the function $g(z) = e^{-z^2/2}$, where $-\infty < z < \infty$. There is $g(z) > 0$ for all $z$, and also
$$\int_{-\infty}^{\infty} e^{-z^2/2}\,dz = \sqrt{2\pi}.$$
It follows that
$$f(z) = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}$$
constitutes a p.d.f., described as the standard normal and denoted by $N(z; 0, 1)$. In general, the normal distribution is denoted by $N(x; \mu, \sigma^2)$; so, in this case, there are $\mu = 0$ and $\sigma^2 = 1$.

The General Normal Distribution. This can be derived via the change of variables technique. Let
$$z \sim N(0, 1) = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2} = f(z),$$
and let $y = z\sigma + \mu$, so that $z = (y - \mu)/\sigma$ is the inverse function and $dz/dy = \sigma^{-1}$ is its derivative. Then,
$$g(y) = f\{z(y)\}\left|\frac{dz}{dy}\right| = \frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{1}{2}\left(\frac{y - \mu}{\sigma}\right)^2\right\}\frac{1}{\sigma} = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{1}{2}\frac{(y - \mu)^2}{\sigma^2}\right\}.$$
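The following added sketch (not part of the original notes; the parameter values are arbitrary) verifies numerically that transforming standard normal draws by $y = z\sigma + \mu$ produces data that match the derived density $g(y)$.

    import numpy as np

    mu, sigma = 2.0, 1.5                      # arbitrary illustrative parameters
    rng = np.random.default_rng(1)
    z = rng.standard_normal(500_000)          # z ~ N(0, 1)
    y = z * sigma + mu                        # the transformation y = z*sigma + mu

    def g(t):
        # The density derived above: N(t; mu, sigma^2).
        return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / np.sqrt(2 * np.pi * sigma**2)

    hist, edges = np.histogram(y, bins=60, density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    print("max |empirical - g|:", np.max(np.abs(hist - g(mids))))  # small
    print("sample mean and variance:", y.mean(), y.var())          # close to mu, sigma^2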

The discrete case. Matters are simpler when $f(x)$ is discrete and $y = y(x)$ is a monotonic transformation. Then, if $y \sim g(y)$, it must be the case that
$$\sum_{y \in B} g(y) = \sum_{x \in A} f(x) = \sum_{y \in B} f\{x(y)\},$$
whence $g(y) = f\{x(y)\}$.

Example. Let
$$x \sim f(x) = b(x; n = 3, p = 2/3) = \frac{3!}{x!(3 - x)!}\left(\frac{2}{3}\right)^x\left(\frac{1}{3}\right)^{3 - x},$$
with $x = 0, 1, 2, 3$, and let $y = x^2$ so that $x(y) = \sqrt{y}$. Then
$$g(y) = \frac{3!}{(3 - \sqrt{y})!(\sqrt{y})!}\left(\frac{2}{3}\right)^{\sqrt{y}}\left(\frac{1}{3}\right)^{3 - \sqrt{y}},$$
where $y = 0, 1, 4, 9$.

Example. Let $f(x) = 2x$; $0 \leq x \leq 1$. Let $y = 8x^3$, which implies that $x = (y/8)^{1/3} = y^{1/3}/2$ and $dx/dy = y^{-2/3}/6$. Then,
$$g(y) = f\{x(y)\}\left|\frac{dx}{dy}\right| = 2\left(\frac{y^{1/3}}{2}\right)\left(\frac{y^{-2/3}}{6}\right) = \frac{y^{-1/3}}{6},$$
where $0 \leq y \leq 8$.
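The first example can be checked mechanically. This added Python sketch builds $f(x)$ for $b(x; 3, 2/3)$, relabels its support by $y = x^2$, and confirms that $g(y) = f\{x(y)\}$ sums to unity.

    from math import comb, isqrt

    n, p = 3, 2 / 3
    f = {x: comb(n, x) * p**x * (1 - p) ** (n - x) for x in range(n + 1)}
    g = {x * x: fx for x, fx in f.items()}    # relabel the support: y = x^2

    for y_val in sorted(g):
        x_val = isqrt(y_val)                  # x(y) = sqrt(y)
        print(f"g({y_val}) = f({x_val}) = {g[y_val]:.4f}")
    print("total probability:", sum(g.values()))   # equals 1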

Expectations of a random variable. If $x \sim f(x)$, then the expected value of $x$ is
$$E(x) = \int_x x f(x)\,dx \quad\text{if $x$ is continuous, and}\quad E(x) = \sum_x x f(x) \quad\text{if $x$ is discrete.}$$

The expectations operator. We can economise on notation by defining the expectations operator $E$, which is subject to a number of simple rules. They are as follows:

(a) If $x \geq 0$, then $E(x) \geq 0$.
(b) If $a$ is a constant, then $E(a) = a$.
(c) If $a$ is a constant and $x$ is a random variable, then $E(ax) = aE(x)$.
(d) If $x$, $y$ are random variables, then $E(x + y) = E(x) + E(y)$.
(e) The expectation of a sum is the sum of the expectations.

By combining (c) and (d), we get $E(ax + by) = aE(x) + bE(y)$. Thus, the expectations operator is a linear operator, and $E(\sum_i a_i x_i) = \sum_i a_i E(x_i)$.
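As an added numerical illustration of rules (c) and (d): linearity requires no independence between $x$ and $y$, which the sketch below exploits by making $y$ depend on $x$ (the distributions chosen are arbitrary).

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.exponential(1.0, size=1_000_000)
    y = x + rng.normal(0.0, 1.0, size=x.size)   # y deliberately depends on x
    a, b = 3.0, -2.0

    lhs = np.mean(a * x + b * y)                # E(ax + by), estimated directly
    rhs = a * np.mean(x) + b * np.mean(y)       # aE(x) + bE(y)
    print(lhs, rhs)                             # the two agree up to Monte Carlo error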

Example. The expected value of the binomial distribution $b(x; n, p)$ is
$$E(x) = \sum_x x\,b(x; n, p) = \sum_{x=0}^{n} x\,\frac{n!}{(n - x)!\,x!}\,p^x(1 - p)^{n - x}.$$
We factorise $np$ from the expression under the summation and we begin the summation at $x = 1$. On defining $y = x - 1$, which means setting $x = y + 1$ in the expression above, we get
$$E(x) = \sum_{x=0}^{n} x\,b(x; n, p) = np\sum_{y=0}^{n-1}\frac{(n - 1)!}{([n - 1] - y)!\,y!}\,p^y(1 - p)^{[n - 1] - y} = np\sum_{y=0}^{n-1} b(y; n - 1, p) = np,$$
where the final equality follows from the fact that we are summing the values of the binomial distribution $b(y; n - 1, p)$ over its entire domain to obtain a total of unity.
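A direct summation confirms the result; the sketch below (an addition, with arbitrary $n$ and $p$) evaluates $\sum_x x\,b(x; n, p)$ and compares it with $np$.

    from math import comb

    n, p = 10, 0.3
    mean = sum(x * comb(n, x) * p**x * (1 - p) ** (n - x) for x in range(n + 1))
    print(mean, n * p)   # both equal 3.0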

Example. Let $f(x) = e^{-x}$; $0 \leq x < \infty$. Then
$$E(x) = \int_0^\infty x e^{-x}\,dx.$$
This must be evaluated by integrating by parts:
$$\int u\,\frac{dv}{dx}\,dx = uv - \int v\,\frac{du}{dx}\,dx.$$
With $u = x$ and $dv/dx = e^{-x}$, this formula gives
$$\int_0^\infty x e^{-x}\,dx = \left[-x e^{-x}\right]_0^\infty + \int_0^\infty e^{-x}\,dx = \left[-x e^{-x}\right]_0^\infty - \left[e^{-x}\right]_0^\infty = 0 + 1 = 1.$$
Observe that, since this integral is unity and since $x e^{-x} > 0$ over the domain of $x$, it follows that $f(x) = x e^{-x}$ is also a valid p.d.f.
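For a machine check of both claims, the added sketch below (assuming scipy is available) evaluates the two integrals numerically.

    import numpy as np
    from scipy.integrate import quad

    total_f, _ = quad(lambda x: np.exp(-x), 0, np.inf)       # integral of f(x): equals 1
    mean_x, _ = quad(lambda x: x * np.exp(-x), 0, np.inf)    # E(x): also equals 1
    print(total_f, mean_x)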

Expectation of a function of a random variable. Let $y = y(x)$ be a function of $x \sim f(x)$. The value of $E(y)$ can be found without first determining the p.d.f. $g(y)$ of $y$. Quite simply, there is
$$E(y) = \int_x y(x) f(x)\,dx.$$
If $y = y(x)$ is a monotonic transformation of $x$, then it follows that
$$E(y) = \int_y y\,g(y)\,dy = \int_y y\,f\{x(y)\}\left|\frac{dx}{dy}\right|dy = \int_x y(x) f(x)\,dx,$$
which establishes a special case of the result. However, the result is not confined to monotonic transformations. It is equally valid for functions that are piecewise monotonic, i.e. ones that have monotonic segments, which includes virtually all of the functions that we might consider.
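As an added sketch of the piecewise-monotonic case: $y = x^2$ is not monotonic over the whole real line, yet $E(y)$ is obtained directly from $f(x)$, without ever deriving $g(y)$. Here $x \sim N(0, 1)$, for which $E(x^2) = 1$.

    import numpy as np
    from scipy.integrate import quad

    f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)     # standard normal p.d.f.
    expectation, _ = quad(lambda x: x**2 * f(x), -np.inf, np.inf)
    print(expectation)   # E(x^2) = 1, computed as the integral of y(x) f(x)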

The moments of a distribution. The $r$th moment of $x$ relative to the datum $a$ is defined by the expectation
$$E\{(x - a)^r\} = \int_x (x - a)^r f(x)\,dx \quad\text{if $x$ is continuous, and}\quad E\{(x - a)^r\} = \sum_x (x - a)^r f(x) \quad\text{if $x$ is discrete.}$$
The variance, which is a measure of the dispersion of $x$, is the second moment of $x$ about the mean. This measure is minimised by the choice of $a = E(x)$:
$$V(x) = E[\{x - E(x)\}^2] = E[x^2 - 2xE(x) + \{E(x)\}^2] = E(x^2) - 2\{E(x)\}^2 + \{E(x)\}^2 = E(x^2) - \{E(x)\}^2.$$
We can also define the variance operator $V$.

(a) If $x$ is a random variable, then $V(x) > 0$.
(b) If $a$ is a constant, then $V(a) = 0$.
(c) If $a$ is a constant and $x$ is a random variable, then $V(ax) = a^2V(x)$.

To confirm the latter, we may consider
$$V(ax) = E\{[ax - E(ax)]^2\} = a^2E\{[x - E(x)]^2\} = a^2V(x).$$
If $x$, $y$ are independently distributed random variables, then $V(x + y) = V(x) + V(y)$. But this is not true in general.
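An added numerical illustration of the shortcut formula $V(x) = E(x^2) - \{E(x)\}^2$ and of rule (c); the exponential distribution and the constant $a$ are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.exponential(2.0, size=1_000_000)       # V(x) = 4 for an exponential with scale 2
    a = 5.0

    v_shortcut = np.mean(x**2) - np.mean(x) ** 2   # V(x) = E(x^2) - {E(x)}^2
    print(v_shortcut, np.var(x))                   # the two definitions agree
    print(np.var(a * x), a**2 * np.var(x))         # V(ax) = a^2 V(x)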

The variance of the binomial distribution. Consider a sequence of Bernoulli trials with $x_i \in \{0, 1\}$ for all $i$. The p.d.f. of the generic trial is
$$f(x_i) = p^{x_i}(1 - p)^{1 - x_i}.$$
Then
$$E(x_i) = \sum_{x_i} x_i f(x_i) = 0 \cdot (1 - p) + 1 \cdot p = p.$$
It follows that, in $n$ trials, the expected value of the total score $x = \sum_i x_i$ is
$$E(x) = \sum_i E(x_i) = np.$$
This is the expected value of the binomial distribution. To find the variance of the Bernoulli trial, we use the formula $V(x) = E(x^2) - \{E(x)\}^2$. For a single trial, there is
$$E(x_i^2) = \sum_{x_i = 0, 1} x_i^2 f(x_i) = p, \qquad V(x_i) = p - p^2 = p(1 - p) = pq,$$
where $q = 1 - p$. The outcome of the binomial random variable is the sum of a set of $n$ independent and identical Bernoulli trials. Thus, the variance of the sum is the sum of the variances, and we have
$$V(x) = \sum_{i=1}^{n} V(x_i) = npq.$$
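The added sketch below confirms $V(x) = npq$ by computing the first two moments of the binomial directly over its support (with arbitrary $n$ and $p$).

    from math import comb

    n, p = 10, 0.3
    q = 1 - p
    pmf = [comb(n, x) * p**x * q ** (n - x) for x in range(n + 1)]
    mean = sum(x * pmf[x] for x in range(n + 1))
    second = sum(x**2 * pmf[x] for x in range(n + 1))
    print(second - mean**2, n * p * q)   # both equal 2.1, up to rounding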