Some worked exercises from Ch 4: Ex 10 p 151


Ex 10 p 151

a) It looks like this density depends on two parameters, k and θ: how are we going to plot it for general k and θ? But note that we can re-write the density as a constant times (x/θ)^(-(k+1)). And since the constant must always be such that the integral is 1, we don't have to worry about it when we are sketching for shape: if we plot (x/θ)^(-(k+1)), the function f(x) will be the same if we scale the axes properly. But since we are only asked to sketch the graph, we will not worry about this scaling, and we can wlog let θ = 1. Of course, it is conceivable that the function might depend qualitatively on k, but if we look at the function, it is obviously a sort of smooth decline (a power law) for any k, so any value of k will do for this part; we will choose k = 1. (Note we are told k > 0, so the exponent -(k+1) will never be 0 or positive.) Note another condition on f(x) is that x ≥ θ. So we must draw f(x) over the range (θ, +∞), and in our case this will be (1, +∞). The c is just the constant necessary to make the integral 1. In this case c = k/θ = 1, so the scale shown is correct.

b) Integrating x^(-2) from 1 to b gives (-b^(-1)) - (-1^(-1)) = 1 - 1/b.

c) Integrating the general density expression from θ to b:

∫_θ^b (kθ^k / x^(k+1)) dx = 1 - (θ/b)^k

d) P(a ≤ X ≤ b) = [1 - (θ/b)^k] - [1 - (θ/a)^k] = (θ/a)^k - (θ/b)^k, as long as θ ≤ a ≤ b.
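The closed form in d) is easy to sanity-check numerically. The sketch below (Python, standard library only) integrates the Pareto density f(x) = kθ^k/x^(k+1) with a simple midpoint rule and compares against (θ/a)^k - (θ/b)^k; the values k = 3, θ = 2, a = 3, b = 5 and the helper names are illustrative choices of mine, not from the exercise.

```python
import math

def pareto_pdf(x, k, theta):
    """Pareto density f(x) = k * theta**k / x**(k+1) for x >= theta, else 0."""
    return k * theta**k / x**(k + 1) if x >= theta else 0.0

def integrate(f, a, b, n=100_000):
    """Midpoint-rule numerical integration; a quick sketch, not production code."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

k, theta, a, b = 3.0, 2.0, 3.0, 5.0
closed_form = (theta / a)**k - (theta / b)**k          # P(a <= X <= b) from part d)
numeric = integrate(lambda x: pareto_pdf(x, k, theta), a, b)
print(round(closed_form, 6), round(numeric, 6))        # the two should agree closely
```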

Ex 24 p 160: the Pareto density again!

a) E(X) = ∫_θ^∞ x · (kθ^k / x^(k+1)) dx = kθ^k ∫_θ^∞ x^(-k) dx = [kθ^k/(-k+1)]·[0 - θ^(-k+1)] = kθ/(k-1) for k > 1.

b) For k = 1, E(X) is proportional to the integral of 1/x over (θ, ∞), which is ∞.

c), d) are like a) and b).

e) It turns out that the integral converges as long as n < k. Note that the lower limit of the expected-value integral is θ > 0, so the integrand does not blow up at this lower limit.

Ex 38 p 171

This is an easy one. 95% of a normal distribution is within 2 SDs of the mean, so 2σ = .05, i.e. σ = .025. (The 2 is approximate: if you use the tabulated z = 1.96 for a left area of .975, the σ is slightly larger.)

Ex 43 p 171

X ~ N(µ = 12, σ = 3.5). We want c such that P(X < c - 1) = 0.99 (all weights are in lbs). The 99th percentile of the N(0,1) distribution is 2.33, so we want z = (c - 1 - 12)/3.5 = 2.33. Hence c = 2.33·3.5 + 13 = 21.155 lbs.

Note: Always check your answer for reasonableness. If the parcel weight distribution has mean 12 and SD 3.5, then 21 is more than 2 SDs above the mean, so the vast majority of parcels will weigh less, and 99% seems plausible.

Ex 52 p 173

If X ~ N(µ, σ), then Z = (X - µ)/σ ~ N(0,1). Note x = zσ + µ. Suppose z_p is such that P(Z < z_p) = p. Since z < z_p iff zσ + µ < z_p σ + µ, we have P(Zσ + µ < z_p σ + µ) = P(Z < z_p) = p, and so P(X < z_p σ + µ) = p, which is the same as in the box on p 168. This should be understood more than algebraically: think of expressing the pth percentile of the original X scale in terms of SDs from the mean. Normal probabilities depend only on the number of SDs from the mean (and whether above or below the mean); this number of SDs is the z you compute from z = (x - µ)/σ.

-------------------------------
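The percentile calculation in the parcel-weight exercise can be reproduced without tables. The sketch below builds the standard normal CDF from math.erf and inverts it by bisection; the parameters µ = 12 lb and σ = 3.5 lb are the exercise's, while the helper names are mine.

```python
import math

def phi(z):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF by bisection; a simple sketch, accurate enough here."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

mu, sigma = 12.0, 3.5        # parcel weights in lb
z99 = phi_inv(0.99)          # about 2.326; the tables round this to 2.33
c = z99 * sigma + mu + 1.0   # solve (c - 1 - mu)/sigma = z99
print(round(z99, 3), round(c, 2))   # using the table value 2.33 instead gives 21.155
```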

Exercise based on pp 178-79. Light bulbs sold by a certain manufacturer are claimed to have exponentially distributed lifetimes with mean 1000 hours. Suppose a bulb has lasted for 1000 hours. What is the probability that it lasts another 1000 hours? From the memoryless property of the exponential distribution, P(T > 2000 | T > 1000) = P(T > 1000) = exp(-.001·1000) = e^(-1) = 0.37. (The complementary probability, 1 - e^(-1) = 0.63, is the chance that it burns out during those next 1000 hours.)

Note: This memoryless property is proved at the top of p 179. The general form may be expressed as: T exponential implies P(T > s + t | T > s) = P(T > t).

Ex 65 p 181

a) -y^(1/2) < X < y^(1/2)

b) Just note the result: X ~ N(0,1) implies X² ~ chi-squared (1 df).

Note: An important further result that we will meet in Ch 5 is that if the Z_i ~ N(0,1) and are independent, then X = Σ_{i=1}^{ν} Z_i² has a chi-squared distribution on ν degrees of freedom. Degrees of freedom can be thought of as something like the number of independent sources of variation. Sometimes this is the number of observations in a sample.

98. p 200

a) n = 250, p = .05. The approximate probability is most easily obtained via the normal approximation (since np and n(1-p) are both > 10), with mean np = 12.5 and SD = (np(1-p))^(1/2) = 3.45. P(# defectives ≥ 25) for the N(µ = 12.5, σ = 3.45) requires P(Z ≥ z) where z = (24.5 - 12.5)/3.45 = 3.48, so the required probability is 1 - .9997 = .0003. In other words, the probability that 10% or more of the boards are defective is .0003.

b) P(exactly 10 defectives) = P((9.5 - 12.5)/3.45 < Z < (10.5 - 12.5)/3.45) = P(-.87 < Z < -.58) = .2810 - .1922 = .0888. The probability of exactly 10 defectives is .0888.

100. p 200

a) cdf: F_X(x) = ∫_1^x (3/2)u^(-2) du = (3/2)(-x^(-1)) - (3/2)(-1^(-1)) = (3/2)(1 - 1/x) for 1 ≤ x ≤ 3 seconds. Note: if x < 1, F(x) = 0, and if x > 3, F(x) = 1.

b) P(X ≤ 2.5) = (3/2)(1 - 2/5) = 9/10 and P(X ≤ 1.5) = (3/2)(1 - 2/3) = 1/2, so P(1.5 ≤ X ≤ 2.5) = 9/10 - 1/2 = 2/5.

c) E(X) = ∫_1^3 x·(3/2)x^(-2) dx = (3/2)∫_1^3 (1/x) dx = (3/2)·log_e(3) = 1.648. Note: log to base e is sometimes written ln().
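The memoryless property used in the light-bulb exercise can be checked directly from the exponential survival function P(T > t) = exp(-λt). A minimal sketch with rate λ = .001 (mean 1000 hours):

```python
import math

lam = 0.001   # rate of an exponential lifetime with mean 1000 hours

def survival(t):
    """P(T > t) for an exponential(lam) lifetime."""
    return math.exp(-lam * t)

s, t = 1000.0, 1000.0
conditional = survival(s + t) / survival(s)   # P(T > s+t | T > s)
# Memorylessness: the conditional survival equals the unconditional P(T > t).
print(round(conditional, 4), round(survival(t), 4))   # both e^(-1), about 0.3679
```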

Note: Sometimes the knowledge that a certain functional form is a density function can save you time by letting you write down the value of an integral directly. For example, look at the gamma density on p 175. If you want to integrate ∫_0^∞ x^p e^(-x) dx, you will know instantly that the answer is p! (as long as p is an integer; Γ(p+1) otherwise). This is a lot simpler than doing integration by parts many times. Note also that this trick makes the moments of the gamma quite easy to compute. For example, if X is standard gamma with α = 5, the expression for E(X) is the integral of the density of a standard gamma with α = 6, but with a different constant. So E(X) = Γ(6)/Γ(5) = 5. Of course, this particular integral is also known from the formula E(X) = αβ, where α = 5 and β = 1.

d) SD(X) = √V(X), where V(X) = E(X²) - E²(X). E(X²) = ∫_1^3 x²·(3/2)x^(-2) dx = (3/2)(3 - 1) = 3. So SD(X) = (3 - 1.648²)^(1/2) = .533.

Note: Again check for reasonableness. The range of X is 1 to 3, so an SD of .533 seems OK.

e) h(X) = portion of reaction time that the light is on:
 = 0 if X < 1.5
 = X - 1.5 if 1.5 ≤ X ≤ 2.5
 = 1 if 2.5 < X ≤ 3

Therefore E(h(X)) = ∫_{1.5}^{2.5} (x - 1.5)·(3/2)x^(-2) dx + ∫_{2.5}^{3} 1·(3/2)x^(-2) dx = (3/2)·ln(2.5/1.5) - 1.5·(2/5) + 1/10 = .266.

Note: An alternative approach is to express E(h(X)) as 0·P(1 < X < 1.5) + E(X - 1.5 | 1.5 < X < 2.5)·P(1.5 < X < 2.5) + 1·P(2.5 < X ≤ 3) = 0 + ∫_{1.5}^{2.5} (x - 1.5)·(3/2)x^(-2) dx + 1/10 = .266.

--------------

(The exercise I had planned next on p 201 turned out to be an assignment question, so instead I'll do 109, p 201.)

109. p 201

X = ln(I_o/I_i) is defined as the current gain, and is assumed N(µ = 1, σ = 0.05).

a) I_o/I_i is a random variable whose natural logarithm is normal, and this is the definition of a lognormal random variable (we omitted it, on p 184, but it is useful so I will mention it here).

b) The required probability is P(X > ln(2)) = P((X - 1)/.05 > (ln(2) - 1)/.05) = P(Z > -6.14) = 1 - 0 = 1.

c) Use the formulas on p 184: the mean is 2.72 and the variance is .0185.

Note: The naming of the lognormal distribution is a bit perplexing. It is not the logarithm of a normal RV, but rather the RV whose logarithm is normal. Its parameters are just the parameters of the normal distribution that results after taking the logarithm. But the mean and variance of the lognormal itself are given (in terms of the mean and SD of the associated normal) by the formulas on p 184. Note the strong right skewness of the lognormal distribution. When you see a sample data distribution that has a strong right skew, taking the log of the data will often convert it to normal, and then all the strategies of the normal distribution can be used. It is very common for data to have a lognormal distribution. For example, personal income distributions are usually of this shape. Also, imagine the shape of the distribution of earthquake sizes: you see a Richter size of 2 very often, 4 occasionally, 6 rarely, and 8 very rarely. The distribution would certainly be very right skewed. Although the lognormal distribution is in section 4.5, which I said we would omit, you should know about it.
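The lognormal mean and variance quoted above come from the standard formulas mean = exp(µ + σ²/2) and variance = (exp(σ²) - 1)·exp(2µ + σ²), where µ and σ are the parameters of the associated normal. A quick check with the current-gain parameters µ = 1, σ = 0.05 (the helper name is mine):

```python
import math

def lognormal_moments(mu, sigma):
    """Mean and variance of exp(Z) when Z ~ N(mu, sigma^2) -- standard formulas."""
    mean = math.exp(mu + sigma**2 / 2.0)
    var = (math.exp(sigma**2) - 1.0) * math.exp(2.0 * mu + sigma**2)
    return mean, var

mean, var = lognormal_moments(1.0, 0.05)
print(round(mean, 2), round(var, 4))   # about 2.72 and .0185
```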