Statistical Distributions

Univariate Normal Probability Density Function

A random variable $x$ is normally distributed if, and only if, its probability density function has the following form:
\[
\mathrm{Prob}(x\mid\theta,\sigma)=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left[-\frac{1}{2\sigma^{2}}(x-\theta)^{2}\right],\qquad -\infty<x<\infty. \tag{1}
\]
This probability density function has two parameters: a location parameter $\theta$, with $-\infty<\theta<\infty$, and a scale parameter $\sigma$, with $0<\sigma<\infty$. It is often denoted by $N(\theta,\sigma^{2})$. The function has a single mode, or maximum frequency, at $x=\theta$, and at that point $\mathrm{Prob}(x=\theta)=1/(\sqrt{2\pi}\,\sigma)$. If $\sigma=1$ (the standard normal), this maximum probability is about $0.4$.

[Figure: normal probability density functions for several $(\sigma,\theta)$ pairs; the shaded area within two standard deviations of $\theta$ is $0.9545$.]

We first prove that the function described in equation (1) is a proper, normalized probability density function; that is, $\int_{-\infty}^{\infty}\mathrm{Prob}(x\mid\theta,\sigma)\,dx=1$. First note that $\mathrm{Prob}(x\mid\theta,\sigma)>0$ for all $x$ with $-\infty<x<\infty$. Make the change of variable $z=(x-\theta)/\sigma$ to obtain
\[
\mathrm{Prob}(z)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^{2}}{2}\right),\qquad -\infty<z<\infty. \tag{2}
\]
Let $u=z^{2}/2$, so that $0<u<\infty$ and $du=z\,dz$, or $dz=du/z=du/\sqrt{2u}$. Using these substitutions, note that
\[
\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^{2}}{2}\right)dz
=\frac{2}{\sqrt{2\pi}}\int_{0}^{\infty}\exp(-u)\frac{du}{\sqrt{2u}}
=\frac{1}{\sqrt{\pi}}\int_{0}^{\infty}u^{-1/2}\exp(-u)\,du. \tag{3}
\]
The factor $2$ appears in the numerator to take account of the area under both positive and negative values of $z$. The right-hand side of this expression involves the gamma function with argument $\tfrac{1}{2}$; the gamma function, denoted $\Gamma(q)$, is defined as
\[
\Gamma(q)=\int_{0}^{\infty}u^{q-1}\exp(-u)\,du,\qquad 0<q<\infty. \tag{4}
\]
The integral above is therefore equal to $(1/\sqrt{\pi})\,\Gamma(\tfrac{1}{2})$, and since it is shown in calculus that $\Gamma(\tfrac{1}{2})=\sqrt{\pi}$, we conclude that the right-hand side is equal to one.
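As a quick numerical sanity check (not part of the original notes), the normalization and the mode height of equation (1) can be verified by direct integration; the parameter values below are arbitrary illustrations.

```python
import numpy as np
from scipy import integrate

def normal_pdf(x, theta, sigma):
    """Density in equation (1)."""
    return np.exp(-(x - theta) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

theta, sigma = 1.5, 2.0                        # arbitrary illustrative values
total, _ = integrate.quad(normal_pdf, -np.inf, np.inf, args=(theta, sigma))
print(total)                                   # ~1.0: the density integrates to one
print(normal_pdf(theta, theta, sigma))         # mode height 1/(sqrt(2*pi)*sigma)
print(normal_pdf(0.0, 0.0, 1.0))               # ~0.3989 for the standard normal
```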

Moment Generating Function for Normal Distribution

The moment generating function is defined as the expected value of $\exp(tx)$; for the standardized variable $z$,
\[
M(z,t)=\int_{-\infty}^{\infty}\exp(tz)\,\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^{2}}{2}\right)dz. \tag{5}
\]
This integral may be evaluated by completing the square in the exponent and observing that the integrand becomes the normal density with mean $t$ and variance $1$. That is,
\[
M(z,t)=\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^{2}-2zt}{2}\right)dz
=\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^{2}-2zt+t^{2}-t^{2}}{2}\right)dz
=\exp(t^{2}/2)\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{(z-t)^{2}}{2}\right)dz
=\exp(t^{2}/2). \tag{6}
\]
The expression simplifies because the remaining integral is equal to $1$: it is the integral of the density of a random variable $z$ with mean $t$ and variance $1$. Moreover, to convert the moment generating function with respect to $x$, note that $M(z+\theta,t)=\exp(\theta t)M(z,t)$ and $M(z\sigma,t)=M(z,t\sigma)$. Using these two results, it is possible to write the general expression for the moment generating function of the normal distribution as
\[
M(x,t\mid\theta,\sigma)=\exp\left[\theta t+\sigma^{2}t^{2}/2\right]. \tag{7}
\]
Equation (7) can be used to determine the $r$th moment by differentiating the moment generating function $r$ times and then evaluating the resulting expression at $t=0$. That is,
\[
\begin{aligned}
m_{1}&=\left[\frac{\partial M(x,t)}{\partial t}\right]_{t=0}
=\left.(\theta+\sigma^{2}t)\exp\left[\theta t+\sigma^{2}t^{2}/2\right]\right|_{t=0}=\theta\\
m_{2}&=\left[\frac{\partial^{2}M(x,t)}{\partial t^{2}}\right]_{t=0}
=\left.\left[(\theta+\sigma^{2}t)^{2}+\sigma^{2}\right]\exp\left[\theta t+\sigma^{2}t^{2}/2\right]\right|_{t=0}=\theta^{2}+\sigma^{2}\\
m_{3}&=\left[\frac{\partial^{3}M(x,t)}{\partial t^{3}}\right]_{t=0}
=\left.\left[(\theta+\sigma^{2}t)^{3}+3\sigma^{2}(\theta+\sigma^{2}t)\right]\exp\left[\theta t+\sigma^{2}t^{2}/2\right]\right|_{t=0}=\theta^{3}+3\sigma^{2}\theta\\
m_{4}&=\left[\frac{\partial^{4}M(x,t)}{\partial t^{4}}\right]_{t=0}
=\left.\left[(\theta+\sigma^{2}t)^{4}+6\sigma^{2}(\theta+\sigma^{2}t)^{2}+3\sigma^{4}\right]\exp\left[\theta t+\sigma^{2}t^{2}/2\right]\right|_{t=0}=\theta^{4}+6\sigma^{2}\theta^{2}+3\sigma^{4}
\end{aligned} \tag{8}
\]
These moments about zero can be used to obtain the mean, variance, skewness and kurtosis using the following relationships:
\[
\begin{aligned}
\text{mean }\mu&=m_{1}\\
\text{variance }\mu_{2}&=m_{2}-m_{1}^{2}\\
\text{skewness }\mu_{3}&=m_{3}-3m_{2}m_{1}+2m_{1}^{3}\\
\text{kurtosis }\mu_{4}&=m_{4}-4m_{1}m_{3}+6m_{1}^{2}m_{2}-3m_{1}^{4}
\end{aligned} \tag{9}
\]
Using equation (9), the mean is equal to $\theta$ and the variance is equal to $\sigma^{2}$. After some algebraic manipulation, we conclude that the skewness is equal to zero. Finally, after considerable algebraic manipulation, we may write the kurtosis as $3\sigma^{4}$. Departure from symmetry is measured by the coefficient of skewness ($\gamma_{1}$), which is the ratio of $\mu_{3}$ to $\mu_{2}^{3/2}$, or $\gamma_{1}=\mu_{3}/\mu_{2}^{3/2}$. To characterize peakedness as well as the long tails of a distribution, the coefficient of kurtosis ($\gamma_{2}$) is employed, equal to the ratio of $\mu_{4}$ to the square of $\mu_{2}$ minus $3$, or $\gamma_{2}=\mu_{4}/\mu_{2}^{2}-3$. For $\gamma_{2}>0$ the observed distribution is leptokurtic (more sharply peaked) than the normal curve, and for $\gamma_{2}<0$ it is platykurtic (more flat-topped). Both $\gamma_{1}$ and $\gamma_{2}$ are zero for the normal distribution.
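The derivatives in equation (8) can be checked symbolically; a minimal sketch (not from the original notes) using sympy to differentiate the moment generating function (7) follows.

```python
import sympy as sp

t, theta, sigma = sp.symbols('t theta sigma', positive=True)
M = sp.exp(theta * t + sigma**2 * t**2 / 2)      # moment generating function (7)

# r-th moment about zero = r-th derivative of M evaluated at t = 0, as in (8)
for r in range(1, 5):
    moment = sp.diff(M, t, r).subs(t, 0)
    print(r, sp.expand(moment))
# prints theta, theta**2 + sigma**2, theta**3 + 3*sigma**2*theta,
#        theta**4 + 6*sigma**2*theta**2 + 3*sigma**4
```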

Gamma Probability Density Function

The gamma probability density function is closely related to the gamma function. A random variable $x$ is distributed according to the gamma distribution if, and only if, its probability density function is given by
\[
\mathrm{Prob}(x\mid\alpha,\gamma)=\frac{x^{\alpha-1}}{\Gamma(\alpha)\gamma^{\alpha}}\exp\left[-\frac{x}{\gamma}\right],\qquad 0<x<\infty, \tag{10}
\]
where $\alpha$ and $\gamma$ are strictly positive parameters; that is, $\alpha,\gamma>0$. This probability density function is often denoted by $G(\alpha,\gamma)$. From equation (10) note that $\gamma$ is a scale parameter. When $\alpha$ is greater than or equal to one, the probability density function has a single mode at $x=\gamma(\alpha-1)$. For small values of $\alpha$ the probability density function has a long tail to the right. For larger values of $\alpha$, and for any given value of $\gamma$, the probability density function becomes more symmetric and approaches a normal form.

[Figure: gamma probability density functions for several values of $\alpha$, including $\alpha=3$ and $\alpha=5$, plotted against $z$.]

The gamma probability density function can be brought into a standardized form by the change of variable $z=x/\gamma$, which results in $x^{\alpha-1}=z^{\alpha-1}\gamma^{\alpha-1}$ and $dz=dx/\gamma$. This means that $x^{\alpha-1}\,dx=z^{\alpha-1}\gamma^{\alpha}\,dz$. Consequently, the standardized form of the gamma probability density function may be written as
\[
\mathrm{Prob}(z\mid\alpha)=\frac{1}{\Gamma(\alpha)}z^{\alpha-1}\exp(-z),\qquad 0<z<\infty. \tag{11}
\]
Application of equation (4) leads to the conclusion that the probability density function in equation (11) is a proper probability density function.

Moment Generating Function for Gamma

To obtain the moment generating function, the following integral must be evaluated:
\[
M(z,t)=\int_{0}^{\infty}\exp(tz)\frac{1}{\Gamma(\alpha)}z^{\alpha-1}\exp(-z)\,dz. \tag{12}
\]
With the substitution $u=z(1-t)$, note that $z=u/(1-t)$ and $dz=du/(1-t)$. Substituting these in equation (12) results in
\[
M(z,t)=\frac{1}{\Gamma(\alpha)}\int_{0}^{\infty}\left[\frac{u}{1-t}\right]^{\alpha-1}\exp(-u)\frac{du}{1-t}
=\frac{1}{(1-t)^{\alpha}\Gamma(\alpha)}\int_{0}^{\infty}u^{\alpha-1}\exp(-u)\,du
=(1-t)^{-\alpha}. \tag{13}
\]
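As an illustrative check (not from the original notes), the closed form $(1-t)^{-\alpha}$ in equation (13) can be compared with a direct numerical evaluation of the integral in equation (12); the values of $\alpha$ and $t$ below are arbitrary.

```python
import numpy as np
from scipy import integrate, special

def gamma_mgf_numeric(alpha, t):
    """Evaluate the integral in equation (12) directly (requires t < 1)."""
    integrand = lambda z: np.exp(t * z) * z ** (alpha - 1) * np.exp(-z) / special.gamma(alpha)
    value, _ = integrate.quad(integrand, 0, np.inf)
    return value

alpha, t = 2.5, 0.3                          # arbitrary illustrative values
print(gamma_mgf_numeric(alpha, t))           # numerical value of the integral
print((1 - t) ** (-alpha))                   # closed form from equation (13)
```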

Repeated differentiation with respect to $t$, setting $t=0$, yields the following four moments:
\[
\begin{aligned}
\frac{\partial M(z,t)}{\partial t}&=\alpha(1-t)^{-(\alpha+1)} & m_{1}&=\alpha\\
\frac{\partial^{2}M(z,t)}{\partial t^{2}}&=\alpha(\alpha+1)(1-t)^{-(\alpha+2)} & m_{2}&=\alpha(\alpha+1)\\
\frac{\partial^{3}M(z,t)}{\partial t^{3}}&=\alpha(\alpha+1)(\alpha+2)(1-t)^{-(\alpha+3)} & m_{3}&=\alpha(\alpha+1)(\alpha+2)\\
\frac{\partial^{4}M(z,t)}{\partial t^{4}}&=\alpha(\alpha+1)(\alpha+2)(\alpha+3)(1-t)^{-(\alpha+4)} & m_{4}&=\alpha(\alpha+1)(\alpha+2)(\alpha+3)
\end{aligned} \tag{14}
\]
This results in the mean and variance of the gamma distribution both being $\alpha$, and the skewness and kurtosis being equal to $2\alpha$ and $3\alpha(\alpha+2)$ respectively. Note that the coefficient of skewness is equal to $2/\sqrt{\alpha}$ and the coefficient of kurtosis is $6/\alpha$. For $\alpha>0$, both of these coefficients approach zero as $\alpha$ becomes larger. Note also that the maximum value of the distribution occurs when $z=\alpha-1$. This is different from the normal distribution, where the mean of the distribution is also the maximum (mode) of the distribution function.

If the gamma distribution contains the parameter $\gamma$ as given in equation (10), then the mean and variance are equal to $\alpha\gamma$ and $\alpha\gamma^{2}$ respectively. Similarly, the skewness and kurtosis are equal to $2\alpha\gamma^{3}$ and $3\alpha(\alpha+2)\gamma^{4}$ respectively. These results can be obtained by noting that $M(z,t)=(1-t\gamma)^{-\alpha}$ and then following the steps above. Note that changing the scale of the distribution has no impact on the coefficients of skewness and kurtosis.

Some authors prefer to write the gamma distribution as follows:
\[
\mathrm{Prob}(x\mid\alpha,\gamma)=\frac{\gamma^{\alpha}x^{\alpha-1}\exp(-x\gamma)}{\Gamma(\alpha)},\qquad 0<x<\infty, \tag{15}
\]
where $\alpha$ and $\gamma$ are strictly positive parameters; that is, $\alpha,\gamma>0$. Note that the moment generating function for this form of the gamma distribution is $M(z,t)=(1-t/\gamma)^{-\alpha}$. As a result, the mean and variance are equal to $\alpha/\gamma$ and $\alpha/\gamma^{2}$ respectively.

Exponential Distribution

When $\alpha=1$ and $\gamma=\lambda$, the resulting distribution is the exponential distribution, written as
\[
\mathrm{Prob}(x\mid\lambda)=\frac{1}{\lambda}\exp\left[-\frac{x}{\lambda}\right],\qquad 0<x<\infty, \tag{16}
\]
where $\lambda>0$. This distribution has a closed-form cumulative distribution function, which does not exist for the normal or the gamma distribution. It is given by $F(x\mid\lambda)=1-\exp(-x/\lambda)$.
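These gamma coefficients can be checked against library values; a minimal sketch assuming an arbitrary shape $\alpha$ and scale $\gamma$ (scipy's `scale` argument plays the role of $\gamma$ in equation (10)).

```python
import numpy as np
from scipy import stats

alpha, gam = 3.5, 2.0                        # arbitrary illustrative values
mean, var, skew, exkurt = stats.gamma(a=alpha, scale=gam).stats(moments='mvsk')

print(mean, alpha * gam)                     # mean = alpha * gamma
print(var, alpha * gam ** 2)                 # variance = alpha * gamma^2
print(skew, 2 / np.sqrt(alpha))              # coefficient of skewness = 2 / sqrt(alpha)
print(exkurt, 6 / alpha)                     # coefficient of (excess) kurtosis = 6 / alpha
```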

[Figure: exponential probability density functions for several values of $\lambda$ between $0.5$ and $5$.]

We can also infer all four moments about the mean for the exponential distribution. The mean is equal to $\lambda$, the variance to $\lambda^{2}$, the skewness to $2\lambda^{3}$ and the kurtosis to $9\lambda^{4}$. Consequently, the coefficients of skewness and kurtosis are equal to $2$ and $6$, and are unaffected by the parameter of the distribution. The moment generating function for the exponential distribution is $M(x,t)=(1-t\lambda)^{-1}$. Note that as $\lambda$ increases, the probability density function becomes flatter, approaching a uniform shape. Finally, note that the maximum for the exponential distribution occurs at $x=0$.

$\chi^{2}$ Distribution Function

The $\chi^{2}$ probability density function is a special case of the $G$ probability density function given in equation (10), in which $\alpha=v/2$ and $\gamma=2$; that is, the $\chi^{2}$ probability density function has the following form:
\[
\mathrm{Prob}(x\mid v)=\frac{x^{v/2-1}\exp(-x/2)}{2^{v/2}\Gamma(v/2)},\qquad 0<x<\infty, \tag{17}
\]
where $v>0$. The parameter $v$ is usually referred to as the number of degrees of freedom. Since equation (17) is a special case of equation (10), the standardized-form moments are given by equation (14) with $\alpha$ replaced by $v/2$. The moments associated with the unstandardized form are as follows:
\[
\text{mean}=v,\qquad \text{variance}=2v,\qquad \text{skewness}=8v,\qquad \text{kurtosis}=12v(v+4).
\]
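A quick numerical check of these $\chi^{2}$ moments (an illustration, with an arbitrary $v$) can be made from the mean, variance, skewness and excess kurtosis reported by scipy.

```python
from scipy import stats

v = 7                                            # arbitrary degrees of freedom
mean, var, skew, exkurt = stats.chi2(df=v).stats(moments='mvsk')

mu3 = skew * var ** 1.5                          # third central moment
mu4 = (exkurt + 3) * var ** 2                    # fourth central moment
print(mean, v)                                   # mean = v
print(var, 2 * v)                                # variance = 2v
print(mu3, 8 * v)                                # skewness (third central moment) = 8v
print(mu4, 12 * v * (v + 4))                     # kurtosis (fourth central moment) = 12v(v+4)
```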

[Figure: $\chi^{2}$ probability density functions for several values of $v$, including $v=5$ and $v=8$.]

Using the above results, the coefficient of skewness is equal to $\sqrt{8/v}$ and the coefficient of kurtosis is $12/v$. As the figure indicates, the $\chi^{2}$ distribution approaches the normal distribution as $v$ increases. For smaller values of $v$, however, the $\chi^{2}$ distribution has a long tail on the right and is somewhat flatter in the middle. When $v=1$ or $2$, the shape of the distribution is similar to the exponential distribution, while for all other integer values the maximum of the distribution occurs at $x=v-2$.

A very important property of the $\chi^{2}$ probability density function is that any sum of squared independent, standardized normal random variables has a probability density function of the $\chi^{2}$ form; that is, if $y=z_{1}^{2}+z_{2}^{2}+\cdots+z_{n}^{2}$, where the $z_{i}$ are independent, standardized normal random variables, then $y$ has a probability density function of the form of equation (17) with $v=n$.

Consider the case of $v=1$. Let $z\sim N(0,1)$ and $y=z^{2}$. We can show that $y\sim G(1/2,2)$, or $\chi^{2}$ with $v=1$. Note that
\[
f_{y}(y)=f_{z}\!\left(h^{-1}(y)\right)\left|\frac{dh^{-1}(y)}{dy}\right|\qquad\text{for }a<y<b,
\]
where $|\cdot|$ stands for the absolute value and $a$ and $b$ refer to the smallest and largest values $z$ can take, respectively. To apply this transformation lemma, $h(z)$ must be either increasing or decreasing, but not both. Note that $h(z)=z^{2}$ is an increasing function for $z>0$ and a decreasing function for $z<0$. We can therefore apply the transformation lemma in two segments. That is, $z=\pm\sqrt{y}$, which means that $h^{-1}(y)=\pm\sqrt{y}$ and $\left|dh^{-1}(y)/dy\right|=1/(2\sqrt{y})$. Thus we may write, for $y>0$,
\[
f_{y}(y)=\left[f_{z}(\sqrt{y})+f_{z}(-\sqrt{y})\right]\frac{1}{2\sqrt{y}}
=\frac{1}{2\sqrt{y}}\left[\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{y}{2}\right)+\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{y}{2}\right)\right]
=\frac{y^{-1/2}}{2^{1/2}\Gamma(1/2)}\exp\left(-\frac{y}{2}\right). \tag{18}
\]
This is $G(1/2,2)$, or $\chi^{2}$ with $v=1$.
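The sum-of-squares property is easy to illustrate by simulation; a small sketch (with arbitrary $n$ and sample size) comparing simulated sums of squared standard normals with the $\chi^{2}$ distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, draws = 4, 100_000                         # arbitrary illustrative choices
z = rng.standard_normal((draws, n))
y = (z ** 2).sum(axis=1)                      # sum of n squared standard normals

print(y.mean(), n)                            # sample mean vs. v = n
print(y.var(), 2 * n)                         # sample variance vs. 2v
print(stats.kstest(y, 'chi2', args=(n,)))     # KS test against the chi-square(v = n) cdf
```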

The Inverse Gamma (IG) Probability Density Function

If $x$ has a gamma distribution with parameters $\alpha$ and $\gamma$ as given in equation (10), then $1/x$ has the inverse-gamma distribution. That is, the inverse-gamma distribution can be obtained by substituting $y=1/x$ in equation (10). To accomplish this transformation, note that $x=1/y$ and $|dx/dy|=1/y^{2}$. Substituting these in equation (10), the following may be obtained:
\[
\mathrm{Prob}(y\mid\alpha,\gamma)=\frac{y^{-(\alpha-1)}}{\Gamma(\alpha)\gamma^{\alpha}}\exp\left[-\frac{1}{y\gamma}\right]\frac{1}{y^{2}}
=\frac{y^{-(\alpha+1)}}{\Gamma(\alpha)\gamma^{\alpha}}\exp\left[-\frac{1}{y\gamma}\right],\qquad 0<y<\infty. \tag{19}
\]
Note that, by the gamma function,
\[
\int_{0}^{\infty}y^{-(\alpha+1)}\exp\left[-\frac{1}{y\gamma}\right]dy=\Gamma(\alpha)\gamma^{\alpha},
\]
which leads to the conclusion that the expression in (19) is a proper probability density function. Note also that the maximum of this function occurs at $y=1/\left[\gamma(\alpha+1)\right]$.² The following graph usefully describes the inverse gamma distribution.

² The maximum is unique because the second derivative of $\mathrm{Prob}(y)$ with respect to $y$ is negative at this value.

[Figure: inverse gamma probability density functions for various values of $\alpha$ and $\gamma$.]

Moments of Inverse Gamma Distribution

To obtain the moments of this distribution, it is not easy to derive the moment generating function for the probability density function in equation (19). To derive the first four moments, we use the relation between the expected value of $y$ and the moment generating function of $x$ given by Cressie et al. (1981).³ This relationship is given by
\[
E(y^{r})=\frac{1}{\Gamma(r)}\int_{0}^{\infty}t^{r-1}M(x,-t)\,dt,\qquad r>0.
\]
Since $M(x,t)$ for the gamma distribution is $(1-\gamma t)^{-\alpha}$, we may write $M(x,-t)=(1+\gamma t)^{-\alpha}$. Substituting this expression, we may write
\[
E(y^{r})=\frac{1}{\Gamma(r)}\int_{0}^{\infty}t^{r-1}(1+\gamma t)^{-\alpha}\,dt.
\]
To solve this integral, we first need to convert it into a beta-functional format, which can then be evaluated. To accomplish this, substitute $u=\gamma t/(1+\gamma t)$. Note that $1-u=1/(1+\gamma t)$, $t=u/\left[\gamma(1-u)\right]$ and $dt=du/\left[\gamma(1-u)^{2}\right]$. Note also that the substitution changes the limits of integration from $(0,\infty)$ to $(0,1)$. With these substitutions we may write
\[
E(y^{r})=\frac{1}{\Gamma(r)}\int_{0}^{1}\left[\frac{u}{\gamma(1-u)}\right]^{r-1}(1-u)^{\alpha}\frac{du}{\gamma(1-u)^{2}}
=\frac{1}{\Gamma(r)\gamma^{r}}\int_{0}^{1}u^{r-1}(1-u)^{\alpha-r-1}\,du
=\frac{\Gamma(\alpha-r)}{\Gamma(\alpha)\gamma^{r}}. \tag{20}
\]
The last step follows because, by the definition of the beta function,
\[
B(r,\alpha-r)=\int_{0}^{1}u^{r-1}(1-u)^{\alpha-r-1}\,du=\frac{\Gamma(r)\Gamma(\alpha-r)}{\Gamma(\alpha)}.
\]

³ Cressie, Noel, Anne S. Davis, J. Leroy Folks and George E. Policello II (1981), "The Moment-Generating Function and Negative Integer Moments," The American Statistician, vol. 35.

There is an alternative and simpler approach to obtain the moments of an inverse gamma random variable. Suppose we want the expected value of the $r$th power, that is, $E(y^{r})$, which is given by
\[
E(y^{r})=\int_{0}^{\infty}y^{r}\,\mathrm{Prob}(y\mid\alpha,\gamma)\,dy
=\int_{0}^{\infty}y^{r}\frac{y^{-(\alpha+1)}}{\Gamma(\alpha)\gamma^{\alpha}}\exp\left[-\frac{1}{y\gamma}\right]dy
=\frac{1}{\Gamma(\alpha)\gamma^{\alpha}}\int_{0}^{\infty}y^{-(\alpha-r+1)}\exp\left[-\frac{1}{y\gamma}\right]dy.
\]
Note that it follows from the definition of the $\Gamma(\cdot)$ function that
\[
\int_{0}^{\infty}x^{-(p+1)}\exp\left[-\frac{1}{ax}\right]dx=a^{p}\,\Gamma(p).
\]
In our case $p=\alpha-r$ and $a=\gamma$. Hence we may write
\[
E(y^{r})=\frac{\gamma^{\alpha-r}\Gamma(\alpha-r)}{\Gamma(\alpha)\gamma^{\alpha}}=\frac{\Gamma(\alpha-r)}{\Gamma(\alpha)\gamma^{r}},
\]
which is the same as (20). Using $\Gamma(\alpha)=(\alpha-1)\Gamma(\alpha-1)$ and equation (20), we may write the following moments of the inverse gamma distribution:
\[
\begin{aligned}
E(y)&=\frac{\Gamma(\alpha-1)}{\gamma\,\Gamma(\alpha)}=\frac{1}{\gamma(\alpha-1)}, & \alpha&>1\\
E(y^{2})&=\frac{\Gamma(\alpha-2)}{\gamma^{2}\Gamma(\alpha)}=\frac{1}{\gamma^{2}(\alpha-1)(\alpha-2)}, & \alpha&>2\\
E(y^{3})&=\frac{\Gamma(\alpha-3)}{\gamma^{3}\Gamma(\alpha)}=\frac{1}{\gamma^{3}\prod_{i=1}^{3}(\alpha-i)}, & \alpha&>3\\
E(y^{4})&=\frac{\Gamma(\alpha-4)}{\gamma^{4}\Gamma(\alpha)}=\frac{1}{\gamma^{4}\prod_{i=1}^{4}(\alpha-i)}, & \alpha&>4
\end{aligned}
\]
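These negative-moment formulas are easy to verify by direct integration of the density in (19); a small illustrative sketch with arbitrary $\alpha$ and $\gamma$ follows (α is chosen greater than 4 so that all four moments exist).

```python
import numpy as np
from scipy import integrate, special

alpha, gam = 6.0, 0.5                        # arbitrary illustrative values, alpha > 4

def invgamma_pdf(y):
    """Density in equation (19)."""
    return y ** (-(alpha + 1)) * np.exp(-1.0 / (y * gam)) / (special.gamma(alpha) * gam ** alpha)

for r in range(1, 5):
    numeric, _ = integrate.quad(lambda y: y ** r * invgamma_pdf(y), 0, np.inf)
    closed = special.gamma(alpha - r) / (special.gamma(alpha) * gam ** r)   # equation (20)
    print(r, numeric, closed)
```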

These could be used to derive the variance, skewness and kurtosis. After various algebraic manipulations, note that
\[
\begin{aligned}
\text{variance}&=\frac{1}{\gamma^{2}(\alpha-1)^{2}(\alpha-2)}\\
\text{skewness}&=\frac{4}{\gamma^{3}(\alpha-1)^{3}(\alpha-2)(\alpha-3)}\\
\text{kurtosis}&=\frac{3(\alpha+5)}{\gamma^{4}(\alpha-1)^{4}(\alpha-2)(\alpha-3)(\alpha-4)}
\end{aligned}
\]
The coefficient of skewness for the inverse gamma is then equal to
\[
\gamma_{1}=\frac{4\sqrt{\alpha-2}}{\alpha-3},
\]
and the coefficient of kurtosis is given by
\[
\gamma_{2}=\frac{6(5\alpha-11)}{(\alpha-3)(\alpha-4)}.
\]
The inverse gamma probability density function appears in many special forms, which may be derived from the one given above in equation (19). First consider a scaled version, which is obtained by substituting $\gamma=1/\beta$. This results in
\[
\mathrm{Prob}(y\mid\alpha,\beta)=\frac{y^{-(\alpha+1)}\beta^{\alpha}}{\Gamma(\alpha)}\exp\left[-\frac{\beta}{y}\right],\qquad 0<y<\infty. \tag{21}
\]
A summary of this and other forms of the inverse gamma distribution is provided in the following table. Just as the $\chi^{2}$ distribution is a special case of the gamma probability density function, the inverse $\chi^{2}$ and scaled inverse $\chi^{2}$ probability density functions are special cases of the IG distribution. To obtain the inverse $\chi^{2}$ distribution, substitute $\alpha=v/2$ and $\gamma=2$. This results in
\[
\mathrm{Prob}(y\mid v)=\frac{y^{-(v/2+1)}}{\Gamma(v/2)\,2^{v/2}}\exp\left[-\frac{1}{2y}\right],\qquad 0<y<\infty. \tag{22}
\]
Finally, the scaled inverse $\chi^{2}$ probability density function is obtained by substituting $\alpha=v/2$ and $\gamma=2/(vs^{2})$. This results in
\[
\mathrm{Prob}(y\mid v,s^{2})=\frac{y^{-(v/2+1)}(v/2)^{v/2}s^{v}}{\Gamma(v/2)}\exp\left[-\frac{vs^{2}}{2y}\right],\qquad 0<y<\infty. \tag{23}
\]
Zellner (1971)⁴ obtained the IG probability density function from the gamma probability density function in equation (10) by letting $y$ equal the positive square root of $1/x$; that is, $y=1/\sqrt{x}$. With this change of variable, $x=1/y^{2}$ and $|dx/dy|=2/y^{3}$, and thus the IG probability density function is
\[
\mathrm{Prob}(y\mid\gamma,\alpha)=\frac{2}{\Gamma(\alpha)\gamma^{\alpha}}\frac{1}{y^{2\alpha+1}}\exp\left[-\frac{1}{\gamma y^{2}}\right],\qquad 0<y<\infty, \tag{24}
\]
where $\gamma$ and $\alpha>0$.

⁴ Zellner, Arnold (1971), An Introduction to Bayesian Inference in Econometrics, Wiley: New York.
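As an illustration (not in the original notes), these shape-only coefficients can be compared with scipy's inverse-gamma implementation; `invgamma(a, scale=b)` corresponds to equation (19) with $b=1/\gamma$, and the coefficients do not depend on the scale. The value of $\alpha$ below is arbitrary.

```python
import numpy as np
from scipy import stats

alpha = 7.0                                  # arbitrary, alpha > 4 so gamma_2 exists
dist = stats.invgamma(a=alpha)               # scale-free check; coefficients ignore scale

_, _, skew, exkurt = dist.stats(moments='mvsk')
print(skew, 4 * np.sqrt(alpha - 2) / (alpha - 3))                   # gamma_1
print(exkurt, 6 * (5 * alpha - 11) / ((alpha - 3) * (alpha - 4)))   # gamma_2
```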

A Summary of Various Forms of Inverse Gamma Distributions

Original form, equation (19): mean $1/[\gamma(\alpha-1)]$; variance $1/[\gamma^{2}(\alpha-1)^{2}(\alpha-2)]$; mode $1/[\gamma(\alpha+1)]$.
Scaled IG, equation (21), substitution $\gamma=1/\beta$: mean $\beta/(\alpha-1)$; variance $\beta^{2}/[(\alpha-1)^{2}(\alpha-2)]$; mode $\beta/(\alpha+1)$.
Inverse $\chi^{2}$, equation (22), substitution $\alpha=v/2$, $\gamma=2$: mean $1/(v-2)$; variance $2/[(v-2)^{2}(v-4)]$; mode $1/(v+2)$.
Scaled inverse $\chi^{2}$, equation (23), substitution $\alpha=v/2$, $\gamma=2/(vs^{2})$: mean $vs^{2}/(v-2)$; variance $2v^{2}s^{4}/[(v-2)^{2}(v-4)]$; mode $vs^{2}/(v+2)$.

Since this probability density function is encountered frequently in connection with prior and posterior probability density functions for a standard deviation, in equation (24) let $\sigma=y$ (note that $\sigma$ here is a random variable), $\alpha=v/2$ and $\gamma=2/(vs^{2})$ to obtain
\[
\mathrm{Prob}(\sigma\mid v,s)=\frac{2}{\Gamma(v/2)}\left[\frac{vs^{2}}{2}\right]^{v/2}\frac{1}{\sigma^{v+1}}\exp\left[-\frac{vs^{2}}{2\sigma^{2}}\right],\qquad 0<\sigma<\infty, \tag{25}
\]
where $v,s>0$. The probability density function in equation (25) has a unique maximum, or mode, at
\[
\sigma=s\sqrt{\frac{v}{v+1}},
\]
and $\sigma\rightarrow s$ as $v$ tends to infinity. Note that at $v=10$ the mode is equal to $0.9535s$, while at $v=50$ it is equal to $0.99s$, which indicates that even for small $v$ the mode is very close to $s$. To obtain the moments of this form of the inverse gamma distribution, we need to evaluate
\[
E(\sigma^{r})=\int_{0}^{\infty}\sigma^{r}\,\mathrm{Prob}(\sigma\mid v,s)\,d\sigma
=\frac{2}{\Gamma(v/2)}\left[\frac{vs^{2}}{2}\right]^{v/2}\int_{0}^{\infty}\sigma^{-(v-r+1)}\exp\left[-\frac{vs^{2}}{2\sigma^{2}}\right]d\sigma.
\]
Note that
\[
\int_{0}^{\infty}x^{-(p+1)}\exp\left[-\frac{a}{x^{2}}\right]dx=\frac{1}{2}\,a^{-p/2}\,\Gamma\!\left(\frac{p}{2}\right).
\]
In our case $p=v-r$ and $a=vs^{2}/2$. Hence we may write
\[
E(\sigma^{r})=\frac{2}{\Gamma(v/2)}\left[\frac{vs^{2}}{2}\right]^{v/2}\frac{1}{2}\left[\frac{vs^{2}}{2}\right]^{-(v-r)/2}\Gamma\!\left(\frac{v-r}{2}\right)
=\left[\frac{vs^{2}}{2}\right]^{r/2}\frac{\Gamma\!\left((v-r)/2\right)}{\Gamma(v/2)},\qquad v>r. \tag{26}
\]
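A brief numerical illustration (with arbitrary $v$ and $s$, not from the original notes) of the mode $s\sqrt{v/(v+1)}$ and of equation (26):

```python
import numpy as np
from scipy import integrate, special

v, s = 10.0, 2.0                                  # arbitrary illustrative values

def sigma_pdf(sig):
    """Density in equation (25)."""
    c = 2.0 / special.gamma(v / 2) * (v * s ** 2 / 2) ** (v / 2)
    return c * sig ** (-(v + 1)) * np.exp(-v * s ** 2 / (2 * sig ** 2))

grid = np.linspace(0.01, 5 * s, 200_000)
print(grid[np.argmax(sigma_pdf(grid))], s * np.sqrt(v / (v + 1)))    # numeric mode vs. s*sqrt(v/(v+1))

r = 1
numeric, _ = integrate.quad(lambda sig: sig ** r * sigma_pdf(sig), 0, np.inf)
closed = (v * s ** 2 / 2) ** (r / 2) * special.gamma((v - r) / 2) / special.gamma(v / 2)
print(numeric, closed)                                               # E(sigma) via quad vs. equation (26)
```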

[Figure: probability density functions of the "inverse squared gamma" form of equation (25) for various values of $v$ and $s$.]

Equation (26) gives a compact expression for the moments about zero. The $\Gamma$ ratio reduces to a simple closed form only when $r$ is even. That is,
\[
\begin{aligned}
m_{1}&=E(\sigma)=\frac{\Gamma\!\left(\frac{v-1}{2}\right)}{\Gamma\!\left(\frac{v}{2}\right)}\sqrt{\frac{v}{2}}\,s, & v&>1\\
m_{2}&=E(\sigma^{2})=\frac{\Gamma\!\left(\frac{v-2}{2}\right)}{\Gamma\!\left(\frac{v}{2}\right)}\frac{v}{2}s^{2}=\frac{vs^{2}}{v-2}, & v&>2\\
m_{3}&=E(\sigma^{3})=\frac{\Gamma\!\left(\frac{v-3}{2}\right)}{\Gamma\!\left(\frac{v}{2}\right)}\left[\frac{v}{2}\right]^{3/2}s^{3}, & v&>3\\
m_{4}&=E(\sigma^{4})=\frac{\Gamma\!\left(\frac{v-4}{2}\right)}{\Gamma\!\left(\frac{v}{2}\right)}\left[\frac{v}{2}\right]^{2}s^{4}=\frac{v^{2}s^{4}}{(v-2)(v-4)}, & v&>4
\end{aligned}
\]
To get a better feel for these moments, the following table provides the computed moments about zero and about the mean, as well as the coefficients of skewness ($\gamma_{1}$) and kurtosis ($\gamma_{2}$), for various values of $v$.

[Table: computed first four moments of the inverse gamma of equation (25) when $s=1$ ($m_{1},m_{2},m_{3},m_{4}$, the variance, $\gamma_{1}$ and $\gamma_{2}$) for various $v$; the numerical entries are not reproduced in this transcription.]

We can conclude that as $v$ becomes larger, the measures associated with skewness and kurtosis approach those of the normal distribution. Note also that for small $v$ the distribution implied by equation (25) is skewed to the right and slightly flat in the middle.
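The numerical entries of that table were lost in this transcription; the sketch below recomputes the same quantities from equation (26) for a few illustrative values of $v$ with $s=1$ (the $v$ values shown are assumptions, not necessarily those used in the original table).

```python
from scipy.special import gamma as G

def moment(r, v, s=1.0):
    """E(sigma^r) from equation (26)."""
    return (v * s ** 2 / 2) ** (r / 2) * G((v - r) / 2) / G(v / 2)

for v in (6, 10, 20, 50):                       # illustrative degrees of freedom
    m1, m2, m3, m4 = (moment(r, v) for r in (1, 2, 3, 4))
    var = m2 - m1 ** 2
    mu3 = m3 - 3 * m1 * m2 + 2 * m1 ** 3        # third central moment, as in (9)
    mu4 = m4 - 4 * m1 * m3 + 6 * m1 ** 2 * m2 - 3 * m1 ** 4
    g1, g2 = mu3 / var ** 1.5, mu4 / var ** 2 - 3
    print(v, round(m1, 4), round(m2, 4), round(m3, 4), round(m4, 4),
          round(var, 4), round(g1, 4), round(g2, 4))
```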

The Beta (B) Probability Density Function

A random variable $x$ is said to be distributed according to the beta distribution if, and only if, its probability density function has the following form:
\[
\mathrm{Prob}(x\mid\alpha,\beta,c)=\frac{1}{c\,B(\alpha,\beta)}\left[\frac{x}{c}\right]^{\alpha-1}\left[1-\frac{x}{c}\right]^{\beta-1},\qquad 0\le x\le c, \tag{27}
\]
where $\alpha$, $\beta$ and $c>0$ are parameters of the distribution and $B(\alpha,\beta)$ denotes the beta function, which is equal to
\[
B(\alpha,\beta)=\int_{0}^{1}z^{\alpha-1}(1-z)^{\beta-1}\,dz=\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)},\qquad 0<\alpha,\beta<\infty,
\]
which converges for all values of $\alpha$ and $\beta$ greater than $0$. Note that the probability density function given in equation (27) ranges from $0$ to $c$. By a change of variable, $z=x/c$, we can obtain the standardized beta probability density function,
\[
\mathrm{Prob}(z\mid\alpha,\beta)=\frac{1}{B(\alpha,\beta)}z^{\alpha-1}(1-z)^{\beta-1},\qquad 0\le z\le 1, \tag{28}
\]
which has a range of zero to one. From the definition of the beta function, we conclude that equation (28) is a proper probability density function. The beta distribution takes a variety of shapes depending on the parameter values of $\alpha$ and $\beta$. For $\alpha$ and $\beta>1$, there is a unique maximum⁵ or mode for the probability density function given in equation (28) at
\[
z=\frac{\alpha-1}{\alpha+\beta-2},
\]
and for $0<\alpha,\beta<1$ the probability density function approaches $\infty$ as $z$ approaches $0$ or $1$.

⁵ The maximum is unique because the second derivative of $\mathrm{Prob}(z)$ with respect to $z$ is negative at this point for $\alpha$ and $\beta>1$.

[Figure: beta probability density functions for various values of $\alpha$ and $\beta$.]

To obtain the moments of the beta distribution, we need to evaluate
\[
E(z^{r})=\int_{0}^{1}z^{r}\,\mathrm{Prob}(z\mid\alpha,\beta)\,dz
=\int_{0}^{1}z^{r}\frac{1}{B(\alpha,\beta)}z^{\alpha-1}(1-z)^{\beta-1}\,dz
=\frac{1}{B(\alpha,\beta)}\int_{0}^{1}z^{\alpha+r-1}(1-z)^{\beta-1}\,dz.
\]
Note that the integral in the above expression is a beta function with parameters $(\alpha+r)$ and $\beta$. Consequently,
\[
E(z^{r})=\frac{B(\alpha+r,\beta)}{B(\alpha,\beta)}=\frac{\Gamma(\alpha+\beta)\Gamma(\alpha+r)}{\Gamma(\alpha+\beta+r)\Gamma(\alpha)}, \tag{29}
\]
which is a simple expression, though not always easy to compute directly. Using the recursive relationship $\Gamma(\alpha+1)=\alpha\Gamma(\alpha)$, the first four moments about zero can be written as
\[
\begin{aligned}
m_{1}&=E(z)=\frac{\alpha}{\alpha+\beta}\\
m_{2}&=E(z^{2})=\frac{\alpha(\alpha+1)}{(\alpha+\beta)(\alpha+\beta+1)}\\
m_{3}&=E(z^{3})=\frac{\alpha(\alpha+1)(\alpha+2)}{(\alpha+\beta)(\alpha+\beta+1)(\alpha+\beta+2)}\\
m_{4}&=E(z^{4})=\frac{\alpha(\alpha+1)(\alpha+2)(\alpha+3)}{(\alpha+\beta)(\alpha+\beta+1)(\alpha+\beta+2)(\alpha+\beta+3)}
\end{aligned}
\]
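These moment formulas can be checked against library values; a minimal sketch with arbitrary shape parameters (scipy's `moment(n)` returns the $n$th moment about zero).

```python
from scipy import stats

a, b = 3.0, 5.0                                          # arbitrary illustrative values
dist = stats.beta(a, b)

print(dist.moment(1), a / (a + b))
print(dist.moment(2), a * (a + 1) / ((a + b) * (a + b + 1)))
print(dist.moment(3), a * (a + 1) * (a + 2) / ((a + b) * (a + b + 1) * (a + b + 2)))
print(dist.moment(4), a * (a + 1) * (a + 2) * (a + 3)
      / ((a + b) * (a + b + 1) * (a + b + 2) * (a + b + 3)))
```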

Since the variance is equal to $m_{2}-m_{1}^{2}$, after algebraic manipulation we may write
\[
\text{Variance }\mu_{2}=\frac{\alpha\beta}{(\alpha+\beta)^{2}(\alpha+\beta+1)},
\]
and the skewness is equal to $m_{3}-3m_{2}m_{1}+2m_{1}^{3}$. After considerable algebraic manipulation, we may write
\[
\text{Skewness }\mu_{3}=\frac{2\alpha\beta(\beta-\alpha)}{(\alpha+\beta)^{3}(\alpha+\beta+1)(\alpha+\beta+2)}.
\]
Since the kurtosis is equal to $m_{4}-4m_{1}m_{3}+6m_{1}^{2}m_{2}-3m_{1}^{4}$, after considerable algebraic manipulation, we may write
\[
\text{Kurtosis }\mu_{4}=\frac{3\alpha\beta\left[\alpha^{2}(\beta+2)-2\alpha\beta+\beta^{2}(\alpha+2)\right]}{(\alpha+\beta)^{4}(\alpha+\beta+1)(\alpha+\beta+2)(\alpha+\beta+3)}.
\]
As can be seen from the above expressions, the raw skewness and kurtosis by themselves do not provide much insight into the distributional characteristics. Note that when $\alpha=\beta$, the skewness is zero, indicating that the distribution is symmetric about its middle point, and the coefficient of kurtosis, $\mu_{4}/\mu_{2}^{2}-3$, is equal to $-6/(2\alpha+3)$. This indicates that the beta distribution is more flat-topped, or platykurtic, than the normal curve. Based on the skewness expression, we may conclude that if $\beta$ is greater than $\alpha$ there is positive skewness and the distribution is likely to be skewed to the right; on the other hand, if $\beta$ is less than $\alpha$ there is negative skewness and the distribution is likely to be skewed to the left.

The standardized form of the beta probability density function is closely related to the standardized gamma distribution. Suppose $z_{1}$ and $z_{2}$ are two independent random gamma variables with parameters $\alpha$ and $\beta$ respectively. Then the random variable
\[
z=\frac{z_{1}}{z_{1}+z_{2}}
\]
has a standardized beta probability density function with parameters $\alpha$ and $\beta$. To prove this result, first note that the joint probability of two independent gamma variates is given by
\[
\mathrm{Prob}(z_{1},z_{2}\mid\alpha,\beta)=\frac{1}{\Gamma(\alpha)\Gamma(\beta)}z_{1}^{\alpha-1}z_{2}^{\beta-1}\exp\left[-(z_{1}+z_{2})\right]dz_{1}\,dz_{2}.
\]
First, change variables as follows: $v=z_{1}+z_{2}$ and $z=z_{1}/(z_{1}+z_{2})$. Thus
\[
g(z_{1},z_{2})=\left(z_{1}+z_{2},\ \frac{z_{1}}{z_{1}+z_{2}}\right),\qquad
g^{-1}(v,z)=(zv,\ v-zv),
\]
where $z_{1}=zv$ and $z_{2}=v-zv$. The Jacobian of this transformation is given by
\[
\left|J_{g^{-1}}(v,z)\right|=\left|\det\begin{pmatrix}\partial z_{1}/\partial v & \partial z_{1}/\partial z\\ \partial z_{2}/\partial v & \partial z_{2}/\partial z\end{pmatrix}\right|
=\left|\det\begin{pmatrix}z & v\\ 1-z & -v\end{pmatrix}\right|=v.
\]
We may then write the joint probability under this transformation as
\[
\mathrm{Prob}(v,z\mid\alpha,\beta)=\mathrm{Prob}\left(g^{-1}(v,z)\mid\alpha,\beta\right)\left|J_{g^{-1}}(v,z)\right|
=\frac{1}{\Gamma(\alpha)\Gamma(\beta)}\exp(-v)(zv)^{\alpha-1}(v-zv)^{\beta-1}v\,dz\,dv
=\frac{1}{\Gamma(\alpha)\Gamma(\beta)}\exp(-v)v^{\alpha+\beta-1}z^{\alpha-1}(1-z)^{\beta-1}\,dz\,dv.
\]
Note that the integral with respect to $v$ is a gamma function with parameter $\alpha+\beta$; integrating out $v$ gives the required result.
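The gamma-ratio construction is easy to illustrate by simulation; a small sketch with arbitrary shape parameters and sample size follows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, beta, draws = 2.0, 5.0, 200_000              # arbitrary illustrative values

z1 = rng.gamma(alpha, 1.0, draws)                   # standardized gamma, shape alpha
z2 = rng.gamma(beta, 1.0, draws)                    # standardized gamma, shape beta
z = z1 / (z1 + z2)

print(z.mean(), alpha / (alpha + beta))             # sample mean vs. beta mean
print(stats.kstest(z, 'beta', args=(alpha, beta)))  # KS test against Beta(alpha, beta)
```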

Inverted Beta (IB) Distribution Function

A closely related distribution is the beta prime, or inverted beta, probability density function. If we substitute $z=1/(1+u)$ in equation (28), then the Jacobian of the transformation is given by $|dz/du|=1/(1+u)^{2}$, with $0\le u<\infty$. Thus, making this substitution in (28), we may write
\[
\mathrm{Prob}(u\mid\alpha,\beta)=\frac{1}{B(\alpha,\beta)}\left[\frac{1}{1+u}\right]^{\alpha-1}\left[\frac{u}{1+u}\right]^{\beta-1}\frac{1}{(1+u)^{2}}
=\frac{1}{B(\alpha,\beta)}\frac{u^{\beta-1}}{(1+u)^{\alpha+\beta}},\qquad 0\le u<\infty, \tag{30}
\]
with $\alpha$ and $\beta$ greater than $0$. Note that the integral
\[
\int_{0}^{\infty}\!\!\cdots\!\int_{0}^{\infty}\frac{x_{1}^{p_{1}-1}\cdots x_{n}^{p_{n}-1}}{\left(1+x_{1}+x_{2}+\cdots+x_{n}\right)^{p_{1}+\cdots+p_{n+1}}}\,dx_{1}\cdots dx_{n}
=\frac{\prod_{s=1}^{n+1}\Gamma(p_{s})}{\Gamma\!\left(\sum_{s=1}^{n+1}p_{s}\right)}
\]
is known as the inverted Dirichlet integral. As a consequence, we conclude that equation (30) is a proper probability density function.
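For reference (an observation, not from the notes), equation (30) matches scipy's `betaprime` density with its first shape set to $\beta$ and its second to $\alpha$; the parameter values below are arbitrary.

```python
import numpy as np
from scipy import stats, special

alpha, beta = 4.0, 2.5                          # arbitrary illustrative values
u = np.linspace(0.1, 6.0, 5)

pdf_eq30 = u ** (beta - 1) / ((1 + u) ** (alpha + beta) * special.beta(alpha, beta))
print(pdf_eq30)
print(stats.betaprime(beta, alpha).pdf(u))      # same values: betaprime(a=beta, b=alpha)
```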

[Figure: inverted beta probability density functions for various values of $\alpha$ and $\beta$.]

To obtain the moments of the inverted beta distribution, we need to evaluate
\[
E(u^{r})=\int_{0}^{\infty}u^{r}\,\mathrm{Prob}(u\mid\alpha,\beta)\,du
=\frac{1}{B(\alpha,\beta)}\int_{0}^{\infty}\frac{u^{r+\beta-1}}{(1+u)^{\alpha+\beta}}\,du.
\]
To evaluate the integral in the above expression, substitute $u=x/(1-x)$, so that $du=dx/(1-x)^{2}$, $1/(1+u)=1-x$, and the limits of integration become $0$ to $1$. Substituting these terms in the above integral, we may write
\[
\int_{0}^{\infty}\frac{u^{r+\beta-1}}{(1+u)^{\alpha+\beta}}\,du
=\int_{0}^{1}\left[\frac{x}{1-x}\right]^{r+\beta-1}(1-x)^{\alpha+\beta}\frac{dx}{(1-x)^{2}}
=\int_{0}^{1}x^{r+\beta-1}(1-x)^{\alpha-r-1}\,dx
=B(\beta+r,\alpha-r),\qquad r<\alpha.
\]
Consequently, we conclude that the moments of the inverted beta distribution are given by
\[
E(u^{r})=\frac{B(\beta+r,\alpha-r)}{B(\alpha,\beta)},\qquad r<\alpha. \tag{31}
\]
Using the recursive relationship $\Gamma(\alpha+1)=\alpha\Gamma(\alpha)$, the first four moments about zero can be written as
\[
\begin{aligned}
m_{1}&=E(u)=\frac{\beta}{\alpha-1}, & \alpha&>1\\
m_{2}&=E(u^{2})=\frac{\beta(\beta+1)}{(\alpha-1)(\alpha-2)}, & \alpha&>2\\
m_{3}&=E(u^{3})=\frac{\beta(\beta+1)(\beta+2)}{(\alpha-1)(\alpha-2)(\alpha-3)}, & \alpha&>3\\
m_{4}&=E(u^{4})=\frac{\beta(\beta+1)(\beta+2)(\beta+3)}{(\alpha-1)(\alpha-2)(\alpha-3)(\alpha-4)}, & \alpha&>4
\end{aligned}
\]

Since the variance is equal to $m_{2}-m_{1}^{2}$, after algebraic manipulation we may write
\[
\text{Variance }\mu_{2}=\frac{\beta(\alpha+\beta-1)}{(\alpha-1)^{2}(\alpha-2)},\qquad \alpha>2,
\]
and the skewness is equal to $m_{3}-3m_{1}m_{2}+2m_{1}^{3}$. After considerable algebraic manipulation, we may write
\[
\text{Skewness }\mu_{3}=\frac{2\beta\left[2\beta^{2}+3\beta(\alpha-1)+(\alpha-1)^{2}\right]}{(\alpha-1)^{3}(\alpha-2)(\alpha-3)},\qquad \alpha>3.
\]
Since this skewness is positive, we conclude that the inverted beta probability density function usually has a long tail to the right. Since the kurtosis is equal to $m_{4}-4m_{1}m_{3}+6m_{1}^{2}m_{2}-3m_{1}^{4}$, after considerable algebraic manipulation it may be written as
\[
\text{Kurtosis }\mu_{4}=\frac{3\beta\left[(\beta+2)(\alpha-1)^{3}+2\beta(\beta+4)(\alpha-1)^{2}+\beta^{2}(\beta+12)(\alpha-1)+6\beta^{3}\right]}{(\alpha-1)^{4}(\alpha-2)(\alpha-3)(\alpha-4)},\qquad \alpha>4.
\]
Unfortunately, such a complex expression provides very little insight into the nature of this distribution. The inverted beta probability density function has a unique maximum or mode,⁶ if $\beta>1$, at
\[
u=\frac{\beta-1}{\alpha+1}.
\]
An alternative form of the inverted beta probability density function can be obtained if we substitute $u=y/c$, with $c>0$, in equation (30); we then obtain
\[
\mathrm{Prob}(y\mid\alpha,\beta,c)=\frac{1}{c\,B(\alpha,\beta)}\frac{\left(y/c\right)^{\beta-1}}{\left(1+y/c\right)^{\alpha+\beta}},\qquad 0\le y<\infty, \tag{32}
\]
with $\alpha$, $\beta$ and $c>0$. We will show below that the univariate Student's $t$ probability density function, as well as the $F$ distribution, are special cases of equation (32). Since $y=cu$ and we already know the moments of the random variable $u$, the moments of $y$ are equal to $c^{r}m_{r}(u)$.

⁶ The mode is unique because the second derivative of $\mathrm{Prob}(u)$ with respect to $u$ is negative at this point for $\beta>1$.

Univariate t Probability Distribution Function

A random variable $x$ is distributed according to the $t$ probability density function if, and only if, it has the following functional form:
\[
\mathrm{Prob}(x\mid\theta,h,v)=\frac{\Gamma\!\left(\frac{v+1}{2}\right)h^{1/2}}{\Gamma\!\left(\frac{1}{2}\right)\Gamma\!\left(\frac{v}{2}\right)v^{1/2}}\left[1+\frac{h}{v}(x-\theta)^{2}\right]^{-\frac{v+1}{2}},\qquad -\infty<x<\infty, \tag{33}
\]
where $-\infty<\theta<\infty$, $0<h<\infty$, $v>0$ and where $\Gamma$ denotes the gamma function. This probability density function has three parameters, $\theta$, $h$ and $v$: $\theta$ is associated with location, $v$ is the degrees of freedom and $h$ is a scale (precision) parameter. To prove that equation (33) is a proper density, we convert it to normalized form by the change of variable $t=\sqrt{h}\,(x-\theta)$, with $dt=\sqrt{h}\,dx$:
\[
\mathrm{Prob}(t\mid v)=\frac{\Gamma\!\left(\frac{v+1}{2}\right)}{\Gamma\!\left(\frac{1}{2}\right)\Gamma\!\left(\frac{v}{2}\right)v^{1/2}}\left[1+\frac{t^{2}}{v}\right]^{-\frac{v+1}{2}},\qquad -\infty<t<\infty. \tag{34}
\]
Note that the various terms involving the $\Gamma$ function are equal to
\[
B\!\left(\tfrac{1}{2},\tfrac{v}{2}\right)=\frac{\Gamma\!\left(\frac{1}{2}\right)\Gamma\!\left(\frac{v}{2}\right)}{\Gamma\!\left(\frac{v+1}{2}\right)}.
\]
To show that the probability density function given in equation (34) is proper, substitute $z=\left(1+t^{2}/v\right)^{-1}$. Note that $t^{2}=v(1-z)/z$, $t=\sqrt{v}\,\sqrt{(1-z)/z}$, and
\[
|dt|=\frac{\sqrt{v}}{2}\,z^{-3/2}(1-z)^{-1/2}\,dz.
\]
Noting also that $\mathrm{Prob}(t\mid v)\,|dt|=\mathrm{Prob}(z\mid v)\,dz$, with a factor of two to account for the two tails $t<0$ and $t>0$, we have
\[
\mathrm{Prob}(z\mid v)=\frac{2}{B\!\left(\tfrac{1}{2},\tfrac{v}{2}\right)v^{1/2}}\,z^{\frac{v+1}{2}}\,\frac{\sqrt{v}}{2}\,z^{-3/2}(1-z)^{-1/2}
=\frac{1}{B\!\left(\tfrac{1}{2},\tfrac{v}{2}\right)}\,z^{\frac{v}{2}-1}(1-z)^{\frac{1}{2}-1},
\]
which is the standardized beta probability density function with parameters $v/2$ and $1/2$ and therefore integrates to one over $0\le z\le 1$. We conclude that the probability density function described by (34) is proper.
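As a final illustrative check (not from the notes), equation (33) coincides with the usual location-scale Student $t$ density with scale $1/\sqrt{h}$; the parameter values below are arbitrary.

```python
import numpy as np
from scipy import stats, special

theta, h, v = 1.0, 4.0, 7.0                     # arbitrary illustrative values
x = np.linspace(-3, 5, 5)

const = special.gamma((v + 1) / 2) * np.sqrt(h) / (
    special.gamma(0.5) * special.gamma(v / 2) * np.sqrt(v))
pdf_eq33 = const * (1 + h / v * (x - theta) ** 2) ** (-(v + 1) / 2)

print(pdf_eq33)
print(stats.t(df=v, loc=theta, scale=1 / np.sqrt(h)).pdf(x))   # identical values
```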

A Few Special Distributions and Their Properties

A Few Special Distributions and Their Properties A Few Special Distributions and Their Properties Econ 690 Purdue University Justin L. Tobias (Purdue) Distributional Catalog 1 / 20 Special Distributions and Their Associated Properties 1 Uniform Distribution

More information

0, otherwise, (a) Find the value of c that makes this a valid pdf. (b) Find P (Y < 5) and P (Y 5). (c) Find the mean death time.

0, otherwise, (a) Find the value of c that makes this a valid pdf. (b) Find P (Y < 5) and P (Y 5). (c) Find the mean death time. 1. In a toxicology experiment, Y denotes the death time (in minutes) for a single rat treated with a toxin. The probability density function (pdf) for Y is given by cye y/4, y > 0 (a) Find the value of

More information

Distributions of Functions of Random Variables. 5.1 Functions of One Random Variable

Distributions of Functions of Random Variables. 5.1 Functions of One Random Variable Distributions of Functions of Random Variables 5.1 Functions of One Random Variable 5.2 Transformations of Two Random Variables 5.3 Several Random Variables 5.4 The Moment-Generating Function Technique

More information

STA 256: Statistics and Probability I

STA 256: Statistics and Probability I Al Nosedal. University of Toronto. Fall 2017 My momma always said: Life was like a box of chocolates. You never know what you re gonna get. Forrest Gump. Exercise 4.1 Let X be a random variable with p(x)

More information

1 Uniform Distribution. 2 Gamma Distribution. 3 Inverse Gamma Distribution. 4 Multivariate Normal Distribution. 5 Multivariate Student-t Distribution

1 Uniform Distribution. 2 Gamma Distribution. 3 Inverse Gamma Distribution. 4 Multivariate Normal Distribution. 5 Multivariate Student-t Distribution A Few Special Distributions Their Properties Econ 675 Iowa State University November 1 006 Justin L Tobias (ISU Distributional Catalog November 1 006 1 / 0 Special Distributions Their Associated Properties

More information

Stat 5101 Notes: Brand Name Distributions

Stat 5101 Notes: Brand Name Distributions Stat 5101 Notes: Brand Name Distributions Charles J. Geyer February 14, 2003 1 Discrete Uniform Distribution DiscreteUniform(n). Discrete. Rationale Equally likely outcomes. The interval 1, 2,..., n of

More information

MAS223 Statistical Inference and Modelling Exercises

MAS223 Statistical Inference and Modelling Exercises MAS223 Statistical Inference and Modelling Exercises The exercises are grouped into sections, corresponding to chapters of the lecture notes Within each section exercises are divided into warm-up questions,

More information

MAS223 Statistical Inference and Modelling Exercises and Solutions

MAS223 Statistical Inference and Modelling Exercises and Solutions MAS3 Statistical Inference and Modelling Exercises and Solutions The exercises are grouped into sections, corresponding to chapters of the lecture notes Within each section exercises are divided into warm-up

More information

ECON 5350 Class Notes Review of Probability and Distribution Theory

ECON 5350 Class Notes Review of Probability and Distribution Theory ECON 535 Class Notes Review of Probability and Distribution Theory 1 Random Variables Definition. Let c represent an element of the sample space C of a random eperiment, c C. A random variable is a one-to-one

More information

CHAPTER 6 SOME CONTINUOUS PROBABILITY DISTRIBUTIONS. 6.2 Normal Distribution. 6.1 Continuous Uniform Distribution

CHAPTER 6 SOME CONTINUOUS PROBABILITY DISTRIBUTIONS. 6.2 Normal Distribution. 6.1 Continuous Uniform Distribution CHAPTER 6 SOME CONTINUOUS PROBABILITY DISTRIBUTIONS Recall that a continuous random variable X is a random variable that takes all values in an interval or a set of intervals. The distribution of a continuous

More information

Stat 5101 Notes: Brand Name Distributions

Stat 5101 Notes: Brand Name Distributions Stat 5101 Notes: Brand Name Distributions Charles J. Geyer September 5, 2012 Contents 1 Discrete Uniform Distribution 2 2 General Discrete Uniform Distribution 2 3 Uniform Distribution 3 4 General Uniform

More information

Common probability distributionsi Math 217 Probability and Statistics Prof. D. Joyce, Fall 2014

Common probability distributionsi Math 217 Probability and Statistics Prof. D. Joyce, Fall 2014 Introduction. ommon probability distributionsi Math 7 Probability and Statistics Prof. D. Joyce, Fall 04 I summarize here some of the more common distributions used in probability and statistics. Some

More information

MAS223 Statistical Modelling and Inference Examples

MAS223 Statistical Modelling and Inference Examples Chapter MAS3 Statistical Modelling and Inference Examples Example : Sample spaces and random variables. Let S be the sample space for the experiment of tossing two coins; i.e. Define the random variables

More information

Continuous Random Variables

Continuous Random Variables Continuous Random Variables Recall: For discrete random variables, only a finite or countably infinite number of possible values with positive probability. Often, there is interest in random variables

More information

Bayesian inference. Rasmus Waagepetersen Department of Mathematics Aalborg University Denmark. April 10, 2017

Bayesian inference. Rasmus Waagepetersen Department of Mathematics Aalborg University Denmark. April 10, 2017 Bayesian inference Rasmus Waagepetersen Department of Mathematics Aalborg University Denmark April 10, 2017 1 / 22 Outline for today A genetic example Bayes theorem Examples Priors Posterior summaries

More information

USEFUL PROPERTIES OF THE MULTIVARIATE NORMAL*

USEFUL PROPERTIES OF THE MULTIVARIATE NORMAL* USEFUL PROPERTIES OF THE MULTIVARIATE NORMAL* 3 Conditionals and marginals For Bayesian analysis it is very useful to understand how to write joint, marginal, and conditional distributions for the multivariate

More information

Continuous random variables

Continuous random variables Continuous random variables Continuous r.v. s take an uncountably infinite number of possible values. Examples: Heights of people Weights of apples Diameters of bolts Life lengths of light-bulbs We cannot

More information

ON THE NORMAL MOMENT DISTRIBUTIONS

ON THE NORMAL MOMENT DISTRIBUTIONS ON THE NORMAL MOMENT DISTRIBUTIONS Akin Olosunde Department of Statistics, P.M.B. 40, University of Agriculture, Abeokuta. 00, NIGERIA. Abstract Normal moment distribution is a particular case of well

More information

Eco517 Fall 2004 C. Sims MIDTERM EXAM

Eco517 Fall 2004 C. Sims MIDTERM EXAM Eco517 Fall 2004 C. Sims MIDTERM EXAM Answer all four questions. Each is worth 23 points. Do not devote disproportionate time to any one question unless you have answered all the others. (1) We are considering

More information

Probability and Distributions

Probability and Distributions Probability and Distributions What is a statistical model? A statistical model is a set of assumptions by which the hypothetical population distribution of data is inferred. It is typically postulated

More information

STA 4322 Exam I Name: Introduction to Statistics Theory

STA 4322 Exam I Name: Introduction to Statistics Theory STA 4322 Exam I Name: Introduction to Statistics Theory Fall 2013 UF-ID: Instructions: There are 100 total points. You must show your work to receive credit. Read each part of each question carefully.

More information

Sampling Distributions

Sampling Distributions In statistics, a random sample is a collection of independent and identically distributed (iid) random variables, and a sampling distribution is the distribution of a function of random sample. For example,

More information

Applied Probability Models in Marketing Research: Introduction

Applied Probability Models in Marketing Research: Introduction Applied Probability Models in Marketing Research: Introduction (Supplementary Materials for the A/R/T Forum Tutorial) Bruce G. S. Hardie London Business School bhardie@london.edu www.brucehardie.com Peter

More information

Probability. Table of contents

Probability. Table of contents Probability Table of contents 1. Important definitions 2. Distributions 3. Discrete distributions 4. Continuous distributions 5. The Normal distribution 6. Multivariate random variables 7. Other continuous

More information

Random Variables and Their Distributions

Random Variables and Their Distributions Chapter 3 Random Variables and Their Distributions A random variable (r.v.) is a function that assigns one and only one numerical value to each simple event in an experiment. We will denote r.vs by capital

More information

3 Continuous Random Variables

3 Continuous Random Variables Jinguo Lian Math437 Notes January 15, 016 3 Continuous Random Variables Remember that discrete random variables can take only a countable number of possible values. On the other hand, a continuous random

More information

2-1. xp X (x). (2.1) E(X) = 1 1

2-1. xp X (x). (2.1) E(X) = 1 1 - Chapter. Measuring Probability Distributions The full specification of a probability distribution can sometimes be accomplished quite compactly. If the distribution is one member of a parametric family,

More information

Appendix A Conjugate Exponential family examples

Appendix A Conjugate Exponential family examples Appendix A Conjugate Exponential family examples The following two tables present information for a variety of exponential family distributions, and include entropies, KL divergences, and commonly required

More information

Lecture 2: Conjugate priors

Lecture 2: Conjugate priors (Spring ʼ) Lecture : Conjugate priors Julia Hockenmaier juliahmr@illinois.edu Siebel Center http://www.cs.uiuc.edu/class/sp/cs98jhm The binomial distribution If p is the probability of heads, the probability

More information

Probability and Statistics

Probability and Statistics Kristel Van Steen, PhD 2 Montefiore Institute - Systems and Modeling GIGA - Bioinformatics ULg kristel.vansteen@ulg.ac.be Chapter 3: Parametric families of univariate distributions CHAPTER 3: PARAMETRIC

More information

STAT:5100 (22S:193) Statistical Inference I

STAT:5100 (22S:193) Statistical Inference I STAT:5100 (22S:193) Statistical Inference I Week 10 Luke Tierney University of Iowa Fall 2015 Luke Tierney (U Iowa) STAT:5100 (22S:193) Statistical Inference I Fall 2015 1 Monday, October 26, 2015 Recap

More information

Sampling Distributions

Sampling Distributions Sampling Distributions In statistics, a random sample is a collection of independent and identically distributed (iid) random variables, and a sampling distribution is the distribution of a function of

More information

High-dimensional asymptotic expansions for the distributions of canonical correlations

High-dimensional asymptotic expansions for the distributions of canonical correlations Journal of Multivariate Analysis 100 2009) 231 242 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva High-dimensional asymptotic

More information

Statistics for scientists and engineers

Statistics for scientists and engineers Statistics for scientists and engineers February 0, 006 Contents Introduction. Motivation - why study statistics?................................... Examples..................................................3

More information

Statistics 3657 : Moment Generating Functions

Statistics 3657 : Moment Generating Functions Statistics 3657 : Moment Generating Functions A useful tool for studying sums of independent random variables is generating functions. course we consider moment generating functions. In this Definition

More information

Solutions to the Exercises of Section 2.11.

Solutions to the Exercises of Section 2.11. Solutions to the Exercises of Section 2.11. 2.11.1. Proof. Let ɛ be an arbitrary positive number. Since r(τ n,δ n ) C, we can find an integer n such that r(τ n,δ n ) C ɛ. Then, as in the proof of Theorem

More information

Continuous random variables

Continuous random variables Continuous random variables Can take on an uncountably infinite number of values Any value within an interval over which the variable is definied has some probability of occuring This is different from

More information

Probability Background

Probability Background CS76 Spring 0 Advanced Machine Learning robability Background Lecturer: Xiaojin Zhu jerryzhu@cs.wisc.edu robability Meure A sample space Ω is the set of all possible outcomes. Elements ω Ω are called sample

More information

Introduction to Applied Bayesian Modeling. ICPSR Day 4

Introduction to Applied Bayesian Modeling. ICPSR Day 4 Introduction to Applied Bayesian Modeling ICPSR Day 4 Simple Priors Remember Bayes Law: Where P(A) is the prior probability of A Simple prior Recall the test for disease example where we specified the

More information

Stat 535 C - Statistical Computing & Monte Carlo Methods. Arnaud Doucet.

Stat 535 C - Statistical Computing & Monte Carlo Methods. Arnaud Doucet. Stat 535 C - Statistical Computing & Monte Carlo Methods Arnaud Doucet Email: arnaud@cs.ubc.ca 1 Suggested Projects: www.cs.ubc.ca/~arnaud/projects.html First assignement on the web: capture/recapture.

More information

Will Murray s Probability, XXXII. Moment-Generating Functions 1. We want to study functions of them:

Will Murray s Probability, XXXII. Moment-Generating Functions 1. We want to study functions of them: Will Murray s Probability, XXXII. Moment-Generating Functions XXXII. Moment-Generating Functions Premise We have several random variables, Y, Y, etc. We want to study functions of them: U (Y,..., Y n ).

More information

Part 6: Multivariate Normal and Linear Models

Part 6: Multivariate Normal and Linear Models Part 6: Multivariate Normal and Linear Models 1 Multiple measurements Up until now all of our statistical models have been univariate models models for a single measurement on each member of a sample of

More information

Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, Jeffreys priors. exp 1 ) p 2

Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, Jeffreys priors. exp 1 ) p 2 Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, 2010 Jeffreys priors Lecturer: Michael I. Jordan Scribe: Timothy Hunter 1 Priors for the multivariate Gaussian Consider a multivariate

More information

Lecture 3. Probability - Part 2. Luigi Freda. ALCOR Lab DIAG University of Rome La Sapienza. October 19, 2016

Lecture 3. Probability - Part 2. Luigi Freda. ALCOR Lab DIAG University of Rome La Sapienza. October 19, 2016 Lecture 3 Probability - Part 2 Luigi Freda ALCOR Lab DIAG University of Rome La Sapienza October 19, 2016 Luigi Freda ( La Sapienza University) Lecture 3 October 19, 2016 1 / 46 Outline 1 Common Continuous

More information

Chapter 3 Common Families of Distributions

Chapter 3 Common Families of Distributions Lecture 9 on BST 631: Statistical Theory I Kui Zhang, 9/3/8 and 9/5/8 Review for the previous lecture Definition: Several commonly used discrete distributions, including discrete uniform, hypergeometric,

More information

Stable Limit Laws for Marginal Probabilities from MCMC Streams: Acceleration of Convergence

Stable Limit Laws for Marginal Probabilities from MCMC Streams: Acceleration of Convergence Stable Limit Laws for Marginal Probabilities from MCMC Streams: Acceleration of Convergence Robert L. Wolpert Institute of Statistics and Decision Sciences Duke University, Durham NC 778-5 - Revised April,

More information

3. Probability and Statistics

3. Probability and Statistics FE661 - Statistical Methods for Financial Engineering 3. Probability and Statistics Jitkomut Songsiri definitions, probability measures conditional expectations correlation and covariance some important

More information

The linear model is the most fundamental of all serious statistical models encompassing:

The linear model is the most fundamental of all serious statistical models encompassing: Linear Regression Models: A Bayesian perspective Ingredients of a linear model include an n 1 response vector y = (y 1,..., y n ) T and an n p design matrix (e.g. including regressors) X = [x 1,..., x

More information

4. CONTINUOUS RANDOM VARIABLES

4. CONTINUOUS RANDOM VARIABLES IA Probability Lent Term 4 CONTINUOUS RANDOM VARIABLES 4 Introduction Up to now we have restricted consideration to sample spaces Ω which are finite, or countable; we will now relax that assumption We

More information

1, 0 r 1 f R (r) = 0, otherwise. 1 E(R 2 ) = r 2 f R (r)dr = r 2 3

1, 0 r 1 f R (r) = 0, otherwise. 1 E(R 2 ) = r 2 f R (r)dr = r 2 3 STAT 5 4.43. We are given that a circle s radius R U(, ). the pdf of R is {, r f R (r), otherwise. The area of the circle is A πr. The mean of A is E(A) E(πR ) πe(r ). The second moment of R is ( ) r E(R

More information

Chapter 2. Some Basic Probability Concepts. 2.1 Experiments, Outcomes and Random Variables

Chapter 2. Some Basic Probability Concepts. 2.1 Experiments, Outcomes and Random Variables Chapter 2 Some Basic Probability Concepts 2.1 Experiments, Outcomes and Random Variables A random variable is a variable whose value is unknown until it is observed. The value of a random variable results

More information

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities PCMI 207 - Introduction to Random Matrix Theory Handout #2 06.27.207 REVIEW OF PROBABILITY THEORY Chapter - Events and Their Probabilities.. Events as Sets Definition (σ-field). A collection F of subsets

More information

Multivariate Normal-Laplace Distribution and Processes

Multivariate Normal-Laplace Distribution and Processes CHAPTER 4 Multivariate Normal-Laplace Distribution and Processes The normal-laplace distribution, which results from the convolution of independent normal and Laplace random variables is introduced by

More information

Statistical Distributions

Statistical Distributions CHAPTER Statistical Distributions In this chapter, we shall present some probability distributions that play a central role in econometric theory First, we shall present the distributions of some discrete

More information

Lecture 17: The Exponential and Some Related Distributions

Lecture 17: The Exponential and Some Related Distributions Lecture 7: The Exponential and Some Related Distributions. Definition Definition: A continuous random variable X is said to have the exponential distribution with parameter if the density of X is e x if

More information

Solution. (i) Find a minimal sufficient statistic for (θ, β) and give your justification. X i=1. By the factorization theorem, ( n

Solution. (i) Find a minimal sufficient statistic for (θ, β) and give your justification. X i=1. By the factorization theorem, ( n Solution 1. Let (X 1,..., X n ) be a simple random sample from a distribution with probability density function given by f(x;, β) = 1 ( ) 1 β x β, 0 x, > 0, β < 1. β (i) Find a minimal sufficient statistic

More information

This exam is closed book and closed notes. (You will have access to a copy of the Table of Common Distributions given in the back of the text.

This exam is closed book and closed notes. (You will have access to a copy of the Table of Common Distributions given in the back of the text. TEST #3 STA 5326 December 4, 214 Name: Please read the following directions. DO NOT TURN THE PAGE UNTIL INSTRUCTED TO DO SO Directions This exam is closed book and closed notes. (You will have access to

More information

Chapter 1. Sets and probability. 1.3 Probability space

Chapter 1. Sets and probability. 1.3 Probability space Random processes - Chapter 1. Sets and probability 1 Random processes Chapter 1. Sets and probability 1.3 Probability space 1.3 Probability space Random processes - Chapter 1. Sets and probability 2 Probability

More information

Application of Parametric Homogeneity of Variances Tests under Violation of Classical Assumption

Application of Parametric Homogeneity of Variances Tests under Violation of Classical Assumption Application of Parametric Homogeneity of Variances Tests under Violation of Classical Assumption Alisa A. Gorbunova and Boris Yu. Lemeshko Novosibirsk State Technical University Department of Applied Mathematics,

More information

Continuous Random Variables and Continuous Distributions

Continuous Random Variables and Continuous Distributions Continuous Random Variables and Continuous Distributions Continuous Random Variables and Continuous Distributions Expectation & Variance of Continuous Random Variables ( 5.2) The Uniform Random Variable

More information

Some Special Distributions (Hogg Chapter Three)

Some Special Distributions (Hogg Chapter Three) Some Special Distributions Hogg Chapter Three STAT 45-: Mathematical Statistics I Fall Semester 5 Contents Binomial & Related Distributions. The Binomial Distribution............... Some advice on calculating

More information

On q-gamma Distributions, Marshall-Olkin q-gamma Distributions and Minification Processes

On q-gamma Distributions, Marshall-Olkin q-gamma Distributions and Minification Processes CHAPTER 4 On q-gamma Distributions, Marshall-Olkin q-gamma Distributions and Minification Processes 4.1 Introduction Several skewed distributions such as logistic, Weibull, gamma and beta distributions

More information

Motivation Scale Mixutres of Normals Finite Gaussian Mixtures Skew-Normal Models. Mixture Models. Econ 690. Purdue University

Motivation Scale Mixutres of Normals Finite Gaussian Mixtures Skew-Normal Models. Mixture Models. Econ 690. Purdue University Econ 690 Purdue University In virtually all of the previous lectures, our models have made use of normality assumptions. From a computational point of view, the reason for this assumption is clear: combined

More information

ARCONES MANUAL FOR THE SOA EXAM P/CAS EXAM 1, PROBABILITY, SPRING 2010 EDITION.

ARCONES MANUAL FOR THE SOA EXAM P/CAS EXAM 1, PROBABILITY, SPRING 2010 EDITION. A self published manuscript ARCONES MANUAL FOR THE SOA EXAM P/CAS EXAM 1, PROBABILITY, SPRING 21 EDITION. M I G U E L A R C O N E S Miguel A. Arcones, Ph. D. c 28. All rights reserved. Author Miguel A.

More information

Joint p.d.f. and Independent Random Variables

Joint p.d.f. and Independent Random Variables 1 Joint p.d.f. and Independent Random Variables Let X and Y be two discrete r.v. s and let R be the corresponding space of X and Y. The joint p.d.f. of X = x and Y = y, denoted by f(x, y) = P(X = x, Y

More information

i=1 k i=1 g i (Y )] = k

i=1 k i=1 g i (Y )] = k Math 483 EXAM 2 covers 2.4, 2.5, 2.7, 2.8, 3.1, 3.2, 3.3, 3.4, 3.8, 3.9, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.9, 5.1, 5.2, and 5.3. The exam is on Thursday, Oct. 13. You are allowed THREE SHEETS OF NOTES and

More information

Dependence. Practitioner Course: Portfolio Optimization. John Dodson. September 10, Dependence. John Dodson. Outline.

Dependence. Practitioner Course: Portfolio Optimization. John Dodson. September 10, Dependence. John Dodson. Outline. Practitioner Course: Portfolio Optimization September 10, 2008 Before we define dependence, it is useful to define Random variables X and Y are independent iff For all x, y. In particular, F (X,Y ) (x,

More information

Inference and Regression

Inference and Regression Inference and Regression Assignment 3 Department of IOMS Professor William Greene Phone:.998.0876 Office: KMC 7-90 Home page:www.stern.nyu.edu/~wgreene Email: wgreene@stern.nyu.edu Course web page: www.stern.nyu.edu/~wgreene/econometrics/econometrics.htm.

More information

Bayesian linear regression

Bayesian linear regression Bayesian linear regression Linear regression is the basis of most statistical modeling. The model is Y i = X T i β + ε i, where Y i is the continuous response X i = (X i1,..., X ip ) T is the corresponding

More information

LIST OF FORMULAS FOR STK1100 AND STK1110

LIST OF FORMULAS FOR STK1100 AND STK1110 LIST OF FORMULAS FOR STK1100 AND STK1110 (Version of 11. November 2015) 1. Probability Let A, B, A 1, A 2,..., B 1, B 2,... be events, that is, subsets of a sample space Ω. a) Axioms: A probability function

More information

Decomposable and Directed Graphical Gaussian Models

Decomposable and Directed Graphical Gaussian Models Decomposable Decomposable and Directed Graphical Gaussian Models Graphical Models and Inference, Lecture 13, Michaelmas Term 2009 November 26, 2009 Decomposable Definition Basic properties Wishart density

More information

Bayesian Models in Machine Learning

Bayesian Models in Machine Learning Bayesian Models in Machine Learning Lukáš Burget Escuela de Ciencias Informáticas 2017 Buenos Aires, July 24-29 2017 Frequentist vs. Bayesian Frequentist point of view: Probability is the frequency of

More information

Elliptically Contoured Distributions

Elliptically Contoured Distributions Elliptically Contoured Distributions Recall: if X N p µ, Σ), then { 1 f X x) = exp 1 } det πσ x µ) Σ 1 x µ) So f X x) depends on x only through x µ) Σ 1 x µ), and is therefore constant on the ellipsoidal

More information

Appendix 2. The Multivariate Normal. Thus surfaces of equal probability for MVN distributed vectors satisfy

Appendix 2. The Multivariate Normal. Thus surfaces of equal probability for MVN distributed vectors satisfy Appendix 2 The Multivariate Normal Draft Version 1 December 2000, c Dec. 2000, B. Walsh and M. Lynch Please email any comments/corrections to: jbwalsh@u.arizona.edu THE MULTIVARIATE NORMAL DISTRIBUTION

More information

Introduction to Bayesian Methods

Introduction to Bayesian Methods Introduction to Bayesian Methods Jessi Cisewski Department of Statistics Yale University Sagan Summer Workshop 2016 Our goal: introduction to Bayesian methods Likelihoods Priors: conjugate priors, non-informative

More information

Chapter 1. Probability, Random Variables and Expectations. 1.1 Axiomatic Probability

Chapter 1. Probability, Random Variables and Expectations. 1.1 Axiomatic Probability Chapter 1 Probability, Random Variables and Expectations Note: The primary reference for these notes is Mittelhammer (1999. Other treatments of probability theory include Gallant (1997, Casella & Berger

More information

Chapter 5. Bayesian Statistics

Chapter 5. Bayesian Statistics Chapter 5. Bayesian Statistics Principles of Bayesian Statistics Anything unknown is given a probability distribution, representing degrees of belief [subjective probability]. Degrees of belief [subjective

More information

Preliminary Statistics. Lecture 3: Probability Models and Distributions

Preliminary Statistics. Lecture 3: Probability Models and Distributions Preliminary Statistics Lecture 3: Probability Models and Distributions Rory Macqueen (rm43@soas.ac.uk), September 2015 Outline Revision of Lecture 2 Probability Density Functions Cumulative Distribution

More information

Chapter 6. Order Statistics and Quantiles. 6.1 Extreme Order Statistics

Chapter 6. Order Statistics and Quantiles. 6.1 Extreme Order Statistics Chapter 6 Order Statistics and Quantiles 61 Extreme Order Statistics Suppose we have a finite sample X 1,, X n Conditional on this sample, we define the values X 1),, X n) to be a permutation of X 1,,

More information

Chapter 5 continued. Chapter 5 sections

Chapter 5 continued. Chapter 5 sections Chapter 5 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions

More information

Basics of Experimental Design. Review of Statistics. Basic Study. Experimental Design. When an Experiment is Not Possible. Studying Relations

Basics of Experimental Design. Review of Statistics. Basic Study. Experimental Design. When an Experiment is Not Possible. Studying Relations Basics of Experimental Design Review of Statistics And Experimental Design Scientists study relation between variables In the context of experiments these variables are called independent and dependent

More information

Gaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008

Gaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008 Gaussian processes Chuong B Do (updated by Honglak Lee) November 22, 2008 Many of the classical machine learning algorithms that we talked about during the first half of this course fit the following pattern:

More information

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E,

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E, Tel Aviv University, 26 Analysis-III 9 9 Improper integral 9a Introduction....................... 9 9b Positive integrands................... 9c Special functions gamma and beta......... 4 9d Change of

More information

Section 21. Expected Values. Po-Ning Chen, Professor. Institute of Communications Engineering. National Chiao Tung University

Section 21. Expected Values. Po-Ning Chen, Professor. Institute of Communications Engineering. National Chiao Tung University Section 21 Expected Values Po-Ning Chen, Professor Institute of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30010, R.O.C. Expected value as integral 21-1 Definition (expected

More information

Stat 5101 Lecture Slides: Deck 8 Dirichlet Distribution. Charles J. Geyer School of Statistics University of Minnesota

Stat 5101 Lecture Slides: Deck 8 Dirichlet Distribution. Charles J. Geyer School of Statistics University of Minnesota Stat 5101 Lecture Slides: Deck 8 Dirichlet Distribution Charles J. Geyer School of Statistics University of Minnesota 1 The Dirichlet Distribution The Dirichlet Distribution is to the beta distribution

More information

2. The CDF Technique. 1. Introduction. f X ( ).

2. The CDF Technique. 1. Introduction. f X ( ). Week 5: Distributions of Function of Random Variables. Introduction Suppose X,X 2,..., X n are n random variables. In this chapter, we develop techniques that may be used to find the distribution of functions

More information

Parametric Techniques

Parametric Techniques Parametric Techniques Jason J. Corso SUNY at Buffalo J. Corso (SUNY at Buffalo) Parametric Techniques 1 / 39 Introduction When covering Bayesian Decision Theory, we assumed the full probabilistic structure

More information

The exponential family: Conjugate priors

The exponential family: Conjugate priors Chapter 9 The exponential family: Conjugate priors Within the Bayesian framework the parameter θ is treated as a random quantity. This requires us to specify a prior distribution p(θ), from which we can

More information

Review for the previous lecture

Review for the previous lecture Lecture 1 and 13 on BST 631: Statistical Theory I Kui Zhang, 09/8/006 Review for the previous lecture Definition: Several discrete distributions, including discrete uniform, hypergeometric, Bernoulli,

More information

Inference for a Population Proportion

Inference for a Population Proportion Al Nosedal. University of Toronto. November 11, 2015 Statistical inference is drawing conclusions about an entire population based on data in a sample drawn from that population. From both frequentist

More information

Chapter 13. Convex and Concave. Josef Leydold Mathematical Methods WS 2018/19 13 Convex and Concave 1 / 44

Chapter 13. Convex and Concave. Josef Leydold Mathematical Methods WS 2018/19 13 Convex and Concave 1 / 44 Chapter 13 Convex and Concave Josef Leydold Mathematical Methods WS 2018/19 13 Convex and Concave 1 / 44 Monotone Function Function f is called monotonically increasing, if x 1 x 2 f (x 1 ) f (x 2 ) It

More information

Continuous Distributions

Continuous Distributions Chapter 3 Continuous Distributions 3.1 Continuous-Type Data In Chapter 2, we discuss random variables whose space S contains a countable number of outcomes (i.e. of discrete type). In Chapter 3, we study

More information

MATH c UNIVERSITY OF LEEDS Examination for the Module MATH2715 (January 2015) STATISTICAL METHODS. Time allowed: 2 hours

MATH c UNIVERSITY OF LEEDS Examination for the Module MATH2715 (January 2015) STATISTICAL METHODS. Time allowed: 2 hours MATH2750 This question paper consists of 8 printed pages, each of which is identified by the reference MATH275. All calculators must carry an approval sticker issued by the School of Mathematics. c UNIVERSITY

More information

Introduction to Statistics

Introduction to Statistics Introduction to Statistics By A.V. Vedpuriswar October 2, 2016 Introduction The word Statistics is derived from the Italian word stato, which means state. Statista refers to a person involved with the

More information

Data Analysis and Uncertainty Part 2: Estimation

Data Analysis and Uncertainty Part 2: Estimation Data Analysis and Uncertainty Part 2: Estimation Instructor: Sargur N. University at Buffalo The State University of New York srihari@cedar.buffalo.edu 1 Topics in Estimation 1. Estimation 2. Desirable

More information

Exercises and Answers to Chapter 1

Exercises and Answers to Chapter 1 Exercises and Answers to Chapter The continuous type of random variable X has the following density function: a x, if < x < a, f (x), otherwise. Answer the following questions. () Find a. () Obtain mean

More information

Missouri Educator Gateway Assessments

Missouri Educator Gateway Assessments Missouri Educator Gateway Assessments June 2014 Content Domain Range of Competencies Approximate Percentage of Test Score I. Number and Operations 0001 0002 19% II. Algebra and Functions 0003 0006 36%

More information

A Journey Beyond Normality

A Journey Beyond Normality Department of Mathematics & Statistics Indian Institute of Technology Kanpur November 17, 2014 Outline Few Famous Quotations 1 Few Famous Quotations 2 3 4 5 6 7 Outline Few Famous Quotations 1 Few Famous

More information

Transformations of Standard Uniform Distributions

Transformations of Standard Uniform Distributions Transformations of Standard Uniform Distributions We have seen that the R function runif uses a random number generator to simulate a sample from the standard uniform distribution UNIF(0, 1). All of our

More information

David Giles Bayesian Econometrics

David Giles Bayesian Econometrics David Giles Bayesian Econometrics 1. General Background 2. Constructing Prior Distributions 3. Properties of Bayes Estimators and Tests 4. Bayesian Analysis of the Multiple Regression Model 5. Bayesian

More information