Notes on Mathematical Expectations and Classes of Distributions Introduction to Econometric Theory Econ. 770
Jonathan B. Hill
Dept. of Economics, University of North Carolina - Chapel Hill
October 4, 2

1 MATHEMATICAL EXPECTATIONS

Let $X$ be a random variable on the probability space $(\Omega, \mathcal{F}, P)$. The expectation of $X$ is a weighted (Lebesgue-Stieltjes) average over the support of $X$, where the weight is based on the distribution $F(x) := P(X \le x)$:

$$\mu := E[X] = \int x \, dF(x).$$

The generality of the integral allows for discrete and continuous cases. By definition the induced measure $P_X$ that is based on $F$ defines the expectation: we may synonymously write $E[X] = \int x \, P_X(dx)$, which is identically

$$E[X] = \sup_{\{E_i\}} \sum_i \Big( \inf_{\omega \in E_i} X(\omega) \Big) P(E_i),$$

where $\{E_i\}_{i=1}^n$ is a partition of $\Omega$.

1.1 Discrete and Continuous

If $X$ is discrete we write

$$E[X] = \sum_x x \, P(X = x),$$

where the summation is over all possible values $x$ of $X$. If $X$ is continuous then $F(x) = \int_{-\infty}^x f(u)\,du$, hence $dF(x) = f(x)\,dx$, so we write

$$E[X] = \int x f(x) \, dx.$$
Example (discrete): Let $\Omega$ be the space for a die roll, and $X(\omega) = |\omega - 4| \in \{0, 1, 2, 3\}$. The measure is $P(\{\omega\}) = 1/6$ for any $\omega \in \Omega$, hence

$$P(X = 0) = P(\omega : X(\omega) = 0) = P(\{4\}) = 1/6$$
$$P(X = 1) = P(\omega : X(\omega) = 1) = P(\{3, 5\}) = 2/6 = 1/3$$
$$P(X = 2) = P(\omega : X(\omega) = 2) = P(\{2, 6\}) = 2/6 = 1/3$$
$$P(X = 3) = P(\omega : X(\omega) = 3) = P(\{1\}) = 1/6.$$

Then

$$E[X] = \sum_x x \, P(X = x) = 0 \cdot \tfrac{1}{6} + 1 \cdot \tfrac{1}{3} + 2 \cdot \tfrac{1}{3} + 3 \cdot \tfrac{1}{6} = 1.5.$$

Example (continuous): Let $f(x) = 1/x$ for $1 \le x \le e$. Then $\int_1^e f(x)\,dx = \ln(e) = 1$, and

$$\mu = E[X] = \int_1^e x \cdot \frac{1}{x}\,dx = e - 1.$$

1.2 Moments

The $k$-th raw moment is $E[X^k]$ for $k = 1, 2, \ldots$ The $k$-th central moment is $E[(X - \mu)^k]$. Variance measures dispersion:

$$\sigma^2 := Var[X] = E\big[(X - \mu)^2\big],$$

which reduces to

$$Var[X] = E\big[(X - \mu)^2\big] = E[X^2] - 2\mu E[X] + \mu^2 = E[X^2] - 2\mu^2 + \mu^2 = E[X^2] - \mu^2.$$

The standard deviation is $\sigma := \sqrt{Var[X]}$. A random variable with a larger variance, ceteris paribus, has a larger likelihood of being distant from the mean: literally, the mean squared distance to the mean $E[(X - \mu)^2]$ will be large.

Skewness is the third standardized moment:

$$S(X) := E\left[\left(\frac{X - \mu}{\sigma}\right)^3\right].$$

A (continuous) distribution symmetric at $\mu$ satisfies $f(\mu + c) = f(\mu - c)$ $\forall c$, hence $P(X \ge \mu) = P(X \le \mu) = 1/2$, and the mean is identically the point of symmetry. A symmetric distribution satisfies $E[(X - \mu)^k] = 0$ for all odd $k = 1, 3, \ldots$, hence $S(X) = 0$. If $S(X) > 0$ the distribution is skewed right, or positively skewed.
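The die-roll expectation above can be checked by brute-force enumeration over the sample space; a minimal sketch (the measure $P(\{\omega\}) = 1/6$ and $X(\omega) = |\omega - 4|$ are from the example):

```python
from fractions import Fraction

# Sample space of a fair die; each outcome has probability 1/6.
omega = [1, 2, 3, 4, 5, 6]
p = Fraction(1, 6)

# The random variable from the example: X(w) = |w - 4|.
X = lambda w: abs(w - 4)

# P(X = x) aggregates the measure over {w : X(w) = x}.
pmf = {}
for w in omega:
    pmf[X(w)] = pmf.get(X(w), 0) + p

# E[X] = sum over x of x * P(X = x) = 3/2.
EX = sum(x * px for x, px in pmf.items())
```

Exact rational arithmetic reproduces $P(X = 1) = 1/3$ and $E[X] = 3/2$.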
Proof of symmetry at the mean. Consider a continuous distribution. We have by change of variable $u = x - \mu$:

$$E[X] = \int_{-\infty}^{\infty} x f(x)\,dx = \int_{-\infty}^{\infty} (\mu + u) f(\mu + u)\,du = \mu \int_{-\infty}^{\infty} f(\mu + u)\,du + \int_{-\infty}^{\infty} u f(\mu + u)\,du$$
$$= \mu + \int_0^{\infty} u f(\mu + u)\,du - \int_0^{\infty} u f(\mu - u)\,du.$$

By symmetry $f(\mu + u) = f(\mu - u)$ $\forall u$, hence $\int_0^{\infty} u f(\mu + u)\,du - \int_0^{\infty} u f(\mu - u)\,du = 0$, therefore $E[X] = \mu$. QED.

Proof of zero odd moments under symmetry. Consider a continuous distribution. Since $E[X] = \mu$, for odd $k$

$$E\big[(X - \mu)^k\big] = \int_{-\infty}^{\infty} (x - \mu)^k f(x)\,dx = \int_{-\infty}^{\infty} u^k f(\mu + u)\,du = \int_0^{\infty} u^k f(\mu + u)\,du - \int_0^{\infty} u^k f(\mu - u)\,du = 0,$$

where the last equality again uses $f(\mu + u) = f(\mu - u)$. QED.

Example (Pareto with index 4.5): Let

$$P(X \le -c) = P(X \ge c) = \frac{1}{2}(1 + c)^{-4.5} \quad \text{for all } c \ge 0.$$

Then $X$ is symmetric and continuous with density $f$:

$$x \ge 0: \quad f(x) = \frac{d}{dx}\Big(1 - \tfrac{1}{2}(1 + x)^{-4.5}\Big) = \frac{4.5}{2}(1 + x)^{-5.5} = 2.25\,(1 + x)^{-5.5}$$
$$x \le 0: \quad f(x) = \frac{d}{dx}\Big(\tfrac{1}{2}(1 - x)^{-4.5}\Big) = 2.25\,(1 - x)^{-5.5}.$$
The function $f(x)$ integrates to one and is therefore a proper density:

$$\int_{-\infty}^{\infty} f(x)\,dx = 2 \int_0^{\infty} 2.25\,(1 + x)^{-5.5}\,dx = 2 \times 2.25 \times \Big[-\tfrac{1}{4.5}(1 + x)^{-4.5}\Big]_0^{\infty} = \frac{2 \times 2.25}{4.5} = 1.$$

The mean is zero because $X$ is symmetric at zero: $E[X] = \int x f(x)\,dx = 0$.

The variance $Var[X] = E[X^2]$ is computed by two applications of integration by parts:

$$E[X^2] = 2 \int_0^{\infty} 2.25\, x^2 (1 + x)^{-5.5}\,dx,$$

where

$$\int_0^{\infty} x^2 (1 + x)^{-5.5}\,dx = \Big[-\tfrac{1}{4.5} x^2 (1 + x)^{-4.5}\Big]_0^{\infty} + \frac{2}{4.5}\int_0^{\infty} x (1 + x)^{-4.5}\,dx = \frac{2}{4.5} \times \frac{1}{3.5} \int_0^{\infty} (1 + x)^{-3.5}\,dx = \frac{2}{4.5 \times 3.5 \times 2.5}.$$

Hence

$$Var[X] = \frac{2 \times 2.25 \times 2}{4.5 \times 3.5 \times 2.5} = \frac{8}{35} \approx 0.2286.$$

We can check the result by simulation: a large sample of independent draws from this Pareto distribution renders a sample variance close to 0.2286. A plot of observations follows:
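The simulation check mentioned above is easy to reproduce by inverse-CDF sampling; a sketch (the sample size and seed are arbitrary choices, not from the notes):

```python
import random

random.seed(0)

def draw_sym_pareto(alpha):
    """Inverse-CDF draw from the symmetric Pareto with
    P(X <= -c) = P(X >= c) = 0.5 * (1 + c)**(-alpha), c >= 0."""
    u = random.random()
    if u < 0.5:
        return 1.0 - (2.0 * u) ** (-1.0 / alpha)        # left tail
    return (2.0 * (1.0 - u)) ** (-1.0 / alpha) - 1.0    # right tail

n = 200_000
xs = [draw_sym_pareto(4.5) for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / (n - 1)
# Theory: E[X] = 0 and Var[X] = 8/35 ~ 0.2286.
```

The sample mean should sit near 0 and the sample variance near $8/35$.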
FIGURE 1: Pareto with index 4.5

Example (Pareto with index 2.5): If the distribution is now

$$P(X \le -c) = P(X \ge c) = \frac{1}{2}(1 + c)^{-2.5} \quad \text{for all } c \ge 0,$$

then the density is

$$f(x) = 1.25\,(1 + |x|)^{-3.5} \quad \text{for } x \in \mathbb{R}.$$

By symmetry at zero $E[X] = 0$, and the variance is

$$Var[X] = E[X^2] = 2 \int_0^{\infty} 1.25\, x^2 (1 + x)^{-3.5}\,dx = 2.5 \times \frac{2}{2.5 \times 1.5 \times 0.5} = \frac{8}{3} \approx 2.667.$$

Hence this Pareto has a larger variance than in the previous example. Figure 2 shows the densities of the two Pareto distributions: notice the present Pareto with index 2.5 has much heavier tails, corresponding to a larger variance.
FIGURE 2: Pareto with index 2.5 or 4.5

Kurtosis is the fourth standardized moment:

$$K(X) := E\left[\left(\frac{X - \mu}{\sigma}\right)^4\right].$$

Kurtosis measures heaviness of tails, although so does variance. The advantage of kurtosis is standardization: we can compare kurtosis across distributions.

Example (Normal vs. Pareto): The normal distribution $N(\mu, \sigma^2)$ has density

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}.$$

You can prove $E[X] = \mu$ and $Var[X] = \sigma^2$ by direct computation using integration with polar coordinates. This distribution is symmetric at $\mu$, and has $K(X) = 3$. The Pareto distribution with index 2.5 has $E[X^4] = \infty$, so its kurtosis is not defined. Figure 3 plots the standard normal and the Pareto with index 2.5.

FIGURE 3: Pareto and Normal
Example (Normal vs. Asymmetric Pareto): Finally, consider the Pareto distribution

$$P(X \le c) = \frac{1}{2}(1 - c)^{-4.5} \ \text{ for } c \le 0, \qquad P(X \ge c) = \frac{1}{2}(1 + c)^{-2.5} \ \text{ for } c \ge 0.$$

Then the right tail is much heavier than the left tail. The density is

$$f(x) = 2.25\,(1 - x)^{-5.5} \ \text{ for } x \le 0, \qquad f(x) = 1.25\,(1 + x)^{-3.5} \ \text{ for } x \ge 0.$$

FIGURE 4: Asymmetric Pareto and Normal

1.3 Inequalities

There are many useful inequalities associated with expectations. Since the expectation is itself just a Lebesgue-Stieltjes integral, these inequalities are ultimately grounded in very primitive mathematical properties. In all cases the following inequalities are true provided the required expectations exist.

Chebyshev's Inequality: $P(|X| \ge \varepsilon) \le E[X^2]/\varepsilon^2$ for any $\varepsilon > 0$.

Markov's Inequality: $P(|X| \ge \varepsilon) \le E|X|/\varepsilon$ for any $\varepsilon > 0$.

Proof:

$$E|X| = \int |x|\,dF(x) = \int_{|x| < \varepsilon} |x|\,dF(x) + \int_{|x| \ge \varepsilon} |x|\,dF(x) \ge \int_{|x| \ge \varepsilon} |x|\,dF(x) \ge \varepsilon \int_{|x| \ge \varepsilon} dF(x) = \varepsilon\, P(|X| \ge \varepsilon).$$

QED.
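Markov's inequality can be illustrated with a distribution whose tail is known in closed form. The sketch below uses a unit-mean exponential, for which $P(X \ge \varepsilon) = e^{-\varepsilon}$ and $E|X| = 1$ (standard facts, not derived until Section 2.2.2):

```python
import math

# Markov: P(|X| >= eps) <= E|X| / eps.  For a unit-mean exponential the
# exact tail is exp(-eps) and E|X| = 1, so the bound reads
# exp(-eps) <= 1 / eps for every eps > 0.
checks = [(eps, math.exp(-eps), 1.0 / eps) for eps in (0.5, 1.0, 2.0, 5.0)]
markov_holds = all(tail <= bound for _, tail, bound in checks)
```

Note the bound is loose (and trivial for $\varepsilon < 1$); Markov only guarantees an upper envelope, not a good approximation.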
Hölder's Inequality: For any $p, q > 1$ with $1/p + 1/q = 1$:

$$E|XY| \le \big(E|X|^p\big)^{1/p} \big(E|Y|^q\big)^{1/q}.$$

Proof: By concavity the natural log satisfies $\ln(\lambda a + (1 - \lambda) b) \ge \lambda \ln(a) + (1 - \lambda)\ln(b)$ for any $a, b > 0$ and $\lambda \in (0, 1)$. Therefore $a^{\lambda} b^{1 - \lambda} \le \lambda a + (1 - \lambda) b$. Choose

$$a = \frac{|X|^p}{E|X|^p}, \qquad b = \frac{|Y|^q}{E|Y|^q}, \qquad \lambda = \frac{1}{p} \ \text{ and therefore } \ 1 - \lambda = \frac{1}{q}$$

to get

$$\frac{|X|}{(E|X|^p)^{1/p}} \times \frac{|Y|}{(E|Y|^q)^{1/q}} \le \frac{1}{p}\frac{|X|^p}{E|X|^p} + \frac{1}{q}\frac{|Y|^q}{E|Y|^q}.$$

Now take expectations: the right side becomes $1/p + 1/q = 1$, while the left side is $E|XY|\big/\big[(E|X|^p)^{1/p}(E|Y|^q)^{1/q}\big]$, hence we have shown

$$E|XY| \le \big(E|X|^p\big)^{1/p} \big(E|Y|^q\big)^{1/q}.$$

QED.

Cauchy-Schwarz Inequality: Simply Hölder's with $p = q = 2$.

Lyapunov's Inequality: $\big(E|X|^p\big)^{1/p} \le \big(E|X|^q\big)^{1/q}$ $\forall\, q \ge p \ge 1$.

Proof: Apply Hölder's inequality, $E|\tilde{X}\tilde{Y}| \le (E|\tilde{X}|^r)^{1/r}(E|\tilde{Y}|^s)^{1/s}$ for any $r, s > 1$ with $1/r + 1/s = 1$. Choose $\tilde{X} = |X|^p$, $\tilde{Y} = 1$ and $r = q/p$ to deduce

$$E|X|^p \le \big(E|X|^q\big)^{p/q},$$

hence $\big(E|X|^p\big)^{1/p} \le \big(E|X|^q\big)^{1/q}$. QED.
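Lyapunov's inequality is easy to see numerically: the quantity $(E|X|^p)^{1/p}$ should be nondecreasing in $p$. A sketch on a small discrete random variable (support and probabilities are arbitrary illustrative choices):

```python
# Discrete rv with four support points (arbitrary illustrative choice).
xs = [0.5, 1.0, 2.0, 5.0]
probs = [0.4, 0.3, 0.2, 0.1]

def lp_norm(p):
    # (E|X|^p)^(1/p) for the discrete distribution above.
    return sum(pr * abs(x) ** p for x, pr in zip(xs, probs)) ** (1.0 / p)

# Lyapunov: the sequence of norms is nondecreasing in p.
norms = [lp_norm(p) for p in (1.0, 1.5, 2.0, 3.0, 4.0)]
is_monotone = all(a <= b + 1e-12 for a, b in zip(norms, norms[1:]))
```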
1.4 Moment Generating Function

As long as a random variable has all moments, e.g. $|E[X^k]| < \infty$ for all $k = 1, 2, 3, \ldots$, then it has a moment generating function:

$$m(t) := E\big[e^{tX}\big].$$

This function generates moments in the sense

$$\frac{d^k}{dt^k}\, m(t)\Big|_{t=0} = E[X^k].$$

This is easily seen by expanding $e^{tx}$ around $t = 0$:

$$e^{tx} = \sum_{i=0}^{\infty} \frac{(tx)^i}{i!} = 1 + tx + \frac{(tx)^2}{2!} + \cdots,$$

hence

$$m(t) = \sum_{i=0}^{\infty} \frac{t^i\, E[X^i]}{i!}.$$

Notice this implicitly requires $E[X^i]$ to exist for all $i$. Random variables with all moments finite include those with bounded support and those with exponential tails (e.g. exponential distribution, normal distribution). Now differentiate:

$$\frac{d}{dt}\, m(t)\Big|_{t=0} = \sum_{i=1}^{\infty} \frac{i\, t^{i-1}\, E[X^i]}{i!}\Big|_{t=0} = \frac{E[X]}{1!} = E[X].$$

Repeat to obtain $E[X^k]$. Examples with $m(t)$ are given in the following section.

Why does a bounded random variable have all moments? Let $X$ satisfy $P(|X| \le a) = 1$, hence its support is no larger than $[-a, a]$ for finite $a > 0$. Then

$$E|X|^k = \int |x|^k\,dF(x) = \int_{|x| \le a} |x|^k\,dF(x) \le a^k\, P(|X| \le a) = a^k < \infty.$$

Boundedness inherently implies all moments exist.

Moment generating functions have a unique corresponding distribution function.

Claim: Each function $m(t)$ has a unique corresponding distribution $F(x) = P(X \le x)$.

Matching moment generating functions across random variables $X$ and $Y$ suffices to match their distributions. However, they must have moment generating functions: merely matching their moments itself does not suffice.
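The moment-generating property can be sanity-checked numerically: differentiate $m(t)$ at $t = 0$ by finite differences and compare with the moments. A sketch using the Bernoulli MGF $(1 - p) + p e^t$ derived in Section 2.1.1 below ($p = 0.3$ and the step size $h$ are arbitrary choices):

```python
import math

def mgf(t, p=0.3):
    # MGF of a Bernoulli(p): E[exp(tX)] = (1 - p) + p * exp(t).
    return (1 - p) + p * math.exp(t)

h = 1e-4
# Central differences at t = 0: first derivative -> E[X] = p,
# second derivative -> E[X^2] = p.
m1 = (mgf(h) - mgf(-h)) / (2 * h)
m2 = (mgf(h) - 2 * mgf(0.0) + mgf(-h)) / h ** 2
```

Both numerical derivatives land on $p = 0.3$, matching $E[X^k] = p$ for every $k \ge 1$.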
Claim: If $m_X(t) = m_Y(t)$ for every $t$ in some open neighborhood of $0$ then $F_X(x) = F_Y(x)$ for all $x$.

Matching moments of arbitrary bounded transforms also suffices.

Claim: If $E[h(X)] = E[h(Y)]$ for every bounded continuous function $h$ on $\mathbb{R}$ then $F_X(x) = F_Y(x)$ for all $x$.

These properties are used predominantly in asymptotic theory. For example, if we want to prove sequences of estimators $\hat{\theta}_n$ and $\tilde{\theta}_n$ of some unknown parameter $\theta$ have the same asymptotic distribution then we might attempt to show $|E[h(\hat{\theta}_n)] - E[h(\tilde{\theta}_n)]| \to 0$ as $n \to \infty$ for all bounded continuous $h$.

1.5 Characteristic Function

The moment generating function exists when all moments are finite. This holds sufficiently if $X$ has bounded support, or has a distribution with exponential tail decay (e.g. exponential, logistic, normal, chi-squared). We can compare distributions by comparing moment generating functions, but only if the moment generating function exists. A generalization of the moment generating function is the characteristic function:

$$\phi(t) = E\big[e^{itX}\big], \quad \text{where } i = \sqrt{-1}.$$

All random variables have a characteristic function because $e^{itX}$ is bounded.

Claim: $|\phi(t)| \le 1$.

Proof: By DeMoivre's formula and a standard trigonometric identity, for any $t, x \in \mathbb{R}$

$$\big|e^{itx}\big| = |\cos(tx) + i \sin(tx)| = \big(\cos(tx)^2 + \sin(tx)^2\big)^{1/2} = 1.$$

Therefore by properties of the Lebesgue-Stieltjes integral $|\phi(t)| = \big|E[e^{itX}]\big| \le E\big|e^{itX}\big| = 1$. QED.

Like the moment generating function, characteristic functions are unique for a distribution.

Claim: Each function $\phi(t)$ has a unique corresponding distribution $F(x) = P(X \le x)$.

If a moment $E|X|^k < \infty$ for integer $k \in \mathbb{N}$ then $\phi(t)$ reveals it as follows.

Claim: If $E|X|^k < \infty$ for $k \in \mathbb{N}$ then

$$\frac{d^k}{dt^k}\,\phi(t)\Big|_{t=0} = i^k\, E[X^k].$$
Proof: See Davidson (1994: Theorem .4 and Corollary .5). QED.

It should be noticed that the characteristic function need not be real-valued.

Claim: $\phi(t)$ is real-valued if and only if $X$ is symmetrically distributed.

Proof: The following argument exposes more properties of characteristic functions. Consult Davidson (1994: p. .3) for more information. We know $X$ is symmetric if and only if $P(X \le -c) = P(X \ge c)$, hence if and only if $X$ and $-X$ have the same distribution, hence if and only if $X$ and $-X$ have the same characteristic function. But $-X$ is a linear function of $X$, and linear transforms satisfy

$$\phi_{a + bX}(t) = e^{ita}\,\phi_X(bt),$$

hence $X$ is symmetric if and only if

$$\phi_X(t) = \phi_{-X}(t) = \phi_X(-t) = \overline{\phi_X(t)},$$

where $\overline{\phi_X(t)}$ is the complex conjugate of $\phi_X(t)$. But $\phi_X(t) = \overline{\phi_X(t)}$ if and only if $\phi_X(t)$ is real, and finally $\phi_X(t)$ is real if and only if all existing odd moments are zero (i.e. $E[X^k] = 0$ for $k = 1, 3, 5, \ldots$ if those moments exist), hence if and only if $X$ is symmetric. QED.

2 DISTRIBUTION CLASSES

2.1 Discrete Distributions

2.1.1 Bernoulli

The random variable $X$ is distributed Bernoulli, binary or $\{0, 1\}$-distributed, if

$$P(X = 1) = p \quad \text{and} \quad P(X = 0) = 1 - p \quad \text{for } p \in [0, 1].$$

This is denoted $X \sim B(p)$. The values $\{0, 1\}$ are arbitrary and can be any two values, e.g. $\{-1, 1\}$. Trivially

$$E[X] = \sum_x x\,P(X = x) = 0 \times (1 - p) + 1 \times p = p$$

and

$$Var[X] = E[X^2] - (E[X])^2 = \sum_x x^2\,P(X = x) - p^2 = p - p^2 = p(1 - p).$$

Bernoullis are bounded and therefore have all moments. The moment generating function is

$$E\big[e^{tX}\big] = \sum_x e^{tx}\,P(X = x) = e^{0}\,P(X = 0) + e^{t}\,P(X = 1) = (1 - p) + p\,e^{t}.$$
Hence

$$E[X] = \frac{d}{dt}\,E\big[e^{tX}\big]\Big|_{t=0} = p\,e^{t}\big|_{t=0} = p, \qquad E[X^2] = \frac{d^2}{dt^2}\,E\big[e^{tX}\big]\Big|_{t=0} = p\,e^{t}\big|_{t=0} = p,$$

and so on. This is trivial by direct computation: $E[X^k] = \sum_x x^k\,P(X = x) = 1^k \times p = p$. The characteristic function is

$$E\big[e^{itX}\big] = e^{0}\,P(X = 0) + e^{it}\,P(X = 1) = (1 - p) + p\,e^{it}.$$

Example: Bernoullis include: flipping a coin, $\Omega = \{H, T\}$; married or not; sells a stock share or not; accepts a job offer or not.

2.1.2 Binomial

If we count the number of positive outcomes in $n$ independent Bernoulli trials, we have a Binomial, denoted $X \sim Bin(n, p)$. For example, flip $n$ coins and count the heads, or ask $n$ people if they are married and count the yeses. The distribution can be directly computed by using combinatorics. The probability $X = k$ is the probability of one particular outcome, $p^k (1 - p)^{n-k}$, times the number of ways we get $k$ positives:

$$P(X = k) = \binom{n}{k}\, p^k (1 - p)^{n - k}, \qquad \binom{n}{k} = \frac{n!}{(n - k)!\,k!}.$$

Let us prove this sums to one:

$$\sum_{k=0}^{n} P(X = k) = \sum_{k=0}^{n} \binom{n}{k}\, p^k (1 - p)^{n - k} = \big(p + (1 - p)\big)^n = 1,$$

because $\sum_{k=0}^{n} \binom{n}{k} a^k b^{n-k}$ is identically the binomial expansion of $(a + b)^n$.

It is much easier to define a Binomial as a sum of $n$ independent Bernoullis $X_i \sim B(p)$:

$$X = \sum_{i=1}^{n} X_i.$$

Then $X = k$ if $k$ of the Bernoullis are positives (i.e. $X_i = 1$). Again, like flipping $n$ coins and counting the number of heads, or asking $n$ people if they are married. Then

$$E[X] = \sum_{i=1}^{n} E[X_i] = np$$
and by independence²

$$Var[X] = \sum_{i=1}^{n} Var[X_i] = np(1 - p).$$

Example: Exactly 8% of items produced at a factory are defective. You independently sample 30. What is the probability of 1, 2 or 3 defectives? What is the mean number of defectives? Let $X_i = 1$ if the $i$-th item is defective, and let $X = \sum_{i=1}^{30} X_i$ be the number of defectives. There is a .08 chance $X_i = 1$, hence $E[X] = 30 \times .08 = 2.4$. By repeatedly sampling 30 items, the mean number of defectives over infinitely many such samples will be exactly 2.4. Finally,

$$P(1 \le X \le 3) = \sum_{k=1}^{3} \binom{30}{k} (.08)^k (.92)^{30-k} \approx .702.$$

The distribution is plotted below.

FIGURE 5: Binomial Bin(30, .08)

2.1.3 Poisson

The Poisson distribution describes an integer count with infinite support:

$$P(X = k) = \frac{e^{-\lambda}\lambda^k}{k!} \quad \text{for } k = 0, 1, 2, \ldots, \ \text{ where } \lambda > 0,$$

which we write $X \sim P(\lambda)$. Figure 6 shows two Poisson distributions.

FIGURE 6: Poisson

² In general $Var[\sum_i X_i] = \sum_i Var[X_i]$ is not true, but under independence it is true. We will discuss joint distributions and independence later.
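The factory example can be computed directly from the binomial pmf; a sketch:

```python
from math import comb

n, p = 30, 0.08

def binom_pmf(k):
    # P(X = k) = C(n, k) * p^k * (1 - p)^(n - k).
    return comb(n, k) * p**k * (1 - p) ** (n - k)

mean = n * p                                      # E[X] = np = 2.4
p_1_to_3 = sum(binom_pmf(k) for k in range(1, 4))  # P(1 <= X <= 3) ~ 0.702
```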
This distribution is used to count occurrences in a time frame in which an upper bound cannot be predetermined. Examples are the number of asset trades in a day, the number of volcanic eruptions in a decade, and so on. We can show $E[X] = Var[X] = \lambda$. Consider

$$E[X] = \sum_{k=0}^{\infty} k\, \frac{e^{-\lambda}\lambda^k}{k!} = \lambda \sum_{k=1}^{\infty} \frac{e^{-\lambda}\lambda^{k-1}}{(k-1)!} = \lambda \sum_{j=0}^{\infty} \frac{e^{-\lambda}\lambda^{j}}{j!} = \lambda,$$

because $\sum_{j=0}^{\infty} e^{-\lambda}\lambda^j/j! = \sum_{j=0}^{\infty} P(X = j) = 1$.

The distribution also naturally arises in the sense of being roughly identical to a Binomial $Bin(n, p)$ for a large number of trials $n$ and a small probability of a positive outcome $p$:

$$Bin(n, p) \approx P(np).$$

Since the Binomial is a pain to work with when $n$ is large (a large $n$ implies we must manage many probabilities in order to compute moments), the Poisson approximation is a nice simplification. Further, it can be proven that

$$Bin(n, \lambda/n) \to P(\lambda) \quad \text{as } n \to \infty.$$

This says as the number of trials grows to infinity and the probability of a positive goes to zero, the result is a Poisson.

Example: Let there be $n = 500$ trials concerning whether a home buyer defaults on a mortgage, with probability $p = .01$. Let $X$ be the number of people who defaulted. Then $P(X = 1)$ by the Poisson approximation with $\lambda = np = 5$ is roughly

$$5\,e^{-5} \approx .0337,$$

and exactly

$$\binom{500}{1}(.01)^{1}(.99)^{499} \approx .0332.$$

Similarly, $P(X = 2)$ is roughly

$$\frac{5^2\,e^{-5}}{2!} \approx .0842,$$

and exactly

$$\binom{500}{2}(.01)^{2}(.99)^{498} \approx .0836.$$
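The quality of the Poisson approximation in the mortgage example can be checked directly:

```python
from math import comb, exp, factorial

n, p, lam = 500, 0.01, 5.0   # lam = n * p

def binom_pmf(k):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k):
    return exp(-lam) * lam**k / factorial(k)

exact_1, approx_1 = binom_pmf(1), poisson_pmf(1)   # ~0.0332 vs ~0.0337
exact_2, approx_2 = binom_pmf(2), poisson_pmf(2)   # ~0.0836 vs ~0.0842
```

The approximation error is already below $10^{-3}$ at $n = 500$, $p = .01$.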
Example: Assume the mean number of visitors to the Federal Reserve Bank in St. Louis in one hour is 6. Assume the number of visitors is Poisson distributed. What is the probability of 20 visitors in 2.5 hours? Of no more than 7 visitors in 2 hours, or between 10 and 20 visitors in 2 hours?

In 2.5 hours on average there are 15 visitors, hence $X \sim P(15)$. Then

$$P(X = 20) = \frac{e^{-15}\, 15^{20}}{20!} \approx .0418.$$

In 2 hours on average there are 12 visitors, hence

$$P(X \le 7) = \sum_{k=0}^{7} \frac{e^{-12}\, 12^{k}}{k!} \approx .0895$$

and

$$P(10 \le X \le 20) = \sum_{k=10}^{20} \frac{e^{-12}\, 12^{k}}{k!} \approx .746.$$

2.2 Continuous Distributions

2.2.1 Uniform

The uniform density on interval $[a, b]$ is a constant, $f(x) = 1/(b - a)$, and $f(x) = 0$ for $x < a$ or $x > b$. We write $X \sim U[a, b]$. The distribution function is

$$F(c) = P(X \le c) = \frac{c - a}{b - a} \quad \text{for } c \in [a, b].$$

Moments are easily computed:

$$E[X] = \int_a^b \frac{x}{b - a}\,dx = \frac{x^2}{2(b - a)}\Big|_a^b = \frac{b^2 - a^2}{2(b - a)} = \frac{a + b}{2}$$

and

$$E[X^2] = \int_a^b \frac{x^2}{b - a}\,dx = \frac{x^3}{3(b - a)}\Big|_a^b = \frac{b^3 - a^3}{3(b - a)},$$

and so on.

Example: Let $X \sim U[0, 1]$. Then $f(x) = 1$: see the figure below. The moments are

$$E[X] = \int_0^1 x\,dx = \frac{x^2}{2}\Big|_0^1 = \frac{1}{2} \qquad \text{and} \qquad E[X^2] = \int_0^1 x^2\,dx = \frac{x^3}{3}\Big|_0^1 = \frac{1}{3},$$

hence $Var[X] = 1/3 - 1/4 = 1/12$. The moment generating function is

$$m(t) = E\big[e^{tX}\big] = \int_0^1 e^{tx}\,dx = \frac{e^{tx}}{t}\Big|_0^1 = \frac{e^{t} - 1}{t}.$$

Since $m(t)$ cannot be directly differentiated at zero, expand $e^t$ around $0$: $e^t = \sum_{i=0}^{\infty} t^i/i! = 1 + \sum_{i=1}^{\infty} t^i/i!$. Therefore

$$m(t) = \frac{1}{t}\sum_{i=1}^{\infty} \frac{t^i}{i!} = \sum_{i=1}^{\infty} \frac{t^{i-1}}{i!} = \sum_{i=0}^{\infty} \frac{t^{i}}{(i+1)!}.$$
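The three visitor probabilities above follow from nothing more than the Poisson pmf with $\lambda$ rescaled by the time window; a sketch:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    # P(X = k) = exp(-lam) * lam^k / k!
    return exp(-lam) * lam**k / factorial(k)

# Mean rate: 6 visitors per hour, so lam = 6 * (hours).
p_20_in_2p5h = poisson_pmf(20, 15.0)                                 # ~0.0418
p_at_most_7_in_2h = sum(poisson_pmf(k, 12.0) for k in range(8))      # ~0.0895
p_10_to_20_in_2h = sum(poisson_pmf(k, 12.0) for k in range(10, 21))  # ~0.746
```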
Hence

$$\frac{d}{dt}\,m(t)\Big|_{t=0} = \sum_{i=1}^{\infty} \frac{i\,t^{i-1}}{(i+1)!}\Big|_{t=0} = \frac{1!}{2!} = \frac{1}{2} = E[X]$$

and

$$\frac{d^2}{dt^2}\,m(t)\Big|_{t=0} = \sum_{i=2}^{\infty} \frac{i(i-1)\,t^{i-2}}{(i+1)!}\Big|_{t=0} = \frac{2!}{3!} = \frac{1}{3} = E[X^2],$$

and so on. The characteristic function is

$$\phi(t) = E\big[e^{itX}\big] = \int_0^1 e^{itx}\,dx = \frac{e^{it} - 1}{it}.$$

FIGURE 7: Uniform Distribution on [0,1]

2.2.2 Exponential

The exponential class is represented with density

$$f(x) = \frac{1}{\lambda}\exp\{-x/\lambda\} \quad \text{for } x \ge 0 \ \text{ and } \ \lambda > 0.$$

We write $X \sim \exp(\lambda)$. The parameter $\lambda$ determines the distribution shape and therefore mean and variance. See the figure below.

FIGURE 8: Exponential Distribution (various values of $\lambda$)
Notice

$$\int_0^{\infty} \frac{1}{\lambda}\,e^{-x/\lambda}\,dx = -e^{-x/\lambda}\Big|_0^{\infty} = 1,$$

hence for any $\lambda > 0$ the function $f(x)$ is a probability density. The exponential distribution exhibits exponential tail decay:

$$P(X \ge c) = \int_c^{\infty} \frac{1}{\lambda}\,e^{-x/\lambda}\,dx = -e^{-x/\lambda}\Big|_c^{\infty} = e^{-c/\lambda}.$$

Hence a larger shape parameter $\lambda$ aligns with slower decay, and a smaller parameter aligns with faster decay (compare the values of $\lambda$ in Figure 8). A distribution with exponential tail decay has all moments finite: $E[X^k] < \infty$ (it is a very thin tailed distribution, so all moments exist). This is easily proven by using bounds for the Lebesgue-Stieltjes integral. First, note $x^k \le e^{x/(2\lambda)}$ for all sufficiently large $x$, because

$$k\,\ln(x) \le \frac{x}{2\lambda} \quad \text{for all large } x.$$

Therefore, for some finite $K > 0$ and any $k$,

$$E[X^k] = \int_0^{\infty} x^k\, \frac{1}{\lambda}\,e^{-x/\lambda}\,dx \le K + \frac{1}{\lambda}\int_0^{\infty} e^{x/(2\lambda)}\, e^{-x/\lambda}\,dx = K + \frac{1}{\lambda}\int_0^{\infty} e^{-x/(2\lambda)}\,dx < \infty.$$

The mean is, by integration by parts,

$$E[X] = \int_0^{\infty} \frac{x}{\lambda}\,e^{-x/\lambda}\,dx = -x\,e^{-x/\lambda}\Big|_0^{\infty} + \int_0^{\infty} e^{-x/\lambda}\,dx = 0 + \lambda = \lambda.$$

Similarly $Var[X] = \lambda^2$ is easy to show by repeated application of integration by parts. By construction $X$ is skewed right: $S(X) > 0$.

2.2.3 Normal

The normal distribution falls in the exponential class. The density is

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}} \quad \text{for } x \in \mathbb{R}.$$

The quadratic $(x - \mu)^2$ ensures a distribution symmetric at $\mu$. We write $X \sim N(\mu, \sigma^2)$. Using integration by polar coordinates, and by parts, it can be proven that

$$\int_{-\infty}^{\infty} f(x)\,dx = 1, \qquad E[X] = \mu \qquad \text{and} \qquad Var[X] = \sigma^2.$$
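The exponential mean and variance can be confirmed by simulation via the inverse-CDF method: since $P(X \le x) = 1 - e^{-x/\lambda}$, the transform $-\lambda\ln(1 - U)$ of a uniform $U$ has the density above. A sketch ($\lambda = 2$, the seed and the sample size are arbitrary choices):

```python
import math
import random

random.seed(1)
lam = 2.0
n = 200_000
# Inverse-CDF draw: if U ~ U(0,1) then -lam * ln(1 - U) is exponential
# with density (1/lam) * exp(-x/lam).
xs = [-lam * math.log(1.0 - random.random()) for _ in range(n)]
mean = sum(xs) / n                                  # theory: lam = 2
var = sum((x - mean) ** 2 for x in xs) / (n - 1)    # theory: lam^2 = 4
```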
The standardized random variable $Z := (X - \mu)/\sigma \sim N(0, 1)$ has a standard normal distribution. The kurtosis is identically 3 since

$$K(X) = E\left[\left(\frac{X - \mu}{\sigma}\right)^4\right] = E\big[Z^4\big] = 3$$

by direct computation from the standard normal distribution. Normals have the following property: for any $t \in \mathbb{R}$

$$E\big[e^{tX}\big] = e^{t\mu + t^2\sigma^2/2}.$$

Therefore the moment generating function and characteristic function are easily deduced:

$$m(t) := E\big[e^{tX}\big] = e^{\mu t + \sigma^2 t^2/2}, \qquad \phi(t) := E\big[e^{itX}\big] = e^{i\mu t - \sigma^2 t^2/2}.$$

By symmetry all odd centered moments are zero, while all even centered moments are functions of $\sigma^2$:

$$E\big[(X - \mu)^k\big] = 0 \quad \text{for } k = 1, 3, 5, \ldots$$
$$E\big[(X - \mu)^k\big] = \sigma^k (k - 1)!! \quad \text{for } k = 2, 4, 6, \ldots$$

where $(k - 1)!!$ is the product of all odd numbers $1, 3, \ldots$ up to this odd $k - 1$ (e.g. $(2-1)!! = 1$, $(4-1)!! = 3$, $(6-1)!! = 3 \times 5$). Therefore

$$E\big[(X - \mu)^4\big] = \sigma^4 (4 - 1)!! = 3\sigma^4,$$

which gives us kurtosis: $E\big[((X - \mu)/\sigma)^4\big] = 3$. The moment generating function also gives us all moments and reveals all moments are functions of powers of $\mu$ and $\sigma^2$:

$$\frac{d}{dt}\,m(t)\Big|_{t=0} = (\mu + \sigma^2 t)\,e^{\mu t + \sigma^2 t^2/2}\Big|_{t=0} = \mu,$$
$$\frac{d^2}{dt^2}\,m(t)\Big|_{t=0} = \big(\sigma^2 + (\mu + \sigma^2 t)^2\big)\,e^{\mu t + \sigma^2 t^2/2}\Big|_{t=0} = \sigma^2 + \mu^2.$$

The parameter $\mu$ determines location (central tendency), while $\sigma^2$ determines shape (in this case, dispersion). See Figure 9. Since the only parameters that determine a normal are $\mu$ and $\sigma^2$, for a given $\mu$ the probability of being distant from $\mu$ is
larger for larger $\sigma^2$.

Example: If $X \sim N(0, 1)$ then $P(X \ge 1.5) = .0668$, while a normal with mean zero and a larger variance places correspondingly more probability beyond 1.5. See Figure 10.

FIGURE 9: Normal Distribution (various values of $\mu$ and $\sigma^2$)

FIGURE 10: Normal Probabilities

Normals (likely) do not naturally occur in the sense of precisely describing the distribution of observed events (e.g. human weight, income, interest rates). But they naturally occur in the following sense. Consider a random draw of $n$ identically distributed random variables $\{X_1, \ldots, X_n\}$ with mean $\mu$ and variance $\sigma^2$. Then under remarkably general conditions

$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \frac{X_i - \mu}{\sigma} \xrightarrow{d} N(0, 1).$$

That is, the random variable $n^{-1/2}\sum_{i=1}^{n}(X_i - \mu)/\sigma$ is approximately normally distributed as $n \to \infty$. This is a version of the Central Limit Theorem, a cornerstone of modern statistical sciences. Although in many textbooks the underlying assumptions are that the $X_i$ are iid with finite variance, in fact the variance does not need to be finite (a famously forgotten fact), and highly dependent and heterogeneous data are also allowed. We will prove this beautiful fact of the cosmos later in the semester.
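The Central Limit Theorem is easy to see in a small experiment: standardize sums of iid uniforms and compare a tail probability with the standard normal value $\Phi(1) \approx .8413$ (a standard table value, not from the notes). The seed, $n$ and the number of replications are arbitrary choices:

```python
import math
import random

random.seed(4)
n, reps = 100, 20_000
mu, sigma = 0.5, (1.0 / 12.0) ** 0.5   # mean and sd of U(0,1)

# Standardized sums Z_n = (1/sqrt(n)) * sum_i (X_i - mu)/sigma.
zs = [sum(random.random() - mu for _ in range(n)) / (sigma * math.sqrt(n))
      for _ in range(reps)]

# Empirical P(Z_n <= 1) should be close to Phi(1) ~ 0.8413.
frac = sum(z <= 1.0 for z in zs) / reps
```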
2.2.4 Chi-Squared

Let $Z_i$ be iid³ standard normals: $Z_i \sim N(0, 1)$. Then by definition $\sum_{i=1}^{k} Z_i^2$ has the distribution chi-squared with $k$ degrees of freedom, denoted $\sum_{i=1}^{k} Z_i^2 \sim \chi^2(k)$.

Consider $Y = Z^2$ for just one $Z \sim N(0, 1)$. Since standard normals are symmetric and thin tailed with support $\mathbb{R}$, a $\chi^2(1)$ random variable will be right skewed, with the bulk of probability near zero. The density follows from direct computation of the probability function:

$$P(Y \le y) = P\big(Z^2 \le y\big) = P\big(-\sqrt{y} \le Z \le \sqrt{y}\big) = \Phi(\sqrt{y}) - \Phi(-\sqrt{y}) = 2\Phi(\sqrt{y}) - 1,$$

where $\Phi$ is the standard normal distribution function; hence by the chain rule

$$f_Y(y) = \frac{d}{dy}\big(2\Phi(\sqrt{y}) - 1\big) = 2\phi(\sqrt{y}) \times \frac{1}{2\sqrt{y}} = \frac{\phi(\sqrt{y})}{\sqrt{y}},$$

where $\phi(z)$ is the standard normal pdf:

$$\phi(z) = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}.$$

The moments follow from the construction $\sum_{i=1}^{k} Z_i^2$ and independence: $E\big[\sum_{i=1}^{k} Z_i^2\big] = k$ and $Var\big[\sum_{i=1}^{k} Z_i^2\big] = 2k$.

3 TRANSFORMATIONS

3.1 Poisson

A simple linear function of Poissons is Poisson.

Claim: If $X_i \sim P(\lambda_i)$ for $i = 1, \ldots, n$ are independent then $\sum_{i=1}^{n} X_i \sim P\big(\sum_{i=1}^{n} \lambda_i\big)$.

Remark: This does not hold for a general affine combination $a + \sum_{i=1}^{n} b_i X_i$.

Proof: Consider $n = 2$; the general proof follows by repeating this argument.

³ Independent and identically distributed.
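The defining construction can be simulated directly: sum $k$ squared standard normals and check the first two moments against $k$ and $2k$. A sketch ($k = 3$, the seed and sample size are arbitrary choices):

```python
import random

random.seed(2)
k, n = 3, 100_000
# chi-squared(k) draw = sum of k squared independent standard normals.
draws = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k)) for _ in range(n)]
mean = sum(draws) / n                                   # theory: k = 3
var = sum((d - mean) ** 2 for d in draws) / (n - 1)     # theory: 2k = 6
```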
We have by mutual exclusivity and independence

$$P(X_1 + X_2 = k) = \sum_{j=0}^{k} P(X_1 = j,\, X_2 = k - j) = \sum_{j=0}^{k} P(X_1 = j)\,P(X_2 = k - j)$$
$$= \sum_{j=0}^{k} \frac{e^{-\lambda_1}\lambda_1^{j}}{j!} \times \frac{e^{-\lambda_2}\lambda_2^{k-j}}{(k-j)!} = \frac{e^{-(\lambda_1 + \lambda_2)}}{k!} \sum_{j=0}^{k} \frac{k!}{j!\,(k-j)!}\,\lambda_1^{j}\lambda_2^{k-j} = \frac{e^{-(\lambda_1 + \lambda_2)}\,(\lambda_1 + \lambda_2)^{k}}{k!},$$

which exploits the binomial expansion $\sum_{j=0}^{k} \frac{k!}{j!(k-j)!}\,\lambda_1^{j}\lambda_2^{k-j} = (\lambda_1 + \lambda_2)^k$. QED.

3.2 Normal

Normals, however, belong to a class of distributions that are closed under arbitrary affine combinations. In fact, in this class (the Stable distributions, or $\alpha$-stable) the normals are the only ones that have a finite variance: all others have an infinite variance.

Claim: If $X_i \sim N(\mu_i, \sigma_i^2)$ for $i = 1, \ldots, n$ are independent then

$$Y := a + \sum_{i=1}^{n} b_i X_i \sim N\Big(a + \sum_{i=1}^{n} b_i \mu_i,\ \sum_{i=1}^{n} b_i^2 \sigma_i^2\Big).$$

Proof: I will prove $E[e^{itY}] = e^{it\tilde{\mu} - t^2\tilde{\sigma}^2/2}$ for $\tilde{\mu} = a + \sum_{i=1}^{n} b_i \mu_i$ and $\tilde{\sigma}^2 = \sum_{i=1}^{n} b_i^2\sigma_i^2$, which is the characteristic function form of a normal. Since characteristic functions uniquely define a distribution function, the proof will then be complete. We have by independence

$$E\big[e^{itY}\big] = e^{ita}\,E\Big[\prod_{i=1}^{n} e^{itb_i X_i}\Big] = e^{ita}\prod_{i=1}^{n} E\big[e^{itb_i X_i}\big].$$

Now use $X_i \sim N(\mu_i, \sigma_i^2)$ to obtain

$$E\big[e^{itY}\big] = e^{ita}\prod_{i=1}^{n} e^{itb_i\mu_i - t^2 b_i^2\sigma_i^2/2} = \exp\Big\{it\Big(a + \sum_{i=1}^{n} b_i\mu_i\Big) - \frac{t^2}{2}\sum_{i=1}^{n} b_i^2\sigma_i^2\Big\}.$$

The latter is the characteristic function of a normal with mean $a + \sum_{i=1}^{n} b_i\mu_i$ and variance $\sum_{i=1}^{n} b_i^2\sigma_i^2$. QED.
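The convolution identity in the Poisson proof can be verified numerically term by term: the convolution of two Poisson pmfs should coincide with the pmf of $P(\lambda_1 + \lambda_2)$ (the values $\lambda_1 = 2$, $\lambda_2 = 3.5$ are arbitrary choices):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return exp(-lam) * lam**k / factorial(k)

lam1, lam2 = 2.0, 3.5
# Convolution of two independent Poissons vs. Poisson(lam1 + lam2).
max_gap = max(
    abs(sum(poisson_pmf(j, lam1) * poisson_pmf(k - j, lam2)
            for j in range(k + 1))
        - poisson_pmf(k, lam1 + lam2))
    for k in range(30)
)
```

The gap is zero up to floating-point rounding, exactly as the binomial-expansion step predicts.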
3.3 Monotonic Transforms

Let $X$ have an absolutely continuous distribution $F_X$ with density $f_X(x)$. If $Y := h(X)$ is continuous and monotonic then we can deduce its density from $f_X$. Notice

$$P(Y \le y) = P(h(X) \le y) = P\big(X \le h^{-1}(y)\big) \quad \text{if } h \text{ is monotonic increasing}$$
$$P(Y \le y) = P(h(X) \le y) = P\big(X \ge h^{-1}(y)\big) \quad \text{if } h \text{ is monotonic decreasing}.$$

Therefore

$$f_Y(y) = \frac{d}{dy}\,F_X\big(h^{-1}(y)\big) = f_X\big(h^{-1}(y)\big)\,\frac{d}{dy}h^{-1}(y) \quad \text{if increasing}$$
$$f_Y(y) = \frac{d}{dy}\Big(1 - F_X\big(h^{-1}(y)\big)\Big) = -f_X\big(h^{-1}(y)\big)\,\frac{d}{dy}h^{-1}(y) \quad \text{if decreasing}.$$

Either way, identically,

$$f_Y(y) = f_X\big(h^{-1}(y)\big)\left|\frac{d}{dy}h^{-1}(y)\right|.$$

Example: Let $X \sim U[a, b]$ with $0 \le a < b$, and $Y = X^{2/3}$. Then $F_X(x) = (x - a)/(b - a)$ for $x \in [a, b]$, $h^{-1}(y) = y^{3/2}$ and $\frac{d}{dy}h^{-1}(y) = \frac{3}{2}y^{1/2}$, hence

$$f_Y(y) = f_X\big(y^{3/2}\big)\,\frac{3}{2}y^{1/2} = \frac{3\sqrt{y}}{2(b - a)} \quad \text{on } \big[a^{2/3}, b^{2/3}\big]$$

and $f_Y(y) = 0$ everywhere else. Indeed,

$$\int_{a^{2/3}}^{b^{2/3}} \frac{3\,y^{1/2}}{2(b - a)}\,dy = \frac{y^{3/2}}{b - a}\Big|_{a^{2/3}}^{b^{2/3}} = \frac{b - a}{b - a} = 1.$$
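The change-of-variable result can be spot-checked by simulation for $X \sim U[0, 1]$ and $Y = X^{2/3}$: under $f_Y(y) = \tfrac{3}{2}\sqrt{y}$ on $[0, 1]$ we get $E[Y] = 3/5$ and $P(Y \le y) = y^{3/2}$. The seed and sample size are arbitrary choices:

```python
import random

random.seed(3)
n = 200_000
# X ~ U[0,1]; transform Y = X**(2/3).
ys = [random.random() ** (2.0 / 3.0) for _ in range(n)]

mean = sum(ys) / n                                # theory: 3/5 = 0.6
frac_below_half = sum(y <= 0.5 for y in ys) / n   # theory: 0.5**1.5 ~ 0.3536
```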
More informationNotes 6 : First and second moment methods
Notes 6 : First and second moment methods Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Roc, Sections 2.1-2.3]. Recall: THM 6.1 (Markov s inequality) Let X be a non-negative
More informationChapter 1: Revie of Calculus and Probability
Chapter 1: Revie of Calculus and Probability Refer to Text Book: Operations Research: Applications and Algorithms By Wayne L. Winston,Ch. 12 Operations Research: An Introduction By Hamdi Taha, Ch. 12 OR441-Dr.Khalid
More information2. Variance and Covariance: We will now derive some classic properties of variance and covariance. Assume real-valued random variables X and Y.
CS450 Final Review Problems Fall 08 Solutions or worked answers provided Problems -6 are based on the midterm review Identical problems are marked recap] Please consult previous recitations and textbook
More informationChapter 2 Random Variables
Stochastic Processes Chapter 2 Random Variables Prof. Jernan Juang Dept. of Engineering Science National Cheng Kung University Prof. Chun-Hung Liu Dept. of Electrical and Computer Eng. National Chiao Tung
More informationBrief Review of Probability
Brief Review of Probability Nuno Vasconcelos (Ken Kreutz-Delgado) ECE Department, UCSD Probability Probability theory is a mathematical language to deal with processes or experiments that are non-deterministic
More informationBinomial and Poisson Probability Distributions
Binomial and Poisson Probability Distributions Esra Akdeniz March 3, 2016 Bernoulli Random Variable Any random variable whose only possible values are 0 or 1 is called a Bernoulli random variable. What
More informationEconomics 241B Review of Limit Theorems for Sequences of Random Variables
Economics 241B Review of Limit Theorems for Sequences of Random Variables Convergence in Distribution The previous de nitions of convergence focus on the outcome sequences of a random variable. Convergence
More informationIEOR 6711: Stochastic Models I SOLUTIONS to the First Midterm Exam, October 7, 2008
IEOR 6711: Stochastic Models I SOLUTIONS to the First Midterm Exam, October 7, 2008 Justify your answers; show your work. 1. A sequence of Events. (10 points) Let {B n : n 1} be a sequence of events in
More informationRandomized Algorithms
Randomized Algorithms Prof. Tapio Elomaa tapio.elomaa@tut.fi Course Basics A new 4 credit unit course Part of Theoretical Computer Science courses at the Department of Mathematics There will be 4 hours
More informationChapter 2: Probability Part 1
Engineering Probability & Statistics (AGE 1150) Chapter 2: Probability Part 1 Dr. O. Phillips Agboola Sample Space (S) Experiment: is some procedure (or process) that we do and it results in an outcome.
More informationFunctions, Graphs, Equations and Inequalities
CAEM DPP Learning Outcomes per Module Module Functions, Graphs, Equations and Inequalities Learning Outcomes 1. Functions, inverse functions and composite functions 1.1. concepts of function, domain and
More informationMULTINOMIAL PROBABILITY DISTRIBUTION
MTH/STA 56 MULTINOMIAL PROBABILITY DISTRIBUTION The multinomial probability distribution is an extension of the binomial probability distribution when the identical trial in the experiment has more than
More informationSTA205 Probability: Week 8 R. Wolpert
INFINITE COIN-TOSS AND THE LAWS OF LARGE NUMBERS The traditional interpretation of the probability of an event E is its asymptotic frequency: the limit as n of the fraction of n repeated, similar, and
More informationDiscrete Random Variables
CPSC 53 Systems Modeling and Simulation Discrete Random Variables Dr. Anirban Mahanti Department of Computer Science University of Calgary mahanti@cpsc.ucalgary.ca Random Variables A random variable is
More information1 Bernoulli Distribution: Single Coin Flip
STAT 350 - An Introduction to Statistics Named Discrete Distributions Jeremy Troisi Bernoulli Distribution: Single Coin Flip trial of an experiment that yields either a success or failure. X Bern(p),X
More informationManagement Programme. MS-08: Quantitative Analysis for Managerial Applications
MS-08 Management Programme ASSIGNMENT SECOND SEMESTER 2013 MS-08: Quantitative Analysis for Managerial Applications School of Management Studies INDIRA GANDHI NATIONAL OPEN UNIVERSITY MAIDAN GARHI, NEW
More informationTheorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1
Chapter 2 Probability measures 1. Existence Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension to the generated σ-field Proof of Theorem 2.1. Let F 0 be
More informationAlvaro Rodrigues-Neto Research School of Economics, Australian National University. ANU Working Papers in Economics and Econometrics # 587
Cycles of length two in monotonic models José Alvaro Rodrigues-Neto Research School of Economics, Australian National University ANU Working Papers in Economics and Econometrics # 587 October 20122 JEL:
More informationPOISSON PROCESSES 1. THE LAW OF SMALL NUMBERS
POISSON PROCESSES 1. THE LAW OF SMALL NUMBERS 1.1. The Rutherford-Chadwick-Ellis Experiment. About 90 years ago Ernest Rutherford and his collaborators at the Cavendish Laboratory in Cambridge conducted
More informationStatistics for Managers Using Microsoft Excel (3 rd Edition)
Statistics for Managers Using Microsoft Excel (3 rd Edition) Chapter 4 Basic Probability and Discrete Probability Distributions 2002 Prentice-Hall, Inc. Chap 4-1 Chapter Topics Basic probability concepts
More informationLinear Models for Regression CS534
Linear Models for Regression CS534 Example Regression Problems Predict housing price based on House size, lot size, Location, # of rooms Predict stock price based on Price history of the past month Predict
More informationIEOR 3106: Introduction to Operations Research: Stochastic Models. Professor Whitt. SOLUTIONS to Homework Assignment 2
IEOR 316: Introduction to Operations Research: Stochastic Models Professor Whitt SOLUTIONS to Homework Assignment 2 More Probability Review: In the Ross textbook, Introduction to Probability Models, read
More informationExponential Tail Bounds
Exponential Tail Bounds Mathias Winther Madsen January 2, 205 Here s a warm-up problem to get you started: Problem You enter the casino with 00 chips and start playing a game in which you double your capital
More informationLing 289 Contingency Table Statistics
Ling 289 Contingency Table Statistics Roger Levy and Christopher Manning This is a summary of the material that we ve covered on contingency tables. Contingency tables: introduction Odds ratios Counting,
More informationMATH Notebook 5 Fall 2018/2019
MATH442601 2 Notebook 5 Fall 2018/2019 prepared by Professor Jenny Baglivo c Copyright 2004-2019 by Jenny A. Baglivo. All Rights Reserved. 5 MATH442601 2 Notebook 5 3 5.1 Sequences of IID Random Variables.............................
More informationDiscrete Probability Refresher
ECE 1502 Information Theory Discrete Probability Refresher F. R. Kschischang Dept. of Electrical and Computer Engineering University of Toronto January 13, 1999 revised January 11, 2006 Probability theory
More informationCorrections to Theory of Asset Pricing (2008), Pearson, Boston, MA
Theory of Asset Pricing George Pennacchi Corrections to Theory of Asset Pricing (8), Pearson, Boston, MA. Page 7. Revise the Independence Axiom to read: For any two lotteries P and P, P P if and only if
More informationTest Code: STA/STB (Short Answer Type) 2013 Junior Research Fellowship for Research Course in Statistics
Test Code: STA/STB (Short Answer Type) 2013 Junior Research Fellowship for Research Course in Statistics The candidates for the research course in Statistics will have to take two shortanswer type tests
More informationWeek 2. Review of Probability, Random Variables and Univariate Distributions
Week 2 Review of Probability, Random Variables and Univariate Distributions Probability Probability Probability Motivation What use is Probability Theory? Probability models Basis for statistical inference
More informationLecture 4: Probability and Discrete Random Variables
Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecture 4: Probability and Discrete Random Variables Wednesday, January 21, 2009 Lecturer: Atri Rudra Scribe: Anonymous 1
More informationBrief Review of Probability
Maura Department of Economics and Finance Università Tor Vergata Outline 1 Distribution Functions Quantiles and Modes of a Distribution 2 Example 3 Example 4 Distributions Outline Distribution Functions
More informationProbability Background
CS76 Spring 0 Advanced Machine Learning robability Background Lecturer: Xiaojin Zhu jerryzhu@cs.wisc.edu robability Meure A sample space Ω is the set of all possible outcomes. Elements ω Ω are called sample
More informationCS 361: Probability & Statistics
February 26, 2018 CS 361: Probability & Statistics Random variables The discrete uniform distribution If every value of a discrete random variable has the same probability, then its distribution is called
More informationNonlinear Programming (NLP)
Natalia Lazzati Mathematics for Economics (Part I) Note 6: Nonlinear Programming - Unconstrained Optimization Note 6 is based on de la Fuente (2000, Ch. 7), Madden (1986, Ch. 3 and 5) and Simon and Blume
More informationJOINT PROBABILITY DISTRIBUTIONS
MTH/STA 56 JOINT PROBABILITY DISTRIBUTIONS The study of random variables in the previous chapters was restricted to the idea that a random variable associates a single real number with each possible outcome
More informationCS 361: Probability & Statistics
March 14, 2018 CS 361: Probability & Statistics Inference The prior From Bayes rule, we know that we can express our function of interest as Likelihood Prior Posterior The right hand side contains the
More informationDiscrete Random Variables
Discrete Random Variables An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2014 Introduction The markets can be thought of as a complex interaction of a large number of random
More information1 Sequences of events and their limits
O.H. Probability II (MATH 2647 M15 1 Sequences of events and their limits 1.1 Monotone sequences of events Sequences of events arise naturally when a probabilistic experiment is repeated many times. For
More informationfor valid PSD. PART B (Answer all five units, 5 X 10 = 50 Marks) UNIT I
Code: 15A04304 R15 B.Tech II Year I Semester (R15) Regular Examinations November/December 016 PROBABILITY THEY & STOCHASTIC PROCESSES (Electronics and Communication Engineering) Time: 3 hours Max. Marks:
More informationA6523 Signal Modeling, Statistical Inference and Data Mining in Astrophysics Spring 2011
A6523 Signal Modeling, Statistical Inference and Data Mining in Astrophysics Spring 2011 Reading Chapter 5 (continued) Lecture 8 Key points in probability CLT CLT examples Prior vs Likelihood Box & Tiao
More informationSupplemental Material 1 for On Optimal Inference in the Linear IV Model
Supplemental Material 1 for On Optimal Inference in the Linear IV Model Donald W. K. Andrews Cowles Foundation for Research in Economics Yale University Vadim Marmer Vancouver School of Economics University
More informationQualifying Exam in Probability and Statistics. https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf
Part : Sample Problems for the Elementary Section of Qualifying Exam in Probability and Statistics https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf Part 2: Sample Problems for the Advanced Section
More informationSTAT 7032 Probability Spring Wlodek Bryc
STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,
More informationQualifying Exam in Probability and Statistics. https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf
Part 1: Sample Problems for the Elementary Section of Qualifying Exam in Probability and Statistics https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf Part 2: Sample Problems for the Advanced Section
More informationProbability Distribution
Economic Risk and Decision Analysis for Oil and Gas Industry CE81.98 School of Engineering and Technology Asian Institute of Technology January Semester Presented by Dr. Thitisak Boonpramote Department
More informationChapter 1. GMM: Basic Concepts
Chapter 1. GMM: Basic Concepts Contents 1 Motivating Examples 1 1.1 Instrumental variable estimator....................... 1 1.2 Estimating parameters in monetary policy rules.............. 2 1.3 Estimating
More informationStochastic Processes
Stochastic Processes A very simple introduction Péter Medvegyev 2009, January Medvegyev (CEU) Stochastic Processes 2009, January 1 / 54 Summary from measure theory De nition (X, A) is a measurable space
More informationTHE QUEEN S UNIVERSITY OF BELFAST
THE QUEEN S UNIVERSITY OF BELFAST 0SOR20 Level 2 Examination Statistics and Operational Research 20 Probability and Distribution Theory Wednesday 4 August 2002 2.30 pm 5.30 pm Examiners { Professor R M
More informationPart IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015
Part IA Probability Definitions Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.
More informationLecture 2: Review of Probability
Lecture 2: Review of Probability Zheng Tian Contents 1 Random Variables and Probability Distributions 2 1.1 Defining probabilities and random variables..................... 2 1.2 Probability distributions................................
More informationDoes k-th Moment Exist?
Does k-th Moment Exist? Hitomi, K. 1 and Y. Nishiyama 2 1 Kyoto Institute of Technology, Japan 2 Institute of Economic Research, Kyoto University, Japan Email: hitomi@kit.ac.jp Keywords: Existence of moments,
More informationMath 151. Rumbos Spring Solutions to Review Problems for Exam 3
Math 151. Rumbos Spring 2014 1 Solutions to Review Problems for Exam 3 1. Suppose that a book with n pages contains on average λ misprints per page. What is the probability that there will be at least
More information