Sampling Distributions and Asymptotics Part II

1 Sampling Distributions and Asymptotics Part II
Insan TUNALI, Econ Econometrics I, Koç University, 18 October 2018

2 Lecture Outline
Sampling Distributions and Asymptotics Part II: bivariate sampling distributions. We closely follow Goldberger Ch. 10. Two-digit equation numbers (such as 10.1) and result references (S1, S2, ...) in Goldberger have been preserved. See the syllabus for references to secondary sources (Greene).

3 Bivariate sampling distributions (Bivariate set-up) - 1
The primary objective of Econometrics is to uncover relations between variables. This requires shifting the focus from univariate to multivariate sampling distributions. As usual, there is much to be gained from studying the bivariate case.
Consider a bivariate population for which the pmf/pdf of the pair $(X, Y)$ is $f(x, y)$. The first and second moments include
$$E(X) = \mu_X,\quad E(Y) = \mu_Y,\quad V(X) = \sigma^2_X,\quad V(Y) = \sigma^2_Y,\quad C(X, Y) = \sigma_{XY}.$$
For nonnegative integers $(r, s)$, the raw and central moments are
$$E(X^r Y^s) = \mu'_{rs}, \qquad E(X^{*r} Y^{*s}) = \mu_{rs},$$
where $X^* = X - \mu_X$, $Y^* = Y - \mu_Y$. For example, we may write $\mu_{20} = \sigma^2_X$, $\mu_{02} = \sigma^2_Y$, $\mu_{11} = \sigma_{XY}$.

4 Bivariate sampling distributions (Bivariate set-up) - 2
A random sample of size $n$ from the population $f(x, y)$ consists of $n$ independent draws on the pair $(X, Y)$. Thus $\{(X_i, Y_i),\ i = 1, 2, \ldots, n\}$ are independently and identically distributed. Independence applies across observations, not within each observation: in general $C(X, Y) = \sigma_{XY} \neq 0$. Armed with this knowledge, the joint pmf/pdf for the random sample can be derived as
$$g_n(x_1, y_1, x_2, y_2, \ldots, x_n, y_n) = \prod_{i=1}^{n} f(x_i, y_i).$$

5 Bivariate sampling distributions (Bivariate set-up) - 3
Sample statistics that emerge under random sampling from a bivariate pdf/pmf include single-variable statistics such as $\bar X, S^2_X, \bar Y, S^2_Y$, but also joint statistics that involve both components of the random vector $(X, Y)$. One important example is the sample covariance:
$$S_{XY} = \frac{1}{n}\sum_i (X_i - \bar X)(Y_i - \bar Y).$$
We may also be concerned with the joint distribution of several statistics, such as two sample means:
$$\bar X = \frac{1}{n}\sum_i X_i, \qquad \bar Y = \frac{1}{n}\sum_i Y_i.$$
Obviously $(\bar X, \bar Y)$ will have a joint (bivariate) sampling distribution, by virtue of $(X_i, Y_i) \sim f(x, y)$.
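
A quick numerical illustration (not part of the slides): the Python sketch below draws one random sample from an assumed bivariate normal population and computes the single-variable and joint statistics named above. The population parameters, sample size, and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed population: bivariate normal with means (1, 2), unit variances, covariance 0.5
mu = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])

n = 200
sample = rng.multivariate_normal(mu, Sigma, size=n)   # n draws on the pair (X, Y)
X, Y = sample[:, 0], sample[:, 1]

Xbar, Ybar = X.mean(), Y.mean()                       # sample means
S2_X, S2_Y = X.var(), Y.var()                         # sample variances (divisor n)
S_XY = np.mean((X - Xbar) * (Y - Ybar))               # sample covariance (divisor n)

print(Xbar, Ybar, S2_X, S2_Y, S_XY)
```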

6 Bivariate sampling distributions (Bivariate set-up) - 4
What can we say about the sampling distribution of $(\bar X, \bar Y)$? From the S.M.T., we know the means and variances of $\bar X$ and $\bar Y$. We can compute their covariance by extending T5 to sums of $n$ random variables and invoking the i.i.d. property of the pairs:
$$C(\bar X, \bar Y) = \frac{1}{n^2}\, C\Big(\sum_i X_i,\ \sum_i Y_i\Big) = \frac{1}{n^2}\sum_i C(X_i, Y_i) = \frac{1}{n^2}\, n\,\sigma_{XY} = \sigma_{XY}/n.$$
In deriving the asymptotic results we will exploit the S.M.T., the L.L.N. and the C.L.T., along with the Slutsky theorems (and their extensions).
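
A small Monte Carlo check of this covariance result (illustration only; the population, sample size, and replication count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.5]])          # so sigma_XY = 0.8
n, reps = 50, 20000

means = np.empty((reps, 2))
for r in range(reps):
    s = rng.multivariate_normal(mu, Sigma, size=n)
    means[r] = s.mean(axis=0)           # (Xbar, Ybar) for this replication

emp_cov = np.cov(means[:, 0], means[:, 1])[0, 1]
print("empirical Cov(Xbar, Ybar):", emp_cov)
print("theoretical sigma_XY / n :", Sigma[0, 1] / n)
```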

7 - 1 Bivariate derivations - insights from the univariate case
The derivations we examined in the univariate case are instructive for the bivariate case. For example, consider the theory for the sampling distribution of the sample covariance:
$$S_{XY} = \frac{1}{n}\sum_i (X_i - \bar X)(Y_i - \bar Y).$$
This is a single-variable statistic, so we should be able to apply what we learned directly. $S_{XY}$ is an average, but since it is a function of sample means, the terms in the summation are not independently distributed. We need to follow the steps we used in the case of $S^2$: work with the "ideal" covariance first, then turn to $S_{XY}$ itself.
The theory for the sampling distribution of a pair of sample means requires more work: to examine how the sampling distribution of $(\bar X, \bar Y)$ behaves as $n \to \infty$, we need bivariate versions of the convergence results we studied earlier.
Remark: Technically $S_{XY}$ and $(\bar X, \bar Y)$ are sequences. In the interest of brevity, we drop the subscript $n$ that identifies a sequence.

8 - 2 Example 4a: Ideal Sample Covariance
This is the sample second joint moment about the population means:
$$M^*_{11} = \frac{1}{n}\sum_i (X_i - \mu_X)(Y_i - \mu_Y) = \frac{1}{n}\sum_i V_i = \bar V, \qquad \text{where } V_i = X^*_i Y^*_i.$$
Now $\{V_i,\ i = 1, 2, \ldots, n\}$ is a random sample on the variable $V = X^* Y^*$, and $M^*_{11} = \bar V$ is the sample mean in that random sample. Hence the earlier theory about the sample mean applies directly.

9 - 3
The population mean and variance are
$$E(V) = E(X^* Y^*) = C(X, Y) = \sigma_{XY} = \mu_{11},$$
$$V(V) = E(V^2) - E^2(V) = E(X^{*2} Y^{*2}) - E^2(X^* Y^*) = \mu_{22} - \mu_{11}^2.$$
From the S.M.T. we have the exact results
$$E(M^*_{11}) = E(\bar V) = E(V) = \mu_{11} = \sigma_{XY}, \qquad V(M^*_{11}) = V(\bar V) = V(V)/n = (\mu_{22} - \mu_{11}^2)/n.$$
From the L.L.N. and the C.L.T. we also have the asymptotic results
$$M^*_{11} \xrightarrow{P} \mu_{11}, \qquad \sqrt{n}\,(M^*_{11} - \mu_{11}) \xrightarrow{D} N(0,\ \mu_{22} - \mu_{11}^2), \qquad M^*_{11} \overset{A}{\sim} N[\mu_{11},\ (\mu_{22} - \mu_{11}^2)/n].$$

10 - 4 Example 4b: Sample Covariance
The statistic of interest is the sample second joint moment about the sample means:
$$S_{XY} = M_{11} = \frac{1}{n}\sum_i (X_i - \bar X)(Y_i - \bar Y).$$
We may use algebra to write this as
$$M_{11} = M^*_{11} - (\bar X - \mu_X)(\bar Y - \mu_Y). \qquad (10.1)$$
Direct calculation yields the exact results
$$E(S_{XY}) = E[M^*_{11} - (\bar X - \mu_X)(\bar Y - \mu_Y)] = \sigma_{XY} - C(\bar X, \bar Y) = \sigma_{XY} - \sigma_{XY}/n = \Big(1 - \frac{1}{n}\Big)\sigma_{XY},$$
and
$$V(S_{XY}) = V[M^*_{11} - (\bar X - \mu_X)(\bar Y - \mu_Y)] = \cdots = \big[(n-1)^2(\mu_{22} - \mu_{11}^2) + (n-1)(\mu_{20}\mu_{02} + \mu_{11}^2)\big]/n^3.$$
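
A Monte Carlo sketch (not from the slides; the population, sample size, and replication count are arbitrary assumptions) illustrating the exact mean result $E(S_{XY}) = (1 - 1/n)\sigma_{XY}$, i.e. the small-sample downward bias of the sample covariance with divisor $n$:

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])          # sigma_XY = 0.6
n, reps = 10, 100000

s_xy = np.empty(reps)
for r in range(reps):
    s = rng.multivariate_normal(mu, Sigma, size=n)
    xbar, ybar = s.mean(axis=0)
    s_xy[r] = np.mean((s[:, 0] - xbar) * (s[:, 1] - ybar))

print("mean of S_XY over replications:", s_xy.mean())
print("(1 - 1/n) * sigma_XY          :", (1 - 1 / n) * Sigma[0, 1])
```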

11 - 5
Turning to asymptotics,
$$S_{XY} \xrightarrow{P} \sigma_{XY}, \qquad \sqrt{n}\,(S_{XY} - \sigma_{XY}) \xrightarrow{D} N(0,\ \mu_{22} - \mu_{11}^2), \qquad S_{XY} \overset{A}{\sim} N[\sigma_{XY},\ (\mu_{22} - \mu_{11}^2)/n].$$
The first two convergence results require use of the Slutsky theorems. The third is the asymptotic distribution obtained by applying the limiting distribution to a finite sample.

12 - 6
To prove the first convergence result, recall that $S_{XY} = M^*_{11} - (\bar X - \mu_X)(\bar Y - \mu_Y)$. Using S2:
$$\operatorname{plim} S_{XY} = \operatorname{plim} M^*_{11} - \operatorname{plim}(\bar X - \mu_X)(\bar Y - \mu_Y) = \operatorname{plim} M^*_{11} - \operatorname{plim}(\bar X - \mu_X)\cdot\operatorname{plim}(\bar Y - \mu_Y).$$
Since $\operatorname{plim} M^*_{11} = \sigma_{XY}$, while $\operatorname{plim}(\bar X - \mu_X) = 0$ and $\operatorname{plim}(\bar Y - \mu_Y) = 0$, the result follows.

13 - 7
To prove the second, we first rewrite (10.1) as
$$\sqrt{n}\,(M_{11} - \mu_{11}) = \sqrt{n}\,(M^*_{11} - \mu_{11}) - UW, \qquad (10.2)$$
where $U = \sqrt[4]{n}\,(\bar X - \mu_X)$ and $W = \sqrt[4]{n}\,(\bar Y - \mu_Y)$.
Let's focus on the last term, $UW$. Since $\operatorname{plim} U = 0$ and $\operatorname{plim} W = 0$ (for instance, $U = n^{-1/4}\cdot\sqrt{n}\,(\bar X - \mu_X)$, where the second factor converges in distribution while the first goes to zero), by S2 we have $\operatorname{plim} UW = 0$. So by S3 the limiting distribution of $\sqrt{n}\,(S_{XY} - \sigma_{XY})$ is the same as the limiting distribution of $\sqrt{n}\,(M^*_{11} - \mu_{11})$.

14 - 8 Example 5: Pair of sample means
As we noted earlier, the joint distribution of a pair of sample means is a bivariate distribution. To proceed, we need the bivariate versions of the convergence results we studied earlier. Before we state these, note that the fundamentals will not change: for a random vector, convergence in probability means that each component of the vector converges in probability. Convergence in distribution means that the sequence of joint cdfs has as its limit some fixed joint cdf.

15 Bivariate theorems - 9
BIVARIATE LAW OF LARGE NUMBERS: In random sampling from any bivariate population, the sample mean vector $(\bar X, \bar Y)$ converges in probability to the population mean vector $(\mu_X, \mu_Y)$. We may write this as
$$(\bar X, \bar Y) \xrightarrow{P} (\mu_X, \mu_Y).$$
BIVARIATE CENTRAL LIMIT THEOREM: In random sampling from any bivariate population, the standardized sample mean vector
$$\Big(\sqrt{n}\,\frac{\bar X - \mu_X}{\sigma_X},\ \sqrt{n}\,\frac{\bar Y - \mu_Y}{\sigma_Y}\Big)$$
converges in distribution to the SBVN($\rho$) distribution, where $\rho = \sigma_{XY}/\sigma_X\sigma_Y$. Equivalently, in random sampling from any bivariate population,
$$(\bar X, \bar Y) \overset{A}{\sim} \mathrm{BVN}(\mu_X, \mu_Y, \sigma^2_X/n, \sigma^2_Y/n, \sigma_{XY}/n).$$
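
A simulation sketch of the bivariate C.L.T. using a deliberately non-normal population (independent exponentials mixed to create covariance); all choices below are assumptions made for illustration, not part of Goldberger's treatment.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 200, 20000

# Non-normal population: X = Z1, Y = Z1 + Z2 with Z1, Z2 iid Exponential(1).
# Then mu_X = 1, mu_Y = 2, var_X = 1, var_Y = 2, cov_XY = 1, rho = 1/sqrt(2).
Z = np.empty((reps, 2))
for r in range(reps):
    z1 = rng.exponential(1.0, size=n)
    z2 = rng.exponential(1.0, size=n)
    x, y = z1, z1 + z2
    Z[r, 0] = np.sqrt(n) * (x.mean() - 1.0) / 1.0           # standardized Xbar
    Z[r, 1] = np.sqrt(n) * (y.mean() - 2.0) / np.sqrt(2.0)  # standardized Ybar

print("means (near 0):", Z.mean(axis=0))
print("stds  (near 1):", Z.std(axis=0))
print("corr  (near 1/sqrt(2) = %.3f):" % (1 / np.sqrt(2)),
      np.corrcoef(Z[:, 0], Z[:, 1])[0, 1])
```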

16 Bivariate theorems - 10
Remarks:
R1. Although the two theorems on the previous page were stated in the context of Example 5, they can be invoked in deriving the convergence results for any pair of sample moments under random sampling.
R2. Slutsky theorems S1-S4 extend to vectors in an obvious way.
R3. We need an additional tool (an extension of S5) for handling functions of sample means.

17 Bivariate theorems - 11
BIVARIATE DELTA METHOD: Suppose $(T_1, T_2) \overset{A}{\sim} \mathrm{BVN}(\theta_1, \theta_2, \phi^2_1/n, \phi^2_2/n, \phi_{12}/n)$. Let $U = h(T_1, T_2)$ denote a function that is twice differentiable at the point $(\theta_1, \theta_2)$. Then
$$U \overset{A}{\sim} N[h(\theta_1, \theta_2),\ \phi^2/n], \qquad \text{where } \phi^2 = h_1^2\phi_1^2 + h_2^2\phi_2^2 + 2h_1 h_2\phi_{12},$$
$$h_1 \equiv h_1(\theta_1, \theta_2) = \partial h(T_1, T_2)/\partial T_1 \ \text{evaluated at } (T_1, T_2) = (\theta_1, \theta_2),$$
$$h_2 \equiv h_2(\theta_1, \theta_2) = \partial h(T_1, T_2)/\partial T_2 \ \text{evaluated at } (T_1, T_2) = (\theta_1, \theta_2).$$
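
A minimal Python sketch of this formula. The function name and the finite-difference approximation of the partial derivatives are choices made here for illustration, not notation from the slides.

```python
import numpy as np

def bivariate_delta_variance(h, theta1, theta2, phi2_1, phi2_2, phi_12, eps=1e-6):
    """Return phi^2 = h1^2*phi2_1 + h2^2*phi2_2 + 2*h1*h2*phi_12,
    with h1, h2 approximated by central finite differences at (theta1, theta2)."""
    h1 = (h(theta1 + eps, theta2) - h(theta1 - eps, theta2)) / (2 * eps)
    h2 = (h(theta1, theta2 + eps) - h(theta1, theta2 - eps)) / (2 * eps)
    return h1**2 * phi2_1 + h2**2 * phi2_2 + 2 * h1 * h2 * phi_12

# Example: U = T1 / T2 (the ratio treated in Example 6 below)
phi2 = bivariate_delta_variance(lambda t1, t2: t1 / t2,
                                theta1=1.0, theta2=2.0,
                                phi2_1=1.0, phi2_2=1.0, phi_12=0.3)
print("phi^2 for the ratio:", phi2)   # asymptotic variance of U is phi^2 / n
```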

18 - 12 Example 6: Ratio of sample means
Let $T = h(\bar X, \bar Y) = \bar X/\bar Y$, and suppose that $\mu_Y \neq 0$. Then $h(\mu_X, \mu_Y) = \mu_X/\mu_Y$, and $h(\cdot)$ is twice differentiable at the point $(\mu_X, \mu_Y)$. Since $\bar X \xrightarrow{P} \mu_X$ and $\bar Y \xrightarrow{P} \mu_Y$ (by the L.L.N.), $h(\bar X, \bar Y) \xrightarrow{P} \mu_X/\mu_Y$ (by S2). That is, $T \xrightarrow{P} \theta = \mu_X/\mu_Y$. So we know the asymptotic mean of $T$. To find the asymptotic variance, we need to apply the bivariate Delta method.

19 - 13
We compute, in turn,
$$h_1(\bar X, \bar Y) = \partial h(\bar X, \bar Y)/\partial \bar X = 1/\bar Y, \qquad \text{so } h_1(\mu_X, \mu_Y) = 1/\mu_Y,$$
and
$$h_2(\bar X, \bar Y) = \partial h(\bar X, \bar Y)/\partial \bar Y = -\bar X/\bar Y^2, \qquad \text{so } h_2(\mu_X, \mu_Y) = -\mu_X/\mu_Y^2 = -\theta/\mu_Y.$$
From the bivariate C.L.T., $(\bar X, \bar Y) \overset{A}{\sim} \mathrm{BVN}(\mu_X, \mu_Y, \sigma^2_X/n, \sigma^2_Y/n, \sigma_{XY}/n)$, so
$$T \overset{A}{\sim} N(\theta,\ \phi^2/n), \qquad \text{where } \phi^2 = (1/\mu_Y)^2(\sigma^2_X + \theta^2\sigma^2_Y - 2\theta\sigma_{XY}). \qquad (10.3)$$
Remark: Observe that this example also illustrates that the asymptotic mean (variance) and the exact mean (variance) can be different:
$$E(T) = E(\bar X/\bar Y) \neq E(\bar X)/E(\bar Y) = \mu_X/\mu_Y = \theta, \qquad V(T) = V(\bar X/\bar Y) \neq \phi^2/n.$$
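
A quick Monte Carlo check of (10.3); the bivariate normal population below is an arbitrary assumption, chosen so that $\mu_Y$ is safely away from zero.

```python
import numpy as np

rng = np.random.default_rng(4)
mu_X, mu_Y = 1.0, 2.0
var_X, var_Y, cov_XY = 1.0, 1.0, 0.5
Sigma = np.array([[var_X, cov_XY], [cov_XY, var_Y]])
n, reps = 500, 20000

theta = mu_X / mu_Y
phi2 = (1 / mu_Y) ** 2 * (var_X + theta**2 * var_Y - 2 * theta * cov_XY)  # (10.3)

T = np.empty(reps)
for r in range(reps):
    s = rng.multivariate_normal([mu_X, mu_Y], Sigma, size=n)
    T[r] = s[:, 0].mean() / s[:, 1].mean()

print("Monte Carlo n*Var(T):", n * T.var())
print("delta-method phi^2  :", phi2)
```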

20 - 14 Example 7: Sample slope
In Week 2 we introduced the BLP of $Y$ given $X$ in a bivariate distribution, namely the line $\mathrm{BLP}(Y\mid X) = \alpha + \beta X$, where $\beta = \sigma_{XY}/\sigma^2_X$ and $\alpha = \mu_Y - \beta\mu_X$. This line is also known as the population linear projection of $Y$ on $X$ in a bivariate population. The sample analog is the sample linear projection of $Y$ on $X$ in a bivariate random sample, namely the line
$$\hat Y = A + BX, \qquad \text{with } B = S_{XY}/S^2_X = M_{11}/M_{20} \ \text{ and } \ A = \bar Y - B\bar X.$$
Remark: Projection is a linear algebra term. In our bivariate context it describes the best linear approximation to $Y$ that can be formed using $X$.
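
The sample projection coefficients coincide with the ordinary least squares fit of $Y$ on $X$ and a constant. A small Python check (the data-generating process below is an arbitrary assumption, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
x = rng.normal(0.0, 1.0, size=n)
y = 0.5 + 1.5 * x + rng.normal(0.0, 1.0, size=n)   # assumed data-generating process

B = np.mean((x - x.mean()) * (y - y.mean())) / np.mean((x - x.mean()) ** 2)
A = y.mean() - B * x.mean()

slope_ls, intercept_ls = np.polyfit(x, y, deg=1)    # ordinary least squares line
print("A, B from the projection formulas:", A, B)
print("least-squares intercept and slope:", intercept_ls, slope_ls)
```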

21 - 15
Observe that the sample slope $B = S_{XY}/S^2_X$ is a function of two sample statistics, $S_{XY}$ and $S^2_X$. The asymptotic properties of these statistics, examined one by one, were the subjects of Examples 1 and 4. We now examine their ratio, following the well-established two-step procedure.

22 - 16 7a: Ideal sample slope
Consider $B^* = M^*_{11}/M^*_{20}$, which is obtained by replacing the sample means by their population counterparts:
$$M^*_{11} = \frac{1}{n}\sum_i X^*_i Y^*_i = \bar V, \qquad M^*_{20} = \frac{1}{n}\sum_i X^{*2}_i = \bar W,$$
where $V = X^* Y^*$, $W = X^{*2}$, and $X^* = X - \mu_X$, $Y^* = Y - \mu_Y$. Inspection reveals that
$$B^* = M^*_{11}/M^*_{20} = \bar V/\bar W = h(\bar V, \bar W)$$
is the ratio of sample means in a random sample on $(V, W)$, and the function $h(\cdot)$ has the same form as in Example 6. Thus we may anticipate the asymptotic distribution of the ideal sample slope to have the form $B^* \overset{A}{\sim} N(\theta, \phi^2/n)$, and focus on the derivation of $\theta$ and $\phi^2$ for the case at hand.

23 - 17
Now, by the L.L.N.,
$$\bar V \xrightarrow{P} \mu_V = E(X^* Y^*) = \sigma_{XY} \qquad \text{and} \qquad \bar W \xrightarrow{P} \mu_W = E(X^{*2}) = \sigma^2_X.$$
So
$$h(\bar V, \bar W) = \bar V/\bar W \xrightarrow{P} h(\mu_V, \mu_W) = \mu_V/\mu_W = \sigma_{XY}/\sigma^2_X = \beta.$$
Thus $\theta = \beta$, so we may write $B^* \overset{A}{\sim} N(\beta, \phi^2/n)$ where
$$\phi^2 = (1/\mu_W)^2(\sigma^2_V + \beta^2\sigma^2_W - 2\beta\sigma_{VW}). \qquad (10.4)$$
To render (10.4) operational, we need to express the moments of $V$ and $W$ in terms of the moments of $X$ and $Y$.

24 - 18
We know $\mu_W = \sigma^2_X = \mu_{20}$. It is straightforward to apply the S.M.T. to obtain the expressions for the other variances and the covariance:
$$\sigma^2_V = V(V) = E(V^2) - E^2(V) = E(X^{*2}Y^{*2}) - E^2(X^*Y^*) = \mu_{22} - \mu_{11}^2,$$
$$\sigma^2_W = V(W) = E(W^2) - E^2(W) = E(X^{*4}) - E^2(X^{*2}) = \mu_{40} - \mu_{20}^2,$$
$$\sigma_{VW} = C(V, W) = E(VW) - E(V)E(W) = E(X^*Y^*X^{*2}) - E(X^*Y^*)E(X^{*2}) = \mu_{31} - \mu_{11}\mu_{20}.$$
Substitution and rearrangement allows us to express $\phi^2$ as a function of the moments of $X$ and $Y$. Thus
$$B^* \overset{A}{\sim} N(\beta, \phi^2/n), \qquad \text{with } \phi^2 = (\mu_{22} + \beta^2\mu_{40} - 2\beta\mu_{31})/\mu_{20}^2. \qquad (10.5)$$
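
A simulation sketch that puts (10.5) to work (illustration only; the non-normal, heteroskedastic population, sample size, and replication count are assumptions). The population moments $\mu_{rs}$ are approximated with one very large sample, and the resulting $\phi^2$ is compared with $n \cdot \mathrm{Var}(B)$ from a Monte Carlo over many samples.

```python
import numpy as np

rng = np.random.default_rng(5)

def draw(n):
    # Assumed population: X uniform, Y = 1 + 2*X + heteroskedastic noise.
    x = rng.uniform(-1.0, 1.0, size=n)
    y = 1.0 + 2.0 * x + (0.5 + np.abs(x)) * rng.standard_normal(n)
    return x, y

# Approximate the population moments mu_rs = E[X*^r Y*^s] with a huge sample.
xb, yb = draw(2_000_000)
xs, ys = xb - xb.mean(), yb - yb.mean()
mu20, mu11 = np.mean(xs**2), np.mean(xs * ys)
mu22, mu40, mu31 = np.mean(xs**2 * ys**2), np.mean(xs**4), np.mean(xs**3 * ys)
beta = mu11 / mu20
phi2 = (mu22 + beta**2 * mu40 - 2 * beta * mu31) / mu20**2   # equation (10.5)

# Monte Carlo distribution of the sample slope B = M11 / M20.
n, reps = 400, 10000
B = np.empty(reps)
for r in range(reps):
    x, y = draw(n)
    B[r] = np.mean((x - x.mean()) * (y - y.mean())) / np.mean((x - x.mean())**2)

print("n * Var(B) from Monte Carlo:", n * B.var())
print("phi^2 from (10.5)          :", phi2)
```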

25 - 19 7b: Sample slope
We now return to $B = S_{XY}/S^2_X = M_{11}/M_{20}$. Based on the work we did earlier (Examples 1 and 3) we know that $M_{11} \xrightarrow{P} \mu_{11}$ and $M_{20} \xrightarrow{P} \mu_{20}$, so we can apply S2 to get
$$B \xrightarrow{P} \mu_{11}/\mu_{20} = \beta.$$
Next, we shift focus to $B - \beta$ and express it as
$$B - \beta = (B^* - \beta) + (B - B^*).$$
Clearly $\operatorname{plim}(B - B^*) = 0$, and from 7a, $\sqrt{n}\,(B^* - \beta) \xrightarrow{D} N(0, \phi^2)$. What remains to be shown is that the second term vanishes after scaling by $\sqrt{n}$.

26 - 20
The second term is
$$B - B^* = M_{11}/M_{20} - M^*_{11}/M^*_{20} = (1/M_{20})\big[(M_{11} - M^*_{11}) - (M^*_{11}/M^*_{20})(M_{20} - M^*_{20})\big],$$
so
$$\sqrt{n}\,(B - B^*) = (1/M_{20})\big[\sqrt{n}\,(M_{11} - M^*_{11}) - (M^*_{11}/M^*_{20})\,\sqrt{n}\,(M_{20} - M^*_{20})\big].$$
Now, based on our earlier work, $M_{20} \xrightarrow{P} \mu_{20}$, $(M^*_{11}/M^*_{20}) \xrightarrow{P} \mu_{11}/\mu_{20}$, $\sqrt{n}\,(M_{11} - M^*_{11}) \xrightarrow{P} 0$, and $\sqrt{n}\,(M_{20} - M^*_{20}) \xrightarrow{P} 0$. So by S3, the limiting distribution of $\sqrt{n}\,(B - \beta)$ is the same as that of $\sqrt{n}\,(B^* - \beta)$. We conclude that
$$B \overset{A}{\sim} N(\beta, \phi^2/n), \qquad \text{with } \phi^2 = (\mu_{22} + \beta^2\mu_{40} - 2\beta\mu_{31})/\mu_{20}^2.$$

27 - 21 Variance of the sample slope
The sample slope is a good candidate for quantifying the sample relation between two variables. Indeed, it will emerge as a commonly used measure. Thus its sampling variation, captured by $\phi^2$, deserves further scrutiny.
Note that the denominator of $\phi^2$ is $\mu_{20}^2 = V^2(X)$. The numerator can be written as
$$\mu_{22} + \beta^2\mu_{40} - 2\beta\mu_{31} = E(X^{*2}Y^{*2}) + \beta^2 E(X^{*4}) - 2\beta E(X^{*3}Y^*) = E\big[X^{*2}(Y^* - \beta X^*)^2\big] = E(X^{*2}U^2), \ \text{say},$$
where $U = Y^* - \beta X^* = (Y - \mu_Y) - \beta(X - \mu_X) = Y - (\alpha + \beta X)$ is the deviation from the population BLP. So
$$\phi^2 = E(X^{*2}U^2)/V^2(X). \qquad (10.6)$$
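
A sketch comparing the general formula (10.6) with the simplification $\sigma^2/V(X)$ derived on the next slide, under an assumed population whose conditional variance does vary with $X$; the data-generating process and sample size are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 2_000_000                      # large sample standing in for the population

# Assumed population: linear CEF, but E(U^2 | X) varies with X (heteroskedasticity).
x = rng.uniform(-1.0, 1.0, size=N)
u = (0.5 + np.abs(x)) * rng.standard_normal(N)   # deviation from the BLP
y = 1.0 + 2.0 * x + u

xs = x - x.mean()
var_x = np.mean(xs**2)
phi2_general = np.mean(xs**2 * u**2) / var_x**2   # equation (10.6)
phi2_constant = np.mean(u**2) / var_x             # sigma^2 / V(X): valid only if E(U^2|X) is constant

print("phi^2 from (10.6)                 :", phi2_general)
print("homoskedastic formula sigma^2/V(X):", phi2_constant)
```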

28 - 22
A special case arises when $E(U^2\mid X)$ does not vary with $X$. Let $\sigma^2$ denote the constant value of $E(U^2\mid X)$. In this case the numerator of (10.6) simplifies as
$$E(X^{*2}U^2) = E_X[E(X^{*2}U^2\mid X)] = E_X[X^{*2}E(U^2\mid X)] = E[X^{*2}\sigma^2] = \sigma^2 E(X^{*2}) = \sigma^2 V(X).$$
So in this special case, $\phi^2 = \sigma^2/V(X)$, and we may conclude that the asymptotic variance of the sample slope will be large when the deviation from the BLP has a large (and constant) variance and/or the marginal variance of $X$ is small.

29 - 23
How can this situation arise, and how can we detect it? Suppose the conditional expectation function $E(Y\mid X)$ is linear, and let $U = Y - E(Y\mid X)$. Then $E(U^2\mid X) = V(Y\mid X)$. Thus constant $E(U^2\mid X)$ means the conditional variance function $V(Y\mid X)$ is also constant over $X$. In this case we can say that the asymptotic variation of the sample slope will be large if the conditional variance of $Y$ is large and/or the marginal variance of $X$ is small.
These conditions (linear CEF, constant CVF) imply restrictions on magnitudes that can be observed in a sample, and can therefore be verified (i.e. tested).
Remark: We studied one population, the bivariate normal, in which both conditions, linearity of the CEF and constancy of the CVF, are satisfied.
