Asymptotic Statistics-III. Changliang Zou

The multivariate central limit theorem

Theorem (Multivariate CLT for the iid case). Let $X_i$ be iid random $p$-vectors with mean $\mu$ and covariance matrix $\Sigma$. Then
$$\sqrt{n}(\bar{X} - \mu) \xrightarrow{d} N_p(0, \Sigma).$$
By the Cramer-Wold device, this can be proved by finding the limit distribution of the sequence of real variables
$$c^T\left(\frac{1}{\sqrt{n}}\sum_{i=1}^n (X_i - \mu)\right) = \frac{1}{\sqrt{n}}\sum_{i=1}^n (c^T X_i - c^T\mu).$$

Because the random variables $c^T X_i - c^T\mu$ are iid with zero mean and variance $c^T\Sigma c$, this sequence is $AN(0, c^T\Sigma c)$ by the univariate CLT. This is exactly the distribution of $c^T X$ if $X$ possesses an $N_p(0, \Sigma)$ distribution.

Example. Suppose that $X_1, \dots, X_n$ is a random sample from the Poisson distribution with mean $\theta$. Let $Z_n$ be the proportion of zeros observed, i.e., $Z_n = n^{-1}\sum_{i=1}^n I\{X_i = 0\}$. Let us find the joint asymptotic distribution of $(\bar{X}_n, Z_n)$.

Note that $E(X_1) = \theta$, $EI\{X_1 = 0\} = e^{-\theta}$, $\mathrm{var}(X_1) = \theta$, $\mathrm{var}(I\{X_1 = 0\}) = e^{-\theta}(1 - e^{-\theta})$, and $E[X_1 I\{X_1 = 0\}] = 0$, so that $\mathrm{cov}(X_1, I\{X_1 = 0\}) = -\theta e^{-\theta}$. Therefore,
$$\sqrt{n}\left((\bar{X}_n, Z_n) - (\theta, e^{-\theta})\right) \xrightarrow{d} N_2(0, \Sigma), \quad \text{where } \Sigma = \begin{pmatrix} \theta & -\theta e^{-\theta} \\ -\theta e^{-\theta} & e^{-\theta}(1 - e^{-\theta}) \end{pmatrix}.$$
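The following is a minimal simulation sketch, not part of the original notes, that checks the limiting covariance matrix of this example empirically; the values of $\theta$, $n$ and the number of replications are arbitrary choices.

```python
# Simulation sketch: empirical check of the joint asymptotic covariance of
# (X_bar_n, Z_n) for Poisson(theta) samples.
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 1.5, 2000, 5000

stats = np.empty((reps, 2))
for r in range(reps):
    x = rng.poisson(theta, size=n)
    stats[r] = [x.mean(), np.mean(x == 0)]

# Scale by sqrt(n) around the limiting means (theta, e^{-theta}).
scaled = np.sqrt(n) * (stats - np.array([theta, np.exp(-theta)]))
print("empirical covariance:\n", np.cov(scaled.T))

Sigma = np.array([[theta, -theta * np.exp(-theta)],
                  [-theta * np.exp(-theta), np.exp(-theta) * (1 - np.exp(-theta))]])
print("theoretical Sigma:\n", Sigma)
```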

Exercises

1. Consider two sequences of random variables $X_n$ and $Y_n$. Show that if $(X_n - EX_n)/\sqrt{\mathrm{var}\,X_n} \xrightarrow{d} X$ and $\mathrm{corr}(X_n, Y_n) \to 1$, then $(Y_n - EY_n)/\sqrt{\mathrm{var}\,Y_n} \xrightarrow{d} X$.

2. Let $X_1, X_2, \dots$ be iid double exponential (Laplace) random variables with density $f(x) = (2\tau)^{-1}\exp\{-|x|/\tau\}$, where $\tau$ is a positive parameter that represents the mean deviation, i.e., $\tau = E|X|$. Let $\bar{X}_n = n^{-1}\sum_{i=1}^n X_i$ and $\bar{Y}_n = n^{-1}\sum_{i=1}^n |X_i|$.
(a) Find the joint asymptotic distribution of $\bar{X}_n$ and $\bar{Y}_n$.
(b) Find the asymptotic distribution of $(\bar{Y}_n - \tau)/\bar{X}_n$.

Solution. (a) Let $Y_i = |X_i|$. Then the $(X_i, Y_i)$ are iid with $E(X_i, Y_i) = (0, \tau)$. Since $EX_i^2 = 2\tau^2$, we have $\mathrm{var}\,X_i = 2\tau^2$ and $\mathrm{var}\,Y_i = 2\tau^2 - \tau^2 = \tau^2$. We also have $\mathrm{cov}(X_i, Y_i) = 0$. Therefore, from the multivariate CLT,
$$\sqrt{n}\,(\bar{X}_n, \bar{Y}_n - \tau) \xrightarrow{d} N_2\!\left(0, \begin{pmatrix} 2\tau^2 & 0 \\ 0 & \tau^2 \end{pmatrix}\right).$$

(b) From the continuous mapping theorem with $g(x, y) = y/x$, which is continuous except on the line $x = 0$, we have
$$(\bar{Y}_n - \tau)/\bar{X}_n = \sqrt{n}(\bar{Y}_n - \tau)/(\sqrt{n}\,\bar{X}_n) \xrightarrow{d} V/U,$$
where $U$ and $V$ are independent normal random variables with zero means and variances $2\tau^2$ and $\tau^2$, respectively. This ratio has a Cauchy distribution with median zero and scale parameter $1/\sqrt{2}$, independent of $\tau$. Of course, $(\bar{Y}_n - \tau)/(\bar{X}_n/\sqrt{2})$ has a standard Cauchy distribution.
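Below is a minimal simulation sketch, not part of the notes, comparing the ratio in (b), rescaled by $\sqrt{2}$, with the standard Cauchy distribution; the parameter values are arbitrary and SciPy is assumed to be available.

```python
# Simulation sketch: compare (Y_bar - tau)/(X_bar/sqrt(2)) with a standard Cauchy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
tau, n, reps = 2.0, 5000, 20000

ratios = np.empty(reps)
for r in range(reps):
    x = rng.laplace(loc=0.0, scale=tau, size=n)   # Laplace with mean deviation tau
    xbar, ybar = x.mean(), np.abs(x).mean()
    ratios[r] = (ybar - tau) / (xbar / np.sqrt(2))

# Compare selected quantiles with those of a standard Cauchy.
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print("empirical:  ", np.quantile(ratios, qs))
print("Cauchy(0,1):", stats.cauchy.ppf(qs))
```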

CLT: existence of a variance is not necessary

Definition. A function $g: \mathbb{R} \to \mathbb{R}$ is called slowly varying at $\infty$ if, for every $t > 0$, $\lim_{x\to\infty} g(tx)/g(x) = 1$.

Examples: $\log x$, $x/(1+x)$, and indeed any function with a finite positive limit as $x\to\infty$; $x$ or $e^x$ are not slowly varying.

Theorem. Let $X_1, X_2, \dots$ be iid from a CDF $F$ on $\mathbb{R}$. Let $v(x) = \int_{-x}^{x} y^2\, dF(y)$. Then there exist constants $\{a_n\}$, $\{b_n\}$ such that
$$\frac{\sum_{i=1}^n X_i - a_n}{b_n} \xrightarrow{d} N(0, 1)$$
if and only if $v(x)$ is slowly varying at $\infty$. If $F$ has a finite second moment, then $v(x)$ is slowly varying at $\infty$.

Example. Suppose $X_1, X_2, \dots$ are iid from a t-distribution with 2 degrees of freedom ($t(2)$), which has a finite mean but not a finite variance. The density is given by $f(y) = c/(2 + y^2)^{3/2}$ for some positive $c$. By a direct integration, for some other constant $k$,
$$v(x) = k\left[\operatorname{arcsinh}(x/\sqrt{2}) - \frac{x}{\sqrt{2 + x^2}}\right].$$
Using the fact that $\operatorname{arcsinh}(x) = \log(2x) + O(x^{-2})$ as $x\to\infty$, we get, for any $t > 0$, $v(tx)/v(x) \to 1$. Hence the suitably centered and normalized partial sums $\sum_{i=1}^n X_i$ converge to a normal distribution. The centering can be taken to be zero for the centered t-distribution; it can be shown that the required normalization is $b_n = \sqrt{n\log n}$.
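A minimal simulation sketch of this example follows (not from the notes); the sample size, replication count, and seed are arbitrary, and the approximation improves only slowly because of the logarithmic normalization.

```python
# Simulation sketch: for iid t(2) samples, which have no finite variance, scale the
# partial sum by b_n = sqrt(n log n) and inspect how close it is to N(0, 1).
import numpy as np

rng = np.random.default_rng(2)
n, reps = 50_000, 2000

bn = np.sqrt(n * np.log(n))
z = np.array([rng.standard_t(df=2, size=n).sum() for _ in range(reps)]) / bn

print("mean:", z.mean())                              # should be near 0
print("sd:  ", z.std())                               # approaches 1 slowly as n grows
print("P(|Z| <= 1.96):", np.mean(np.abs(z) <= 1.96))  # compare with 0.95
```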

CLT for the independent, not necessarily iid, case

Theorem (Lindeberg-Feller). Suppose $\{X_n\}$ is a sequence of independent variables with means $\mu_n$ and variances $\sigma_n^2 < \infty$. Let $s_n^2 = \sum_{i=1}^n \sigma_i^2$. If for any $\epsilon > 0$
$$\frac{1}{s_n^2}\sum_{j=1}^n \int_{|x - \mu_j| > \epsilon s_n} (x - \mu_j)^2\, dF_j(x) \to 0, \qquad (1)$$
where $F_j$ is the CDF of $X_j$, then
$$\frac{\sum_{i=1}^n (X_i - \mu_i)}{s_n} \xrightarrow{d} N(0, 1).$$
Condition (1) is called the Lindeberg-Feller condition.

Example. Let $X_1, X_2, \dots$ be independent variables such that $X_j$ has the uniform distribution on $[-j, j]$, $j = 1, 2, \dots$. Let us verify that the conditions of the theorem are satisfied. Note that $EX_j = 0$ and $\sigma_j^2 = \frac{1}{2j}\int_{-j}^{j} x^2\, dx = j^2/3$ for all $j$, so
$$s_n^2 = \sum_{j=1}^n \sigma_j^2 = \frac{1}{3}\sum_{j=1}^n j^2 = \frac{n(n+1)(2n+1)}{18}.$$
For any $\epsilon > 0$, $n < \epsilon s_n$ for sufficiently large $n$, since $\lim_{n\to\infty} n/s_n = 0$. Because $|X_j| \le j \le n$, when $n$ is sufficiently large we have $E(X_j^2 I\{|X_j| > \epsilon s_n\}) = 0$ for every $j \le n$, and consequently $\sum_{j=1}^n E(X_j^2 I\{|X_j| > \epsilon s_n\}) = 0$. Since also $s_n \to \infty$, Lindeberg's condition holds.
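As an illustration (not part of the notes), the following sketch simulates these partial sums with the normalization $s_n$ derived above; $n$ and the replication count are arbitrary choices.

```python
# Simulation sketch: independent X_j ~ Uniform[-j, j]; check that sum(X_j)/s_n is
# approximately N(0, 1), with s_n^2 = n(n+1)(2n+1)/18.
import numpy as np

rng = np.random.default_rng(3)
n, reps = 500, 5000

j = np.arange(1, n + 1)
s_n = np.sqrt(n * (n + 1) * (2 * n + 1) / 18.0)

# Each row draws X_1, ..., X_n with X_j uniform on [-j, j].
x = rng.uniform(-1.0, 1.0, size=(reps, n)) * j
z = x.sum(axis=1) / s_n

print("mean:", z.mean(), "sd:", z.std())        # roughly 0 and 1
print("P(Z <= 1.645):", np.mean(z <= 1.645))    # compare with 0.95
```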

It is often hard to verify the Lindeberg-Feller condition. A simpler sufficient condition is given by the following theorem.

Theorem (Liapounov). Suppose $\{X_n\}$ is a sequence of independent variables with means $\mu_n$ and variances $\sigma_n^2 < \infty$. Let $s_n^2 = \sum_{i=1}^n \sigma_i^2$. If for some $\delta > 0$
$$\frac{1}{s_n^{2+\delta}}\sum_{j=1}^n E|X_j - \mu_j|^{2+\delta} \to 0 \qquad (2)$$
as $n\to\infty$, then
$$\frac{\sum_{i=1}^n (X_i - \mu_i)}{s_n} \xrightarrow{d} N(0, 1).$$

Liapounov CLT

Condition (2) holds, for instance, if $s_n \to \infty$, $\sup_{j \ge 1} E|X_j - \mu_j|^{2+\delta} < \infty$, and $n^{-1}s_n^2$ is bounded away from zero. In practice, one usually works with $\delta = 1$ or $2$. If the $X_i$ are uniformly bounded and $s_n \to \infty$, the condition is immediately satisfied with $\delta = 1$.

Example. Let $X_1, X_2, \dots$ be independent random variables, and suppose that $X_i$ has the binomial distribution $\mathrm{BIN}(p_i, 1)$, $i = 1, 2, \dots$. For each $i$, $EX_i = p_i$ and
$$E|X_i - EX_i|^3 = (1 - p_i)^3 p_i + p_i^3(1 - p_i) \le 2p_i(1 - p_i).$$
Hence $\sum_{i=1}^n E|X_i - EX_i|^3 \le 2s_n^2$, since $s_n^2 = \sum_{i=1}^n E|X_i - EX_i|^2 = \sum_{i=1}^n p_i(1 - p_i)$. Liapounov's condition (2) therefore holds with $\delta = 1$ whenever $s_n \to \infty$, since the ratio in (2) is at most $2/s_n$. For example, if $p_i = 1/i$, or if $M_1 \le p_i \le M_2$ for two constants $M_1, M_2 \in (0, 1)$, then $s_n \to \infty$. Accordingly, by Liapounov's theorem,
$$\frac{\sum_{i=1}^n (X_i - p_i)}{s_n} \xrightarrow{d} N(0, 1).$$
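A minimal simulation sketch (not from the notes) for the case $p_i = 1/i$ follows; since $s_n^2$ grows only like $\log n$ here, the normal approximation improves slowly.

```python
# Simulation sketch: independent Bernoulli(p_i) with p_i = 1/i; inspect how close
# sum(X_i - p_i)/s_n is to N(0, 1).  Convergence is slow because s_n^2 ~ log n.
import numpy as np

rng = np.random.default_rng(4)
n, reps = 10_000, 5000

p = 1.0 / np.arange(1, n + 1)
s_n = np.sqrt(np.sum(p * (1 - p)))

x = rng.random((reps, n)) < p          # each row: X_1, ..., X_n
z = (x - p).sum(axis=1) / s_n

print("s_n^2 =", s_n**2)                               # grows like log n
print("mean:", z.mean(), "sd:", z.std())
print("P(|Z| <= 1.96):", np.mean(np.abs(z) <= 1.96))   # compare with 0.95
```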

CLT for double array and triangular array

Double array:
$X_{11}$ with distribution $F_1$;
$X_{21}, X_{22}$ independent, each with distribution $F_2$;
...
$X_{n1}, \dots, X_{nn}$ independent, each with distribution $F_n$.

Triangular array:
$X_{11}$ with distribution $F_{11}$;
$X_{21}, X_{22}$ independent, with distributions $F_{21}, F_{22}$;
...
$X_{n1}, \dots, X_{nn}$ independent, with distributions $F_{n1}, \dots, F_{nn}$.

CLT for double array

Theorem. Let the $X_{ij}$ be distributed as a double array. Then
$$P\!\left(\frac{\sqrt{n}(\bar{X}_n - \mu_n)}{\sigma_n} \le x\right) \to \Phi(x)$$
as $n\to\infty$ for any sequence $F_n$ with mean $\mu_n$ and variance $\sigma_n^2$ for which $E_n|X_{n1} - \mu_n|^3/\sigma_n^3 = o(\sqrt{n})$. Here $E_n$ denotes the expectation under $F_n$.

E.g., $\mathrm{Bin}(p_n, n)$, where the success probability depends on $n$.
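The following sketch (not from the notes) illustrates the double-array CLT with row distribution $F_n = \mathrm{Bernoulli}(p_n)$; the choice $p_n = 1/\sqrt{n}$ is an assumption made here so that the third-moment condition $E_n|X_{n1} - \mu_n|^3/\sigma_n^3 = o(\sqrt{n})$ is satisfied.

```python
# Simulation sketch: double array with row distribution Bernoulli(p_n), p_n = 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(5)
n, reps = 20_000, 5000

p_n = 1.0 / np.sqrt(n)
mu_n, sigma_n = p_n, np.sqrt(p_n * (1 - p_n))

xbar = rng.binomial(n, p_n, size=reps) / n         # row means of the n-th row
z = np.sqrt(n) * (xbar - mu_n) / sigma_n

print("mean:", z.mean(), "sd:", z.std())           # roughly 0 and 1
print("P(Z <= 1.96):", np.mean(z <= 1.96))         # compare with 0.975
```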

CLT for triangular array

Theorem. Let the $X_{ij}$ be distributed as a triangular array and let $E(X_{ij}) = \mu_{ij}$, $\mathrm{var}(X_{ij}) = \sigma_{ij}^2 < \infty$, and $s_n^2 = \sum_{j=1}^n \sigma_{nj}^2$. Then
$$\frac{\sum_{j=1}^n (X_{nj} - \mu_{nj})}{s_n} \xrightarrow{d} N(0, 1),$$
provided that for some $\delta > 0$
$$\frac{1}{s_n^{2+\delta}}\sum_{j=1}^n E|X_{nj} - \mu_{nj}|^{2+\delta} \to 0.$$

Hajek-Sidak CLT

Theorem (Hajek-Sidak). Suppose $X_1, X_2, \dots$ are iid random variables with mean $\mu$ and variance $\sigma^2 < \infty$. Let $c_n = (c_{n1}, c_{n2}, \dots, c_{nn})$ be a vector of constants such that
$$\frac{\max_{1\le i\le n} c_{ni}^2}{\sum_{j=1}^n c_{nj}^2} \to 0 \qquad (3)$$
as $n\to\infty$. Then
$$\frac{\sum_{i=1}^n c_{ni}(X_i - \mu)}{\sigma\sqrt{\sum_{j=1}^n c_{nj}^2}} \xrightarrow{d} N(0, 1).$$

Condition (3) ensures that no single coefficient dominates the vector $c_n$, and is referred to as the Hajek-Sidak condition. For example, if $c_n = (1, 0, \dots, 0)$, then the condition would fail and so would the theorem.
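A small numerical check (not from the notes) of condition (3) for two weight sequences, one satisfying it and one violating it:

```python
# Sketch: evaluate the Hajek-Sidak ratio max_i c_i^2 / sum_j c_j^2 for two designs.
import numpy as np

def hs_ratio(c):
    """Hajek-Sidak ratio of a weight vector c."""
    c = np.asarray(c, dtype=float)
    return np.max(c**2) / np.sum(c**2)

for n in [10, 100, 1000, 10000]:
    balanced = np.arange(1, n + 1)                   # c_ni = i: ratio ~ 3/n -> 0
    degenerate = np.zeros(n); degenerate[0] = 1.0    # c_n = (1, 0, ..., 0): ratio = 1
    print(n, hs_ratio(balanced), hs_ratio(degenerate))
```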

Example (simple linear regression). Consider the simple linear regression model $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$, where the $\varepsilon_i$ are iid with mean 0 and variance $\sigma^2$ but are not necessarily normally distributed. The least squares estimate of $\beta_1$ based on $n$ observations is
$$\hat{\beta}_1 = \frac{\sum_{i=1}^n (y_i - \bar{y}_n)(x_i - \bar{x}_n)}{\sum_{i=1}^n (x_i - \bar{x}_n)^2} = \beta_1 + \frac{\sum_{i=1}^n \varepsilon_i (x_i - \bar{x}_n)}{\sum_{i=1}^n (x_i - \bar{x}_n)^2}.$$

Thus $\hat{\beta}_1 = \beta_1 + \sum_{i=1}^n \varepsilon_i c_{ni} / \sum_{j=1}^n c_{nj}^2$, where $c_{ni} = x_i - \bar{x}_n$. By the Hajek-Sidak theorem,
$$\frac{\sqrt{\sum_{j=1}^n c_{nj}^2}\,(\hat{\beta}_1 - \beta_1)}{\sigma} = \frac{\sum_{i=1}^n \varepsilon_i c_{ni}}{\sigma\sqrt{\sum_{j=1}^n c_{nj}^2}} \xrightarrow{d} N(0, 1),$$
provided that
$$\frac{\max_{1\le i\le n} (x_i - \bar{x}_n)^2}{\sum_{j=1}^n (x_j - \bar{x}_n)^2} \to 0$$
as $n\to\infty$, which holds under mild conditions on the design variables.
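Below is a minimal simulation sketch (not from the notes) of this regression example with centered exponential errors; the design, parameter values and error distribution are arbitrary choices.

```python
# Simulation sketch: simple linear regression with non-normal errors; check the
# Hajek-Sidak normalization sqrt(sum c_ni^2) (beta1_hat - beta1) / sigma against N(0, 1).
import numpy as np

rng = np.random.default_rng(6)
n, reps = 500, 5000
beta0, beta1, sigma = 1.0, 2.0, 1.0

x = np.linspace(0, 1, n)                 # fixed design
c = x - x.mean()
scale = np.sqrt(np.sum(c**2))

z = np.empty(reps)
for r in range(reps):
    eps = sigma * (rng.exponential(1.0, n) - 1.0)    # mean 0, variance sigma^2
    y = beta0 + beta1 * x + eps
    b1_hat = np.sum((y - y.mean()) * c) / np.sum(c**2)
    z[r] = scale * (b1_hat - beta1) / sigma

print("mean:", z.mean(), "sd:", z.std())             # roughly 0 and 1
```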

Lindeberg-Feller multivariate CLT

Theorem (Lindeberg-Feller, multivariate). Suppose $X_i$ is a sequence of independent random vectors with means $\mu_i$, covariances $\Sigma_i$ and distribution functions $F_i$. Suppose that $\frac{1}{n}\sum_{i=1}^n \Sigma_i \to \Sigma$ as $n\to\infty$, and that for any $\epsilon > 0$
$$\frac{1}{n}\sum_{j=1}^n \int_{\|x - \mu_j\| > \epsilon\sqrt{n}} \|x - \mu_j\|^2\, dF_j(x) \to 0.$$
Then
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n (X_i - \mu_i) \xrightarrow{d} N(0, \Sigma).$$

Example (multiple regression). In the linear regression problem, we observe a vector $y = X\beta + \varepsilon$ for a fixed or random matrix $X$ of full rank, and an error vector $\varepsilon$ with iid components with mean zero and variance $\sigma^2$. The least squares estimator of $\beta$ is $\hat{\beta} = (X^TX)^{-1}X^Ty$. This estimator is unbiased and has covariance matrix $\sigma^2(X^TX)^{-1}$. If the error vector $\varepsilon$ is normally distributed, then $\hat{\beta}$ is exactly normally distributed. Under reasonable conditions on the design matrix, $\hat{\beta}$ is asymptotically normally distributed for a large range of error distributions.

Here we fix $p$ and let $n$ tend to infinity. This follows from the representation
$$(X^TX)^{1/2}(\hat{\beta} - \beta) = (X^TX)^{-1/2}X^T\varepsilon = \sum_{i=1}^n a_{ni}\varepsilon_i,$$
where $a_{n1}, \dots, a_{nn}$ are the columns of the $(p \times n)$ matrix $(X^TX)^{-1/2}X^T =: A$. This sequence is asymptotically normal if the vectors $a_{n1}\varepsilon_1, \dots, a_{nn}\varepsilon_n$ satisfy the Lindeberg conditions. The norming matrix $(X^TX)^{1/2}$ has been chosen to ensure that the vectors in the display have covariance matrix $\sigma^2 I_p$ for every $n$. The remaining condition is
$$\sum_{i=1}^n \|a_{ni}\|^2\, E\varepsilon_i^2 I\{\|a_{ni}\|\,|\varepsilon_i| > \epsilon\} \to 0.$$

Because $\sum_{i=1}^n \|a_{ni}\|^2 = \mathrm{tr}(AA^T) = p$, it suffices that
$$\max_i E\varepsilon_i^2 I\{\|a_{ni}\|\,|\varepsilon_i| > \epsilon\} \to 0.$$
The expectation $E\varepsilon_i^2 I\{\|a_{ni}\|\,|\varepsilon_i| > \epsilon\}$ can be bounded by $\epsilon^{-k} E|\varepsilon_i|^{k+2}\|a_{ni}\|^k$, so a set of sufficient conditions is
$$\sum_{i=1}^n \|a_{ni}\|^k \to 0; \qquad E|\varepsilon_1|^k < \infty, \quad k > 2.$$
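A minimal simulation sketch (not from the notes) of the multiple regression example; the design matrix, error distribution and parameter values are arbitrary, and SciPy's matrix square root is used for the norming matrix.

```python
# Simulation sketch: multiple regression with iid centered-exponential errors; check
# that (X^T X)^{1/2}(beta_hat - beta) is approximately N(0, sigma^2 I_p).
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(7)
n, p, reps, sigma = 400, 3, 3000, 1.0
beta = np.array([1.0, -2.0, 0.5])

X = np.column_stack([np.ones(n), np.linspace(0, 1, n), rng.normal(size=n)])
M = np.real(sqrtm(X.T @ X))                            # norming matrix (X^T X)^{1/2}

zs = np.empty((reps, p))
for r in range(reps):
    eps = sigma * (rng.exponential(1.0, n) - 1.0)      # mean 0, variance sigma^2
    y = X @ beta + eps
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    zs[r] = M @ (beta_hat - beta)

print("component means:", zs.mean(axis=0))             # roughly 0
print("covariance:\n", np.cov(zs.T))                   # roughly sigma^2 * I_p
```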

CLT for a random number of summands

Here the number of terms present in the partial sum is itself a random variable. Precisely, $\{N(t)\}$, $t \ge 0$, is a family of (nonnegative) integer-valued random variables, and we want to approximate the distribution of $T_{N(t)}$, where $T_n = X_1 + \cdots + X_n$.

Theorem (Anscombe-Renyi). Let $X_i$ be iid with mean $\mu$ and a finite variance $\sigma^2$, let $\{N_n\}$ be a sequence of (nonnegative) integer-valued random variables, and let $\{a_n\}$ be a sequence of positive constants tending to $\infty$ such that $N_n/a_n \xrightarrow{p} c$, $0 < c < \infty$, as $n\to\infty$. Then
$$\frac{T_{N_n} - N_n\mu}{\sigma\sqrt{N_n}} \xrightarrow{d} N(0, 1) \quad \text{as } n\to\infty.$$

Example (coupon collection problem). Consider a problem in which a person keeps purchasing boxes of cereal until she obtains a full set of some $n$ coupons. The assumption is that each box, independently of the others, is equally likely to contain any of the $n$ coupons. Suppose that the costs of buying the cereal boxes are iid with some mean $\mu$ and some variance $\sigma^2$. If it takes $N_n$ boxes to obtain the complete set of all $n$ coupons, then $N_n/(n\ln n) \xrightarrow{p} 1$ as $n\to\infty$. The total cost to the customer of obtaining the complete set of coupons is $T_{N_n} = X_1 + \cdots + X_{N_n}$, and
$$\frac{T_{N_n} - N_n\mu}{\sigma\sqrt{n\ln n}} \xrightarrow{d} N(0, 1).$$
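The following is a minimal simulation sketch (not from the notes) of the coupon collection example, with exponential box costs chosen here purely for illustration.

```python
# Simulation sketch: coupon collector with iid exponential box costs (mean mu,
# sd mu); check the Anscombe-Renyi normalization (T_Nn - Nn*mu)/(sigma*sqrt(n ln n)).
import numpy as np

rng = np.random.default_rng(8)
n, reps, mu = 500, 2000, 5.0
sigma = mu                                  # for Exp(mean=mu), sd = mu

z = np.empty(reps)
ratio = np.empty(reps)
for r in range(reps):
    # Number of boxes needed: sum of geometric waiting times for each new coupon,
    # with success probabilities n/n, (n-1)/n, ..., 1/n.
    p = (n - np.arange(n)) / n
    N = rng.geometric(p).sum()
    costs = rng.exponential(mu, size=N)
    z[r] = (costs.sum() - N * mu) / (sigma * np.sqrt(n * np.log(n)))
    ratio[r] = N / (n * np.log(n))

print("N_n/(n ln n):", ratio.mean())        # approaches 1 slowly, about (ln n + gamma)/ln n
print("mean:", z.mean(), "sd:", z.std())    # roughly 0 and 1
```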

[On the asymptotic behavior of $N_n$.] Let $t_i$ be the number of boxes needed to collect the $i$-th distinct coupon after $i - 1$ coupons have been collected. The probability of obtaining a new coupon given $i - 1$ distinct coupons is $p_i = (n - i + 1)/n$, so $t_i$ has a geometric distribution with expectation $1/p_i$, and $N_n = \sum_{i=1}^n t_i$. By the WLLN,
$$\frac{1}{n\ln n} N_n - \frac{1}{n\ln n}\sum_{i=1}^n p_i^{-1} \xrightarrow{p} 0.$$
Moreover,
$$\frac{1}{n\ln n}\sum_{i=1}^n p_i^{-1} = \frac{1}{n\ln n}\sum_{i=1}^n \frac{n}{n - i + 1} = \frac{1}{\ln n}\sum_{i=1}^n \frac{1}{i} =: \frac{1}{\ln n} H_n.$$
Since $H_n = \ln n + \gamma + o(1)$, where $\gamma$ is the Euler constant, it follows that $N_n/(n\ln n) \xrightarrow{p} 1$.

Exercises

1. Suppose $(X_i, Y_i)$, $i = 1, \dots, n$, are iid bivariate normal samples with $E(X_1) = \mu_1$, $E(Y_1) = \mu_2$, $\mathrm{var}(X_1) = \sigma_1^2$, $\mathrm{var}(Y_1) = \sigma_2^2$, and $\mathrm{corr}(X_1, Y_1) = \rho$. The standard test of the hypothesis $H_0: \rho = 0$, or equivalently $H_0$: $X, Y$ are independent, rejects $H_0$ when the sample correlation $r_n$ is sufficiently large in absolute value. Please find the asymptotic critical value.

2. Suppose $X_i \overset{\text{indep}}{\sim} (\mu, \sigma_i^2)$, where $\sigma_i^2 = i\delta$. Find the asymptotic distribution of the best linear unbiased estimate of $\mu$.

Homework

1. Prove Theorem or Theorem (choose one of them).

2. Suppose $X_1, \dots, X_n$ are i.i.d. $N(\mu, \mu^2)$, $\mu > 0$, so that $\bar{X}_n$ and $S_n$ are both reasonable estimates of $\mu$. Find the limit of $P(|S_n - \mu| < |\bar{X}_n - \mu|)$.

3. Consider $n$ observations $\{(x_i, y_i)\}_{i=1}^n$ from the simple linear regression model $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$, where the $\varepsilon_i$ are iid with mean 0 and unknown variance $\sigma^2 < \infty$ (but are not necessarily normally distributed). Assume the $x_i$ are equally spaced in the design interval $[0, 1]$, say $x_i = i/n$. We are interested in testing the null hypothesis $H_0: \beta_0 = 0$ versus $H_1: \beta_0 \neq 0$. Please provide a proper test statistic based on the least squares estimate and find the critical value such that the asymptotic level of the test is $\alpha$.
