Chapter 6: Order Statistics and Quantiles

6.1 Extreme Order Statistics
Suppose we have a finite sample $X_1, \ldots, X_n$. Conditional on this sample, we define the values $X_{(1)}, \ldots, X_{(n)}$ to be a permutation of $X_1, \ldots, X_n$ such that $X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)}$. We call $X_{(i)}$ the $i$th order statistic of the sample. In much of this chapter, we will be concerned with the marginal distributions of these order statistics. Even though the notation $X_{(i)}$ does not explicitly use the sample size $n$, the distribution of $X_{(i)}$ depends essentially on $n$. For this reason, some textbooks use slightly more complicated notation, such as $X_{(1;n)}, X_{(2;n)}, \ldots, X_{(n;n)}$, for the order statistics of a sample. We choose to use the simpler notation here, though we will always understand the sample size to be $n$.

The asymptotic distributions of order statistics at the extremes of a sample may be derived without any specialized knowledge other than the limit formula

    $\left(1 + \frac{c}{n}\right)^n \to e^c$ as $n \to \infty$    (6.1)

and its generalization

    $\left(1 + \frac{c_n}{b_n}\right)^{b_n} \to e^c$ if $c_n \to c$ and $b_n \to \infty$    (6.2)

[see Exercise 1.7(b)]. Recall that by an asymptotic distribution of $X_{(1)}$, we mean sequences $k_n$ and $a_n$, along with a nondegenerate random variable $X$, such that $k_n(X_{(1)} - a_n) \stackrel{d}{\to} X$. This section consists mostly of a series of illustrative examples.
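The two limit formulas can be checked numerically. The following sketch (an illustration, not part of the original text; the value $c = -2$ is arbitrary) evaluates the quantity in expression (6.2) for growing $b_n$:

```python
import math

def compound(c_n, b_n):
    """Evaluate (1 + c_n / b_n) ** b_n, the quantity appearing in (6.2)."""
    return (1.0 + c_n / b_n) ** b_n

# Expression (6.1): (1 + c/n)^n -> e^c as n -> infinity, here with c = -2.
for n in (10, 100, 10000):
    print(n, compound(-2.0, n), math.exp(-2.0))

# Expression (6.2): the same limit holds when c_n -> c and b_n -> infinity;
# here c_n = -2 + 1/10000 is close to c = -2.
print(compound(-2.0 + 1e-4, 10000))
```

The printed values approach $e^{-2} \approx 0.1353$ as $n$ grows, in both the fixed-$c$ and varying-$c_n$ cases.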
Example 6.1 Suppose $X_1, \ldots, X_n$ are independent and identically distributed uniform(0, 1) random variables. What is the asymptotic distribution of $X_{(n)}$?

Since $X_{(n)} \le t$ if and only if $X_1 \le t$, $X_2 \le t$, $\ldots$, and $X_n \le t$, by independence we have

    $P(X_{(n)} \le t) = \begin{cases} 0 & \text{if } t \le 0 \\ t^n & \text{if } 0 < t < 1 \\ 1 & \text{if } t \ge 1. \end{cases}$    (6.3)

From Equation (6.3), it is apparent that $X_{(n)} \stackrel{P}{\to} 1$, though this limit statement does not fully reveal the asymptotic distribution of $X_{(n)}$. We desire sequences $k_n$ and $a_n$ such that $k_n(X_{(n)} - a_n)$ has a nondegenerate limiting distribution. Evidently, we should expect $a_n = 1$, a fact we shall rederive below.

Computing the distribution function of $k_n(X_{(n)} - a_n)$ directly, we find

    $F_n(u) = P\{k_n(X_{(n)} - a_n) \le u\} = P\left\{X_{(n)} \le \frac{u}{k_n} + a_n\right\}$

as long as $k_n > 0$. Therefore, we see that

    $F_n(u) = \left(\frac{u}{k_n} + a_n\right)^n$ for $0 < \frac{u}{k_n} + a_n < 1$.    (6.4)

We would like this expression to tend to a limit involving only $u$ as $n \to \infty$. Keeping expression (6.2) in mind, we take $a_n = 1$ and $k_n = n$, so that $F_n(u) = (1 + u/n)^n$, which tends to $e^u$. However, we are not quite finished, since we have not determined which values of $u$ make the above limit valid. Equation (6.4) required that $0 < a_n + u/k_n < 1$, which in this case becomes $-1 < u/n < 0$. This means $u$ may be any negative real number, since for any $u < 0$ we have $-1 < u/n < 0$ for all $n > -u$. We conclude that if the random variable $U$ has distribution function

    $F(u) = \begin{cases} \exp(u) & \text{if } u \le 0 \\ 1 & \text{if } u > 0, \end{cases}$

then $n(X_{(n)} - 1) \stackrel{d}{\to} U$. Since $-U$ is simply a standard exponential random variable, we may also write $n(1 - X_{(n)}) \stackrel{d}{\to} \text{Exponential}(1)$.

Example 6.2 Suppose $X_1, X_2, \ldots$ are independent and identically distributed exponential random variables with mean 1. What is the asymptotic distribution of $X_{(n)}$?
As in Equation (6.3), if $u/k_n + a_n > 0$ then

    $P\{k_n(X_{(n)} - a_n) \le u\} = P\left(X_{(n)} \le \frac{u}{k_n} + a_n\right) = \left\{1 - \exp\left(-a_n - \frac{u}{k_n}\right)\right\}^n.$

Taking $a_n = \log n$ and $k_n = 1$, the rightmost expression above simplifies to

    $\left\{1 - \frac{e^{-u}}{n}\right\}^n,$

which has limit $\exp(-e^{-u})$. The condition $u/k_n + a_n > 0$ becomes $u + \log n > 0$, which is true for all $u \in \mathbb{R}$ as long as $n > \exp(-u)$. Therefore, we conclude that

    $X_{(n)} - \log n \stackrel{d}{\to} U$, where $P(U \le u) \stackrel{\text{def}}{=} \exp\{-\exp(-u)\}$ for all $u$.    (6.5)

The distribution of $U$ in Equation (6.5) is known as the extreme value distribution or the Gumbel distribution.

In Examples 6.1 and 6.2, we derived the asymptotic distribution of a maximum from a simple random sample. We did this using only the definition of convergence in distribution, without relying on any results other than expression (6.2). In a similar way, we may derive the joint asymptotic distribution of multiple order statistics, as in the following example.

Example 6.3 (Range of a uniform sample) Let $X_1, \ldots, X_n$ be a simple random sample from Uniform(0, 1). Let $R_n = X_{(n)} - X_{(1)}$ denote the range of the sample. What is the asymptotic distribution of $R_n$?

To answer this question, we begin by finding the joint asymptotic distribution of $(X_{(n)}, X_{(1)})$, as follows. For sequences $k_n$ and $l_n$, as yet unspecified, consider

    $P(k_n X_{(1)} > x \text{ and } l_n(1 - X_{(n)}) > y) = P(X_{(1)} > x/k_n \text{ and } X_{(n)} < 1 - y/l_n) = P(x/k_n < X_{(1)} \le \cdots \le X_{(n)} < 1 - y/l_n),$

where we have assumed that $k_n$ and $l_n$ are positive. Since the probability above is simply the probability that the entire sample is to be found in the interval $(x/k_n,\, 1 - y/l_n)$, we conclude that as long as

    $0 < \frac{x}{k_n} < 1 - \frac{y}{l_n} < 1,$    (6.6)

we have

    $P(k_n X_{(1)} > x \text{ and } l_n(1 - X_{(n)}) > y) = \left(1 - \frac{y}{l_n} - \frac{x}{k_n}\right)^n.$
Expression (6.1) suggests that we set $k_n = l_n = n$, resulting in

    $P(n X_{(1)} > x \text{ and } n(1 - X_{(n)}) > y) = \left(1 - \frac{y}{n} - \frac{x}{n}\right)^n.$

Expression (6.6) becomes $0 < x/n < 1 - y/n < 1$, which is satisfied for large enough $n$ if and only if $x$ and $y$ are both positive. We conclude that for $x > 0$ and $y > 0$,

    $P(n X_{(1)} > x \text{ and } n(1 - X_{(n)}) > y) \to e^{-x} e^{-y}.$

Since this is the joint distribution of independent standard exponential random variables, say $Y_1$ and $Y_2$, we conclude that

    $\begin{pmatrix} n X_{(1)} \\ n(1 - X_{(n)}) \end{pmatrix} \stackrel{d}{\to} \begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix}.$

Therefore, applying the continuous function $f(a, b) = a + b$ to both sides gives

    $n(1 - X_{(n)} + X_{(1)}) = n(1 - R_n) \stackrel{d}{\to} Y_1 + Y_2 \sim \text{Gamma}(2, 1).$

Let us consider a different example, in which the asymptotic joint distribution does not involve independent random variables.

Example 6.4 As in Example 6.3, let $X_1, \ldots, X_n$ be independent and identically distributed uniform(0, 1). What is the joint asymptotic distribution of $X_{(n-1)}$ and $X_{(n)}$?

Proceeding as in Example 6.3, we obtain

    $P\left[\begin{pmatrix} n(1 - X_{(n-1)}) \\ n(1 - X_{(n)}) \end{pmatrix} > \begin{pmatrix} x \\ y \end{pmatrix}\right] = P\left(X_{(n-1)} < 1 - \frac{x}{n} \text{ and } X_{(n)} < 1 - \frac{y}{n}\right).$    (6.7)

We consider two separate cases: If $0 < x < y$, then the right-hand side of (6.7) is simply $P(X_{(n)} < 1 - y/n)$, which converges to $e^{-y}$ as in Example 6.1. On the other hand, if $0 < y < x$, then

    $P\left(X_{(n-1)} < 1 - \frac{x}{n} \text{ and } X_{(n)} < 1 - \frac{y}{n}\right) = P\left(X_{(n)} < 1 - \frac{x}{n}\right) + P\left(X_{(n-1)} < 1 - \frac{x}{n} < X_{(n)} < 1 - \frac{y}{n}\right)$
    $= \left(1 - \frac{x}{n}\right)^n + n\left(1 - \frac{x}{n}\right)^{n-1}\left(\frac{x}{n} - \frac{y}{n}\right) \to e^{-x} + e^{-x}(x - y) = e^{-x}(1 + x - y).$
The second equality above arises because $X_{(n-1)} < a < X_{(n)} < b$ if and only if exactly $n - 1$ of the $X_i$ are less than $a$ and exactly one is between $a$ and $b$.

We now know the joint asymptotic distribution of $n(1 - X_{(n-1)})$ and $n(1 - X_{(n)})$; but can we describe this joint distribution in a simple way? Suppose that $Y_1$ and $Y_2$ are independent standard exponential variables. Consider the joint distribution of $Y_1$ and $Y_1 + Y_2$: If $0 < x < y$, then

    $P(Y_1 + Y_2 > x \text{ and } Y_1 > y) = P(Y_1 > y) = e^{-y}.$

On the other hand, if $0 < y < x$, then

    $P(Y_1 + Y_2 > x \text{ and } Y_1 > y) = P(Y_1 > \max\{y, x - Y_2\}) = E\, e^{-\max\{y, x - Y_2\}} = e^{-y} P(y > x - Y_2) + \int_0^{x-y} e^{t-x} e^{-t}\, dt = e^{-x}(1 + x - y).$

Therefore, we conclude that

    $\begin{pmatrix} n(1 - X_{(n-1)}) \\ n(1 - X_{(n)}) \end{pmatrix} \stackrel{d}{\to} \begin{pmatrix} Y_1 + Y_2 \\ Y_1 \end{pmatrix}.$

Notice that, marginally, we have shown that $n(1 - X_{(n-1)}) \stackrel{d}{\to} \text{Gamma}(2, 1)$.

Recall that if $F$ is a continuous, invertible distribution function and $U$ is a standard uniform random variable, then $F^{-1}(U) \sim F$. The proof is immediate, since $P\{F^{-1}(U) \le t\} = P\{U \le F(t)\} = F(t)$. We may use this fact in conjunction with the result of Example 6.4, as in the following example.

Example 6.5 Suppose $X_1, \ldots, X_n$ are independent standard exponential random variables. What is the joint asymptotic distribution of $(X_{(n-1)}, X_{(n)})$?

The distribution function of a standard exponential distribution is $F(t) = 1 - e^{-t}$, whose inverse is $F^{-1}(u) = -\log(1 - u)$. Therefore,

    $\begin{pmatrix} X_{(n-1)} \\ X_{(n)} \end{pmatrix} \stackrel{d}{=} \begin{pmatrix} -\log(1 - U_{(n-1)}) \\ -\log(1 - U_{(n)}) \end{pmatrix},$

where $\stackrel{d}{=}$ means "has the same distribution as." Thus,

    $\begin{pmatrix} X_{(n-1)} - \log n \\ X_{(n)} - \log n \end{pmatrix} \stackrel{d}{=} \begin{pmatrix} -\log[n(1 - U_{(n-1)})] \\ -\log[n(1 - U_{(n)})] \end{pmatrix}.$

We conclude by the result of Example 6.4, together with the continuity of the logarithm, that

    $\begin{pmatrix} X_{(n-1)} - \log n \\ X_{(n)} - \log n \end{pmatrix} \stackrel{d}{\to} \begin{pmatrix} -\log(Y_1 + Y_2) \\ -\log Y_1 \end{pmatrix},$

where $Y_1$ and $Y_2$ are independent standard exponential variables.
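The limits in Examples 6.1 through 6.3 lend themselves to a quick Monte Carlo check. The sketch below (an illustration, not part of the original text; the sample size and replication count are arbitrary) compares simulated values of the scaled extremes with three consequences of the limit laws: the Exponential(1) mean 1, the Gamma(2, 1) mean 2, and the Gumbel CDF value $e^{-1}$ at $u = 0$:

```python
import math
import random

rng = random.Random(0)
n, reps = 200, 3000

# For uniform(0,1) samples: n(1 - X_(n)) should look Exponential(1)
# (Example 6.1) and n(1 - R_n) should look Gamma(2,1) (Example 6.3).
# For exponential(1) samples: X_(n) - log n should follow the Gumbel
# law with CDF exp(-exp(-u)) (Example 6.2).
max_gap, range_gap, gumbel = [], [], []
for _ in range(reps):
    u = sorted(rng.random() for _ in range(n))
    max_gap.append(n * (1.0 - u[-1]))
    range_gap.append(n * (1.0 - (u[-1] - u[0])))
    gumbel.append(max(rng.expovariate(1.0) for _ in range(n)) - math.log(n))

print(sum(max_gap) / reps)                  # limiting mean is 1
print(sum(range_gap) / reps)                # limiting mean is 2
print(sum(d <= 0 for d in gumbel) / reps)   # limiting value is exp(-1)
```

With these settings the three printed values should be close to 1, 2, and $e^{-1} \approx 0.368$, up to Monte Carlo error.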
Exercises for Section 6.1

Exercise 6.1 For a given $n$, let $X_1, \ldots, X_n$ be independent and identically distributed with distribution function

    $P(X_i \le t) = \frac{t^3 + \theta^3}{2\theta^3}$ for $t \in [-\theta, \theta]$.

Let $X_{(1)}$ denote the first order statistic from the sample of size $n$; that is, $X_{(1)}$ is the smallest of the $X_i$.

(a) Prove that $-X_{(1)}$ is consistent for $\theta$.

(b) Prove that $n(\theta + X_{(1)}) \stackrel{d}{\to} Y$, where $Y$ is a random variable with an exponential distribution. Find $E(Y)$ in terms of $\theta$.

(c) For a fixed $\alpha$, define

    $\delta_{\alpha,n} = -\left(1 + \frac{\alpha}{n}\right) X_{(1)}.$

Find, with proof, $\alpha$ such that $n(\theta - \delta_{\alpha,n}) \stackrel{d}{\to} Y - E(Y)$, where $Y$ is the same random variable as in part (b).

(d) Compare the two consistent $\theta$-estimators $\delta_{\alpha,n}$ and $-X_{(1)}$ empirically as follows. For $n \in \{10^2, 10^3, 10^4\}$, take $\theta = 1$ and simulate 1000 samples of size $n$ from the distribution of $X_i$. From these 1000 samples, estimate the bias and mean squared error of each estimator. Which of the two appears better? Do your empirical results agree with the theoretical results in parts (b) and (c)?

Exercise 6.2 Let $X_1, X_2, \ldots$ be independent uniform$(0, \theta)$ random variables. Let $X_{(n)} = \max\{X_1, \ldots, X_n\}$ and consider the three estimators

    $\delta_n^0 = X_{(n)}$, $\quad \delta_n^1 = \frac{n+2}{n+1}\, X_{(n)}$, $\quad \delta_n^2 = \frac{n+1}{n}\, X_{(n)}$.

(a) Prove that each estimator is consistent for $\theta$.
(b) Perform an empirical comparison of these three estimators for $n \in \{10^2, 10^3, 10^4\}$. Use $\theta = 1$ and simulate 1000 samples of size $n$ from uniform(0, 1). From these 1000 samples, estimate the bias and mean squared error of each estimator. Which one of the three appears to be best?

(c) Find the asymptotic distribution of $n(\theta - \delta_n^i)$ for $i = 0, 1, 2$. Based on your results, which of the three appears to be the best estimator, and why? (For the latter question, don't attempt to make a rigorous mathematical argument; simply give an educated guess.)

Exercise 6.3 Find, with proof, the asymptotic distribution of $X_{(n)}$ if $X_1, \ldots, X_n$ are independent and identically distributed with each of the following distributions. (That is, find $k_n$, $a_n$, and a nondegenerate random variable $X$ such that $k_n(X_{(n)} - a_n) \stackrel{d}{\to} X$.)

(a) Beta(3, 1) with distribution function $F(x) = x^3$ for $x \in (0, 1)$.

(b) Standard logistic with distribution function $F(x) = e^x/(1 + e^x)$.

Exercise 6.4 Let $X_1, \ldots, X_n$ be independent uniform(0, 1) random variables. Find the joint asymptotic distribution of $[nX_{(2)},\, n(1 - X_{(n-1)})]$. Hint: To find a probability such as $P(a < X_{(2)} \le X_{(n-1)} < b)$, consider the trinomial distribution with parameters $[n; (a,\, b - a,\, 1 - b)]$ and note that the probability in question is the same as the probability that the numbers of observations in the first and third categories are each at most 1.

Exercise 6.5 Let $X_1, \ldots, X_n$ be a simple random sample from the distribution function $F(x) = [1 - (1/x)]\, I\{x > 1\}$.

(a) Find the joint asymptotic distribution of $(X_{(n-1)}/n,\, X_{(n)}/n)$. Hint: Proceed as in Example 6.5.

(b) Find the asymptotic distribution of $X_{(n-1)}/X_{(n)}$.

Exercise 6.6 If $X_1, \ldots, X_n$ are independent and identically distributed uniform(0, 1) variables, prove that $X_{(1)}/X_{(2)} \stackrel{d}{\to}$ uniform(0, 1).

Exercise 6.7 Let $X_1, \ldots, X_n$ be independent uniform$(0, 2\theta)$ random variables.

(a) Let $M = (X_{(1)} + X_{(n)})/2$. Find the asymptotic distribution of $n(M - \theta)$.

(b) Compare the asymptotic performance of the three estimators $M$, the sample mean $\bar{X}_n$, and the sample median $\tilde{X}_n$ by considering their relative efficiencies.
(c) For $n \in \{101, 1001, 10001\}$, generate 500 samples of size $n$, taking $\theta = 1$. Keep track of $M$, $\bar{X}_n$, and $\tilde{X}_n$ for each sample. Construct a $3 \times 3$ table in which you report the sample variance of each estimator for each value of $n$. Do your simulation results agree with your theoretical results in part (b)?

Exercise 6.8 Let $X_1, \ldots, X_n$ be a simple random sample from a logistic distribution with distribution function

    $F(t) = \frac{e^{t/\theta}}{1 + e^{t/\theta}}$ for all $t$.

(a) Find the asymptotic distribution of $X_{(n)} - X_{(n-1)}$. Hint: Use the fact that $\log U_{(n)}$ and $\log U_{(n-1)}$ both converge in probability to zero.

(b) Based on part (a), construct an approximate 95% confidence interval for $\theta$. Use the fact that the .025 and .975 quantiles of the standard exponential distribution are 0.0253 and 3.6889, respectively.

(c) Simulate 1000 samples of size $n = 40$ with $\theta = 2$. How many of your confidence intervals contain $\theta$?

6.2 Sample Quantiles

To derive the distribution of sample quantiles, we begin by obtaining the exact distribution of the order statistics of a random sample from a uniform distribution. To facilitate this derivation, we begin with a quick review of changing variables. Suppose $X$ has density $f_X(x)$ and $Y = g(X)$, where $g : \mathbb{R}^k \to \mathbb{R}^k$ is differentiable and has a well-defined inverse, which we denote by $h : \mathbb{R}^k \to \mathbb{R}^k$. (In particular, we have $X = h[Y]$.) The density for $Y$ is

    $f_Y(y) = \left|\text{Det}[\nabla h(y)]\right|\, f_X[h(y)],$    (6.8)

where $|\text{Det}[\nabla h(y)]|$ is the absolute value of the determinant of the $k \times k$ matrix $\nabla h(y)$.

6.2.1 Uniform Order Statistics

We now show that the order statistics of a uniform distribution may be obtained using ratios of gamma random variables. Suppose $X_1, \ldots, X_{n+1}$ are independent standard exponential, or Gamma(1, 1), random variables. For $j = 1, \ldots, n$, define

    $Y_j = \frac{\sum_{i=1}^j X_i}{\sum_{i=1}^{n+1} X_i}.$    (6.9)
We will show that the joint distribution of $(Y_1, \ldots, Y_n)$ is the same as the joint distribution of the order statistics $(U_{(1)}, \ldots, U_{(n)})$ of a simple random sample from uniform(0, 1) by demonstrating that their joint density function is the same as that of the uniform order statistics, namely $n!\, I\{0 < u_{(1)} < \cdots < u_{(n)} < 1\}$.

We derive the joint density of $(Y_1, \ldots, Y_n)$ as follows. As an intermediate step, define $Z_j = \sum_{i=1}^j X_i$ for $j = 1, \ldots, n+1$. Then

    $X_i = \begin{cases} Z_1 & \text{if } i = 1 \\ Z_i - Z_{i-1} & \text{if } i > 1, \end{cases}$

which means that the gradient of the transformation from $Z$ to $X$ is upper triangular with ones on the diagonal, a matrix whose determinant is one. This implies that the density for $Z$ is

    $f_Z(z) = \exp\{-z_{n+1}\}\, I\{0 < z_1 < z_2 < \cdots < z_{n+1}\}.$

Next, if we define $Y_{n+1} = Z_{n+1}$, then we may express $Z$ in terms of $Y$ as

    $Z_i = \begin{cases} Y_{n+1} Y_i & \text{if } i < n+1 \\ Y_{n+1} & \text{if } i = n+1. \end{cases}$    (6.10)

The gradient of the transformation in Equation (6.10) is lower triangular, with $y_{n+1}$ along the diagonal except for a 1 in the lower right corner. The determinant of this matrix is $y_{n+1}^n$, so the density of $Y$ is

    $f_Y(y) = y_{n+1}^n \exp\{-y_{n+1}\}\, I\{y_{n+1} > 0\}\, I\{0 < y_1 < \cdots < y_n < 1\}.$    (6.11)

Equation (6.11) reveals several things: First, $(Y_1, \ldots, Y_n)$ is independent of $Y_{n+1}$, and the marginal distribution of $Y_{n+1}$ is Gamma$(n+1, 1)$. More important for our purposes, the marginal joint density of $(Y_1, \ldots, Y_n)$ is proportional to $I\{0 < y_1 < \cdots < y_n < 1\}$, which is exactly what we needed to prove. We conclude that the vector $Y$ defined in Equation (6.9) has the same distribution as the vector of order statistics of a simple random sample from uniform(0, 1).

6.2.2 Uniform Sample Quantiles

Using the result of Section 6.2.1, we may derive the joint asymptotic distribution of a set of sample quantiles for a uniform simple random sample. Suppose we are interested in the $p_1$ and $p_2$ quantiles, where $0 < p_1 < p_2 < 1$. The following argument may be generalized to obtain the joint asymptotic distribution of any finite number
of quantiles. If $U_1, \ldots, U_n$ are independent uniform(0, 1) random variables, then the $p_1$ and $p_2$ sample quantiles may be taken to be the $a_n$th and $b_n$th order statistics, respectively, where

    $a_n \stackrel{\text{def}}{=} \lfloor .5 + np_1 \rfloor$ and $b_n \stackrel{\text{def}}{=} \lfloor .5 + np_2 \rfloor$

($\lfloor .5 + x \rfloor$ is simply $x$ rounded to the nearest integer). Next, let

    $A_n \stackrel{\text{def}}{=} \frac{1}{n} \sum_{i=1}^{a_n} X_i$, $\quad B_n \stackrel{\text{def}}{=} \frac{1}{n} \sum_{i=a_n+1}^{b_n} X_i$, and $\quad C_n \stackrel{\text{def}}{=} \frac{1}{n} \sum_{i=b_n+1}^{n+1} X_i$.

We proved in Section 6.2.1 that $(U_{(a_n)}, U_{(b_n)})$ has the same distribution as

    $g(A_n, B_n, C_n) \stackrel{\text{def}}{=} \left( \frac{A_n}{A_n + B_n + C_n},\; \frac{A_n + B_n}{A_n + B_n + C_n} \right).$    (6.12)

The asymptotic distribution of $g(A_n, B_n, C_n)$ may be determined using the delta method if we can determine the joint asymptotic distribution of $(A_n, B_n, C_n)$.

A bit of algebra reveals that

    $\sqrt{n}(A_n - p_1) = \sqrt{\frac{a_n}{n}}\, \sqrt{a_n}\left(\frac{nA_n}{a_n} - 1\right) + \frac{a_n - np_1}{\sqrt{n}}.$

By the central limit theorem, $\sqrt{a_n}(nA_n/a_n - 1) \stackrel{d}{\to} N(0, 1)$ because the $X_i$ have mean 1 and variance 1. Furthermore, $a_n/n \to p_1$ and the rightmost term above goes to 0, so Slutsky's theorem gives $\sqrt{n}(A_n - p_1) \stackrel{d}{\to} N(0, p_1)$. Similar arguments apply to $B_n$ and to $C_n$. Because $A_n$, $B_n$, and $C_n$ are independent of one another, we may stack them as in Exercise 2.19 to obtain

    $\sqrt{n} \left[ \begin{pmatrix} A_n \\ B_n \\ C_n \end{pmatrix} - \begin{pmatrix} p_1 \\ p_2 - p_1 \\ 1 - p_2 \end{pmatrix} \right] \stackrel{d}{\to} N_3 \left[ \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix},\; \begin{pmatrix} p_1 & 0 & 0 \\ 0 & p_2 - p_1 & 0 \\ 0 & 0 & 1 - p_2 \end{pmatrix} \right]$

by Slutsky's theorem. For $g : \mathbb{R}^3 \to \mathbb{R}^2$ defined in Equation (6.12), we obtain

    $\nabla g(a, b, c) = \frac{1}{(a + b + c)^2} \begin{pmatrix} b + c & c \\ -a & c \\ -a & -(a + b) \end{pmatrix}.$

Therefore,

    $[\nabla g(p_1,\, p_2 - p_1,\, 1 - p_2)]^\top = \begin{pmatrix} 1 - p_1 & -p_1 & -p_1 \\ 1 - p_2 & 1 - p_2 & -p_2 \end{pmatrix},$
so the delta method gives

    $\sqrt{n} \left[ \begin{pmatrix} U_{(a_n)} \\ U_{(b_n)} \end{pmatrix} - \begin{pmatrix} p_1 \\ p_2 \end{pmatrix} \right] \stackrel{d}{\to} N_2 \left[ \begin{pmatrix} 0 \\ 0 \end{pmatrix},\; \begin{pmatrix} p_1(1 - p_1) & p_1(1 - p_2) \\ p_1(1 - p_2) & p_2(1 - p_2) \end{pmatrix} \right].$    (6.13)

The method used above to derive the joint distribution (6.13) of two sample quantiles may be extended to any number of quantiles; doing so yields the following theorem:

Theorem 6.6 Suppose that for given constants $p_1, \ldots, p_k$ with $0 < p_1 < \cdots < p_k < 1$, there exist sequences $\{a_{1n}\}, \ldots, \{a_{kn}\}$ such that for all $1 \le i \le k$,

    $\frac{a_{in} - np_i}{\sqrt{n}} \to 0.$

Then if $U_1, \ldots, U_n$ is a sample from Uniform(0, 1),

    $\sqrt{n} \left[ \begin{pmatrix} U_{(a_{1n})} \\ \vdots \\ U_{(a_{kn})} \end{pmatrix} - \begin{pmatrix} p_1 \\ \vdots \\ p_k \end{pmatrix} \right] \stackrel{d}{\to} N_k \left[ \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix},\; \begin{pmatrix} p_1(1 - p_1) & \cdots & p_1(1 - p_k) \\ \vdots & \ddots & \vdots \\ p_1(1 - p_k) & \cdots & p_k(1 - p_k) \end{pmatrix} \right].$

Note that for $i < j$, both the $(i, j)$ and $(j, i)$ entries in the covariance matrix above equal $p_i(1 - p_j)$; $p_j(1 - p_i)$ never occurs in the matrix.

6.2.3 General Sample Quantiles

Let $F(x)$ be the distribution function for a random variable $X$. The quantile function $F^-(u)$ of Definition 3.13 is nondecreasing on (0, 1), and it has the property that $F^-(U)$ has the same distribution as $X$ for $U \sim$ uniform(0, 1). This property follows from Lemma 3.14, since

    $P[F^-(U) \le x] = P[U \le F(x)] = F(x)$ for all $x$.

Since $F^-(u)$ is nondecreasing, it preserves ordering; thus, if $X_1, \ldots, X_n$ is a random sample from $F(x)$, then

    $(X_{(1)}, \ldots, X_{(n)}) \stackrel{d}{=} [F^-(U_{(1)}), \ldots, F^-(U_{(n)})].$

(The symbol $\stackrel{d}{=}$ means "has the same distribution as.")

Now, suppose that at some point $\xi$, the derivative $F'(\xi)$ exists and is positive. Then $F(x)$ must be continuous and strictly increasing in a neighborhood of $\xi$. This implies that in this
neighborhood, $F(x)$ has a well-defined inverse, which must be differentiable at the point $p \stackrel{\text{def}}{=} F(\xi)$. If $F^{-1}(u)$ denotes the inverse that exists in a neighborhood of $p$, then

    $\frac{dF^{-1}(p)}{dp} = \frac{1}{F'(\xi)}.$    (6.14)

Equation (6.14) may be derived by differentiating the equation $F[F^{-1}(p)] = p$. Note also that whenever the inverse $F^{-1}(u)$ exists, it must coincide with the quantile function $F^-(u)$. Thus, the condition that $F'(\xi)$ exists and is positive is a sufficient condition to imply that $F^-(u)$ is differentiable at $p$.

This differentiability is important: If we wish to transform the uniform order statistics $U_{(a_{in})}$ of Theorem 6.6 into order statistics $X_{(a_{in})}$ using the quantile function $F^-(u)$, the delta method requires the differentiability of $F^-(u)$ at each of the points $p_1, \ldots, p_k$. The delta method, along with Equation (6.14), yields the following corollary of Theorem 6.6:

Theorem 6.7 Let $X_1, \ldots, X_n$ be a simple random sample from a distribution function $F(x)$ such that $F(x)$ is differentiable at each of the points $\xi_1 < \cdots < \xi_k$ and $F'(\xi_i) > 0$ for all $i$. Denote $F(\xi_i)$ by $p_i$. Then under the assumptions of Theorem 6.6,

    $\sqrt{n} \left[ \begin{pmatrix} X_{(a_{1n})} \\ \vdots \\ X_{(a_{kn})} \end{pmatrix} - \begin{pmatrix} \xi_1 \\ \vdots \\ \xi_k \end{pmatrix} \right] \stackrel{d}{\to} N_k \left[ \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix},\; \begin{pmatrix} \frac{p_1(1 - p_1)}{F'(\xi_1)^2} & \cdots & \frac{p_1(1 - p_k)}{F'(\xi_1) F'(\xi_k)} \\ \vdots & \ddots & \vdots \\ \frac{p_1(1 - p_k)}{F'(\xi_1) F'(\xi_k)} & \cdots & \frac{p_k(1 - p_k)}{F'(\xi_k)^2} \end{pmatrix} \right].$

Exercises for Section 6.2

Exercise 6.9 Let $X_1$ be Uniform$(0, 2\pi)$ and let $X_2$ be standard exponential, independent of $X_1$. Find the joint distribution of

    $(Y_1, Y_2) = \left(\sqrt{2X_2} \cos X_1,\; \sqrt{2X_2} \sin X_1\right).$

Note: Since $-\log U$ has a standard exponential distribution if $U \sim$ uniform(0, 1), this problem may be used to simulate normal random variables using simulated uniform random variables.

Exercise 6.10 Suppose $X_1, \ldots, X_n$ is a simple random sample with $P(X_i \le x) = F(x - \theta)$, where $F(x)$ is symmetric about zero. We wish to estimate $\theta$ by $(Q_p + Q_{1-p})/2$, where $Q_p$ and $Q_{1-p}$ are the $p$ and $1 - p$ sample quantiles, respectively. Find the smallest possible asymptotic variance for the estimator and the $p$ for which it is achieved for each of the following forms of $F(x)$:

(a) Standard Cauchy.
(b) Standard normal.

(c) Standard double exponential.

Hint: For at least one of the three parts of this question, you will have to solve for a minimizer numerically.

Exercise 6.11 When we use a boxplot to assess the symmetry of a distribution, one of the main things we do is visually compare the lengths of $Q_3 - Q_2$ and $Q_2 - Q_1$, where $Q_i$ denotes the $i$th sample quartile.

(a) Given a random sample of size $n$ from $N(0, 1)$, find the asymptotic distribution of $\sqrt{n}[(Q_3 - Q_2) - (Q_2 - Q_1)]$.

(b) Repeat part (a) if the sample comes from a standard logistic distribution.

(c) Using 1000 simulations from each distribution, use graphs to assess the accuracy of each of the asymptotic approximations above for $n = 5$ and $n = 13$. (For a sample of size $4n + 1$, define $Q_i$ to be the $(in + 1)$th order statistic.) For each value of $n$ and each distribution, plot the empirical distribution function against the theoretical limiting distribution function.

Exercise 6.12 Let $X_1, \ldots, X_n$ be a random sample from Uniform$(0, 2\theta)$. Find the asymptotic distributions of the median, the midquartile range, and $\frac{2}{3} X_{(\lceil 3n/4 \rceil)}$. (The midquartile range is the mean of the 1st and 3rd quartiles.) Compare these three estimates of $\theta$ based on their variances.
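As a closing illustration (a sketch, not part of the original text; the sample size and replication count are arbitrary), Theorem 6.7 can be checked by simulation in the Uniform$(0, 2\theta)$ setting of Exercise 6.12: the population median is $\xi = \theta$ with $F'(\xi) = 1/(2\theta)$, so $\sqrt{n}(\tilde{X}_n - \theta)$ should have limiting variance $\frac{1}{4}(2\theta)^2 = \theta^2$:

```python
import random

rng = random.Random(0)
theta = 1.0
n, reps = 201, 3000

# By Theorem 6.7, the scaled variance n * Var(sample median) should be
# near p(1-p)/F'(xi)^2 = (1/4) * (2*theta)^2 = theta^2.
medians = []
for _ in range(reps):
    xs = sorted(rng.uniform(0.0, 2.0 * theta) for _ in range(n))
    medians.append(xs[n // 2])   # middle order statistic, since n is odd

mean = sum(medians) / reps
scaled_var = n * sum((m - mean) ** 2 for m in medians) / reps
print(mean, scaled_var)
```

With $\theta = 1$, the sample medians center near 1 and the scaled variance comes out near 1, matching the theorem up to Monte Carlo error.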
More informationStat 5101 Lecture Notes
Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random
More informationn! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2
Order statistics Ex. 4. (*. Let independent variables X,..., X n have U(0, distribution. Show that for every x (0,, we have P ( X ( < x and P ( X (n > x as n. Ex. 4.2 (**. By using induction or otherwise,
More informationNotes on the Matrix-Tree theorem and Cayley s tree enumerator
Notes on the Matrix-Tree theorem and Cayley s tree enumerator 1 Cayley s tree enumerator Recall that the degree of a vertex in a tree (or in any graph) is the number of edges emanating from it We will
More informationStat 5101 Lecture Slides: Deck 7 Asymptotics, also called Large Sample Theory. Charles J. Geyer School of Statistics University of Minnesota
Stat 5101 Lecture Slides: Deck 7 Asymptotics, also called Large Sample Theory Charles J. Geyer School of Statistics University of Minnesota 1 Asymptotic Approximation The last big subject in probability
More informationConsidering our result for the sum and product of analytic functions, this means that for (a 0, a 1,..., a N ) C N+1, the polynomial.
Lecture 3 Usual complex functions MATH-GA 245.00 Complex Variables Polynomials. Construction f : z z is analytic on all of C since its real and imaginary parts satisfy the Cauchy-Riemann relations and
More informationn! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2
Order statistics Ex. 4.1 (*. Let independent variables X 1,..., X n have U(0, 1 distribution. Show that for every x (0, 1, we have P ( X (1 < x 1 and P ( X (n > x 1 as n. Ex. 4.2 (**. By using induction
More informationAsymptotic Statistics-III. Changliang Zou
Asymptotic Statistics-III Changliang Zou The multivariate central limit theorem Theorem (Multivariate CLT for iid case) Let X i be iid random p-vectors with mean µ and and covariance matrix Σ. Then n (
More information1 Lecture 8: Interpolating polynomials.
1 Lecture 8: Interpolating polynomials. 1.1 Horner s method Before turning to the main idea of this part of the course, we consider how to evaluate a polynomial. Recall that a polynomial is an expression
More informationUndergraduate Notes in Mathematics. Arkansas Tech University Department of Mathematics
Undergraduate Notes in Mathematics Arkansas Tech University Department of Mathematics An Introductory Single Variable Real Analysis: A Learning Approach through Problem Solving Marcel B. Finan c All Rights
More informationTangent spaces, normals and extrema
Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent
More information36. Multisample U-statistics and jointly distributed U-statistics Lehmann 6.1
36. Multisample U-statistics jointly distributed U-statistics Lehmann 6.1 In this topic, we generalize the idea of U-statistics in two different directions. First, we consider single U-statistics for situations
More informationIf g is also continuous and strictly increasing on J, we may apply the strictly increasing inverse function g 1 to this inequality to get
18:2 1/24/2 TOPIC. Inequalities; measures of spread. This lecture explores the implications of Jensen s inequality for g-means in general, and for harmonic, geometric, arithmetic, and related means in
More informationIntroduction to Topology
Introduction to Topology Randall R. Holmes Auburn University Typeset by AMS-TEX Chapter 1. Metric Spaces 1. Definition and Examples. As the course progresses we will need to review some basic notions about
More informationElementary Linear Algebra
Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We
More informationDOUBLE SERIES AND PRODUCTS OF SERIES
DOUBLE SERIES AND PRODUCTS OF SERIES KENT MERRYFIELD. Various ways to add up a doubly-indexed series: Let be a sequence of numbers depending on the two variables j and k. I will assume that 0 j < and 0
More information8.5 Taylor Polynomials and Taylor Series
8.5. TAYLOR POLYNOMIALS AND TAYLOR SERIES 50 8.5 Taylor Polynomials and Taylor Series Motivating Questions In this section, we strive to understand the ideas generated by the following important questions:
More informationSummary of Chapters 7-9
Summary of Chapters 7-9 Chapter 7. Interval Estimation 7.2. Confidence Intervals for Difference of Two Means Let X 1,, X n and Y 1, Y 2,, Y m be two independent random samples of sizes n and m from two
More informationLecture 3f Polar Form (pages )
Lecture 3f Polar Form (pages 399-402) In the previous lecture, we saw that we can visualize a complex number as a point in the complex plane. This turns out to be remarkable useful, but we need to think
More informationLinear Algebra March 16, 2019
Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented
More informationSTAT Sample Problem: General Asymptotic Results
STAT331 1-Sample Problem: General Asymptotic Results In this unit we will consider the 1-sample problem and prove the consistency and asymptotic normality of the Nelson-Aalen estimator of the cumulative
More informationStatistics 300B Winter 2018 Final Exam Due 24 Hours after receiving it
Statistics 300B Winter 08 Final Exam Due 4 Hours after receiving it Directions: This test is open book and open internet, but must be done without consulting other students. Any consultation of other students
More informationMatrix Algebra Determinant, Inverse matrix. Matrices. A. Fabretti. Mathematics 2 A.Y. 2015/2016. A. Fabretti Matrices
Matrices A. Fabretti Mathematics 2 A.Y. 2015/2016 Table of contents Matrix Algebra Determinant Inverse Matrix Introduction A matrix is a rectangular array of numbers. The size of a matrix is indicated
More informationconditional cdf, conditional pdf, total probability theorem?
6 Multiple Random Variables 6.0 INTRODUCTION scalar vs. random variable cdf, pdf transformation of a random variable conditional cdf, conditional pdf, total probability theorem expectation of a random
More informationAsymptotic statistics using the Functional Delta Method
Quantiles, Order Statistics and L-Statsitics TU Kaiserslautern 15. Februar 2015 Motivation Functional The delta method introduced in chapter 3 is an useful technique to turn the weak convergence of random
More informationThe 4-periodic spiral determinant
The 4-periodic spiral determinant Darij Grinberg rough draft, October 3, 2018 Contents 001 Acknowledgments 1 1 The determinant 1 2 The proof 4 *** The purpose of this note is to generalize the determinant
More informationIntroductory Analysis I Fall 2014 Homework #9 Due: Wednesday, November 19
Introductory Analysis I Fall 204 Homework #9 Due: Wednesday, November 9 Here is an easy one, to serve as warmup Assume M is a compact metric space and N is a metric space Assume that f n : M N for each
More informationMultivariate Analysis and Likelihood Inference
Multivariate Analysis and Likelihood Inference Outline 1 Joint Distribution of Random Variables 2 Principal Component Analysis (PCA) 3 Multivariate Normal Distribution 4 Likelihood Inference Joint density
More informationRandom Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R
In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample
More informationExercises from other sources REAL NUMBERS 2,...,
Exercises from other sources REAL NUMBERS 1. Find the supremum and infimum of the following sets: a) {1, b) c) 12, 13, 14, }, { 1 3, 4 9, 13 27, 40 } 81,, { 2, 2 + 2, 2 + 2 + } 2,..., d) {n N : n 2 < 10},
More information1 Directional Derivatives and Differentiability
Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=
More informationB553 Lecture 1: Calculus Review
B553 Lecture 1: Calculus Review Kris Hauser January 10, 2012 This course requires a familiarity with basic calculus, some multivariate calculus, linear algebra, and some basic notions of metric topology.
More informationMath 309 Notes and Homework for Days 4-6
Math 309 Notes and Homework for Days 4-6 Day 4 Read Section 1.2 and the notes below. The following is the main definition of the course. Definition. A vector space is a set V (whose elements are called
More informationThe Multivariate Gaussian Distribution [DRAFT]
The Multivariate Gaussian Distribution DRAFT David S. Rosenberg Abstract This is a collection of a few key and standard results about multivariate Gaussian distributions. I have not included many proofs,
More informationProbability Background
Probability Background Namrata Vaswani, Iowa State University August 24, 2015 Probability recap 1: EE 322 notes Quick test of concepts: Given random variables X 1, X 2,... X n. Compute the PDF of the second
More informationCONSTRUCTION OF THE REAL NUMBERS.
CONSTRUCTION OF THE REAL NUMBERS. IAN KIMING 1. Motivation. It will not come as a big surprise to anyone when I say that we need the real numbers in mathematics. More to the point, we need to be able to
More informationMATH4427 Notebook 4 Fall Semester 2017/2018
MATH4427 Notebook 4 Fall Semester 2017/2018 prepared by Professor Jenny Baglivo c Copyright 2009-2018 by Jenny A. Baglivo. All Rights Reserved. 4 MATH4427 Notebook 4 3 4.1 K th Order Statistics and Their
More informationStatistics for scientists and engineers
Statistics for scientists and engineers February 0, 006 Contents Introduction. Motivation - why study statistics?................................... Examples..................................................3
More informationStatistics 3858 : Maximum Likelihood Estimators
Statistics 3858 : Maximum Likelihood Estimators 1 Method of Maximum Likelihood In this method we construct the so called likelihood function, that is L(θ) = L(θ; X 1, X 2,..., X n ) = f n (X 1, X 2,...,
More information4 Invariant Statistical Decision Problems
4 Invariant Statistical Decision Problems 4.1 Invariant decision problems Let G be a group of measurable transformations from the sample space X into itself. The group operation is composition. Note that
More informationReview of matrices. Let m, n IN. A rectangle of numbers written like A =
Review of matrices Let m, n IN. A rectangle of numbers written like a 11 a 12... a 1n a 21 a 22... a 2n A =...... a m1 a m2... a mn where each a ij IR is called a matrix with m rows and n columns or an
More informationSupplementary Notes for W. Rudin: Principles of Mathematical Analysis
Supplementary Notes for W. Rudin: Principles of Mathematical Analysis SIGURDUR HELGASON In 8.00B it is customary to cover Chapters 7 in Rudin s book. Experience shows that this requires careful planning
More informationFinal Examination Statistics 200C. T. Ferguson June 11, 2009
Final Examination Statistics 00C T. Ferguson June, 009. (a) Define: X n converges in probability to X. (b) Define: X m converges in quadratic mean to X. (c) Show that if X n converges in quadratic mean
More information12. Perturbed Matrices
MAT334 : Applied Linear Algebra Mike Newman, winter 208 2. Perturbed Matrices motivation We want to solve a system Ax = b in a context where A and b are not known exactly. There might be experimental errors,
More informationChapter 8: Taylor s theorem and L Hospital s rule
Chapter 8: Taylor s theorem and L Hospital s rule Theorem: [Inverse Mapping Theorem] Suppose that a < b and f : [a, b] R. Given that f (x) > 0 for all x (a, b) then f 1 is differentiable on (f(a), f(b))
More information7 Influence Functions
7 Influence Functions The influence function is used to approximate the standard error of a plug-in estimator. The formal definition is as follows. 7.1 Definition. The Gâteaux derivative of T at F in the
More informationSolutions to Math 41 First Exam October 18, 2012
Solutions to Math 4 First Exam October 8, 202. (2 points) Find each of the following its, with justification. If the it does not exist, explain why. If there is an infinite it, then explain whether it
More informationCS229 Supplemental Lecture notes
CS229 Supplemental Lecture notes John Duchi 1 Boosting We have seen so far how to solve classification (and other) problems when we have a data representation already chosen. We now talk about a procedure,
More informationIntroduction to Statistical Data Analysis Lecture 3: Probability Distributions
Introduction to Statistical Data Analysis Lecture 3: Probability Distributions James V. Lambers Department of Mathematics The University of Southern Mississippi James V. Lambers Statistical Data Analysis
More informationInference on distributions and quantiles using a finite-sample Dirichlet process
Dirichlet IDEAL Theory/methods Simulations Inference on distributions and quantiles using a finite-sample Dirichlet process David M. Kaplan University of Missouri Matt Goldman UC San Diego Midwest Econometrics
More informationDifferential Topology Final Exam With Solutions
Differential Topology Final Exam With Solutions Instructor: W. D. Gillam Date: Friday, May 20, 2016, 13:00 (1) Let X be a subset of R n, Y a subset of R m. Give the definitions of... (a) smooth function
More informationAlgebra Performance Level Descriptors
Limited A student performing at the Limited Level demonstrates a minimal command of Ohio s Learning Standards for Algebra. A student at this level has an emerging ability to A student whose performance
More informationProbability and Distributions
Probability and Distributions What is a statistical model? A statistical model is a set of assumptions by which the hypothetical population distribution of data is inferred. It is typically postulated
More information