Siegel's formula via Stein's identities
Jun S. Liu
Department of Statistics, Harvard University

(Address correspondence to: Jun S. Liu, Department of Statistics, Harvard University, Cambridge, MA.)

Abstract

Inspired by a surprising formula in Siegel (1993), we find it convenient to compute covariances, even for order statistics, by using Stein (1972)'s identities. Generalizations of Siegel's formula to other order statistics as well as to other distributions are obtained along this line.

KEY WORDS: Exponential family; multivariate Stein's identity; covariance with order statistics.

1 Siegel's result and Stein's identities

In a recent paper, Siegel (1993) gives a remarkable formula for the covariance of X_1 with the minimum of the multivariate normal vector (X_1, ..., X_n). The theorem is stated as follows.

Theorem [Siegel] Let (X_1, ..., X_n) be multivariate normal, such that the variables are distinct, with arbitrary mean and variance structure. Then

    cov[X_1, min(X_1, ..., X_n)] = \sum_{i=1}^n cov(X_1, X_i) Pr[X_i = min(X_1, ..., X_n)].

On the other hand, Stein (1972) observes that for Z ~ N(µ, σ²) and any function f such that E|f'(Z)| < ∞,

    E[(Z - µ) f(Z)] = σ² E[f'(Z)].    (1)

Similar identities hold for many other distributions. For example, if Z ~ Poisson(λ), then

    E[(Z - λ) f(Z)] = λ E[f(Z + 1) - f(Z)].
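Both univariate identities are easy to check by simulation. Below is a minimal Monte Carlo sketch (an illustration added here, not part of the original derivation); the test function f(z) = z² and all parameter values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

f = lambda z: z ** 2            # arbitrary smooth test function
f_prime = lambda z: 2 * z

# Normal case: E[(Z - mu) f(Z)] should match sigma^2 E[f'(Z)], identity (1).
mu, sigma = 1.5, 2.0
Z = rng.normal(mu, sigma, n)
print(np.mean((Z - mu) * f(Z)))          # left-hand side of (1)
print(sigma ** 2 * np.mean(f_prime(Z)))  # right-hand side of (1)

# Poisson case: E[(Z - lam) f(Z)] should match lam * E[f(Z + 1) - f(Z)].
lam = 3.0
Zp = rng.poisson(lam, n)
print(np.mean((Zp - lam) * f(Zp)))
print(lam * np.mean(f(Zp + 1) - f(Zp)))
```

For this choice of f, both sides of (1) are close to 2σ²µ = 12, and the Poisson pair is close to λ(2λ + 1) = 21, up to Monte Carlo error.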
As another example, if Z ~ t_ν(0, σ²), i.e., Z = Y/\sqrt{W/ν} where Y ~ N(0, σ²) and W ~ χ²_ν, then

    E[Z f(Z)] = \frac{\nu}{\nu - 2} σ² E[f'(Z^*)],

where Z^* ~ t_{ν-2}(0, σ²). The above identities are proved directly by integration by parts. To illustrate, we show the last identity when σ = 1:

    E[Z f(Z)] = \int_{-\infty}^{\infty} z f(z) \frac{\Gamma[(\nu+1)/2]}{\sqrt{\nu\pi}\,\Gamma(\nu/2)} \Big(1 + \frac{z^2}{\nu}\Big)^{-(\nu+1)/2} dz

              = \frac{\Gamma[(\nu+1)/2]}{\sqrt{\nu\pi}\,\Gamma(\nu/2)} \frac{\nu}{\nu-1} \int_{-\infty}^{\infty} f'(z) \Big(1 + \frac{z^2}{\nu}\Big)^{-(\nu-1)/2} dz

              = \frac{\nu}{\nu-2} \int_{-\infty}^{\infty} f'(z) \frac{\Gamma[(\nu-1)/2]}{\sqrt{(\nu-2)\pi}\,\Gamma(\nu/2-1)} \Big(1 + \frac{z^2}{\nu-2}\Big)^{-(\nu-1)/2} dz

              = \frac{\nu}{\nu-2} E[f'(Z^*)],

where the second equality follows from integration by parts.

There is also a multivariate version of Stein's identity, as follows.

Lemma 1 Let X = (X_1, ..., X_n) be multivariate normally distributed with arbitrary mean vector µ and covariance matrix Σ. For any function h(x_1, ..., x_n) such that ∂h/∂x_i exists almost everywhere and E|∂h(X)/∂x_i| < ∞ for i = 1, ..., n, write

    ∇h(x) = (∂h(x)/∂x_1, ..., ∂h(x)/∂x_n)^T.

Then the following identity holds:

    cov[X, h(X)] = Σ E[∇h(X)].    (2)

Specifically,

    cov[X_1, h(X_1, ..., X_n)] = \sum_{i=1}^n cov(X_1, X_i) E[∂h(X_1, ..., X_n)/∂x_i].    (3)

Proof: Let Z = (Z_1, ..., Z_n), where the Z_i are i.i.d. N(0, 1) random variables. It is straightforward to show from (1) that for any g(x) that is differentiable almost everywhere, cov[Z_i, g(Z)] = E[∂g(Z)/∂z_i]; hence cov[Z, g(Z)] = E[∇g(Z)]. (See also Stein (1981) for a more elaborate proof.) Now, for the random vector X we can write X = Σ^{1/2} Z + µ and set g(z) = h(Σ^{1/2} z + µ). Hence the left-hand side of (2) is

    cov[X, h(X)] = cov[Σ^{1/2} Z, g(Z)] = Σ^{1/2} E[∇g(Z)].

Since ∇g(z) = Σ^{1/2} ∇h(x), the identity (2) follows immediately.
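Lemma 1 can likewise be checked numerically. The sketch below (an added illustration; the mean vector and covariance matrix are arbitrary) compares both sides of (2) for h(x) = min(x_1, ..., x_n), whose gradient is the indicator of the coordinate attaining the minimum — the same fact used in Section 2.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, d = 1_000_000, 3

mu = np.array([0.5, -1.0, 0.3])        # arbitrary mean vector
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)        # arbitrary positive definite covariance

X = rng.multivariate_normal(mu, Sigma, size=n_samples)
h = X.min(axis=1)                      # h(X) = min(X_1, ..., X_d)

# Left-hand side of (2): the vector cov[X, h(X)], estimated by Monte Carlo.
lhs = np.mean((X - mu) * (h - h.mean())[:, None], axis=0)

# Right-hand side of (2): Sigma E[grad h(X)]; grad h is the indicator of
# the coordinate achieving the minimum (ties have probability zero here).
grad_h = (X == h[:, None]).astype(float)
rhs = Sigma @ grad_h.mean(axis=0)

print(lhs)
print(rhs)
```

The two printed vectors agree up to Monte Carlo error.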
REMARK: A version of Lemma 1 appears in Stein (1981) with Σ equal to the identity matrix. However, our search for an explicit statement of formula (2) in the literature was not successful, although we believe the form is well known in the fields of sliced inverse regression and decision theory. Siegel (1993)'s method can also be applied to prove identity (2), but it is less direct. We find that Siegel's theorem is closely related to Stein's identities and can be generalized to covariances of X_1 with other order statistics, with other functionals, and for other distributions.

2 A simple proof of the generalized Siegel's formula

Let f_u(z) = min(z, u). Then f_u is piecewise linear in z, so for Z ~ N(µ, σ²) Stein's identity can be applied to obtain

    cov[Z, f_u(Z)] = σ² E[f_u'(Z)] = σ² E[I_{Z < u}] = σ² Pr[Z = min(Z, u)],

which easily leads to Lemmas 2 and 3 of Siegel (1993). Siegel (1993) then uses further properties of the normal distribution to complete the proof of his theorem. We find, however, that it is more convenient to apply the multivariate version of Stein's identity directly, which yields the following generalized Siegel's formula.

Theorem 1 Let (X_1, ..., X_n) be multivariate normal, such that the variables are distinct, with arbitrary mean and variance structure. Let X_{(i)} be the ith largest among X_1, ..., X_n. Then

    cov[X_1, X_{(i)}] = \sum_{j=1}^n cov(X_1, X_j) Pr(X_j = X_{(i)}).

Proof: Define h(x_1, ..., x_n) = x_{(i)}. Then h is piecewise linear and therefore differentiable almost everywhere. Note also that ∂h(x_1, ..., x_n)/∂x_j is one if x_j = x_{(i)} and zero if x_j ≠ x_{(i)}. Hence applying identity (3) gives the result.

REMARK: When X_1, ..., X_n are i.i.d., Carl Morris and I found the following ancillarity argument for the above result. Since X_{(i)} - \bar{X} is independent of \bar{X},

    cov(\bar{X}, X_{(i)}) = cov(\bar{X}, \bar{X}) = var(X_1)/n.

On the other hand, cov(X_j, X_{(i)}) = cov(X_k, X_{(i)}) for any j and k because of the i.i.d. assumption, so cov(X_1, X_{(i)}) = var(X_1)/n.
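The following Monte Carlo sketch illustrates Theorem 1 for a correlated normal vector (again an added illustration; the dimension, the choice i = 2, and all parameter values are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, d, i = 1_000_000, 4, 2      # check the ith largest, here i = 2

mu = np.array([0.0, 1.0, -0.5, 0.2])   # arbitrary mean vector
A = rng.normal(size=(d, d))
Sigma = A @ A.T + np.eye(d)            # arbitrary positive definite covariance

X = rng.multivariate_normal(mu, Sigma, size=n_samples)
X_i = np.sort(X, axis=1)[:, d - i]     # ith largest coordinate of each row

# Direct Monte Carlo estimate of cov[X_1, X_(i)].
lhs = np.mean((X[:, 0] - mu[0]) * (X_i - X_i.mean()))

# Theorem 1: sum over j of cov(X_1, X_j) * Pr(X_j = X_(i)).
probs = (X == X_i[:, None]).mean(axis=0)
rhs = Sigma[0] @ probs

print(lhs, rhs)
```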
For a multivariate normal random vector, the covariance of X_1 with min(|X_1|, ..., |X_n|) can also be computed in a similar way. Define h(x_1, ..., x_n) = min(|x_1|, ..., |x_n|); then ∂h(x)/∂x_j equals 1 if 0 < x_j < u, equals -1 if -u < x_j < 0, and equals 0 if |x_j| > u, where u = min(|X_k|, k ≠ j). Hence Stein's identity (3) leads to the following corollary.

Corollary 1 Let Y = min(|X_1|, ..., |X_n|), where (X_1, ..., X_n) is multivariate normal with arbitrary mean and covariance structure. Then

    cov(X_1, Y) = \sum_{i=1}^n cov(X_1, X_i) [Pr(|X_i| = Y, X_i > 0) - Pr(|X_i| = Y, X_i < 0)].

When X_1, ..., X_n have mean zero, this covariance equals zero.

3 Generalizations to other distributions

Stein's identities for other distributions can be used to obtain similar results.

Corollary 2 Let X_1 ~ Poisson(λ_1) be independent of X_2, ..., X_n. Then

    cov[X_1, min(X_1, ..., X_n)] = λ_1 Pr[X_1 < X_j for all j ≠ 1].

Proof: Define f_u(x) in the same way as before. By Stein's identity for the Poisson distribution,

    cov[X_1, f_u(X_1)] = λ_1 E[f_u(X_1 + 1) - f_u(X_1)] = λ_1 Pr(X_1 < u).

Hence, conditionally,

    cov[X_1, f_u(X_1) | X_2, ..., X_n] = λ_1 Pr[X_1 < min(X_2, ..., X_n) | X_2, ..., X_n].

Since cov(U, V) = E[cov(U, V | W)] + cov[E(U | W), E(V | W)] and X_1 is independent of X_j, j ≠ 1, the conclusion follows.

Corollary 3 Let X_1 ~ t_ν(0, σ²) be independent of X_2, ..., X_n. Then

    cov[X_1, min(X_1, ..., X_n)] = \frac{\nu}{\nu-2} σ² Pr[X_1^* < min(X_2, ..., X_n)],

where X_1^* ~ t_{ν-2}(0, σ²). Letting X_{(i)} be the ith order statistic, more generally,

    cov[X_1, X_{(i)}] = \frac{\nu}{\nu-2} σ² Pr[X_{(i-1)} < X_1^* < X_{(i+1)}].

Proof: Use Stein's identity for the t-distribution.
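These corollaries can also be checked by simulation. Here is a minimal sketch for Corollary 2, with all the X_j taken to be Poisson and the rates chosen arbitrarily (an added illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples = 2_000_000

lam = np.array([3.0, 4.0, 5.0])        # arbitrary rates; X_1 has rate lam[0]
X = rng.poisson(lam, size=(n_samples, 3))

m = X.min(axis=1)
# Monte Carlo estimate of cov[X_1, min(X_1, ..., X_n)].
lhs = np.mean((X[:, 0] - lam[0]) * (m - m.mean()))
# Corollary 2: lam_1 * Pr[X_1 < X_j for all j != 1].
rhs = lam[0] * np.mean(X[:, 0] < X[:, 1:].min(axis=1))

print(lhs, rhs)
```

Note that the strict inequality matters here because ties have positive probability for integer-valued variables.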
More generally, Morris (1983) provides Stein-type identities (his Theorem 5.2) for natural exponential families with quadratic variance functions (NEF-QVF), which can be used to extend the above arguments to this broader class of distributions without much difficulty. To illustrate, we compute cov[Y_1, min(Y_1, ..., Y_n)], where the Y_i ~ Exponential(θ_i) are independent with E(Y_i) = 1/θ_i and var(Y_i) = 1/θ_i². Stein's identity takes the form

    cov[Y_1, f(Y_1)] = (1/θ_1) E[Y_1 f'(Y_1)].

Let f_u(x) = min(x, u); then cov[Y_1, f_u(Y_1)] = (1/θ_1) E[Y_1 I_{Y_1 < u}]. Since U = min(Y_2, ..., Y_n) ~ Exponential(θ_2 + ... + θ_n),

    cov[Y_1, min(Y_1, ..., Y_n)] = (1/θ_1) E[E(Y_1 I_{Y_1 < U} | U)] = (1/θ_1) E[Y_1 Pr(U > Y_1 | Y_1)] = \frac{1}{(θ_1 + ... + θ_n)^2}.

REMARK: First, note that this covariance does not depend on θ_1 at all; a more general version of the statement is

    cov[Y_i, min(Y_1, ..., Y_n)] = var[min(Y_1, ..., Y_n)].

Second, Professor Carl Morris provided another ancillarity argument for the result: consider a location-shift family of exponential distributions with known scale parameters θ_1, ..., θ_n. Then Y_{(1)} = min(Y_1, ..., Y_n) is complete sufficient for the location parameter, so by Basu's theorem Y_1 - Y_{(1)} is independent of Y_{(1)}, and the result follows.
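The closed form 1/(θ_1 + ... + θ_n)² is easy to confirm numerically; the sketch below (an added illustration with arbitrary rates) estimates the covariance directly.

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples = 2_000_000

theta = np.array([1.0, 2.0, 5.0])      # arbitrary rate parameters
Y = rng.exponential(1.0 / theta, size=(n_samples, 3))

m = Y.min(axis=1)
# Monte Carlo estimate of cov[Y_1, min(Y_1, ..., Y_n)].
lhs = np.mean((Y[:, 0] - 1.0 / theta[0]) * (m - m.mean()))
rhs = 1.0 / theta.sum() ** 2           # closed form: 1 / (sum of rates)^2

print(lhs, rhs)                        # both should be near 1/64 = 0.015625
```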
Acknowledgement: The author thanks Professors Hal Stern and Carl Morris for bringing up the problem and for very stimulating discussions. Professor Morris also suggested the computations for the exponential distribution.

REFERENCES

Morris, C.N. (1983). Natural exponential families with quadratic variance functions: statistical theory. Ann. Statist., 11, 515-529.

Siegel, A.F. (1993). A surprising covariance involving the minimum of multivariate normal variables. J. Amer. Statist. Assoc., 88, 77-80.

Stein, C.M. (1972). A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. Proc. Sixth Berkeley Symp. Math. Statist. Probab., 2, 583-602.

Stein, C.M. (1981). Estimation of the mean of a multivariate normal distribution. Ann. Statist., 9, 1135-1151.