Lecture 4: Law of Large Numbers and Central Limit Theorem
ECE 645: Estimation Theory, Spring 2015
Instructor: Prof. Stanley H. Chan

Lecture 4: Law of Large Numbers and Central Limit Theorem
(LaTeX prepared by Jing Li)
March 31, 2015

This lecture note is based on ECE 645 (Spring 2015) by Prof. Stanley H. Chan in the School of Electrical and Computer Engineering at Purdue University.

1 Probability Bounds for $P_F$ and $P_M$

In the previous lectures we have studied various detection methods. Starting from this lecture, we want to take a step further and analyze the performance of these detection methods. To motivate the set of new tools called Large Deviation Theory, let us first review some standard tools, namely the Law of Large Numbers and the Central Limit Theorem.

To begin our discussion, let us first consider the probability of false alarm $P_F$ and the probability of miss $P_M$. If $Y = y$ is a one-dimensional observation, we can show the following proposition.

Proposition 1. Given a one-dimensional observation $Y = y$ and a decision rule $\delta(y)$, it holds that
$$P_F \le P(l(y) \ge \eta \mid H_0), \qquad (1)$$
and
$$P_M \le P(l(y) \le \eta \mid H_1), \qquad (2)$$
where $l(y) \overset{\text{def}}{=} \log L(y)$ is the log-likelihood ratio.

Proof. Given $\delta(y)$, it holds that
$$P_F = \int_{l(y) > \eta} f_0(y)\,dy + \gamma \int_{l(y) = \eta} f_0(y)\,dy \le \int_{l(y) \ge \eta} f_0(y)\,dy = P(l(y) \ge \eta \mid H_0),$$
where the inequality holds because $\gamma \le 1$. Similarly, we have
$$P_M = \int_{l(y) < \eta} f_1(y)\,dy + (1-\gamma) \int_{l(y) = \eta} f_1(y)\,dy \le \int_{l(y) \le \eta} f_1(y)\,dy = P(l(y) \le \eta \mid H_1).$$

While the derivation shows that $P_F$ and $P_M$ can be evaluated through the probability of having $l(y) \ge \eta$, the same trick becomes much more difficult if we proceed to a high-dimensional observation $Y = y$. In this case, we let
$$y = [y_1, y_2, \ldots, y_n]^T. \qquad (3)$$
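As a quick numerical sketch of Proposition 1 (my own illustration, not from the notes), take the hypothetical Gaussian pair $H_0: Y \sim \mathcal{N}(0,1)$ versus $H_1: Y \sim \mathcal{N}(1,1)$, for which $l(y) = y - 1/2$. Since $l(Y)$ is a continuous random variable, $P(l(y) = \eta) = 0$ and the bound (1) holds with equality, so a Monte Carlo estimate of $P_F$ should match the Gaussian tail $1 - \Phi(\eta + 1/2)$:

```python
import math
import random

random.seed(0)

# Hypothetical detection problem (illustrative only):
# H0: Y ~ N(0,1),  H1: Y ~ N(1,1)  =>  l(y) = log f1(y)/f0(y) = y - 1/2.
# Rule: decide H1 when l(y) >= eta, so P_F = P(Y >= eta + 1/2 | H0).
eta = 0.5
n = 200_000

# Monte Carlo estimate of P(l(Y) >= eta | H0)
pf_mc = sum(random.gauss(0.0, 1.0) - 0.5 >= eta for _ in range(n)) / n

# Closed form: the standard normal tail 1 - Phi(eta + 1/2)
pf_exact = 0.5 * math.erfc((eta + 0.5) / math.sqrt(2.0))

print(f"Monte Carlo P_F ~ {pf_mc:.4f}, exact tail = {pf_exact:.4f}")
```

The two numbers agree to within Monte Carlo error, confirming that for continuous $l(Y)$ the bound in (1) is tight.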
Then,
$$\int f_0(y)\,dy = \int \cdots \int f_0(y_1, \ldots, y_n)\,dy_1 \cdots dy_n = \int \cdots \int \prod_{i=1}^{n} f_0(y_i)\,dy_1 \cdots dy_n. \qquad (4)$$
Unfortunately, (4) involves multivariate integration and is extremely difficult to compute. To overcome this difficulty, it will be useful to note that
$$P_F \le P(l(y) \ge \eta \mid H_0). \qquad (5)$$
Since $l(y) = \log \frac{f_1(y)}{f_0(y)} = \sum_{i=1}^{n} l_i(y_i)$, where $l_i(y_i) \overset{\text{def}}{=} \log \frac{f_1(y_i)}{f_0(y_i)}$, it holds that
$$P(l(y) \ge \eta \mid H_0) = P\left[\sum_{i=1}^{n} l_i(y_i) \ge \eta \,\Big|\, H_0\right]. \qquad (6)$$
By letting $X_i = l_i(y_i)$, we see that $P_F$ can be equivalently bounded as
$$P_F \le P\left[\sum_{i=1}^{n} X_i \ge \eta \,\Big|\, H_0\right]. \qquad (7)$$
If we can derive an accurate upper bound for $P(\sum_{i=1}^{n} X_i \ge \eta \mid H_0)$, then we can find an upper bound on $P_F$. So the question now is: How do we find good upper bounds for $P(\sum_{i=1}^{n} X_i \ge \eta \mid H_0)$?

2 Weak Law of Large Numbers

We begin the analysis by reviewing some elementary probability inequalities.

Theorem 1 (Markov Inequality). For any random variable $X \ge 0$ and for any $\epsilon > 0$,
$$P(X > \epsilon) \le \frac{E[X]}{\epsilon}. \qquad (8)$$

Proof.
$$\epsilon P(X > \epsilon) = \epsilon \int_{\epsilon}^{\infty} f_X(x)\,dx \overset{(a)}{\le} \int_{\epsilon}^{\infty} x f_X(x)\,dx \overset{(b)}{\le} \int_{0}^{\infty} x f_X(x)\,dx = E[X],$$
where (a) holds because $\epsilon < x$ on the domain of integration, and (b) holds because $x f_X(x) \ge 0$.
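Markov's inequality is easy to sanity-check numerically. The sketch below (my own illustration) uses Exponential(1) samples, so $X \ge 0$ and $E[X] = 1$ as the theorem requires, and compares the empirical tail $P(X > \epsilon)$ against the bound $E[X]/\epsilon$ for a few thresholds:

```python
import random

random.seed(0)
n = 100_000

# X ~ Exponential(1): nonnegative with E[X] = 1, as Markov's inequality requires.
samples = [random.expovariate(1.0) for _ in range(n)]
mean = sum(samples) / n

results = []
for eps in (1.0, 2.0, 4.0):
    tail = sum(x > eps for x in samples) / n   # empirical P(X > eps)
    bound = mean / eps                         # Markov bound E[X]/eps
    results.append((eps, tail, bound))
    print(f"eps={eps}: P(X>eps) ~ {tail:.4f} <= E[X]/eps ~ {bound:.4f}")
```

The bound is loose (for Exponential(1) the true tail is $e^{-\epsilon}$), which is exactly why sharper tools will be needed later.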
TO DO: Add a pictorial explanation using $E[X] = \int_0^{\infty} (1 - F_X(x))\,dx$.

Theorem 2 (Chebyshev Inequality). Let $X$ be a random variable such that $E[X] = \mu$ and $\mathrm{Var}(X) < \infty$. Then, for all $\epsilon > 0$,
$$P(|X - \mu| > \epsilon) \le \frac{\mathrm{Var}[X]}{\epsilon^2}. \qquad (9)$$

Proof.
$$P(|X - \mu| > \epsilon) = P((X - \mu)^2 > \epsilon^2) \le \frac{E[(X - \mu)^2]}{\epsilon^2} = \frac{\mathrm{Var}[X]}{\epsilon^2},$$
where the inequality is due to Markov.

With the Chebyshev inequality, we can now prove the following result.

Proposition 2. Let $X_1, \ldots, X_n$ be iid random variables with $E[X_k] = \mu$ and $\mathrm{Var}(X_k) = \sigma^2$. If $Y_n = \frac{1}{n} \sum_{k=1}^{n} X_k$, then for any $\epsilon > 0$, we have
$$P(|Y_n - \mu| > \epsilon) \le \frac{\sigma^2}{n \epsilon^2}. \qquad (10)$$

Proof. By the Chebyshev inequality, we have
$$P(|Y_n - \mu| > \epsilon) \le \frac{E[(Y_n - \mu)^2]}{\epsilon^2}.$$
Now, we can show that
$$E[(Y_n - \mu)^2] = \mathrm{Var}(Y_n) = \mathrm{Var}\left(\frac{1}{n} \sum_{k=1}^{n} X_k\right) = \frac{1}{n^2} \sum_{k=1}^{n} \mathrm{Var}(X_k) = \frac{\sigma^2}{n}.$$

The interpretation of Proposition 2 is important. It says that if we have a sequence of iid random variables $X_1, \ldots, X_n$, the sample mean $Y_n$ will stay around the mean $\mu$ of $X_1$. In particular,
$$\lim_{n \to \infty} P(|Y_n - \mu| > \epsilon) \le \lim_{n \to \infty} \frac{\sigma^2}{n \epsilon^2} = 0.$$
This result is known as the Weak Law of Large Numbers.
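Proposition 2 can be visualized with a quick simulation (my own sketch, using Uniform(0,1) variables so that $\mu = 1/2$ and $\sigma^2 = 1/12$): for a fixed $\epsilon$, the empirical deviation probability $P(|Y_n - \mu| > \epsilon)$ shrinks with $n$ and stays below the Chebyshev bound $\sigma^2/(n\epsilon^2)$:

```python
import random

random.seed(1)
mu, var = 0.5, 1.0 / 12.0   # Uniform(0,1): mean 1/2, variance 1/12
eps, trials = 0.05, 5_000

results = []
for n in (50, 200, 800):
    # Empirical P(|Y_n - mu| > eps) over many independent sample means
    dev = sum(
        abs(sum(random.random() for _ in range(n)) / n - mu) > eps
        for _ in range(trials)
    ) / trials
    bound = var / (n * eps ** 2)   # Chebyshev bound sigma^2 / (n eps^2)
    results.append((n, dev, bound))
    print(f"n={n}: P(|Y_n - mu| > eps) ~ {dev:.4f}, Chebyshev bound = {bound:.4f}")
```

Note how conservative the bound is; this slack is another motivation for the sharper large-deviation bounds developed later in the course.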
Example. Consider a unit square containing an arbitrary shape $\Omega$. Let $X_1, \ldots, X_n$ be a sequence of iid Bernoulli random variables with probability $p = |\Omega|$, i.e., $p$ = area of $\Omega$. Let $Y_n = \frac{1}{n} \sum_{k=1}^{n} X_k$. We can show that
$$E[Y_n] = \frac{1}{n} \sum_{k=1}^{n} E[X_k] = \frac{np}{n} = p, \qquad (11)$$
and
$$\mathrm{Var}(Y_n) = \frac{1}{n^2} \sum_{k=1}^{n} \mathrm{Var}(X_k) = \frac{1}{n^2} \cdot n p(1-p) = \frac{p(1-p)}{n}. \qquad (12)$$
Therefore,
$$P(|Y_n - p| > \epsilon) \le \frac{p(1-p)}{n \epsilon^2} \to 0 \quad \text{as } n \to \infty.$$
So by throwing arbitrarily many darts $n$ at the unit square we can approximate the area of $\Omega$.

Example. TO DO: Add an example of approximating $y = \sum_{i=1}^n a_i x_i$ by $Y = \sum_{i=1}^n a_i x_i I_i / p_i$.

The convergence behavior demonstrated by the WLLN is known as convergence in probability. Formally, it says the following.

Definition 1 (Convergence in Probability). We say that a sequence of random variables $Y_1, \ldots, Y_n$ converges in probability to $\mu$, denoted by $Y_n \overset{p}{\to} \mu$, if
$$\lim_{n \to \infty} P(|Y_n - \mu| > \epsilon) = 0. \qquad (13)$$

For more discussion regarding the WLLN, we refer the readers to standard probability textbooks. We close this section by mentioning the following proposition, which turns out to be very useful in practice.

Proposition 3. If $Y_n \overset{p}{\to} \mu$, then $f(Y_n) \overset{p}{\to} f(\mu)$ for any function $f$ that is continuous at $\mu$.

Proof. Since $f$ is continuous at $\mu$, for every $\epsilon > 0$ there exists $\delta > 0$ such that $|x - \mu| < \delta \Rightarrow |f(x) - f(\mu)| < \epsilon$. Hence the event $\{|Y_n - \mu| < \delta\}$ is a subset of the event $\{|f(Y_n) - f(\mu)| < \epsilon\}$, and taking complements,
$$P(|f(Y_n) - f(\mu)| \ge \epsilon) \le P(|Y_n - \mu| \ge \delta) \to 0 \quad \text{as } n \to \infty.$$
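The dart-throwing example above is exactly Monte Carlo integration. A minimal sketch (my own, taking $\Omega$ to be the quarter disk $x^2 + y^2 \le 1$ inside the unit square, whose area is $\pi/4$):

```python
import math
import random

random.seed(2)
n = 200_000

# Throw n uniform darts at the unit square; X_k = 1 if the dart lands in
# Omega = {(x, y) : x^2 + y^2 <= 1}, the quarter disk of area pi/4.
hits = sum(
    random.random() ** 2 + random.random() ** 2 <= 1.0 for _ in range(n)
)
y_n = hits / n   # sample mean Y_n -> p = area(Omega) by the WLLN

print(f"estimate = {y_n:.4f}, true area pi/4 = {math.pi / 4:.4f}")
```

By (12) the standard deviation of the estimate is $\sqrt{p(1-p)/n} \approx 9 \times 10^{-4}$ here, so the printed estimate agrees with $\pi/4$ to about three decimal places.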
Example. Let $X_1, \ldots, X_n$ be iid Poisson($\lambda$). Then if $Y_n = \frac{1}{n} \sum_{k=1}^{n} X_k$, we have $Y_n \overset{p}{\to} \lambda$, and therefore, by Proposition 3,
$$e^{Y_n} \overset{p}{\to} e^{\lambda}.$$

3 Central Limit Theorem

In introductory probability courses we have also learned the Central Limit Theorem. The Central Limit Theorem concerns the convergence of a sequence of distributions.

Definition 2. A sequence of distributions with CDFs $F_1, \ldots, F_n$ is said to converge to another distribution $F$, denoted as $F_n \to F$, if $F_n(x) \to F(x)$ at all continuity points $x$ of $F$.

Definition 3 (Convergence in Distribution). A sequence of random variables $Y_1, \ldots, Y_n$ is said to converge to $Y$ in distribution, denoted as $Y_n \overset{d}{\to} Y$, if $F_n \to F$, where $F_n$ is the CDF of $Y_n$ and $F$ is the CDF of $Y$.

Example. The notation $Y_n \overset{d}{\to} \mathcal{N}(0,1)$ means that the distribution of $Y_n$ is converging to $\mathcal{N}(0,1)$. Note that $Y_n \overset{d}{\to} Y$ does not mean that $Y_n$ is becoming $Y$. It only means that $F_{Y_n}$ is becoming $F_Y$.

Remark. $Y_n \overset{p}{\to} Y \Rightarrow Y_n \overset{d}{\to} Y$, but the converse is not true. For example, let $X$ and $Y$ be two iid random variables with distribution $\mathcal{N}(0,1)$. Let $Y_n = Y + \frac{1}{n}$. Then it can be shown that $Y_n \overset{p}{\to} Y$, as well as $Y_n \overset{d}{\to} Y$. This gives $Y_n \overset{d}{\to} X$, as $X$ has the same distribution as $Y$. However, $Y_n \overset{p}{\to} X$ is not true, as $Y_n$ is becoming $Y$, not $X$.

We now present the Central Limit Theorem.

Theorem 3 (Central Limit Theorem). Let $X_1, \ldots, X_n$ be iid random variables with $E[X_k] = \mu$ and $\mathrm{Var}(X_k) = \sigma^2 < \infty$. Then
$$\sqrt{n}(Y_n - \mu) \overset{d}{\to} \mathcal{N}(0, \sigma^2),$$
where $Y_n = \frac{1}{n} \sum_{k=1}^{n} X_k$.

Proof. It is sufficient to prove that
$$\sqrt{n}\left(\frac{Y_n - \mu}{\sigma}\right) \overset{d}{\to} \mathcal{N}(0, 1).$$
Let $Z_n = \sqrt{n}\left(\frac{Y_n - \mu}{\sigma}\right)$. The moment generating function of $Z_n$ is
$$M_{Z_n}(s) \overset{\text{def}}{=} E[e^{s Z_n}] = E\left[e^{s \sqrt{n} \left(\frac{Y_n - \mu}{\sigma}\right)}\right] = \prod_{k=1}^{n} E\left[e^{\frac{s}{\sigma \sqrt{n}} (X_k - \mu)}\right].$$
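Before working through the proof, the statement of Theorem 3 can be checked empirically. In this sketch (my own illustration), $X_k \sim \text{Uniform}(0,1)$, so $\mu = 1/2$ and $\sigma^2 = 1/12$; the empirical CDF of $Z_n = \sqrt{n}(Y_n - \mu)/\sigma$ over many trials is compared against $\Phi$ at a few points:

```python
import math
import random

random.seed(3)
n, trials = 400, 20_000
mu, sigma = 0.5, math.sqrt(1.0 / 12.0)  # Uniform(0,1): mean 1/2, std sqrt(1/12)

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Many independent realizations of the standardized sample mean Z_n
zs = [
    math.sqrt(n) * (sum(random.random() for _ in range(n)) / n - mu) / sigma
    for _ in range(trials)
]

# Compare the empirical CDF of Z_n with Phi at a few points
errors = []
for x in (-1.0, 0.0, 1.0):
    emp = sum(z <= x for z in zs) / trials
    errors.append(abs(emp - phi(x)))
    print(f"x={x:+.1f}: F_Zn(x) ~ {emp:.4f}, Phi(x) = {phi(x):.4f}")
```

The empirical CDF matches $\Phi$ to within a couple of decimal places, consistent with convergence in distribution as in Definition 3.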
By Taylor approximation, we have
$$E\left[e^{\frac{s}{\sigma\sqrt{n}}(X_k - \mu)}\right] = E\left[1 + \frac{s}{\sigma\sqrt{n}}(X_k - \mu) + \frac{s^2}{2\sigma^2 n}(X_k - \mu)^2 + O\left(\frac{(X_k - \mu)^3}{\sigma^3 n^{3/2}}\right)\right] = 1 + 0 + \frac{s^2}{2n} + o\left(\frac{1}{n}\right).$$
Therefore,
$$M_{Z_n}(s) = \left(1 + \frac{s^2}{2n}\right)^n \overset{(a)}{\to} e^{s^2/2}$$
as $n \to \infty$. To prove (a), we let $y_n = \left(1 + \frac{s^2}{2n}\right)^n$. Then $\log y_n = n \log\left(1 + \frac{s^2}{2n}\right)$, and by the Taylor approximation $\log(1 + x_0) \approx x_0 - \frac{x_0^2}{2}$ we have
$$\log y_n = n \log\left(1 + \frac{s^2}{2n}\right) \approx n\left(\frac{s^2}{2n} - \frac{s^4}{8n^2}\right) = \frac{s^2}{2} - \frac{s^4}{8n} \to \frac{s^2}{2}.$$

As a corollary of the Central Limit Theorem, we can also derive the following proposition.

Proposition 4 (Delta Method). If $\sqrt{n}(T_n - \theta) \overset{d}{\to} \mathcal{N}(0, \tau^2)$, then
$$\sqrt{n}(f(T_n) - f(\theta)) \overset{d}{\to} \mathcal{N}(0, \tau^2 f'(\theta)^2),$$
provided $f'(\theta)$ exists. This result is known as the Delta Method.

Proof. By Taylor expansion,
$$f(T_n) = f(\theta) + (T_n - \theta) f'(\theta) + O((T_n - \theta)^2),$$
so
$$\sqrt{n}(f(T_n) - f(\theta)) = \sqrt{n}(T_n - \theta) f'(\theta) + o_p(1) \overset{d}{\to} \mathcal{N}(0, \tau^2 f'(\theta)^2).$$

We close this section by discussing the limitation of the Central Limit Theorem. Recall that our analysis question is to study
$$P\left(\sum_{i=1}^{n} X_i \ge \eta\right). \qquad (14)$$
The Central Limit Theorem says that
$$\lim_{n \to \infty} P\left(\frac{\sum_{i=1}^{n} X_i - n\mu}{\sqrt{n}\,\sigma} \le \epsilon\right) = \Phi(\epsilon).$$
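The Delta Method is also easy to verify by simulation. In this sketch (my own illustration), $T_n$ is the sample mean of Exponential(1) variables, so $\theta = 1$ and $\tau^2 = 1$; with $f(x) = x^2$ we have $f'(\theta) = 2$, and Proposition 4 predicts $\sqrt{n}(T_n^2 - 1) \overset{d}{\to} \mathcal{N}(0, 4)$, i.e., a limiting standard deviation of $|f'(\theta)|\,\tau = 2$:

```python
import math
import random
import statistics

random.seed(4)
n, trials = 500, 20_000

# T_n = sample mean of Exponential(1), so theta = 1 and tau^2 = 1.
# f(x) = x^2 with f'(theta) = 2, so the Delta Method predicts
# sqrt(n) * (f(T_n) - f(theta)) ~ N(0, 4) for large n.
vals = []
for _ in range(trials):
    t_n = sum(random.expovariate(1.0) for _ in range(n)) / n
    vals.append(math.sqrt(n) * (t_n ** 2 - 1.0))

sd = statistics.pstdev(vals)
print(f"empirical std = {sd:.3f}, Delta Method prediction = 2.000")
```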
This implies that
$$\lim_{n \to \infty} P\left(\sum_{i=1}^{n} X_i \le n\mu + \sqrt{n}\,\sigma \epsilon\right) = \Phi(\epsilon),$$
and hence
$$\lim_{n \to \infty} P\left(\frac{1}{n} \sum_{i=1}^{n} X_i \le \mu + \frac{\sigma \epsilon}{\sqrt{n}}\right) = \Phi(\epsilon).$$
As $n \to \infty$, $\frac{\sigma \epsilon}{\sqrt{n}} \to 0$. Thus the deviation that the Central Limit Theorem can handle is a small deviation, of order $O(1/\sqrt{n})$ around the mean.

TO DO: Add a picture to explain small deviation vs. large deviation.
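The small-deviation limitation can also be seen numerically. In this sketch (my own illustration, using Bernoulli(1/2) variables so that the exact binomial tail is computable), we fix a constant relative deviation $\delta = 0.2$, which does not shrink like $1/\sqrt{n}$ and is therefore a large deviation, and compare the exact tail $P(\sum_i X_i \ge n(p+\delta))$ with the naive CLT plug-in $1 - \Phi\big((k_0 - np)/\sqrt{np(1-p)}\big)$. The ratio between the two drifts away from 1 as $n$ grows:

```python
import math

def normal_tail(x):
    """1 - Phi(x) for the standard normal."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def binom_tail(n, p, k0):
    """Exact P(S_n >= k0) for S_n ~ Binomial(n, p), summed in log space."""
    log_p, log_q = math.log(p), math.log(1.0 - p)
    total = 0.0
    for k in range(k0, n + 1):
        log_term = (math.lgamma(n + 1) - math.lgamma(k + 1)
                    - math.lgamma(n - k + 1) + k * log_p + (n - k) * log_q)
        total += math.exp(log_term)
    return total

p, delta = 0.5, 0.2   # fixed relative deviation: a *large* deviation
ratios = []
for n in (100, 400, 1600):
    k0 = math.ceil(n * (p + delta))
    exact = binom_tail(n, p, k0)
    clt = normal_tail((k0 - n * p) / math.sqrt(n * p * (1.0 - p)))
    ratios.append(clt / exact)
    print(f"n={n}: exact = {exact:.3e}, CLT approx = {clt:.3e}, "
          f"ratio = {clt / exact:.2f}")
```

For deviations at this fixed scale the CLT approximation becomes increasingly inaccurate, which is precisely what the large-deviation bounds of the next lectures are designed to fix.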
More informationPart IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015
Part IA Probability Definitions Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.
More informationLecture 21: Convergence of transformations and generating a random variable
Lecture 21: Convergence of transformations and generating a random variable If Z n converges to Z in some sense, we often need to check whether h(z n ) converges to h(z ) in the same sense. Continuous
More information1. Point Estimators, Review
AMS571 Prof. Wei Zhu 1. Point Estimators, Review Example 1. Let be a random sample from. Please find a good point estimator for Solutions. There are the typical estimators for and. Both are unbiased estimators.
More informationRandom Variables. Cumulative Distribution Function (CDF) Amappingthattransformstheeventstotherealline.
Random Variables Amappingthattransformstheeventstotherealline. Example 1. Toss a fair coin. Define a random variable X where X is 1 if head appears and X is if tail appears. P (X =)=1/2 P (X =1)=1/2 Example
More informationQualifying Exam in Probability and Statistics. https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf
Part : Sample Problems for the Elementary Section of Qualifying Exam in Probability and Statistics https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf Part 2: Sample Problems for the Advanced Section
More informationChapter 2: Fundamentals of Statistics Lecture 15: Models and statistics
Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Data from one or a series of random experiments are collected. Planning experiments and collecting data (not discussed here). Analysis:
More information