Topic 7: Convergence of Random Variables
- Ethan Blake
1 Topic 7: Convergence of Random Variables (Course 003, 2016)
2 The Inference Problem

So far, our starting point has been a given probability space (S, F, P). We now look at how to generate information about the probability space by analyzing a sample of outcomes. This process is referred to as statistical inference.

Inference procedures are parametric when we make assumptions about the probability space from which our sample is drawn (for example, each sample observation represents an outcome of a normally distributed random variable with unknown mean and unit variance). If we make no such assumptions, our procedures are nonparametric.

Our focus in this course is on parametric inference. This involves both the estimation of population parameters and testing hypotheses about them.
3 Defining a Statistic

Definition: Any real-valued function T = r(X₁, ..., Xₙ) is called a statistic.

Notice that:
- A statistic is itself a random variable.
- We've considered several functions of random variables whose distributions are well defined, such as:
  - Y = (X − µ)/σ, where X ~ N(µ, σ²). We showed that Y ~ N(0, 1).
  - Y = Σᵢ₌₁ⁿ Xᵢ, where each Xᵢ has a Bernoulli distribution with parameter p, was shown to have a binomial distribution with parameters n and p.
  - Y = Σᵢ₌₁ⁿ Xᵢ², where each Xᵢ has a standard normal distribution, was shown to have a χ²ₙ distribution, etc.
- Only some of these functions of random variables are statistics: a statistic cannot depend on unknown parameters, so Y = (X − µ)/σ is not one when µ and σ are unknown.

This distinction is important because statistics have sample counterparts. In a problem of estimating an unknown parameter θ, our estimator will be a statistic whose value can be regarded as an estimate of θ. It turns out that for large samples, the distributions of some statistics, such as the sample mean, are well known.
4 Markov's Inequality

We begin with some useful inequalities which provide us with distribution-free bounds on the probability of certain events and are useful in proving the law of large numbers, one of our two main large sample theorems.

Markov's Inequality: Let X be a random variable with density function f(x) such that P(X ≥ 0) = 1. Then, for any given number t > 0,

P(X ≥ t) ≤ E(X)/t

Proof (for discrete distributions):

E(X) = Σₓ x f(x) = Σ_{x<t} x f(x) + Σ_{x≥t} x f(x)

All terms in these summations are non-negative by assumption, so we have

E(X) ≥ Σ_{x≥t} x f(x) ≥ Σ_{x≥t} t f(x) = t P(X ≥ t)

This inequality obviously holds for t ≤ E(X) (why?). Its main interest is in bounding the probability in the tails. For example, if the mean of X is 1, the probability of X taking values bigger than 100 is at most .01. This is true irrespective of the distribution of X; this is what makes the result powerful.
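Markov's inequality is easy to check numerically. The sketch below (an illustration, not part of the notes) draws exponential samples with mean 1 and compares the empirical tail probability P(X ≥ t) with the bound E(X)/t; since Markov's inequality also holds for the empirical distribution of non-negative draws, the assertion can never fail.

```python
import random

random.seed(0)
# X ~ Exponential with mean 1, so E(X) = 1 and the true tail is P(X >= t) = exp(-t).
draws = [random.expovariate(1.0) for _ in range(100_000)]
mean = sum(draws) / len(draws)

for t in (2.0, 5.0, 10.0):
    tail = sum(x >= t for x in draws) / len(draws)   # empirical P(X >= t)
    bound = mean / t                                 # Markov bound E(X)/t
    assert tail <= bound + 1e-9
    print(f"t={t}: P(X>=t)={tail:.4f} <= E(X)/t={bound:.4f}")
```

The gap between the tail and the bound grows quickly with t, which is the point made on the slide.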
5 Chebyshev's Inequality

This is a special case of Markov's inequality and relates the variance of a distribution to the probability associated with deviations from the mean.

Chebyshev's Inequality: Let X be a random variable with finite variance σ² and mean µ. Then, for every t > 0,

P(|X − µ| ≥ t) ≤ σ²/t²

or equivalently,

P(|X − µ| < t) ≥ 1 − σ²/t²

Proof: Use Markov's inequality with Y = (X − µ)², and use t² in place of the constant t. Then Y takes only non-negative values and E(Y) = Var(X) = σ².

In particular, this tells us that for any random variable, the probability that values taken by the variable will be more than 2 standard deviations away from the mean cannot exceed 1/4, and more than 3 standard deviations away cannot exceed 1/9:

P(|X − µ| ≥ 2σ) ≤ 1/4 and P(|X − µ| ≥ 3σ) ≤ 1/9

For most distributions, this upper bound is considerably higher than the actual probability of this event. What are these probabilities for a Normal distribution?
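For the Normal distribution the tail probabilities asked about on the slide can be computed with the error function, using only the standard library (a quick check, not part of the notes):

```python
import math

def normal_two_sided_tail(k: float) -> float:
    """P(|X - mu| >= k*sigma) for X ~ N(mu, sigma^2), via the standard normal CDF."""
    phi = 0.5 * (1 + math.erf(k / math.sqrt(2)))   # P(Z <= k)
    return 2 * (1 - phi)

for k in (2, 3):
    exact = normal_two_sided_tail(k)
    bound = 1 / k**2                                # Chebyshev bound
    print(f"k={k}: exact={exact:.4f}, Chebyshev bound={bound:.4f}")
```

The exact normal probabilities (about .0455 for 2σ and .0027 for 3σ) are far below the Chebyshev bounds of 1/4 and 1/9, illustrating how conservative the distribution-free bound is.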
6 Probability bounds: an example

Chebyshev's Inequality can, in principle, be used for computing bounds for the probabilities of certain events. In practice this is not often done because the bounds it provides are much larger than the actual probabilities of tail events, as seen in the following example.

Let the density function of X be given by f(x) = (1/(2√3)) I₍₋√₃, √₃₎(x). In this case µ = 0 and σ² = (b − a)²/12 = 1. If t = 3/2, then

Pr(|X − µ| ≥ 3/2) = Pr(|X| ≥ 3/2) = 2 ∫_{3/2}^{√3} 1/(2√3) dx = 1 − √3/2 ≈ .13

Chebyshev's inequality gives us σ²/t² = 4/9, which is much higher. If t = 2, the exact probability is 0, while the Chebyshev bound is 1/4.
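The arithmetic in this example can be verified directly, using the closed form for the tail of a uniform density (an illustrative check):

```python
import math

a, b = -math.sqrt(3), math.sqrt(3)     # X ~ Uniform(-sqrt(3), sqrt(3))
var = (b - a) ** 2 / 12                # = 1

def tail(t):
    """Exact P(|X| >= t) for the symmetric uniform density above."""
    return 0.0 if t >= b else 2 * (b - t) / (b - a)

assert abs(var - 1.0) < 1e-12
print(round(tail(1.5), 2))       # exact probability at t = 3/2 -> 0.13
print(round(var / 1.5**2, 2))    # Chebyshev bound 4/9 -> 0.44
print(tail(2.0))                 # exact probability at t = 2 -> 0.0, versus bound 1/4
```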
7 The sample mean

X̄ₙ = (1/n)(X₁ + ... + Xₙ)

Recall the mean and variance of the sample mean:

E(X̄ₙ) = (1/n) Σᵢ₌₁ⁿ E(Xᵢ) = (1/n) · nµ = µ

Var(X̄ₙ) = (1/n²) Var(Σᵢ₌₁ⁿ Xᵢ) = (1/n²) Σᵢ₌₁ⁿ Var(Xᵢ) = (1/n²) · nσ² = σ²/n

So we know the following about the sample mean:
- Its expectation is equal to that of the population.
- It is more concentrated around its mean value µ than was the original distribution.
- The larger the sample, the lower the variance of X̄ₙ.
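A small simulation (illustrative only; the choice of a normal population is mine) shows the variance of the sample mean shrinking like σ²/n:

```python
import random
import statistics

random.seed(1)
mu, sigma = 0.0, 2.0

def sample_mean(n):
    """One draw of the sample mean of n i.i.d. N(mu, sigma^2) observations."""
    return sum(random.gauss(mu, sigma) for _ in range(n)) / n

for n in (4, 16, 64):
    means = [sample_mean(n) for _ in range(5_000)]
    print(f"n={n}: Var(sample mean) ~ {statistics.variance(means):.3f} "
          f"(theory: {sigma**2 / n:.3f})")
```

Each line of output should sit close to the theoretical value σ²/n = 4/n.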
8 Sample size and precision of the sample mean

We can use Chebyshev's Inequality to ask how big a sample we should take if we want to ensure a certain level of precision in our estimate of the sample mean. Suppose the random sample is picked from a distribution with unknown mean and variance equal to 4, and we want to ensure an estimate which is within 1 unit of the real mean with probability .99. So we want Pr(|X̄ₙ − µ| ≥ 1) ≤ .01. Applying Chebyshev's Inequality we get Pr(|X̄ₙ − µ| ≥ 1) ≤ 4/n. Since we want 4/n = .01, we take n = 400.

This calculation does not use any information on the distribution of X and therefore often gives us a much larger number than we would get if this information was available. That's the trade-off between the more efficient parametric procedures and the more robust non-parametric ones.
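The calculation above generalizes to a one-line rule: to be within t of the mean with probability at least 1 − α, Chebyshev requires n ≥ σ²/(t²α). A sketch (the function name is mine):

```python
import math

def chebyshev_sample_size(var, t, alpha):
    """Smallest n with var/(n*t^2) <= alpha, i.e. P(|Xbar_n - mu| >= t) <= alpha."""
    return math.ceil(var / (t**2 * alpha))

print(chebyshev_sample_size(var=4, t=1, alpha=0.01))   # -> 400, as on the slide
```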
9 Sample size and precision: Bernoulli example

We want to pick a random sample of size n from a Bernoulli distribution and use the sample mean to estimate the parameter p. How big should n be for the sample mean of (X₁, ..., Xₙ) to lie within .1 of the population mean with probability equal to k?

If we don't use the distribution of Xᵢ and apply Chebyshev's Inequality, we get:

P(|X̄ₙ − p| < .1) ≥ 1 − σ²(X̄ₙ)/(.1)² or P(|X̄ₙ − p| < .1) ≥ 1 − 100 p(1 − p)/n

Setting the RHS equal to k, we get: n = 100 p(1 − p)/(1 − k)

With p(1 − p) = 1/4 (p = 1/2) and k = .7, we choose n = 84 so that our sample mean lies in the interval [.4, .6] with probability at least .7. What if p ≠ 1/2?

Now let's use the fact that Xᵢ is Bernoulli. We want

P(p − .1 ≤ X̄ₙ ≤ p + .1) ≥ k, i.e. P(np − .1n ≤ Σᵢ Xᵢ ≤ np + .1n) ≥ k

When n = 15 and p = .5, np = 7.5, and for the Binomial distribution P(6 ≤ Σᵢ Xᵢ ≤ 9) = F(9) − F(5) ≈ .7, so we can use this sample size for the desired level of precision in our estimate of X̄ₙ.
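The binomial probability quoted for n = 15 can be reproduced exactly with `math.comb` (a check of the slide's claim, not part of the notes):

```python
from math import comb

n, p = 15, 0.5
# P(np - .1n <= sum(X_i) <= np + .1n) = P(6 <= S <= 9) for S ~ Binomial(15, .5)
lo, hi = 6, 9
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(lo, hi + 1))
print(round(prob, 3))   # -> 0.698, roughly the desired .7
```

So the exact distribution delivers the desired precision with n = 15, against n = 84 from the distribution-free Chebyshev bound.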
10 Convergence of Real Sequences

We would like our estimators to be well-behaved. What does this mean? One desirable property is that our estimates get closer to the parameter that we are trying to estimate as our sample gets larger. We're going to make precise this notion of getting closer.

Recall that a sequence is just a function from the set of natural numbers N to any set A (examples: yₙ = 2ⁿ, yₙ = 1/n).

A real number sequence {yₙ} converges to y if for every ε > 0 there exists N(ε) for which n ≥ N(ε) ⟹ |yₙ − y| < ε. In such a case, we say that {yₙ} → y. Which of the above sequences converge?

If we have a sequence of functions {fₙ}, the sequence is said to converge to a function f if fₙ(x) → f(x) for all x in the domain of f. In the case of matrices, a sequence of matrices converges if each of the sequences formed by the (i, j)th elements converges, i.e. Yₙ[i, j] → Y[i, j].
11 Sequences of Random Variables

A sequence of random variables is a sequence for which the set A is a collection of random variables and the function defining the sequence puts these random variables in a specific order.

Examples: Yₙ = (1/n) Σᵢ₌₁ⁿ Xᵢ, where Xᵢ ~ N(µ, σ²), or Yₙ = Σᵢ₌₁ⁿ Xᵢ, where Xᵢ ~ Bernoulli(p).

We now need to modify our notion of convergence, since the sequence {Yₙ} no longer defines a given sequence of real numbers but rather many different real number sequences, depending on the realizations of X₁, ..., Xₙ. Convergence questions can no longer be verified unequivocally, since we are not referring to a given real sequence, but they can be assigned a probability of occurrence based on the probability space for the random variables involved.

There are several types of random variable convergence discussed in the literature. We'll focus on two of these:
- Convergence in Distribution
- Convergence in Probability
12 Convergence in Distribution

Definition: Let {Yₙ} be a sequence of random variables, and let {Fₙ} be the associated sequence of cumulative distribution functions. If there exists a cumulative distribution function F such that Fₙ(y) → F(y) at every point y where F is continuous, then F is called the limiting CDF of {Yₙ}. Denoting by Y the r.v. with distribution function F, we say that Yₙ converges in distribution to the random variable Y and denote this by Yₙ →d Y. The notation Yₙ →d F is also used to denote Yₙ →d Y ~ F.

Convergence in distribution holds if there is convergence in the sequence of densities (fₙ(y) → f(y)) or in the sequence of MGFs (M_{Yₙ}(t) → M_Y(t)). In some cases, it may be easier to use these to show convergence in distribution.

Result: Let Xₙ →d X, and let the random variable g(X) be defined by a continuous function g(·). Then g(Xₙ) →d g(X).

Example: Suppose Zₙ →d Z ~ N(0, 1); then 2Zₙ + 5 →d 2Z + 5 ~ N(5, 4) (why?)
13 Convergence in Probability

This concept formalizes the idea that we can bring the outcomes of the random variable Yₙ arbitrarily close to the outcomes of the random variable Y for large enough n.

Definition: The sequence of random variables {Yₙ} converges in probability to the random variable Y iff

lim_{n→∞} P(|Yₙ − Y| < ε) = 1 for every ε > 0

We denote this by Yₙ →p Y or plim Yₙ = Y. This justifies using outcomes of Y as an approximation for outcomes of Yₙ, since the two are very close for large n.

Notice that convergence in distribution is a statement about the distribution functions of Yₙ and Y, whereas convergence in probability is a statement about the joint distribution of the outcomes yₙ and y. Distribution functions of very different experiments may be the same: an even number on a fair die and a head on a fair coin have the same distribution function, but the outcomes of these random variables are unrelated. Therefore Yₙ →p Y implies Yₙ →d Y, but not conversely. In the special case where Yₙ →d c, a constant, we also have Yₙ →p c, and the two are equivalent.
14 The Weak Law of Large Numbers

Consider now the convergence of the random variable sequence whose nth term is given by:

X̄ₙ = (1/n) Σᵢ₌₁ⁿ Xᵢ

WLLN: Let {Xₙ} be a sequence of i.i.d. random variables with finite mean µ and variance σ². Then X̄ₙ →p µ.

Proof: Using Chebyshev's Inequality,

P(|X̄ₙ − µ| < ε) ≥ 1 − σ²/(nε²)

Hence lim_{n→∞} P(|X̄ₙ − µ| < ε) = 1, or plim X̄ₙ = µ.

The WLLN will allow us to use the sample mean as an estimate of the population mean, under very general conditions.
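The WLLN can be watched in action with a short simulation (illustrative; the Bernoulli population and parameter value are my choices, and any distribution with finite variance would do):

```python
import random

random.seed(2)
p = 0.3   # Bernoulli parameter; the population mean is p

for n in (10, 100, 10_000):
    # sample mean of n Bernoulli(p) draws
    xbar = sum(random.random() < p for _ in range(n)) / n
    print(f"n={n:>6}: sample mean = {xbar:.4f}")
# The sample means concentrate around 0.3 as n grows.
```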
15 Central Limit Theorems

Central limit theorems specify conditions under which sequences of random variables converge in distribution to known families of distributions. These are very useful in deriving asymptotic distributions of test statistics whose exact distributions are either cumbersome or difficult to derive. There are a large number of theorems, which vary by the assumptions they place on the random variables (dependent or independent, identically or non-identically distributed).

The Lindeberg-Levy CLT: Let {Xₙ} be a sequence of i.i.d. random variables with E(Xᵢ) = µ and Var(Xᵢ) = σ² ∈ (0, ∞) for all i. Then

(X̄ₙ − µ)/(σ/√n) = √n (X̄ₙ − µ)/σ →d N(0, 1)

In words: whenever a random sample of size n (large) is taken from any distribution with mean µ and variance σ², the sample mean X̄ₙ will have a distribution which is approximately normal with mean µ and variance σ²/n.
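A quick simulation (a sketch, not from the notes) standardizes sample means from a decidedly non-normal population, Uniform(0, 1), and checks that roughly 95% of them fall within ±1.96, as the N(0, 1) limit predicts:

```python
import math
import random

random.seed(3)
n, reps = 50, 4_000
mu, sigma = 0.5, math.sqrt(1 / 12)   # mean and sd of Uniform(0, 1)

inside = 0
for _ in range(reps):
    xbar = sum(random.random() for _ in range(n)) / n
    z = math.sqrt(n) * (xbar - mu) / sigma   # standardized sample mean
    inside += abs(z) <= 1.96

print(f"fraction within +/-1.96: {inside / reps:.3f}")   # close to 0.95
```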
16 Lindeberg-Levy CLT: applications

Approximating Binomial Probabilities via the Normal Distribution: Let {Xₙ} be a sequence of i.i.d. Bernoulli random variables. Then, by the Lindeberg-Levy CLT:

(X̄ₙ − p)/√(p(1 − p)/n) = (Σᵢ₌₁ⁿ Xᵢ − np)/√(np(1 − p)) →d N(0, 1), and Σᵢ₌₁ⁿ Xᵢ ~a N(np, np(1 − p))

In this case, Σᵢ₌₁ⁿ Xᵢ is of the form aZₙ + b with a = √(np(1 − p)) and b = np, and since Zₙ converges to Z in distribution, the asymptotic distribution of Σᵢ₌₁ⁿ Xᵢ is normal with the mean and variance given above (based on our results on normally distributed variables).

Approximating χ² Probabilities via the Normal Distribution: Let {Xₙ} be a sequence of i.i.d. chi-squared random variables with 1 degree of freedom. Using the additivity property of variables with gamma distributions, we have Σᵢ₌₁ⁿ Xᵢ ~ χ²ₙ. Recall that the mean of a gamma distribution is α/β and its variance is α/β². For a χ²ᵥ random variable, α = v/2 and β = 1/2. Then, by the Lindeberg-Levy CLT:

(X̄ₙ − 1)/√(2/n) = (Σᵢ₌₁ⁿ Xᵢ − n)/√(2n) →d N(0, 1), and Σᵢ₌₁ⁿ Xᵢ ~a N(n, 2n)
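The normal approximation to the binomial can be checked against the exact probability using the normal CDF via `math.erf` (a sketch; the continuity correction used here is a standard refinement that is not on the slide):

```python
from math import comb, erf, sqrt

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

n, p = 100, 0.5
mu, sd = n * p, sqrt(n * p * (1 - p))

# Exact P(45 <= S <= 55) for S ~ Binomial(100, .5)
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(45, 56))
# Normal approximation N(np, np(1-p)) with continuity correction
approx = norm_cdf((55.5 - mu) / sd) - norm_cdf((44.5 - mu) / sd)

print(f"exact={exact:.4f}, normal approx={approx:.4f}")
```

The two numbers agree to within about .01 even at this moderate n, which is why the approximation is so widely used.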
More informationMath 115 Section 018 Course Note
Course Note 1 General Functions Definition 1.1. A function is a rule that takes certain numbers as inputs an assigns to each a efinite output number. The set of all input numbers is calle the omain of
More informationSTAT 512 sp 2018 Summary Sheet
STAT 5 sp 08 Summary Sheet Karl B. Gregory Spring 08. Transformations of a random variable Let X be a rv with support X and let g be a function mapping X to Y with inverse mapping g (A = {x X : g(x A}
More informationTom Salisbury
MATH 2030 3.00MW Elementary Probability Course Notes Part V: Independence of Random Variables, Law of Large Numbers, Central Limit Theorem, Poisson distribution Geometric & Exponential distributions Tom
More informationA FURTHER REFINEMENT OF MORDELL S BOUND ON EXPONENTIAL SUMS
A FURTHER REFINEMENT OF MORDELL S BOUND ON EXPONENTIAL SUMS TODD COCHRANE, JEREMY COFFELT, AND CHRISTOPHER PINNER 1. Introuction For a prime p, integer Laurent polynomial (1.1) f(x) = a 1 x k 1 + + a r
More informationSome Examples. Uniform motion. Poisson processes on the real line
Some Examples Our immeiate goal is to see some examples of Lévy processes, an/or infinitely-ivisible laws on. Uniform motion Choose an fix a nonranom an efine X := for all (1) Then, {X } is a [nonranom]
More informationMATH 56A: STOCHASTIC PROCESSES CHAPTER 3
MATH 56A: STOCHASTIC PROCESSES CHAPTER 3 Plan for rest of semester (1) st week (8/31, 9/6, 9/7) Chap 0: Diff eq s an linear recursion (2) n week (9/11...) Chap 1: Finite Markov chains (3) r week (9/18...)
More informationMath 342 Partial Differential Equations «Viktor Grigoryan
Math 342 Partial Differential Equations «Viktor Grigoryan 6 Wave equation: solution In this lecture we will solve the wave equation on the entire real line x R. This correspons to a string of infinite
More informationProbability Theory and Statistics. Peter Jochumzen
Probability Theory and Statistics Peter Jochumzen April 18, 2016 Contents 1 Probability Theory And Statistics 3 1.1 Experiment, Outcome and Event................................ 3 1.2 Probability............................................
More informationMonte Carlo Methods with Reduced Error
Monte Carlo Methos with Reuce Error As has been shown, the probable error in Monte Carlo algorithms when no information about the smoothness of the function is use is Dξ r N = c N. It is important for
More informationG4003 Advanced Mechanics 1. We already saw that if q is a cyclic variable, the associated conjugate momentum is conserved, L = const.
G4003 Avance Mechanics 1 The Noether theorem We alreay saw that if q is a cyclic variable, the associate conjugate momentum is conserve, q = 0 p q = const. (1) This is the simplest incarnation of Noether
More informationPractice Problem - Skewness of Bernoulli Random Variable. Lecture 7: Joint Distributions and the Law of Large Numbers. Joint Distributions - Example
A little more E(X Practice Problem - Skewness of Bernoulli Random Variable Lecture 7: and the Law of Large Numbers Sta30/Mth30 Colin Rundel February 7, 014 Let X Bern(p We have shown that E(X = p Var(X
More informationQF101: Quantitative Finance September 5, Week 3: Derivatives. Facilitator: Christopher Ting AY 2017/2018. f ( x + ) f(x) f(x) = lim
QF101: Quantitative Finance September 5, 2017 Week 3: Derivatives Facilitator: Christopher Ting AY 2017/2018 I recoil with ismay an horror at this lamentable plague of functions which o not have erivatives.
More informationExam 2 Review Solutions
Exam Review Solutions 1. True or False, an explain: (a) There exists a function f with continuous secon partial erivatives such that f x (x, y) = x + y f y = x y False. If the function has continuous secon
More informationConvergence of random variables, and the Borel-Cantelli lemmas
Stat 205A Setember, 12, 2002 Convergence of ranom variables, an the Borel-Cantelli lemmas Lecturer: James W. Pitman Scribes: Jin Kim (jin@eecs) 1 Convergence of ranom variables Recall that, given a sequence
More informationCalculus in the AP Physics C Course The Derivative
Limits an Derivatives Calculus in the AP Physics C Course The Derivative In physics, the ieas of the rate change of a quantity (along with the slope of a tangent line) an the area uner a curve are essential.
More informationQuantile function expansion using regularly varying functions
Quantile function expansion using regularly varying functions arxiv:705.09494v [math.st] 9 Aug 07 Thomas Fung a, an Eugene Seneta b a Department of Statistics, Macquarie University, NSW 09, Australia b
More information19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control
19 Eigenvalues, Eigenvectors, Orinary Differential Equations, an Control This section introuces eigenvalues an eigenvectors of a matrix, an iscusses the role of the eigenvalues in etermining the behavior
More informationUnbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others.
Unbiased Estimation Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. To compare ˆθ and θ, two estimators of θ: Say ˆθ is better than θ if it
More informationECON Fundamentals of Probability
ECON 351 - Fundamentals of Probability Maggie Jones 1 / 32 Random Variables A random variable is one that takes on numerical values, i.e. numerical summary of a random outcome e.g., prices, total GDP,
More informationLecture 4: Sampling, Tail Inequalities
Lecture 4: Sampling, Tail Inequalities Variance and Covariance Moment and Deviation Concentration and Tail Inequalities Sampling and Estimation c Hung Q. Ngo (SUNY at Buffalo) CSE 694 A Fun Course 1 /
More informationInverse Problems = Quest for Information
Journal of Geophysics, 198, 50, p. 159 170 159 Inverse Problems = Quest for Information Albert Tarantola an Bernar Valette Institut e Physique u Globe e Paris, Université Pierre et Marie Curie, 4 place
More information1 Probability theory. 2 Random variables and probability theory.
Probability theory Here we summarize some of the probability theory we need. If this is totally unfamiliar to you, you should look at one of the sources given in the readings. In essence, for the major
More information