Lecture 2: CENTRAL LIMIT THEOREM
A Theorist's Toolkit (CMU 18-859T, Fall 2013)
Lecture 2: CENTRAL LIMIT THEOREM
September 2013
Lecturer: Ryan O'Donnell    Scribe: Anonymous

1 SUM OF RANDOM VARIABLES

Let X_1, X_2, X_3, ... be i.i.d. random variables (here "i.i.d." means "independent and identically distributed"), s.t. Pr[X_i = 1] = p and Pr[X_i = 0] = 1 - p. Such an X_i is also called a Bernoulli random variable. Let S_n = X_1 + ... + X_n. We will be interested in the random variable S_n, which is called a Binomial random variable (S_n ~ B(n, p)). If you toss a coin n times, and X_i = 1 represents the event that the result of the i-th toss is heads, then S_n is just the total number of heads in the n tosses.

Recall some basic facts on expectation and variance, where Y, Y_1, Y_2 are random variables:

- E[Y_1 + Y_2] = E[Y_1] + E[Y_2];
- E[Y_1 Y_2] = E[Y_1] E[Y_2] if Y_1 and Y_2 are independent (written Y_1 ⊥ Y_2);
- E[cY] = c E[Y] and E[c + Y] = c + E[Y], where c is a constant.

If we denote µ = E[Y], the variance of Y is

    Var[Y] = E[(Y - µ)^2] = E[Y^2 - 2µY + µ^2] = E[Y^2] - 2µ E[Y] + µ^2 = E[Y^2] - E[Y]^2.

If Y_1 ⊥ Y_2, then

    Var[Y_1 + Y_2] = E[(Y_1 + Y_2)^2] - E[Y_1 + Y_2]^2
                   = (E[Y_1^2] + 2 E[Y_1 Y_2] + E[Y_2^2]) - (E[Y_1]^2 + 2 E[Y_1] E[Y_2] + E[Y_2]^2)
                   = (E[Y_1^2] - E[Y_1]^2) + (E[Y_2^2] - E[Y_2]^2)
                   = Var[Y_1] + Var[Y_2].

Also Var[cY] = c^2 Var[Y] and Var[c + Y] = Var[Y]. The standard deviation is σ = stddev[Y] = √(Var[Y]).
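The identities E[S_n] = np and Var[S_n] = np(1-p) derived from these facts are easy to sanity-check numerically. Below is a minimal sketch (the helper name binomial_mean_var is mine, not from the notes) that computes the mean and variance directly from the binomial p.m.f. Pr[S_n = k] = C(n, k) p^k (1-p)^(n-k).

```python
# Sanity check of E[S_n] = np and Var[S_n] = np(1-p), computed directly
# from the binomial p.m.f. Pr[S_n = k] = C(n, k) p^k (1-p)^(n-k).
from math import comb

def binomial_mean_var(n, p):
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    mean = sum(k * q for k, q in enumerate(pmf))
    second = sum(k * k * q for k, q in enumerate(pmf))
    return mean, second - mean**2   # Var[Y] = E[Y^2] - E[Y]^2

n, p = 100, 0.3
print(binomial_mean_var(n, p))  # ~ (np, np(1-p)) = (30.0, 21.0), up to float rounding
```

Computing the variance as E[Y^2] - E[Y]^2 mirrors the derivation above.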
For any X_i we have the expectation E[X_i] = 1 · Pr[X_i = 1] + 0 · Pr[X_i = 0] = p, therefore

    E[S_n] = E[Σ_{i=1}^n X_i] = Σ_{i=1}^n E[X_i] = np.

Since the variance of X_i is Var[X_i] = E[X_i^2] - E[X_i]^2 = p - p^2 = p(1 - p), and the X_i's are independent, the variance of S_n is

    Var[S_n] = Var[Σ_{i=1}^n X_i] = Σ_{i=1}^n Var[X_i] = np(1 - p).

We would like to somehow normalize the random variable S_n s.t. its mean is 0 and its variance is 1. Let

    Z_n := (S_n - µ)/σ,

where µ and σ are the mean and standard deviation of S_n. It is easy to see that

    E[Z_n] = E[(S_n - µ)/σ] = (E[S_n] - µ)/σ = 0

and

    Var[Z_n] = Var[(S_n - µ)/σ] = Var[S_n]/σ^2 = 1.

Since S_n = σ Z_n + pn, we have

    Pr[S_n ≥ u] = Pr[σ Z_n + pn ≥ u] = Pr[Z_n ≥ (u - pn)/σ].

So if we know the probability distribution of Z_n, we also know the probability distribution of S_n, and vice versa.

Example 1.1. Suppose p = 1/2, so E[S_n] = n/2 and Var[S_n] = n/4. Then

    Z_n = (X_1 + ... + X_n - n/2) / (√n / 2) = ((2X_1 - 1) + ... + (2X_n - 1)) / √n.

It can be seen as a sum of variables Y_i = 2X_i - 1 with

    Y_i = +1 w.p. 1/2,  -1 w.p. 1/2.

Recall from an earlier lecture that the probability that Z_n is 0 is Pr[Z_n = 0] = Θ(1/√n) when n is even, as in Figure 1.
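The Θ(1/√n) probability at zero can be checked exactly: for p = 1/2 and even n, Pr[Z_n = 0] = Pr[S_n = n/2] = C(n, n/2) / 2^n, and Stirling's formula shows this is asymptotically √(2/(πn)). A small sketch (prob_zn_zero is my name for this quantity):

```python
# Exact check of Pr[Z_n = 0] = Theta(1/sqrt(n)) from Example 1.1: for
# p = 1/2 and even n, Pr[Z_n = 0] = Pr[S_n = n/2] = C(n, n/2) / 2^n,
# which Stirling's formula shows is ~ sqrt(2 / (pi * n)).
from math import comb, pi, sqrt

def prob_zn_zero(n):
    """Exact Pr[Z_n = 0] = Pr[S_n = n/2] for even n and p = 1/2."""
    return comb(n, n // 2) / 2**n

for n in [16, 100, 1000]:
    print(n, prob_zn_zero(n), sqrt(2 / (pi * n)))  # exact value vs. asymptotic
```

Already at n = 1000 the exact probability and the asymptotic √(2/(πn)) agree to within a fraction of a percent.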
Figure 1: Histogram of Z_n for n = 6 and p = 1/2 (the bar at 0 has mass Θ(1/√n)).

2 GAUSSIAN DISTRIBUTION

Definition 2.1 (Gaussian distribution). A random variable Z is Gaussian distributed with parameters µ and σ^2 (abbreviated Z ~ N(µ, σ^2)) if it is continuous with p.d.f. (probability density function)

    f(z) = (1/(√(2π) σ)) e^{-(z - µ)^2 / (2σ^2)},

where µ refers to the mean and σ^2 refers to the variance of the Gaussian. In particular we call Z ~ N(0, 1) a standard Gaussian; its p.d.f. is denoted

    φ(z) = (1/√(2π)) e^{-z^2/2}.

Fact 2.2. Let Z_1, ..., Z_d be i.i.d. standard Gaussians and Z = (Z_1, ..., Z_d). Then Z's distribution is rotationally symmetric, which means that for all z with ‖z‖ = r, the probability density of Z at z is the same. Figure 2 shows the probability density of the 2-dimensional standard Gaussian.

Proof. The p.d.f. of Z at (z_1, ..., z_d) with ‖z‖ = r is

    φ(z_1) φ(z_2) ... φ(z_d) = Π_{i=1}^d (1/√(2π)) e^{-z_i^2/2}
                             = (1/√(2π))^d e^{-(z_1^2 + ... + z_d^2)/2}
                             = (1/√(2π))^d e^{-‖z‖^2/2}
                             = (1/√(2π))^d e^{-r^2/2}.

Therefore the probability density of Z depends only on r, which means Z's distribution is rotationally symmetric. □

Corollary 2.3. ∫_{-∞}^{+∞} φ(x) dx = 1.
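Both statements can be checked numerically, assuming only the standard-Gaussian p.d.f. φ(z) = e^{-z^2/2}/√(2π): the joint density of i.i.d. standard Gaussians depends only on the norm r (Fact 2.2), and φ integrates to 1 (Corollary 2.3). A sketch:

```python
# (i) Rotational symmetry: the joint density of d i.i.d. standard Gaussians
#     at a point depends only on its norm r.
# (ii) Normalization: a Riemann sum of phi over [-10, 10] is ~ 1 (the tails
#     beyond +-10 are negligible).
from math import exp, pi, sqrt

def phi(z):
    return exp(-z * z / 2) / sqrt(2 * pi)

def joint_density(zs):
    """Product density of i.i.d. standard Gaussians at the point zs."""
    out = 1.0
    for z in zs:
        out *= phi(z)
    return out

# (3, 4) and (5, 0) both have norm r = 5, so the densities agree.
print(joint_density([3.0, 4.0]), joint_density([5.0, 0.0]))

dz = 1e-3
total = sum(phi(-10 + i * dz) * dz for i in range(20000))
print(total)  # ~ 1.0
```

The first check is exact up to floating-point rounding, since both products equal (1/(2π)) e^{-25/2}.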
Figure 2: The distribution of the 2D standard Gaussian is rotationally symmetric.

Proof. Consider the 2-dimensional standard Gaussian Z = (Z_1, Z_2), which has p.d.f. (1/(2π)) e^{-(z_1^2 + z_2^2)/2}. We can first integrate the p.d.f. in the natural way. The integral becomes

    ∫∫ (1/(2π)) e^{-(z_1^2 + z_2^2)/2} dz_1 dz_2 = ( ∫ (1/√(2π)) e^{-z^2/2} dz )^2 = ( ∫ φ(z) dz )^2.

Therefore, in order to prove ∫_{-∞}^{+∞} φ(x) dx = 1, it suffices to prove

    ∫∫ e^{-(z_1^2 + z_2^2)/2} dz_1 dz_2 = 2π,

since ∫_{-∞}^{+∞} φ(x) dx > 0.

On the other hand, as in Figure 3, we can integrate the function e^{-(z_1^2 + z_2^2)/2} by height. The height of each cylinder is dh, while its radius r satisfies e^{-r^2/2} = h, so that r^2 = 2 ln(1/h). Since 0 < h = e^{-r^2/2} ≤ 1, we have

    ∫∫ e^{-(z_1^2 + z_2^2)/2} dz_1 dz_2 = ∫_0^1 π r^2 dh = ∫_0^1 2π ln(1/h) dh = 2π (h - h ln h) |_0^1 = 2π.  □

Corollary 2.4 (Sum of independent Gaussians). Suppose X ~ N(µ_1, σ_1^2) and Y ~ N(µ_2, σ_2^2) are independent. Then

    aX + bY ~ N(aµ_1 + bµ_2, a^2 σ_1^2 + b^2 σ_2^2).
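The "integration by height" step above can be replicated numerically: slicing the volume under e^{-(x^2+y^2)/2} into disks of area π r^2 = 2π ln(1/h) at height h and summing over h in (0, 1] should recover 2π. A midpoint-rule sketch:

```python
# Numerical version of the integration-by-height computation:
# integral over (0, 1] of 2*pi*ln(1/h) dh should equal 2*pi.
from math import log, pi

n = 10**6
dh = 1.0 / n
# Midpoint rule; the integrand ln(1/h) is singular at h = 0 but integrable.
volume = sum(2 * pi * log(1.0 / ((i + 0.5) * dh)) * dh for i in range(n))
print(volume, 2 * pi)  # the two agree to several decimal places
```

This matches the closed form 2π (h - h ln h) evaluated from 0 to 1.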
Figure 3: Integration of the 2D Gaussian by height: a slice of height dh at level h = e^{-r^2/2} is a disk of area π r^2.

Proof. First suppose µ_1 = µ_2 = 0 and σ_1 = σ_2 = 1, so X, Y ~ N(0, 1). We want to prove aX + bY ~ N(0, a^2 + b^2), and we have

    Z = aX + bY = (a, b) · (X, Y).

Because the 2D standard Gaussian (X, Y) is rotationally symmetric, we can rotate (a, b) to (√(a^2 + b^2), 0), as in Figure 4. Now

    Z = (√(a^2 + b^2), 0) · (X, Y) = √(a^2 + b^2) · X ~ N(0, a^2 + b^2).

Now suppose X ~ N(µ_1, σ_1^2) and Y ~ N(µ_2, σ_2^2). Since (X - µ_1)/σ_1, (Y - µ_2)/σ_2 ~ N(0, 1), we have

    Z - aµ_1 - bµ_2 = a(X - µ_1) + b(Y - µ_2) = aσ_1 · (X - µ_1)/σ_1 + bσ_2 · (Y - µ_2)/σ_2 ~ N(0, a^2 σ_1^2 + b^2 σ_2^2).

Therefore Z ~ N(aµ_1 + bµ_2, a^2 σ_1^2 + b^2 σ_2^2). □

3 CENTRAL LIMIT THEOREM

Theorem 3.1 (Central Limit Theorem). For any i.i.d. X_1, ..., X_n, the normalized sum Z_n converges to a standard Gaussian N(0, 1) as n → ∞, i.e., for all u ∈ R,

    lim_{n→∞} Pr[Z_n ≤ u] = Pr[Z ≤ u],  where Z ~ N(0, 1).

Remember that the Central Limit Theorem is kind of useless in this form, since it does not say how quickly Z_n converges to a standard Gaussian.
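Corollary 2.4 above can be sanity-checked by simulation. The sketch below only verifies the mean and variance of aX + bY, not Gaussianity itself; the parameter values are arbitrary choices of mine.

```python
# Simulation check of Corollary 2.4: for independent X ~ N(mu1, s1^2) and
# Y ~ N(mu2, s2^2), a*X + b*Y should have mean a*mu1 + b*mu2 and variance
# a^2*s1^2 + b^2*s2^2. (Checks moments only, not the shape of the law.)
import random

random.seed(0)  # deterministic run
a, b = 2.0, -1.0
mu1, s1, mu2, s2 = 1.0, 3.0, 4.0, 0.5

samples = [a * random.gauss(mu1, s1) + b * random.gauss(mu2, s2)
           for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean, var)  # near a*mu1 + b*mu2 = -2.0 and a^2*s1^2 + b^2*s2^2 = 36.25
```

With 200,000 samples the empirical moments land within a few hundredths of the predicted values.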
Figure 4: Rotating (a, b) to (√(a^2 + b^2), 0), which is allowed since (X, Y) is rotationally symmetric.

The name of the useful version is the Berry-Esseen Theorem.

Theorem 3.2 (Berry-Esseen Theorem). Let X_1, ..., X_n be independent r.v.s. Assume (w.l.o.g.) that E[X_i] = 0; write σ_i^2 = E[X_i^2] = Var[X_i] and assume Σ_{i=1}^n σ_i^2 = 1. Let S = X_1 + ... + X_n, so E[S] = 0 and Var[S] = 1. Then for all u ∈ R,

    |Pr[S ≤ u] - Pr[Z ≤ u]| ≤ O(1) · β,

where Z ~ N(0, 1) and β = Σ_{i=1}^n E[|X_i|^3].

Remember that β is not always small. Here is an example.

Example 3.3. Let

    X_1 = +1 w.p. 1/2,  -1 w.p. 1/2,

and X_2 = ... = X_n = 0. In this scenario S just has the same distribution as X_1. Then β = 1, which is a big number.

On the other hand, this theorem does work in some cases.

Example 3.4. Let

    X_i = +1/√n w.p. 1/2,  -1/√n w.p. 1/2.

Then E[X_i] = 0, σ_i^2 = 1/n, and E[|X_i|^3] = 1/n^{3/2}. Therefore β = Σ_{i=1}^n 1/n^{3/2} = 1/√n. In this case β is small, and

    |Pr[S ≤ u] - Pr[Z ≤ u]| ≤ O(1/√n).
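In the setting of Example 3.4 the Berry-Esseen error can be computed exactly, since S is a sum of n i.i.d. ±1/√n signs and therefore a rescaled symmetric binomial. The sketch below (helper names are mine, not from the notes) computes sup_u |Pr[S ≤ u] - Φ(u)| from the binomial p.m.f. and compares it with β = 1/√n:

```python
# Exact Kolmogorov distance between S (sum of n i.i.d. +-1/sqrt(n) signs)
# and the standard Gaussian, versus the Berry-Esseen quantity beta.
from math import comb, erf, sqrt

def Phi(u):
    """Standard Gaussian c.d.f. via the error function."""
    return 0.5 * (1 + erf(u / sqrt(2)))

def kolmogorov_distance(n):
    """sup_u |Pr[S <= u] - Phi(u)| for S = ((#heads) - (#tails)) / sqrt(n)."""
    pmf = [comb(n, k) / 2**n for k in range(n + 1)]
    cdf, dist = 0.0, 0.0
    for k, q in enumerate(pmf):
        u = (2 * k - n) / sqrt(n)  # atom of S: k heads, n - k tails
        # the sup is attained just before or at an atom of the step c.d.f.
        dist = max(dist, abs(cdf - Phi(u)), abs(cdf + q - Phi(u)))
        cdf += q
    return dist

for n in [16, 64, 256]:
    beta = n * (1 / sqrt(n)) ** 3  # beta = sum_i E|X_i|^3 = 1/sqrt(n)
    print(n, round(kolmogorov_distance(n), 4), round(beta, 4))
```

The computed distances decay like 1/√n and stay below the O(1) · β bound.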
The most recent upper bound on the O(1) constant is 0.554 [She13]. In this scenario, we have

    |Pr[S ≤ u] - Pr[Z ≤ u]| ≤ 0.56/√n.

We can find that Pr[Z ≥ u] ≈ 0.001 when u = 3, by computer or from a standard normal table. Suppose H is the number of heads when tossing a coin n times. As in Example 1.1, we have Z_n = (H - n/2)/(√n / 2), which means H ≥ n/2 + (u/2)√n exactly when Z_n ≥ u. Assigning u = 3, we have Pr[H ≥ n/2 + 1.5√n] ≤ 0.001 + 0.56/√n, which is quite small.

Sometimes β might be extremely large.

Example 3.5. Let

    X_1 = +n w.p. 1/(2n^2),  -n w.p. 1/(2n^2),  0 otherwise,

and X_2 = ... = X_n = 0, so that Σ_{i=1}^n E[X_i^2] = E[X_1^2] = 1. In this scenario Pr[S = 0] = 1 - 1/n^2 → 1, which means S does not converge to a normal distribution. In this case β = E[|X_1|^3] = n → +∞.

From this example, we can see that the constraint on β = Σ_{i=1}^n E[|X_i|^3] captures two things:

- the r.v.s will not become extremely huge with small probability;
- the sum does not depend on only a finite number of the random variables.

Notice that the Berry-Esseen Theorem is good because it does not care about the value of u. We have

    Pr[H ≥ n/2 + (u/2)√n] = ∫_u^{+∞} φ(z) dz ± O(1/√n),

even when u is as large as, say, 0.1√n, in which case ∫_u^{+∞} φ(z) dz is super-tiny.

4 CUMULATIVE DISTRIBUTION

Definition 4.1 (Cumulative distribution function of the standard Gaussian). We denote by Φ(u) the c.d.f. (cumulative distribution function) of the standard Gaussian N(0, 1):

    Φ(u) = Pr[Z ≤ u] = ∫_{-∞}^u φ(z) dz,

and define

    Φ̄(u) = 1 - Φ(u) = ∫_u^{+∞} φ(z) dz = Pr[Z ≥ u] = Φ(-u).
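The coin-tossing tail estimate from Section 3 can also be computed exactly rather than through the Berry-Esseen bound. A sketch (head_tail is my name for the exact binomial tail):

```python
# Exact Pr[H >= n/2 + 1.5*sqrt(n)] for a fair coin, i.e. the u = 3 tail
# after normalization, computed from the binomial distribution.
from math import ceil, comb, sqrt

def head_tail(n):
    """Exact Pr[H >= n/2 + 1.5*sqrt(n)] for H ~ B(n, 1/2)."""
    threshold = ceil(n / 2 + 1.5 * sqrt(n))
    return sum(comb(n, k) for k in range(threshold, n + 1)) / 2**n

for n in [100, 400, 1600]:
    print(n, head_tail(n))  # small, approaching Pr[Z >= 3] ~ 0.001 as n grows
```

The exact tail agrees with the Gaussian estimate up to the O(1/√n) Berry-Esseen error.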
It is trivial that Φ(0) = Φ̄(0) = 1/2.

Fact 4.2. Φ̄(u) = O(φ(u)/u) when u > 0.

Figure 5: Finding a δ s.t. φ(u + δ) ≤ c · φ(u), where c is a constant less than 1.

Proof. We want to find a δ s.t. φ(u + δ) ≤ c · φ(u), where c is a constant less than 1 (Figure 5). δ = 1/u satisfies our conditions:

    φ(u + 1/u) = (1/√(2π)) e^{-(u + 1/u)^2 / 2} = e^{-1 - 1/(2u^2)} φ(u) ≤ e^{-1} φ(u).

In general, we have φ(u + k/u) ≤ e^{-k} φ(u), where u > 0 and k ∈ N. Since φ is decreasing for u > 0, using the method from an earlier lecture, we have

    Φ̄(u) = ∫_u^{+∞} φ(t) dt ≤ Σ_{k=0}^{∞} (1/u) φ(u + k/u) ≤ (φ(u)/u) Σ_{k=0}^{∞} e^{-k} = (e/(e - 1)) · φ(u)/u = O(φ(u)/u).  □

Proposition 4.3. Φ̄(u) ~ φ(u)/u as u → +∞. In fact, for u > 0,

    (1/u - 1/u^3) φ(u) ≤ Φ̄(u) ≤ (1/u) φ(u).
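These tail bounds are easy to verify numerically. The sketch below checks the two-sided estimate of Proposition 4.3 at a few values of u, computing Φ̄ stably via the complementary error function (the helper names are mine):

```python
# Numerical check of Fact 4.2 / Proposition 4.3:
# (1/u - 1/u^3) * phi(u) <= 1 - Phi(u) <= phi(u)/u for u > 0.
from math import erfc, exp, pi, sqrt

def phi(u):
    return exp(-u * u / 2) / sqrt(2 * pi)

def Phi_bar(u):
    """Gaussian tail Pr[Z >= u]; erfc avoids cancellation for large u."""
    return 0.5 * erfc(u / sqrt(2))

for u in [1.0, 2.0, 3.0, 5.0]:
    lower = (1 / u - 1 / u**3) * phi(u)
    upper = phi(u) / u
    print(u, lower, Phi_bar(u), upper)  # lower <= tail <= upper
```

Already at u = 3 the lower and upper bounds sandwich Φ̄(u) to within a few percent, illustrating the asymptotic Φ̄(u) ~ φ(u)/u.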
References

[She13] I. G. Shevtsova. On the absolute constants in the Berry-Esseen inequality and its structural and nonuniform improvements. Inform. Primen., 7(1), 2013.