Stat410 Probability and Statistics II (F16)


Some Basic Concepts of Statistical Inference (Sec 5.1)

Suppose we have a rv X that has a pdf/pmf denoted by f(x; θ) or p(x; θ), where θ is called the parameter. In previous lectures, we focused on probability problems where the value of θ is given. From now on, we focus on statistical problems where θ is unknown and we try to get some information about θ from a random sample (X_1, ..., X_n) from this distribution.

Parameter: θ.
Random sample: (X_1, ..., X_n) iid ~ f(·; θ).
Observed sample: (x_1, ..., x_n), one realization of (X_1, ..., X_n).
Statistic: T = T(X_1, ..., X_n), a function of the sample, which is also random.
Estimator: ˆθ = ˆθ(X_1, ..., X_n) of θ is a function of the sample, i.e., a statistic. Given an observed sample (X_1 = x_1, ..., X_n = x_n), the value ˆθ(x_1, ..., x_n) is called an Estimate of θ.

So an estimator is a random variable, while an estimate is a real number (i.e., one realization of the estimator).

Overview of Estimation

How to derive an estimator?
- Method of Moments: suppose E(X) = h(θ). Set the sample mean X̄ = h(θ̃), then solve for θ̃.
- Maximum Likelihood Estimator (see below).

Notation: I'll use ˆθ as a generic notation for an estimator of parameter θ. In problems where we need to derive estimators based on different approaches, I use θ̃ for one estimator of θ and ˆθ for another, for example, θ̃ for the method-of-moments estimator and ˆθ for the MLE.

How to evaluate the performance of an estimator? Note that ˆθ is a random variable, usually a continuous random variable, so the chance that ˆθ = θ is zero. But we can ask whether on average ˆθ equals θ, which leads to the definition of Bias(ˆθ) = E(ˆθ) − θ. Another metric is the average squared distance from ˆθ to the target θ, which leads to the definition of the Mean Squared Error, MSE(ˆθ) = E[(ˆθ − θ)²].
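The estimator/estimate distinction and the Bias/MSE definitions can be made concrete with a small Monte Carlo sketch (my addition; the exponential distribution with mean θ = 2 is an assumed example, not from the notes). Each replication produces one estimate; averaging over replications approximates Bias and MSE.

```python
import random

# Monte Carlo sketch: theta_hat = X-bar is a random variable; each new
# sample gives a new estimate (a real number).  We approximate
# Bias(theta_hat) and MSE(theta_hat) by averaging over many replications.
# (The Exp distribution with mean theta = 2 is an assumed example.)
random.seed(0)
theta = 2.0          # true mean; X ~ Exp with rate 1/theta
n, reps = 50, 4000   # sample size and number of replications

estimates = []
for _ in range(reps):
    sample = [random.expovariate(1.0 / theta) for _ in range(n)]
    estimates.append(sum(sample) / n)      # one estimate

bias = sum(estimates) / reps - theta                   # should be near 0
mse = sum((e - theta) ** 2 for e in estimates) / reps  # near theta^2/n = 0.08
```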

If we have derived multiple estimators for θ, we can compare their MSEs by their relative efficiency.

Maximum Likelihood Estimator (MLE, Sec 6.1)

MLE: the estimator (or estimators) that maximizes the likelihood function

L(θ; x) = f(x_1, ..., x_n; θ) = ∏_{i=1}^n f(x_i; θ).

How to derive the MLE?
Step 1: Compute log f(x; θ).
Step 2: Plug x_i into the expression derived at Step 1 and sum over i, which gives us the log-likelihood function:

l(θ) = log[ ∏_{i=1}^n f(x_i; θ) ] = ∑_{i=1}^n log f(x_i; θ).

Step 3: Find the maximum of l(θ): ˆθ = arg max_θ l(θ).

Tips/Tools for Step 3:
- Take the derivative of l(θ) with respect to θ and solve l'(θ) = 0 for θ.
- Be careful when the parameter θ is in a bounded region, say θ ≥ 0 or θ ∈ [0, 1]. Make sure the solution ˆθ is in the range of θ. If it's outside the range, usually you need to check whether one of the boundary points is the maximum.
- If X is bounded and the bounds depend on θ, remember to add the indicator function in f(x; θ). For example, if X ~ Unif(0, θ], then f(x; θ) = (1/θ) I(0 < x ≤ θ).
- Some special optimization results (proofs are in the Appendix):
  * min_a ∑_{i=1}^n |x_i − a| is achieved at a = median(x_i).
  * max_{p_1,...,p_m} ∑_{j=1}^m n_j log p_j, where 0 ≤ p_j ≤ 1, ∑_j p_j = 1, and n_j ≥ 0, is achieved by setting p_j = n_j/n, where n = n_1 + ... + n_m.
- The objective function could have multiple solutions or no solution (i.e., the MLE doesn't exist).

Thm 6.1.2 (Invariance property of MLE): Let X_1, ..., X_n be a random sample with the pdf f(x; θ). Let η = g(θ) be a parameter of interest. Suppose ˆθ is the MLE of θ. Then g(ˆθ) is the MLE of g(θ).

The MLE may not be unique.
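The three steps can be checked numerically; here is a minimal Python sketch for Bern(θ) (the data vector is made up for illustration), where a brute-force grid search over the log-likelihood lands on the closed-form answer ˆθ = x̄.

```python
import math

# Steps 1-2: log-likelihood of Bern(theta) for an observed sample x.
# Step 3: maximize, here by brute-force grid search; for Bernoulli the
# closed-form maximizer is the sample mean.
x = [1, 0, 0, 1, 1, 0, 1, 1, 0, 1]           # made-up sample, x-bar = 0.6

def loglik(theta):
    # l(theta) = sum_i [x_i log(theta) + (1 - x_i) log(1 - theta)]
    return sum(xi * math.log(theta) + (1 - xi) * math.log(1 - theta)
               for xi in x)

grid = [k / 1000 for k in range(1, 1000)]    # stay inside (0, 1)
theta_hat = max(grid, key=loglik)

print(theta_hat)    # 0.6, the sample mean
```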

Below we list the MLE and the method-of-moments (MM) estimators for various distribution families. The derivations aren't difficult. In class, I'll go through some of them, but expect you to go through the remaining ones by yourself.

Dist                | pmf/pdf                              | MLE                 | MM
Bern(θ)             | θ^x (1−θ)^{1−x}                      | X̄                   | X̄
Bin(n, θ)           | C(n, y) θ^y (1−θ)^{n−y}              | Y/n                 | Y/n
Geo(p)              | p (1−p)^{x−1}                        | 1/X̄                 | 1/X̄
Poi(λ)              | (λ^x / x!) e^{−λ}                    | X̄                   | X̄
Exp(λ)              | λ e^{−λx}                            | 1/X̄                 | 1/X̄
Exp(1/θ)            | (1/θ) e^{−x/θ}                       | X̄                   | X̄
N(µ, σ²)            | (1/√(2πσ²)) e^{−(x−µ)²/(2σ²)}        | X̄, (1/n)∑(X_i−X̄)²  | X̄, (1/n)∑(X_i−X̄)²
Beta(α, 1)          | [Γ(α+1)/Γ(α)] x^{α−1} = α x^{α−1}    | −n / ∑ log X_i      | X̄/(1−X̄)
Beta(1/θ, 1)        | [Γ(1/θ+1)/Γ(1/θ)] x^{1/θ−1}          | −(1/n) ∑ log X_i    | (1−X̄)/X̄
Ga(α, β), α known   | [β^α/Γ(α)] x^{α−1} e^{−βx}           | α/X̄                 | α/X̄
Ga(α, 1/θ), α known | [1/(Γ(α)θ^α)] x^{α−1} e^{−x/θ}       | X̄/α                 | X̄/α

Table 1: List of MLE and MM estimators for various parametric families.

- For the binomial distribution, we consider the case where we have only one observation Y ~ Bin(n, θ). If we have Y_1, ..., Y_m iid Bin(n, θ), then the MLE and the MM estimator will be Ȳ/n = (Y_1 + ... + Y_m)/(mn).
- The parameterization of Geo(p) is different from the one in Appendix D. Here X ~ Geo(p) denotes the number of Bernoulli trials you have conducted before seeing the first Head, including the last trial in which you observe a Head. That is, X = 1, 2, ... and you observe one Head and (X − 1) Tails.
- Beta(1/θ, 1) is discussed on p. 3 of Week7 Estimation ans.pdf.
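Two of the table rows can be sanity-checked by simulation (my addition; the rate λ = 2.5 and the hand-rolled Poisson sampler are assumptions for illustration): for Exp(λ) the MLE/MM estimator is 1/X̄, and for Poi(λ) it is X̄.

```python
import math
import random

# Spot-check two table rows: Exp(lambda) -> 1/X-bar, Poi(lambda) -> X-bar.
random.seed(1)
lam, n = 2.5, 20000

exp_sample = [random.expovariate(lam) for _ in range(n)]
lam_hat_exp = 1.0 / (sum(exp_sample) / n)    # 1 / X-bar

def poisson(mu):
    # Knuth's multiplication sampler (the stdlib has no Poisson generator)
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

poi_sample = [poisson(lam) for _ in range(n)]
lam_hat_poi = sum(poi_sample) / n            # X-bar
```

Both estimates should land close to the true λ for n this large.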

More Examples

(6.1.3): Let X_1, ..., X_n be a random sample from the Laplace distribution with pdf f(x; θ) = (1/2) e^{−|x−θ|}. Show that the MLE of θ is given by ˆθ = median(X_1, ..., X_n). Since the mean of this distribution is θ, the method-of-moments estimator is X̄.

(6.4.5): Suppose we have a bag of marbles in three different colors: red, blue, and green. To estimate the proportions of the three colors (p_1, p_2, p_3), we conduct the following experiment: we randomly draw a marble from the bag, record its color,

X_i = 1 if red, 2 if blue, 3 if green,

and put it back in the bag; repeat this process n times. The X_i's are iid samples from a multinomial distribution: X_i = 1, 2, 3 with probabilities p_1, p_2, p_3, respectively. The joint likelihood of X_1, ..., X_n is equal to

f(X_1, ..., X_n | p_1, p_2, p_3) = p_1^{n_1} p_2^{n_2} p_3^{n_3},

which only depends on n_j = the number of j's we have in the n marbles. To find the MLE of p = (p_1, p_2, p_3)^t, we can maximize the log-likelihood function

l(p) = n_1 log p_1 + n_2 log p_2 + n_3 log p_3    (1)

subject to the constraints 0 ≤ p_1, p_2, p_3 ≤ 1 and p_1 + p_2 + p_3 = 1. The MLE is given by ˆp_1 = n_1/n, ˆp_2 = n_2/n, ˆp_3 = n_3/n, i.e., we use the sample frequencies to estimate the proportions. The MLE and the MM estimator are the same. If we have observed 2 red marbles, 3 blue marbles, and 5 green marbles, then the MLE for (p_1, p_2, p_3) is given by ˆp_1 = 0.2, ˆp_2 = 0.3, ˆp_3 = 0.5.

Let X_1, ..., X_n be a random sample from Exp(λ) with pdf f(x; λ) = λ e^{−λx}, x > 0.

a) (6.1.2) Show that the MLE of λ is given by ˆλ = 1/X̄.
b) What's the MLE of the probability P(X > 1)?
c) If we parameterize the exponential family by θ = 1/λ, what's the MLE of θ?

Let X_1, ..., X_n be a random sample from Bern(θ) with pmf p(x; θ) = θ^x (1−θ)^{1−x}, x = 0, 1.

a) (6.1.1) Show that the MLE of θ is given by ˆθ = X̄.
b) Let Y = X_1 + ... + X_n. So Y ~ Bin(n, θ). Derive the MLE of θ given Y.
c) (6.1.6) Show that the MLE of θ is given by ˆθ = min(X̄, 1/3), if 0 ≤ θ ≤ 1/3.

(Example from Week8 Estimation2 ans.pdf) Let λ > 0 and let X_1, ..., X_n be a random sample from the distribution with pdf f(x; λ) = 2λ² x³ e^{−λx²}, x > 0. Define Y = X², which is a one-to-one transformation since X > 0. The pdf of Y is given by

f_Y(y) = f_X(√y) |dx/dy|, where dx/dy = d√y/dy = 1/(2√y),

so

f_Y(y) = 2λ² y^{3/2} e^{−λy} · 1/(2√y) = λ² y e^{−λy},

i.e., Y ~ Ga(2, 1/λ). So the MLE and the MM estimator of λ are the same, given by

2/Ȳ = 2n / ∑_i X_i².

(6.1.5): Let X_1, ..., X_n be a random sample from Unif(0, θ] with pdf f(x; θ) = (1/θ) I(0 < x ≤ θ).

a) Derive the MLE of θ. The likelihood function is given by

f(x_1, ..., x_n) = ∏_{i=1}^n (1/θ) I(0 < x_i ≤ θ) = (1/θ^n) I(0 < all x_i's ≤ θ).

Graph this function: it's a monotone decreasing function of θ when θ ≥ max_i x_i, but zero when θ < max_i x_i. So ˆθ = max_i X_i.

b) Derive the MM estimator of θ. EX = θ/2, so θ̃ = 2X̄.

c) What if we change the distribution to Unif(0, θ), i.e., f(x; θ) = (1/θ) I(0 < x < θ)? The likelihood function becomes (1/θ^n) I(0 < all x_i's < θ), which is a decreasing function when θ > max_i x_i, but zero when θ ≤ max_i x_i. So the MLE does not exist in this case.

Unbiased Estimators

An estimator is called unbiased if E(ˆθ) = θ. The following results are useful when checking whether an estimator is biased/unbiased.

- E(∑_i a_i X_i) = ∑_i a_i E(X_i).
- Var(∑_i a_i X_i) = ∑_{i,j} a_i a_j Cov(X_i, X_j) = ∑_i a_i² Var(X_i) + 2 ∑_{i<j} a_i a_j Cov(X_i, X_j).
- If E(X̄) = θ, and your estimator of g(θ) is g(X̄), then likely you need to use Jensen's inequality to show that g(X̄) is biased. You need to check whether g or −g is convex, and then call Jensen's inequality:
  g''(x) ≥ 0: g is convex and E g(X̄) ≥ g(θ);
  g''(x) ≤ 0: −g is convex and E g(X̄) ≤ g(θ).
- Let (X_1, ..., X_n) be a random sample from a distribution with mean µ and variance σ². Then

E X̄ = µ,  E(S²) = E[ (1/(n−1)) ∑ (X_i − X̄)² ] = σ².

That is, the sample mean and the sample variance are unbiased.

How to show E(S²) = σ²?

∑ (X_i − X̄)² = ∑ (X_i² − 2X_i X̄ + X̄²) = ∑ X_i² − 2X̄ ∑ X_i + nX̄² = ∑ X_i² − nX̄².

E[ ∑ (X_i − X̄)² ] = E[ ∑ X_i² − nX̄² ] = ∑ E(X_i²) − n E(X̄²) = n(µ² + σ²) − n(µ² + σ²/n) = (n−1)σ².

Dividing by (n−1) gives E(S²) = σ².

Most estimators in Table 1 are unbiased, except:

- Exp(λ). The estimator 1/X̄ is biased (Jensen's inequality), while the estimator X̄ for θ = 1/λ is unbiased.
- N(µ, σ²). The MLE of σ² is biased.
- Geo(p). The estimator 1/X̄ is biased (Jensen's inequality).
- Beta(α, 1). Both the MLE and the MM estimator of α are biased (Jensen's inequality). The MLE of θ = 1/α is unbiased, but the MM estimator of θ is biased (Jensen's inequality). As we'll see, this case is the same as the exponential distribution, since −log X follows an exponential distribution.
- Ga(α, β) with α known. The estimator α/X̄ for β is biased (Jensen's inequality), but the estimator X̄/α for θ = 1/β is unbiased.

More Examples.

- Let (X_1, ..., X_n) be a random sample from Unif(0, θ]. The MLE is biased: Y_n = max X_i, E(Y_n) = nθ/(n+1). The MM estimator θ̃ = 2X̄ is unbiased.

- Let (X_1, ..., X_n) be a random sample from a distribution with pdf f(x) whose mean µ exists and which is symmetric about µ. Show that E(sample median) = µ. Without loss of generality, assume µ = 0 (why?). That is, the pdf f(x) is symmetric about zero. Due to the symmetry, we can show that

the distribution of a random sample (X_1, ..., X_n) is the same as the distribution of (−X_1, ..., −X_n). So the distribution of the sample median of (X_1, ..., X_n) is the same as the distribution of the sample median of (−X_1, ..., −X_n). Then

E(sample median of X_{1:n}) = E(sample median of −X_1, ..., −X_n) = −E(sample median of X_{1:n}).

So E(sample median of X_{1:n}) = 0.

- X_1, ..., X_n ~ Laplace distribution with pdf f(x; θ) = (1/2) e^{−|x−θ|}. The MLE ˆθ = Med(X_1, ..., X_n) is unbiased.

Mean Squared Error (MSE)

For an estimator ˆθ of θ, define the Mean Squared Error of ˆθ by

MSE(ˆθ) = E(ˆθ − θ)² = [E(ˆθ) − θ]² + Var(ˆθ) = Bias² + Var.

In particular, if ˆθ is unbiased, then MSE(ˆθ) = Var(ˆθ).

Let ˆθ_1 and ˆθ_2 be two unbiased estimators of θ. ˆθ_1 is said to be more efficient than ˆθ_2 if Var(ˆθ_1) < Var(ˆθ_2). The relative efficiency of ˆθ_1 with respect to (wrt) ˆθ_2 is Var(ˆθ_2)/Var(ˆθ_1).

Examples.

X_1, ..., X_n ~ Unif(0, θ]. We have learned that the MLE ˆθ and the MM estimator θ̃ are given by

ˆθ = Y_n = max_i X_i,  θ̃ = 2X̄.

Which estimator is better (i.e., has the smaller MSE), ˆθ or θ̃?

MSE(ˆθ) = 2θ² / ((n+1)(n+2)),  MSE(θ̃) = θ²/(3n).

Details: the pdf of Y_n = max_i X_i is f_Y(y) = n y^{n−1}/θ^n, 0 < y ≤ θ.

EY_n = ∫_0^θ y f_Y(y) dy = (n/θ^n) ∫_0^θ y^n dy = nθ/(n+1),
EY_n² = ∫_0^θ y² f_Y(y) dy = (n/θ^n) ∫_0^θ y^{n+1} dy = nθ²/(n+2),
Var(Y_n) = EY_n² − (EY_n)² = nθ² / ((n+1)²(n+2)),

MSE(ˆθ) = Bias² + Var(Y_n) = (EY_n − θ)² + EY_n² − (EY_n)² = 2θ² / ((n+1)(n+2)).

MSE(θ̃) = 4 Var(X̄) = (4/n) Var(X_1) = (4/n)(θ²/12) = θ²/(3n).

Note that although θ̃ = 2X̄ is unbiased for θ while ˆθ = max_i X_i is biased, the MSE of ˆθ is much smaller than the MSE of θ̃ for large n.

- What must c equal if cˆθ is to be an unbiased estimator of θ? That is, construct an unbiased estimator based on the MLE ˆθ.

- Which estimator is more efficient, θ̃ or ((n+1)/n) ˆθ? What's the relative efficiency of ((n+1)/n) ˆθ wrt θ̃?

MSE( ((n+1)/n) ˆθ ) = Var( ((n+1)/n) ˆθ ) = ((n+1)²/n²) Var(Y_n) = ((n+1)²/n²) · nθ² / ((n+1)²(n+2)) = θ² / (n(n+2)).

The relative efficiency of ((n+1)/n) ˆθ wrt θ̃ is equal to

[θ²/(3n)] / [θ²/(n(n+2))] = (n+2)/3.

When the sample size n gets larger, the estimator ((n+1)/n) ˆθ (which is also unbiased) is much more efficient than the MM estimator θ̃.

(*) (Sec 7.1) In general, it's difficult to compare two estimators based on their MSE, since the MSE may depend on the unknown parameter θ: it is possible that MSE(ˆθ_1) < MSE(ˆθ_2) (i.e., ˆθ_1 is better) when θ > 1, while ˆθ_2 is better for θ < 1. In courses on statistical decision theory, you'll learn how to compare estimators based on their maximum MSE (i.e., their worst-case performance) or based on their averaged MSE (i.e., Bayes risk).
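The two MSE formulas in the Unif(0, θ] example can be confirmed by simulation; here is a Python sketch (my addition; θ = 1 and n = 20 are arbitrary choices).

```python
import random

# Simulation check of the MSE formulas for Unif(0, theta]:
#   MSE(max X_i)  = 2 theta^2 / ((n+1)(n+2)),
#   MSE(2 X-bar)  = theta^2 / (3n).
random.seed(3)
theta, n, reps = 1.0, 20, 20000

se_mle = se_mm = 0.0
for _ in range(reps):
    x = [random.uniform(0, theta) for _ in range(n)]
    se_mle += (max(x) - theta) ** 2
    se_mm += (2 * sum(x) / n - theta) ** 2

mse_mle = se_mle / reps   # theory: 2 / (21 * 22) ~ 0.0043
mse_mm = se_mm / reps     # theory: 1 / 60       ~ 0.0167
```

Even at n = 20, the biased MLE already beats the unbiased MM estimator in MSE.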

Summary

Estimators you'll encounter in Stat 410 take the following forms:

1. ˆθ = the sample mean X̄ or a linear function of the sample mean, e.g., 2X̄. E(aX̄) = aµ (usually unbiased). Var(aX̄) = a²σ²/n.

2. ˆθ = the average of a transformation of the X_i's, e.g., (1/n)∑ log X_i or (1/n)∑ X_i². Define Y_i = log X_i; then ˆθ = (1/n)∑ log X_i = Ȳ. Find the distribution of Y_i and then you are back to the previous case.

3. ˆθ = g(X̄), a function of the sample mean, e.g., 1/X̄. Check the sign of the second derivative of g in the range of possible values of X̄. Use Jensen's inequality to show ˆθ is biased. Usually you won't be asked to compute the bias and variance.

4. ˆθ = an order statistic, e.g., Y_1 = min_i X_i or Y_n = max_i X_i. Find the distribution of Y_1 or Y_n. Compute the mean (usually biased) and the variance. For the variance, use the formula Var(Y) = EY² − (EY)².
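One more numerical spot-check of the earlier unbiasedness section (my addition; the normal population with σ = 2 is an assumed example): the algebraic identity ∑(x_i − x̄)² = ∑x_i² − n x̄², and E(S²) = σ² approximated by averaging S² over many samples.

```python
import random
from statistics import variance   # S^2, with the 1/(n-1) divisor

# Check 1: the identity sum (x_i - x-bar)^2 = sum x_i^2 - n * x-bar^2.
random.seed(2)
x = [random.gauss(0, 1) for _ in range(30)]
xbar = sum(x) / len(x)
lhs = sum((xi - xbar) ** 2 for xi in x)
rhs = sum(xi ** 2 for xi in x) - len(x) * xbar ** 2
assert abs(lhs - rhs) < 1e-9      # holds up to rounding error

# Check 2: E(S^2) = sigma^2, approximated by Monte Carlo.
mu, sigma, n, reps = 0.0, 2.0, 10, 5000
s2_values = [variance([random.gauss(mu, sigma) for _ in range(n)])
             for _ in range(reps)]
mean_s2 = sum(s2_values) / reps   # should be near sigma^2 = 4
```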

Appendix

(Feel free to ignore the materials in the Appendix.)

1. Let x_1, ..., x_n be a sequence of numbers; find the value a that minimizes

g(a) = ∑_{i=1}^n |x_i − a|.

It is equivalent to write the objective function as g(a) = ∑_{i=1}^n |x_{(i)} − a|, where x_{(1)} < ... < x_{(n)} are the order statistics of the x_i's. It's okay to assume that all the x_i's are different (i.e., no ties), since they are usually random samples from a continuous distribution.

Recall this result: suppose g(x) = |x|; then g'(x) = 1 if x > 0, g'(x) = −1 if x < 0, and g'(0) does not exist. So the derivative of g is not well defined at the x_i's.

Suppose a ≤ x_{(1)}; then g(a) = ∑ (x_{(i)} − a) = ∑ x_i − na, which is a decreasing function, so its minimum over this region is achieved at a = x_{(1)}. Similarly, we can check the case when a ≥ x_{(n)}. We can conclude that it suffices to find the optimal value of a in the data range [x_{(1)}, x_{(n)}].

Assume we have an odd number of samples, i.e., n = 2m + 1. When a ∈ [x_{(1)}, x_{(m+1)}], g(a) is a decreasing function; when a ∈ [x_{(m+1)}, x_{(n)}], g(a) is an increasing function. Therefore the minimum of g(a) is achieved at a = x_{(m+1)}, the median of (x_1, ..., x_n).

Assume we have an even number of samples, i.e., n = 2m. When a ∈ [x_{(1)}, x_{(m)}], g(a) is a decreasing function; when a ∈ [x_{(m+1)}, x_{(n)}], g(a) is an increasing function; when a ∈ [x_{(m)}, x_{(m+1)}], g(a) is a constant. So the minimizer of g(a) is any value in the interval [x_{(m)}, x_{(m+1)}], the sample median of the data points, which is not unique.
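The result above is easy to check numerically (my addition; the five data points are made up, with n odd so the minimizer is unique): a grid search over g(a) lands on the sample median.

```python
from statistics import median

# Numeric check: over a fine grid, g(a) = sum |x_i - a| is minimized at
# the sample median (made-up data, n = 5 odd, so the minimizer is unique).
x = [3.1, 0.5, 2.2, 7.9, 4.4]

def g(a):
    return sum(abs(xi - a) for xi in x)

grid = [k / 100 for k in range(0, 1001)]   # a in [0, 10], step 0.01
a_star = min(grid, key=g)

print(a_star, median(x))    # 3.1 3.1
```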

2. max_{p_1,...,p_m} ∑_{j=1}^m n_j log p_j, where 0 ≤ p_j ≤ 1, ∑_j p_j = 1, and n_j ≥ 0, is achieved by setting p_j = n_j/n, where n = n_1 + ... + n_m.

This is a constrained optimization problem, which, of course, can be solved using tools like the Lagrange multiplier. Next I'll give a simple derivation based on the non-negativity of the Kullback-Leibler divergence. Scale the objective function by (−1/n) and look for the minimum. We have

−(1/n) ∑_{j=1}^m n_j log p_j = ∑_{j=1}^m (n_j/n) log[ (n_j/n) / p_j ] − ∑_{j=1}^m (n_j/n) log(n_j/n),

where the second term (on the right) has nothing to do with (p_1, ..., p_m), and the first term is the Kullback-Leibler divergence between two multinomial distributions, whose minimum is achieved by setting ˆp_1 = n_1/n, ..., ˆp_m = n_m/n.
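This result is what makes the marble-example MLE the vector of sample frequencies; in code (using the observed 2/3/5 counts from the earlier example):

```python
from collections import Counter

# The MLE of (p1, p2, p3) is the vector of sample frequencies n_j / n.
draws = [1] * 2 + [2] * 3 + [3] * 5      # 2 red, 3 blue, 5 green; n = 10
n = len(draws)
counts = Counter(draws)
p_hat = {color: counts[color] / n for color in (1, 2, 3)}

print(p_hat)    # {1: 0.2, 2: 0.3, 3: 0.5}
```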


More information

Machine Learning Brett Bernstein

Machine Learning Brett Bernstein Machie Learig Brett Berstei Week 2 Lecture: Cocept Check Exercises Starred problems are optioal. Excess Risk Decompositio 1. Let X = Y = {1, 2,..., 10}, A = {1,..., 10, 11} ad suppose the data distributio

More information

Topic 8: Expected Values

Topic 8: Expected Values Topic 8: Jue 6, 20 The simplest summary of quatitative data is the sample mea. Give a radom variable, the correspodig cocept is called the distributioal mea, the epectatio or the epected value. We begi

More information

AMS570 Lecture Notes #2

AMS570 Lecture Notes #2 AMS570 Lecture Notes # Review of Probability (cotiued) Probability distributios. () Biomial distributio Biomial Experimet: ) It cosists of trials ) Each trial results i of possible outcomes, S or F 3)

More information

The variance of a sum of independent variables is the sum of their variances, since covariances are zero. Therefore. V (xi )= n n 2 σ2 = σ2.

The variance of a sum of independent variables is the sum of their variances, since covariances are zero. Therefore. V (xi )= n n 2 σ2 = σ2. SAMPLE STATISTICS A radom sample x 1,x,,x from a distributio f(x) is a set of idepedetly ad idetically variables with x i f(x) for all i Their joit pdf is f(x 1,x,,x )=f(x 1 )f(x ) f(x )= f(x i ) The sample

More information

( θ. sup θ Θ f X (x θ) = L. sup Pr (Λ (X) < c) = α. x : Λ (x) = sup θ H 0. sup θ Θ f X (x θ) = ) < c. NH : θ 1 = θ 2 against AH : θ 1 θ 2

( θ. sup θ Θ f X (x θ) = L. sup Pr (Λ (X) < c) = α. x : Λ (x) = sup θ H 0. sup θ Θ f X (x θ) = ) < c. NH : θ 1 = θ 2 against AH : θ 1 θ 2 82 CHAPTER 4. MAXIMUM IKEIHOOD ESTIMATION Defiitio: et X be a radom sample with joit p.m/d.f. f X x θ. The geeralised likelihood ratio test g.l.r.t. of the NH : θ H 0 agaist the alterative AH : θ H 1,

More information

6. Sufficient, Complete, and Ancillary Statistics

6. Sufficient, Complete, and Ancillary Statistics Sufficiet, Complete ad Acillary Statistics http://www.math.uah.edu/stat/poit/sufficiet.xhtml 1 of 7 7/16/2009 6:13 AM Virtual Laboratories > 7. Poit Estimatio > 1 2 3 4 5 6 6. Sufficiet, Complete, ad Acillary

More information

Since X n /n P p, we know that X n (n. Xn (n X n ) Using the asymptotic result above to obtain an approximation for fixed n, we obtain

Since X n /n P p, we know that X n (n. Xn (n X n ) Using the asymptotic result above to obtain an approximation for fixed n, we obtain Assigmet 9 Exercise 5.5 Let X biomial, p, where p 0, 1 is ukow. Obtai cofidece itervals for p i two differet ways: a Sice X / p d N0, p1 p], the variace of the limitig distributio depeds oly o p. Use the

More information

Lecture 6 Chi Square Distribution (χ 2 ) and Least Squares Fitting

Lecture 6 Chi Square Distribution (χ 2 ) and Least Squares Fitting Lecture 6 Chi Square Distributio (χ ) ad Least Squares Fittig Chi Square Distributio (χ ) Suppose: We have a set of measuremets {x 1, x, x }. We kow the true value of each x i (x t1, x t, x t ). We would

More information

(A sequence also can be thought of as the list of function values attained for a function f :ℵ X, where f (n) = x n for n 1.) x 1 x N +k x N +4 x 3

(A sequence also can be thought of as the list of function values attained for a function f :ℵ X, where f (n) = x n for n 1.) x 1 x N +k x N +4 x 3 MATH 337 Sequeces Dr. Neal, WKU Let X be a metric space with distace fuctio d. We shall defie the geeral cocept of sequece ad limit i a metric space, the apply the results i particular to some special

More information

ECE 901 Lecture 12: Complexity Regularization and the Squared Loss

ECE 901 Lecture 12: Complexity Regularization and the Squared Loss ECE 90 Lecture : Complexity Regularizatio ad the Squared Loss R. Nowak 5/7/009 I the previous lectures we made use of the Cheroff/Hoeffdig bouds for our aalysis of classifier errors. Hoeffdig s iequality

More information

Resampling Methods. X (1/2), i.e., Pr (X i m) = 1/2. We order the data: X (1) X (2) X (n). Define the sample median: ( n.

Resampling Methods. X (1/2), i.e., Pr (X i m) = 1/2. We order the data: X (1) X (2) X (n). Define the sample median: ( n. Jauary 1, 2019 Resamplig Methods Motivatio We have so may estimators with the property θ θ d N 0, σ 2 We ca also write θ a N θ, σ 2 /, where a meas approximately distributed as Oce we have a cosistet estimator

More information

Linear regression. Daniel Hsu (COMS 4771) (y i x T i β)2 2πσ. 2 2σ 2. 1 n. (x T i β y i ) 2. 1 ˆβ arg min. β R n d

Linear regression. Daniel Hsu (COMS 4771) (y i x T i β)2 2πσ. 2 2σ 2. 1 n. (x T i β y i ) 2. 1 ˆβ arg min. β R n d Liear regressio Daiel Hsu (COMS 477) Maximum likelihood estimatio Oe of the simplest liear regressio models is the followig: (X, Y ),..., (X, Y ), (X, Y ) are iid radom pairs takig values i R d R, ad Y

More information

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 5

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 5 CS434a/54a: Patter Recogitio Prof. Olga Veksler Lecture 5 Today Itroductio to parameter estimatio Two methods for parameter estimatio Maimum Likelihood Estimatio Bayesia Estimatio Itroducto Bayesia Decisio

More information

Lecture 2: Monte Carlo Simulation

Lecture 2: Monte Carlo Simulation STAT/Q SCI 43: Itroductio to Resamplig ethods Sprig 27 Istructor: Ye-Chi Che Lecture 2: ote Carlo Simulatio 2 ote Carlo Itegratio Assume we wat to evaluate the followig itegratio: e x3 dx What ca we do?

More information

Randomized Algorithms I, Spring 2018, Department of Computer Science, University of Helsinki Homework 1: Solutions (Discussed January 25, 2018)

Randomized Algorithms I, Spring 2018, Department of Computer Science, University of Helsinki Homework 1: Solutions (Discussed January 25, 2018) Radomized Algorithms I, Sprig 08, Departmet of Computer Sciece, Uiversity of Helsiki Homework : Solutios Discussed Jauary 5, 08). Exercise.: Cosider the followig balls-ad-bi game. We start with oe black

More information

Summary. Recap ... Last Lecture. Summary. Theorem

Summary. Recap ... Last Lecture. Summary. Theorem Last Lecture Biostatistics 602 - Statistical Iferece Lecture 23 Hyu Mi Kag April 11th, 2013 What is p-value? What is the advatage of p-value compared to hypothesis testig procedure with size α? How ca

More information

Final Examination Statistics 200C. T. Ferguson June 10, 2010

Final Examination Statistics 200C. T. Ferguson June 10, 2010 Fial Examiatio Statistics 00C T. Ferguso Jue 0, 00. (a State the Borel-Catelli Lemma ad its coverse. (b Let X,X,... be i.i.d. from a distributio with desity, f(x =θx (θ+ o the iterval (,. For what value

More information

Lecture 6 Chi Square Distribution (χ 2 ) and Least Squares Fitting

Lecture 6 Chi Square Distribution (χ 2 ) and Least Squares Fitting Lecture 6 Chi Square Distributio (χ ) ad Least Squares Fittig Chi Square Distributio (χ ) Suppose: We have a set of measuremets {x 1, x, x }. We kow the true value of each x i (x t1, x t, x t ). We would

More information

Convergence of random variables. (telegram style notes) P.J.C. Spreij

Convergence of random variables. (telegram style notes) P.J.C. Spreij Covergece of radom variables (telegram style otes).j.c. Spreij this versio: September 6, 2005 Itroductio As we kow, radom variables are by defiitio measurable fuctios o some uderlyig measurable space

More information

Empirical Process Theory and Oracle Inequalities

Empirical Process Theory and Oracle Inequalities Stat 928: Statistical Learig Theory Lecture: 10 Empirical Process Theory ad Oracle Iequalities Istructor: Sham Kakade 1 Risk vs Risk See Lecture 0 for a discussio o termiology. 2 The Uio Boud / Boferoi

More information

6.3 Testing Series With Positive Terms

6.3 Testing Series With Positive Terms 6.3. TESTING SERIES WITH POSITIVE TERMS 307 6.3 Testig Series With Positive Terms 6.3. Review of what is kow up to ow I theory, testig a series a i for covergece amouts to fidig the i= sequece of partial

More information

Econ 325: Introduction to Empirical Economics

Econ 325: Introduction to Empirical Economics Eco 35: Itroductio to Empirical Ecoomics Lecture 3 Discrete Radom Variables ad Probability Distributios Copyright 010 Pearso Educatio, Ic. Publishig as Pretice Hall Ch. 4-1 4.1 Itroductio to Probability

More information

Last Lecture. Unbiased Test

Last Lecture. Unbiased Test Last Lecture Biostatistics 6 - Statistical Iferece Lecture Uiformly Most Powerful Test Hyu Mi Kag March 8th, 3 What are the typical steps for costructig a likelihood ratio test? Is LRT statistic based

More information

The picture in figure 1.1 helps us to see that the area represents the distance traveled. Figure 1: Area represents distance travelled

The picture in figure 1.1 helps us to see that the area represents the distance traveled. Figure 1: Area represents distance travelled 1 Lecture : Area Area ad distace traveled Approximatig area by rectagles Summatio The area uder a parabola 1.1 Area ad distace Suppose we have the followig iformatio about the velocity of a particle, how

More information

LECTURE NOTES 9. 1 Point Estimation. 1.1 The Method of Moments

LECTURE NOTES 9. 1 Point Estimation. 1.1 The Method of Moments LECTURE NOTES 9 Poit Estimatio Uder the hypothesis that the sample was geerated from some parametric statistical model, a atural way to uderstad the uderlyig populatio is by estimatig the parameters of

More information

n outcome is (+1,+1, 1,..., 1). Let the r.v. X denote our position (relative to our starting point 0) after n moves. Thus X = X 1 + X 2 + +X n,

n outcome is (+1,+1, 1,..., 1). Let the r.v. X denote our position (relative to our starting point 0) after n moves. Thus X = X 1 + X 2 + +X n, CS 70 Discrete Mathematics for CS Sprig 2008 David Wager Note 9 Variace Questio: At each time step, I flip a fair coi. If it comes up Heads, I walk oe step to the right; if it comes up Tails, I walk oe

More information

Lecture 3. Properties of Summary Statistics: Sampling Distribution

Lecture 3. Properties of Summary Statistics: Sampling Distribution Lecture 3 Properties of Summary Statistics: Samplig Distributio Mai Theme How ca we use math to justify that our umerical summaries from the sample are good summaries of the populatio? Lecture Summary

More information

EE 4TM4: Digital Communications II Probability Theory

EE 4TM4: Digital Communications II Probability Theory 1 EE 4TM4: Digital Commuicatios II Probability Theory I. RANDOM VARIABLES A radom variable is a real-valued fuctio defied o the sample space. Example: Suppose that our experimet cosists of tossig two fair

More information

1. Parameter estimation point estimation and interval estimation. 2. Hypothesis testing methods to help decision making.

1. Parameter estimation point estimation and interval estimation. 2. Hypothesis testing methods to help decision making. Chapter 7 Parameter Estimatio 7.1 Itroductio Statistical Iferece Statistical iferece helps us i estimatig the characteristics of the etire populatio based upo the data collected from (or the evidece 0produced

More information

LECTURE 8: ASYMPTOTICS I

LECTURE 8: ASYMPTOTICS I LECTURE 8: ASYMPTOTICS I We are iterested i the properties of estimators as. Cosider a sequece of radom variables {, X 1}. N. M. Kiefer, Corell Uiversity, Ecoomics 60 1 Defiitio: (Weak covergece) A sequece

More information

2. The volume of the solid of revolution generated by revolving the area bounded by the

2. The volume of the solid of revolution generated by revolving the area bounded by the IIT JAM Mathematical Statistics (MS) Solved Paper. A eigevector of the matrix M= ( ) is (a) ( ) (b) ( ) (c) ( ) (d) ( ) Solutio: (a) Eigevalue of M = ( ) is. x So, let x = ( y) be the eigevector. z (M

More information

Probability and Statistics

Probability and Statistics ICME Refresher Course: robability ad Statistics Staford Uiversity robability ad Statistics Luyag Che September 20, 2016 1 Basic robability Theory 11 robability Spaces A probability space is a triple (Ω,

More information

Estimation of the Mean and the ACVF

Estimation of the Mean and the ACVF Chapter 5 Estimatio of the Mea ad the ACVF A statioary process {X t } is characterized by its mea ad its autocovariace fuctio γ ), ad so by the autocorrelatio fuctio ρ ) I this chapter we preset the estimators

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 3 9/11/2013. Large deviations Theory. Cramér s Theorem

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 3 9/11/2013. Large deviations Theory. Cramér s Theorem MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/5.070J Fall 203 Lecture 3 9//203 Large deviatios Theory. Cramér s Theorem Cotet.. Cramér s Theorem. 2. Rate fuctio ad properties. 3. Chage of measure techique.

More information

4.1 Non-parametric computational estimation

4.1 Non-parametric computational estimation Chapter 4 Resamplig Methods 4.1 No-parametric computatioal estimatio Let x 1,...,x be a realizatio of the i.i.d. r.vs X 1,...,X with a c.d.f. F. We are iterested i the precisio of estimatio of a populatio

More information

Stat 421-SP2012 Interval Estimation Section

Stat 421-SP2012 Interval Estimation Section Stat 41-SP01 Iterval Estimatio Sectio 11.1-11. We ow uderstad (Chapter 10) how to fid poit estimators of a ukow parameter. o However, a poit estimate does ot provide ay iformatio about the ucertaity (possible

More information

MATH/STAT 352: Lecture 15

MATH/STAT 352: Lecture 15 MATH/STAT 352: Lecture 15 Sectios 5.2 ad 5.3. Large sample CI for a proportio ad small sample CI for a mea. 1 5.2: Cofidece Iterval for a Proportio Estimatig proportio of successes i a biomial experimet

More information

SDS 321: Introduction to Probability and Statistics

SDS 321: Introduction to Probability and Statistics SDS 321: Itroductio to Probability ad Statistics Lecture 23: Cotiuous radom variables- Iequalities, CLT Puramrita Sarkar Departmet of Statistics ad Data Sciece The Uiversity of Texas at Austi www.cs.cmu.edu/

More information

Parameter, Statistic and Random Samples

Parameter, Statistic and Random Samples Parameter, Statistic ad Radom Samples A parameter is a umber that describes the populatio. It is a fixed umber, but i practice we do ot kow its value. A statistic is a fuctio of the sample data, i.e.,

More information

Chapter 8: STATISTICAL INTERVALS FOR A SINGLE SAMPLE. Part 3: Summary of CI for µ Confidence Interval for a Population Proportion p

Chapter 8: STATISTICAL INTERVALS FOR A SINGLE SAMPLE. Part 3: Summary of CI for µ Confidence Interval for a Population Proportion p Chapter 8: STATISTICAL INTERVALS FOR A SINGLE SAMPLE Part 3: Summary of CI for µ Cofidece Iterval for a Populatio Proportio p Sectio 8-4 Summary for creatig a 100(1-α)% CI for µ: Whe σ 2 is kow ad paret

More information