18.441 Answers to QUIZ 1

1. Let $P$ be the proportion of voters who will vote "Yes". Suppose the prior probability distribution of $P$ is given by $\Pr(P < p) = p$ for $0 < p < 1$. You take a poll by choosing nine voters at random, the choice of each being independent of who else was chosen. It is found that six of the nine will vote "Yes". Find the posterior probability that more than half of all voters in the whole population will vote "Yes".

Answer: The prior probability density function of $P$ is
\[
f_P(p) = \frac{d}{dp} F_P(p) = \frac{d}{dp} \Pr(P \le p) = \frac{d}{dp}\, p = 1 \quad \text{if } 0 < p < 1,
\]
and $0$ if $p < 0$ or $p > 1$. In other words, $P$ is uniformly distributed on the interval $[0, 1]$, or, in yet other words, $P \sim \operatorname{Beta}(1, 1)$.

Let $X$ be the number of voters in the sample of nine who will vote "Yes". Then the likelihood function is
\[
L(p) = \Pr(X = 6 \mid P = p) = \binom{9}{6} p^6 (1-p)^3.
\]
Multiplying the prior density by the likelihood gives us
\[
[\text{constant}] \cdot 1 \cdot p^6 (1-p)^3 = [\text{constant}] \cdot p^{7-1} (1-p)^{4-1},
\]
so we have $P \mid [X = 6] \sim \operatorname{Beta}(7, 4)$.

In order to find $\Pr(P > 1/2 \mid X = 6)$, we need the value of the normalizing constant; we write the density as
\[
f_{P \mid [X=6]}(p) = \frac{\Gamma(7+4)}{\Gamma(7)\,\Gamma(4)}\, p^{7-1} (1-p)^{4-1} = \frac{10!}{3!\,6!}\, p^6 (1-p)^3 = 840\, p^6 (1-p)^3
\]
for $0 < p < 1$. Then
\[
\Pr(P > 1/2) = \int_{1/2}^1 840\, p^6 (1-p)^3 \, dp = 840 \int_{1/2}^1 \left( p^6 - 3p^7 + 3p^8 - p^9 \right) dp
\]
\[
= 840 \left[ \frac{p^7}{7} - \frac{3p^8}{8} + \frac{3p^9}{9} - \frac{p^{10}}{10} \right]_{p=1/2}^{p=1} = \Bigl[\, 120 p^7 - 315 p^8 + 280 p^9 - 84 p^{10} \,\Bigr]_{p=1/2}^{p=1}.
\]
At $p = 1$ the bracket evaluates to $120 - 315 + 280 - 84 = 1$; if this were not $1$, then we could infer that an error had occurred! Therefore
\[
\Pr(P > 1/2) = 1 - \left( \frac{120}{2^7} - \frac{315}{2^8} + \frac{280}{2^9} - \frac{84}{2^{10}} \right) = 1 - \frac{240 - 315 + 140 - 21}{2^8} = 1 - \frac{44}{256} = 1 - \frac{11}{64} = \frac{53}{64} \approx 0.828125.
\]
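As a quick check on the arithmetic above, the tail probability of the $\operatorname{Beta}(7,4)$ posterior can be recomputed in exact rational arithmetic. This is a minimal sketch; the function name `F` is ours and simply encodes the bracketed polynomial from the answer.

```python
from fractions import Fraction

def F(p: Fraction) -> Fraction:
    """Antiderivative of the Beta(7, 4) density 840 p^6 (1-p)^3,
    expanded as in the answer: 120 p^7 - 315 p^8 + 280 p^9 - 84 p^10."""
    return 120 * p**7 - 315 * p**8 + 280 * p**9 - 84 * p**10

# The same sanity check as in the answer: the antiderivative must equal 1 at p = 1.
assert F(Fraction(1)) == 1

# Pr(P > 1/2 | X = 6) = F(1) - F(1/2)
tail = F(Fraction(1)) - F(Fraction(1, 2))
assert tail == Fraction(53, 64)
print(tail, float(tail))  # 53/64 = 0.828125
```

Exact `Fraction` arithmetic avoids any floating-point doubt about the value $53/64$.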

2. Suppose a family of probability distributions of a random variable $X$ is indexed by a parameter $\theta$.

(a) What does it mean to say that $T(X)$ is a sufficient statistic for $\theta$?

Answer: It means that the conditional probability distribution of $X$ given $T(X)$ does not depend on $\theta$; the conditional distribution remains the same as $\theta$ changes.

(b) Suppose $T(X)$ is a sufficient statistic for $\theta$. Explain why the value of the Rao–Blackwell estimator $E(\delta(X) \mid T(X))$ does not depend on $\theta$, even though the probability distribution of $\delta(X)$ must depend on $\theta$ in order that $\delta(X)$ make sense as an estimator of $\theta$.

Answer: The conditional distribution of $X$ given $T(X)$ does not depend on $\theta$. Hence the conditional distribution of $\delta(X)$ given $T(X)$ does not depend on $\theta$. Hence the conditional expectation of $\delta(X)$ given $T(X)$ does not depend on $\theta$.

3. Suppose $X_1, X_2 \sim \text{i.i.d.}\ \operatorname{Bernoulli}(p)$, i.e., they are independent and identically distributed and
\[
X_1 = \begin{cases} 1 & \text{with probability } p, \\ 0 & \text{with probability } 1 - p. \end{cases}
\]

(a) Show that $X_1 - X_2$ is not a complete statistic.

Answer: It is enough to find some function $g$, not identically zero on the support of $X_1 - X_2$, such that $E(g(X_1 - X_2))$ remains zero as $p$ changes. But we have
\[
E(X_1 - X_2) = E(X_1) - E(X_2) = p - p = 0,
\]
so we can take $g$ to be the identity function.

(b) Show that $X_1 + X_2$ is a sufficient statistic for $p$.

Answer: One way to do this is by appealing directly to the definition of sufficiency, i.e., by finding $\Pr(X_1 = x_1 \ \&\ X_2 = x_2 \mid X_1 + X_2)$ and observing that no $p$ appears in the answer:
\[
\Pr(X_1 = x_1 \ \&\ X_2 = x_2 \mid X_1 + X_2 = x_1 + x_2) = \frac{p^{x_1}(1-p)^{1-x_1}\, p^{x_2}(1-p)^{1-x_2}}{\binom{2}{x_1+x_2}\, p^{x_1+x_2}(1-p)^{2-(x_1+x_2)}} = \frac{p^{x_1+x_2}(1-p)^{2-(x_1+x_2)}}{\binom{2}{x_1+x_2}\, p^{x_1+x_2}(1-p)^{2-(x_1+x_2)}} = \frac{1}{\binom{2}{x_1+x_2}},
\]
and no $p$ appears here.
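The sufficiency argument in part (b) can also be checked by brute force: enumerate the joint law of $(X_1, X_2)$ for two different values of $p$ and confirm that the conditional law given the sum is identical. This is a sketch; the helper name `conditional_table` is ours.

```python
from fractions import Fraction
from itertools import product

def conditional_table(p: Fraction) -> dict:
    """Map (x1, x2) -> Pr(X1=x1, X2=x2 | X1+X2 = x1+x2) for i.i.d. Bernoulli(p)."""
    joint = {(x1, x2): p**(x1 + x2) * (1 - p)**(2 - x1 - x2)
             for x1, x2 in product((0, 1), repeat=2)}
    totals = {}  # Pr(X1 + X2 = s) for each attainable sum s
    for (x1, x2), pr in joint.items():
        totals[x1 + x2] = totals.get(x1 + x2, Fraction(0)) + pr
    return {(x1, x2): pr / totals[x1 + x2] for (x1, x2), pr in joint.items()}

t1 = conditional_table(Fraction(1, 3))
t2 = conditional_table(Fraction(9, 10))
assert t1 == t2                       # the conditional law does not depend on p
assert t1[(0, 1)] == Fraction(1, 2)   # = 1 / C(2, 1), matching the formula above
print(t1)
```

The equality of the two tables is exactly the definition of sufficiency specialized to this two-observation model.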

(c) You may use the fact that $X_1 + X_2$ is a complete statistic. Show that $X_1 X_2$ is an unbiased estimator of $p^2$, and find the best unbiased estimator of $p^2$, i.e., the one with the smallest mean squared error among all unbiased estimators of $p^2$.

Answer: By independence, $E(X_1 X_2) = E(X_1)\,E(X_2) = p \cdot p = p^2$, so $X_1 X_2$ is unbiased. The Lehmann–Scheffé theorem says that the conditional expectation of an unbiased estimator given a complete sufficient statistic is the unique best unbiased estimator. So we seek $E(X_1 X_2 \mid X_1 + X_2)$. Notice that $X_1 X_2$ must be either $0$ or $1$, and is $1$ if and only if both $X_1$ and $X_2$ are $1$, and that happens if and only if $X_1 + X_2 = 2$. So
\[
E(X_1 X_2 \mid X_1 + X_2) = \Pr(X_1 X_2 = 1 \mid X_1 + X_2) = \begin{cases} 1 & \text{if } X_1 + X_2 = 2, \\ 0 & \text{if } X_1 + X_2 = \text{either } 0 \text{ or } 1. \end{cases}
\]
Since we also have
\[
X_1 X_2 = \begin{cases} 1 & \text{if } X_1 + X_2 = 2, \\ 0 & \text{if } X_1 + X_2 = \text{either } 0 \text{ or } 1, \end{cases}
\]
we can say that $E(X_1 X_2 \mid X_1 + X_2) = X_1 X_2$. In other words, $X_1 X_2$ is already the best unbiased estimator of $p^2$, and is unchanged by the Rao–Blackwell process of improving an estimator.

(d) Find the maximum likelihood estimator of $p^2$.

Answer: By invariance of maximum-likelihood estimators, the maximum-likelihood estimator of $p^2$ is just the square of the maximum-likelihood estimator of $p$. The likelihood function is
\[
L(p) = \Pr(X_1 = x_1 \ \&\ X_2 = x_2) = p^{x_1+x_2} (1-p)^{2-x_1-x_2}.
\]
Therefore
\[
\ell(p) = \log L(p) = (x_1 + x_2) \log p + (2 - x_1 - x_2) \log(1-p),
\]
\[
\ell'(p) = \frac{x_1 + x_2}{p} - \frac{2 - x_1 - x_2}{1-p} = \frac{(x_1 + x_2) - 2p}{p(1-p)} \;\begin{cases} > 0 & \text{if } 0 < p < (x_1+x_2)/2, \\ = 0 & \text{if } p = (x_1+x_2)/2, \\ < 0 & \text{if } (x_1+x_2)/2 < p < 1. \end{cases}
\]
Consequently $\hat{p} = (X_1 + X_2)/2$, and so the maximum-likelihood estimator of $p^2$ is $(X_1 + X_2)^2/4$.
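A small enumeration confirms the unbiasedness claim in part (c), and, as an extra observation not made in the answer above, that the MLE of $p^2$ from part (d) is biased. This is a sketch; the helper name `expectation` is ours.

```python
from fractions import Fraction
from itertools import product

def expectation(estimator, p: Fraction) -> Fraction:
    """E[estimator(X1, X2)] for X1, X2 i.i.d. Bernoulli(p), by enumeration."""
    return sum(estimator(x1, x2) * p**(x1 + x2) * (1 - p)**(2 - x1 - x2)
               for x1, x2 in product((0, 1), repeat=2))

p = Fraction(1, 3)

# X1*X2 is unbiased for p^2.
assert expectation(lambda x1, x2: Fraction(x1 * x2), p) == p**2

# The MLE (X1+X2)^2/4 has expectation p^2 + p(1-p)/2, so it is biased upward.
mle_mean = expectation(lambda x1, x2: Fraction((x1 + x2)**2, 4), p)
assert mle_mean == p**2 + p * (1 - p) / 2
assert mle_mean != p**2
print(mle_mean, p**2)
```

The bias of the MLE is the price it pays for the smaller mean squared error computed in part (e).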

(e) Consider the two estimators that you found above: the best unbiased estimator of $p^2$ and the maximum likelihood estimator of $p^2$. Which has a smaller mean squared error when $p = 1/2$?

Answer: The best unbiased estimator is
\[
X_1 X_2 = \begin{cases} 0 & \text{if } X_1 + X_2 \in \{0, 1\}, \\ 1 & \text{if } X_1 + X_2 = 2, \end{cases}
\]
and so it is
\[
\begin{cases} 0 & \text{with probability } 1 - p^2, \\ 1 & \text{with probability } p^2. \end{cases}
\]
Its mean squared error is
\[
\underbrace{(0 - p^2)^2}_{\substack{\text{squared error when} \\ \text{the estimator is } 0}} \underbrace{(1 - p^2)}_{\substack{\text{probability that} \\ \text{the estimator is } 0}} + \underbrace{(1 - p^2)^2}_{\substack{\text{squared error when} \\ \text{the estimator is } 1}} \underbrace{p^2}_{\substack{\text{probability that} \\ \text{the estimator is } 1}} = \frac{3}{16} = 0.1875 \quad \text{when } p = 1/2.
\]
The maximum-likelihood estimator is
\[
\frac{(X_1 + X_2)^2}{4} = \begin{cases} 0 & \text{if } X_1 + X_2 = 0, \\ 1/4 & \text{if } X_1 + X_2 = 1, \\ 1 & \text{if } X_1 + X_2 = 2, \end{cases}
\]
and so it is
\[
\begin{cases} 0 & \text{with probability } (1-p)^2, \\ 1/4 & \text{with probability } 2p(1-p), \\ 1 & \text{with probability } p^2. \end{cases}
\]
Its mean squared error is therefore
\[
\underbrace{(0 - p^2)^2}_{\text{error}} \underbrace{(1-p)^2}_{\text{probability}} + \underbrace{(1/4 - p^2)^2}_{\text{error}} \underbrace{2p(1-p)}_{\text{probability}} + \underbrace{(1 - p^2)^2}_{\text{error}} \underbrace{p^2}_{\text{probability}} = \frac{5}{32} = 0.15625 \quad \text{when } p = 1/2.
\]
So the MSE of the MLE is slightly smaller than that of the best unbiased estimator when $p = 1/2$.
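The two mean squared errors can be reproduced exactly by enumerating the four outcomes under the same Bernoulli model. This is a sketch; the helper name `mse` is ours.

```python
from fractions import Fraction
from itertools import product

def mse(estimator, p: Fraction) -> Fraction:
    """E[(estimator(X1, X2) - p^2)^2] for X1, X2 i.i.d. Bernoulli(p)."""
    return sum((estimator(x1, x2) - p**2)**2 * p**(x1 + x2) * (1 - p)**(2 - x1 - x2)
               for x1, x2 in product((0, 1), repeat=2))

p = Fraction(1, 2)
best_unbiased = lambda x1, x2: Fraction(x1 * x2)          # from part (c)
mle = lambda x1, x2: Fraction((x1 + x2)**2, 4)            # from part (d)

assert mse(best_unbiased, p) == Fraction(3, 16)   # 0.1875
assert mse(mle, p) == Fraction(5, 32)             # 0.15625
assert mse(mle, p) < mse(best_unbiased, p)        # the MLE wins at p = 1/2
```

Exact fractions make the comparison $5/32 < 3/16$ unambiguous.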

4. Among families with two children, let $X$ be the score on a statistics test taken by the first child at age 21, and let $Y$ be the income of the second child at age 40. Suppose the pair $(X, Y)$ has a bivariate normal distribution, and $E(X) = 65$, $\operatorname{SD}(X) = 10$, $E(Y) = \$50{,}000$ per year, $\operatorname{SD}(Y) = \$10{,}000$ per year, and $\operatorname{corr}(X, Y) = 1/2$. [All of this is fiction.] Among families in which the first child scores $75$ on the statistics test at age 21, in what proportion of cases does the second child have an income of at least $\$59{,}330$ at age 40?

Answer: In DeGroot & Schervish we learn that
\[
E(Y \mid X) = E(Y) + \operatorname{corr}(X, Y)\, \operatorname{SD}(Y)\, \frac{X - E(X)}{\operatorname{SD}(X)} = 50{,}000 + (1/2)(10{,}000)\, \frac{X - 65}{10}.
\]
So $E(Y \mid X = 75) = 55{,}000$, and
\[
\operatorname{var}(Y \mid X) = \left( 1 - \operatorname{corr}(X, Y)^2 \right) \operatorname{SD}(Y)^2 = \frac{3}{4} \cdot 10{,}000^2.
\]
So $\operatorname{SD}(Y \mid X = 75) = \dfrac{10{,}000 \sqrt{3}}{2}$.

Since the conditional distribution of $Y$ given that $X = 75$ is normal, we can say
\[
\Pr(Y \ge 59{,}330 \mid X = 75) = 1 - \Pr\!\left( \frac{Y - 55{,}000}{10{,}000\sqrt{3}/2} \le \frac{59{,}330 - 55{,}000}{10{,}000\sqrt{3}/2} \;\middle|\; X = 75 \right)
\]
\[
= 1 - \Phi\!\left( \frac{59{,}330 - 55{,}000}{10{,}000\sqrt{3}/2} \right) \approx 1 - \Phi(0.500) = 1 - 0.6915 = 0.3085.
\]
So the event of interest occurs in about 30.85% of all cases.
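The final computation can be replayed numerically, writing the standard normal CDF $\Phi$ in terms of the error function. This is a sketch; the variable names are ours.

```python
import math

def phi(z: float) -> float:
    """Standard normal CDF, via the identity Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu_x, sd_x = 65.0, 10.0
mu_y, sd_y = 50_000.0, 10_000.0
rho = 0.5

x = 75.0
cond_mean = mu_y + rho * sd_y * (x - mu_x) / sd_x    # E(Y | X = 75) = 55,000
cond_sd = sd_y * math.sqrt(1.0 - rho**2)             # 10,000 * sqrt(3)/2 ~ 8,660.25
tail = 1.0 - phi((59_330.0 - cond_mean) / cond_sd)   # Pr(Y >= 59,330 | X = 75)

assert cond_mean == 55_000.0
assert abs(tail - 0.3085) < 1e-3   # about 30.85% of cases
print(cond_mean, cond_sd, tail)
```

The z-value $(59{,}330 - 55{,}000)/8{,}660.25 \approx 0.500$ confirms the table lookup used in the answer.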