
Stat 543 Exam 2 Spring 2016

I have neither given nor received unauthorized assistance on this exam.

Name Signed ______________________  Date ____________

Name Printed ______________________

This Exam consists of 11 questions. Do at least 10 of the parts of the main exam. I will score your best 10 answers at 10 points apiece (making 100 points possible). There is also on the last page of the Exam an "Extra Credit" question that will be scored out of 10 points. Any Extra Credit obtained will be recorded and used at the end of the course at Vardeman's discretion in deciding borderline grades. DO NOT spend time on this question until you are done with the entirety of the regular exam.

1. Below are three pdfs for X: f(x|1), f(x|2), and f(x|3). Use them in the rest of this question.

   x        1     2     3     4     5     6     7
f(x|1)    .20   .05   .15   .15   .10   .05   .30
f(x|2)    .10   .05   .25   .05   .20   .15   .20
f(x|3)    .25   .15   .05   .25   .15   .05   .10

a) For which α are there non-randomized most powerful size α tests of H0: θ = 1 vs H1: θ = 2?

b) Identify a most powerful size α = .15 test of H0: θ = 1 vs H1: θ = 2.
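A quick numerical check for parts a) and b) (a minimal Python sketch, assuming the table above has been transcribed correctly from the scanned original):

f1 = {1: .20, 2: .05, 3: .15, 4: .15, 5: .10, 6: .05, 7: .30}
f2 = {1: .10, 2: .05, 3: .25, 4: .05, 5: .20, 6: .15, 7: .20}

# Order sample points by decreasing likelihood ratio f(x|2)/f(x|1); by the
# Neyman-Pearson lemma, non-randomized MP tests reject on "top" sets of this
# ordering, and their sizes are the cumulative H0 probabilities.
order = sorted(f1, key=lambda x: f2[x] / f1[x], reverse=True)
alpha = 0.0
for x in order:
    alpha += f1[x]
    print(f"rejection region = top points through x={x}: size = {alpha:.2f}")

With this table the achievable non-randomized sizes printed are .05, .15, .30, .35, .65, .85, and 1.00, which include the α = .15 that part b) asks about.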

c) Find a 0-1 loss Bayes test of H0: θ = 1 vs H1: θ = 2 or 3 for a prior distribution with g(1) = .4, g(2) = .3, and g(3) = .3. (Give all 7 values of φ(x).)

2. In this problem we'll use the Exp(λ) distribution with pdf f(x|λ) = λ exp(−λx) I[x > 0]. You may use without proof the facts that if X ~ Exp(λ) and t > 0 then P[X > t] = exp(−λt), and that if X1 ~ Exp(λ1) is independent of X2 ~ Exp(λ2), then Y = min(X1, X2) ~ Exp(λ1 + λ2).

In a so-called "competing risks" context, an individual or item has a lifetime Z = min(U, V) where U and V are positive times to failure/death from two different causes.

a) For i = 1, ..., n model Ui ~ Exp(λ1) and Vi ~ Exp(λ2) with all U's and V's independent. Suppose that what is observed are the iid pairs Wi = (Zi, I[Zi = Ui]). (Note that I[Zi = Ui] = 1 means that what is observed is the value of Ui and the fact that Ui < Vi.) Give likelihood terms for observed w = (z, 1) and w = (z, 0):

f((z, 1) | λ1, λ2):

f((z, 0) | λ1, λ2):

b) Sometimes, a cause of failure may not be recorded and thus only Z (and not W) is known. Suppose that information on n = 5 individuals/items is

w1 = (3, 1), w2 = (7, 1), Z3 = 2, w4 = (3, 0), and w5 = (1, 0).

Suppose further that a Bayesian uses a prior for (λ1, λ2) that is one of independence, with both a priori Exp(1) distributed. Carefully describe a Gibbs sampling algorithm for generating triples ((λ1*)j, (λ2*)j, (w3,2*)j) (iterates for the 2 rates and the unobserved indicator). If it is possible to name a distribution from which a given update must be sampled, do so. At a minimum, give a form for each univariate update distribution up to a multiplicative constant.
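For reference, a minimal Python sketch of one such Gibbs sampler under the model of part a) with independent Exp(1) priors (the seed, starting values, and number of iterations are illustrative choices, not part of the exam):

import numpy as np

rng = np.random.default_rng(0)

z_sum = 3 + 7 + 2 + 3 + 1     # total of the five observed lifetimes = 16
lam1, lam2, d3 = 1.0, 1.0, 1  # starting values; d3 is the missing indicator for item 3
draws = []
for _ in range(5000):
    n1 = 2 + d3               # failures attributed to cause 1 (items 1, 2, maybe 3)
    n2 = 2 + (1 - d3)         # failures attributed to cause 2 (items 4, 5, maybe 3)
    # Full conditionals for the rates: Gamma(shape = count + 1, rate = 16 + 1),
    # combining lam^count * exp(-lam * 16) from the likelihood with the Exp(1) prior.
    lam1 = rng.gamma(shape=n1 + 1, scale=1.0 / (z_sum + 1.0))
    lam2 = rng.gamma(shape=n2 + 1, scale=1.0 / (z_sum + 1.0))
    # Full conditional for the unobserved indicator of item 3: Bernoulli(lam1/(lam1+lam2)).
    d3 = rng.binomial(1, lam1 / (lam1 + lam2))
    draws.append((lam1, lam2, d3))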

c) Completely describe an EM algorithm that can be used to find an MLE of (λ1, λ2) based on the data used in part b). (It is not really necessary to resort to EM here, as the calculus problem is fairly easy. But for purposes of the exam, write out the EM algorithm.)
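As a cross-check, a minimal Python sketch of the EM iteration for these data (items 1 and 2 failed from cause 1, items 4 and 5 from cause 2, and item 3's cause is treated as missing; the starting values are arbitrary):

z_sum = 3 + 7 + 2 + 3 + 1      # = 16
lam1, lam2 = 1.0, 1.0          # starting values
for _ in range(100):
    # E-step: expected value of the unobserved cause-1 indicator for item 3.
    p3 = lam1 / (lam1 + lam2)
    # M-step: maximize the expected complete-data log likelihood
    #   (2 + p3) log lam1 + (3 - p3) log lam2 - (lam1 + lam2) * 16.
    lam1 = (2 + p3) / z_sum
    lam2 = (3 - p3) / z_sum
print(lam1, lam2)              # both settle at 2.5/16 = 0.15625 for these data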

3. Suppose that X1, X2, ..., Xn are iid Ber(p). Let Sm = X1 + X2 + ... + Xm. A statistician expecting to have only n1 < n observations available for inference develops an estimator δ(Sn1) for p under SEL. (This estimator may well be a biased estimator.) In fact, n observations will be available. Find another estimator of p, say δ*(Sn), that you are sure will have smaller MSEp than δ(Sn1) no matter what is the value of p ∈ (0, 1).
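One standard route (a sketch in Python, under the reading above that δ uses only the first n1 of the n observations) is to replace δ(Sn1) by its conditional expectation given the full-data total Sn, which is sufficient; given Sn = s, the count Sn1 is hypergeometric. The helper below is hypothetical and only illustrates the computation:

from scipy.stats import hypergeom

def rao_blackwellize(delta, n1, n):
    """Return delta_star(s) = E[ delta(S_n1) | S_n = s ] for s = 0, ..., n.

    Given S_n = s successes among all n Bernoulli trials, the number S_n1 of
    successes among the first n1 trials is Hypergeometric(n, s, n1).
    """
    def delta_star(s):
        return sum(delta(k) * hypergeom(n, s, n1).pmf(k) for k in range(n1 + 1))
    return delta_star

# Example with a (possibly biased) starting estimator delta(S_n1) = (S_n1 + 1)/(n1 + 2):
d_star = rao_blackwellize(lambda k: (k + 1) / (3 + 2), n1=3, n=10)
print([round(d_star(s), 4) for s in range(11)])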

4. Suppose that X1, X2, ..., Xn are iid with marginal pdf f(x|α) = α x^(α−1) I[0 < x < 1].

a) Find a lower bound for the variance of any unbiased estimator δ(X) of γ(α) = sin α (based on the vector of n observations).

b) Do you expect there to exist an unbiased estimator of γ(α) = sin α achieving your bound from a)? Explain!

5. Suppose that X1, X2, ..., Xn are iid Bin(m, p) and (perhaps for "acceptance sampling" purposes)

γ(p) = Pp[X1 > 0] = 1 − (1 − p)^m

is of interest. Find a UMVUE for this quantity and say why you know your estimator is UMVU. (Hint: You may find it useful to think of the Xi's as Xi = Yi,1 + Yi,2 + ... + Yi,m for mn independent variables Yi,j, each Ber(p).)

6. Argue carefully that you could use iid double exponential observations (i.e. ones with marginal pdf (1/2) exp(−|x|) on R) to generate a standard normal random variable via the rejection algorithm, but that you could NOT use iid standard normal random variables to generate a double exponential random variable via the rejection algorithm.
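For the first half, the ratio of the standard normal density to the double exponential density is bounded by M = sqrt(2e/π), so a rejection sampler can be written down directly; a minimal Python sketch (illustrative only, with an arbitrary seed):

import numpy as np

rng = np.random.default_rng(0)

def normal_from_laplace(size):
    # phi(x)/g(x) = sqrt(2/pi) * exp(|x| - x**2/2) <= M = sqrt(2*e/pi), so a
    # Laplace draw x is accepted with probability phi(x)/(M*g(x)) = exp(-(|x|-1)**2/2).
    out = []
    while len(out) < size:
        x = rng.laplace()                  # proposal with pdf (1/2) exp(-|x|)
        if rng.random() < np.exp(-(abs(x) - 1.0) ** 2 / 2.0):
            out.append(x)
    return np.array(out)

sample = normal_from_laplace(10_000)
print(sample.mean(), sample.var())         # roughly 0 and 1

The reverse direction fails because exp(x**2/2 − |x|) is unbounded in x, so no finite envelope constant M exists.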

7. (EXTRA CREDIT ONLY) Consider the two marginal pdfs

f(x|0) = I[0 < x < 1]   and   f(x|1) = 1/((1 + x) ln 2) · I[0 < x < 1],

and iid observations X1, X2, ..., Xn from one of these distributions (specified by f(x|θ)). Argue carefully that there exists a non-randomized MP test of H0: θ = 0 vs H1: θ = 1 for any size α ∈ (0, 1) based on X1, ..., Xn. Give an explicit large-n approximate form for such a test for α = .05.
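A numerical sketch of the large-n form (assuming f(x|1) is as reconstructed above; the likelihood ratio is decreasing in T = Σ ln(1 + Xi), so the approximate size-.05 test rejects H0 when T falls below a normal-theory cutoff):

import numpy as np

rng = np.random.default_rng(0)

mu0 = 2 * np.log(2) - 1              # E0[ln(1+X)] when X ~ Uniform(0,1)
var0 = 1 - 2 * np.log(2) ** 2        # Var0[ln(1+X)]
n, z95 = 200, 1.645                  # illustrative n; .95 standard normal quantile
cut = n * mu0 - z95 * np.sqrt(n * var0)

x = rng.random((100_000, n))         # Monte Carlo check of the size under H0
T = np.log1p(x).sum(axis=1)
print((T < cut).mean())              # close to 0.05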