Stat 543 Exam 2 Spring 2016


Stat 543 Exam 2 Spring 2016

I have neither given nor received unauthorized assistance on this exam.

Name Signed _______________________  Date ____________

Name Printed _______________________

This Exam consists of 11 questions. Do at least 10 of the 11 parts of the main exam. I will score your best 10 answers at 10 points apiece (making 100 points possible). There is also, on the last page of the Exam, an "Extra Credit" question that will be scored out of 10 points. Any Extra Credit obtained will be recorded and used at the end of the course at Vardeman's discretion in deciding borderline grades. DO NOT spend time on this question until you are done with the entirety of the regular exam.

1. Below are three pdfs for X: f(x | 1), f(x | 2), and f(x | 3). Use them in the rest of this question.

      x       1     2     3     4     5     6     7
   f(x | 3)  .20   .05   .15   .15   .10   .05   .30
   f(x | 2)  .10   .05   .25   .05   .20   .15   .20
   f(x | 1)  .25   .15   .05   .25   .15   .05   .10

a) For which α are there non-randomized most powerful size α tests of H0: θ = 1 vs H1: θ = 2?

b) Identify a most powerful size .15 test of H0: θ = 1 vs H1: θ = 2.
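For parts a) and b), the Neyman-Pearson ordering can be tabulated directly from the table above. A minimal illustrative Python sketch of that bookkeeping (it only lists the sample points in decreasing order of f(x | 2)/f(x | 1) together with the accumulated probability under f(x | 1)):

    # Order sample points by the likelihood ratio f(x|2)/f(x|1) and
    # accumulate probability under f(x|1) to see which sizes are attainable
    # without randomization.
    f1 = {1: .25, 2: .15, 3: .05, 4: .25, 5: .15, 6: .05, 7: .10}  # f(x|1)
    f2 = {1: .10, 2: .05, 3: .25, 4: .05, 5: .20, 6: .15, 7: .20}  # f(x|2)

    size = 0.0
    for x in sorted(f1, key=lambda x: f2[x] / f1[x], reverse=True):
        size += f1[x]
        print(x, round(f2[x] / f1[x], 2), round(size, 2))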

c) Find a 0-1 loss Bayes test of H0: θ = 1 vs H1: θ = 2 or 3 for a prior distribution g with g(1) = .4, g(2) = .3, and g(3) = .3. (Give the test's value at all 7 values of x.)

2. In this problem we'll use the Exp(λ) distribution with pdf f(x | λ) = λ exp(-λx) I[x > 0]. You may use without proof the facts that if X ~ Exp(λ) and 0 < t, then P[X > t] = exp(-λt), and that if X1 ~ Exp(λ1) is independent of X2 ~ Exp(λ2), then Y = min(X1, X2) ~ Exp(λ1 + λ2).

In a so-called "competing risks" context, an individual or item has a lifetime Z = min(U, V), where U and V are positive times to failure/death from two different causes.

a) For i = 1, ..., n model Ui ~ Exp(λ1) and Vi ~ Exp(λ2), with all the Ui's and Vi's independent. Suppose that what is observed are the iid pairs Wi = (Zi, I[Zi = Ui]). (Note that I[Zi = Ui] = 1 means that what is observed is the value of Ui and the fact that Ui ≤ Vi.) Give likelihood terms f((z, 1) | λ1, λ2) and f((z, 0) | λ1, λ2) for observed pairs wi = (z, 1) and wi = (z, 0).

b) Sometimes a cause of failure may not be recorded, and thus only Zi (and not Wi) is known. Suppose that information on 5 individuals/items is

   w1 = (3, 1), w2 = (7, 1), Z3 = 2, w4 = (3, 0), and w5 = (1, 0).

Suppose further that a Bayesian uses a prior for (λ1, λ2) that is one of independence, with both λ1 and λ2 a priori Exp(1) distributed. Carefully describe a Gibbs sampling algorithm for generating triples (λ1*j, λ2*j, w3,2*j) (iterates for the 2 rates and the unobserved indicator). If it is possible to name a distribution from which a given update must be sampled, do so. At a minimum, give a form for each univariate update distribution up to a multiplicative constant.
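For concreteness, a minimal numerical sketch (in Python) of one such Gibbs sampler is given below. It assumes only the structure stated in the problem: with the Exp(1) = Gamma(1,1) priors the rate updates are conjugate Gamma draws, and the unobserved indicator for item 3 is updated from a Bernoulli distribution with success probability λ1/(λ1 + λ2). Variable names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # data from part b); the cause indicator for item 3 (index 2) is unobserved
    z = np.array([3.0, 7.0, 2.0, 3.0, 1.0])
    delta = np.array([1, 1, 1, 0, 0])   # entry 2 is only a starting value

    lam1, lam2 = 1.0, 1.0               # arbitrary starting values for the rates
    draws = []
    for _ in range(5000):
        n1 = delta.sum()                # failures attributed to cause 1
        n0 = len(z) - n1                # failures attributed to cause 2
        # Exp(1) priors are Gamma(1,1), so the rate updates are Gamma draws
        lam1 = rng.gamma(shape=n1 + 1, scale=1.0 / (z.sum() + 1.0))
        lam2 = rng.gamma(shape=n0 + 1, scale=1.0 / (z.sum() + 1.0))
        # unobserved indicator: P(delta_3 = 1 | lam1, lam2, Z_3) = lam1/(lam1+lam2)
        delta[2] = rng.binomial(1, lam1 / (lam1 + lam2))
        draws.append((lam1, lam2, int(delta[2])))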

c) Completely describe an EM algorithm that can be used to find an MLE of (λ1, λ2) based on the data used in part b). (It is not really necessary to resort to EM here, as the calculus problem is fairly easy. But for purposes of the exam, write out the EM algorithm.)
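Similarly, a minimal Python sketch of what such an iteration could look like (same assumed model and data as in the Gibbs sketch above): the E-step fills in the unobserved indicator for item 3 with its conditional expectation λ1/(λ1 + λ2), and the M-step maximizes the resulting expected complete-data log likelihood.

    import numpy as np

    z = np.array([3.0, 7.0, 2.0, 3.0, 1.0])
    d_obs = np.array([1.0, 1.0, np.nan, 0.0, 0.0])   # np.nan marks the missing indicator

    lam1, lam2 = 1.0, 1.0                            # any positive starting values
    for _ in range(200):
        # E-step: fill in the missing cause indicator with its conditional mean
        d = np.where(np.isnan(d_obs), lam1 / (lam1 + lam2), d_obs)
        # M-step: maximize the expected complete-data log likelihood
        lam1 = d.sum() / z.sum()
        lam2 = (len(z) - d.sum()) / z.sum()
    print(lam1, lam2)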

3. Suppose that X1, X2, ..., Xn are iid Ber(p). Let S_m = X1 + X2 + ... + Xm. A statistician expecting to have only n - 1 observations available for inference develops an estimator δ(S_{n-1}) for p under SEL. (This estimator may well be a biased estimator.) In fact, n observations will be available. Find another estimator of p, say δ*, that you are sure will have smaller MSE_p than δ(S_{n-1}) no matter what is the value of p in (0, 1).

4. Suppose that X1, X2, ..., Xn are iid with marginal pdf f(x | θ) = θ x^(θ-1) I[0 < x < 1].

a) Find a lower bound for the variance of any unbiased estimator of sin(θ) based on X (the vector of n observations).

b) Do you expect there to exist an unbiased estimator of sin(θ) achieving your bound from a)? Explain!
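Whatever function of θ is being estimated, part a) turns on the per-observation Fisher information, which is fixed by the stated pdf. As a worked intermediate step (using that -log X ~ Exp(θ) when X has pdf f(x | θ)):

\[
\log f(x\mid\theta) = \log\theta + (\theta-1)\log x,
\qquad
\frac{\partial}{\partial\theta}\log f(x\mid\theta) = \frac{1}{\theta} + \log x,
\]
\[
I_1(\theta) = \operatorname{Var}_\theta\!\Big(\frac{1}{\theta} + \log X\Big)
            = \operatorname{Var}_\theta(\log X) = \frac{1}{\theta^2},
\]

so the Cramér-Rao lower bound for unbiased estimators of a differentiable g(θ) based on n observations is [g'(θ)]^2 / (n I_1(θ)) = θ^2 [g'(θ)]^2 / n.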

5. Suppose that X1, X2, ..., Xn (perhaps for "acceptance sampling" purposes) are iid Binomial(m, p), and P_p[X1 = 0] = (1 - p)^m is of interest. Find a UMVUE for this quantity and say why you know your estimator is UMVU. (Hint: You may find it useful to think of each of the Xi's as the sum Yi1 + Yi2 + ... + Yim of mn independent variables Yij, each Ber(p).)

6. Argue carefully that you could use iid double exponential observations (i.e. ones with marginal pdf (1/2) exp(-|x|) on ℝ) to generate a standard normal random variable via the rejection algorithm, but that you could NOT use iid standard normal random variables to generate a double exponential random variable via the rejection algorithm.
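As a small illustration of the feasible direction (a Python sketch, not the argument itself): the ratio of the standard normal density to (1/2) exp(-|x|) is bounded, with maximum M = sqrt(2e/π) attained at |x| = 1, so a rejection sampler with that envelope accepts a double exponential proposal x with probability exp(-(|x| - 1)^2 / 2).

    import numpy as np

    rng = np.random.default_rng(0)

    def rnorm_by_rejection(n):
        # Standard normal draws from double exponential (Laplace) proposals.
        # Envelope constant M = sqrt(2e/pi); the acceptance probability is
        # phi(x) / (M * (1/2) * exp(-|x|)) = exp(-(|x| - 1)**2 / 2).
        out = []
        while len(out) < n:
            x = rng.laplace(loc=0.0, scale=1.0)      # proposal draw
            if rng.uniform() < np.exp(-0.5 * (abs(x) - 1.0) ** 2):
                out.append(x)
        return np.array(out)

    sample = rnorm_by_rejection(10_000)
    print(sample.mean(), sample.std())               # should be near 0 and 1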

7. (EXTRA CREDIT ONLY) Consider the two marginal pdfs

   f(x | 0) = I[0 < x < 1]  and  f(x | 1) = (ln 2) 2^x I[0 < x < 1],

and iid observations X1, X2, ..., Xn from one of these distributions (specified by f(x | θ)). Argue carefully that, based on X1, ..., Xn, there exists a non-randomized UMP test of H0: θ = 0 vs H1: θ = 1 for any size α in (0, 1). Give an explicit large-n approximate form for such a test for α = .05.