Module 6: Methods of Point Estimation Statistics (OA3102)
1 Module 6: Methods of Point Estimation, Statistics (OA3102). Professor Ron Fricker, Naval Postgraduate School, Monterey, California. Reading assignment: WM&S chapter 9.
2 Goals for this Module
Learn about two methods for finding good point estimators:
- Method of moments
- Maximum likelihood
3 Methods of Point Estimation
The definition of unbiasedness does not help you find unbiased estimators. Here we introduce two useful methods for finding estimators with good properties:
- Method of moments
- Maximum likelihood
Maximum likelihood estimators are often more efficient than method of moments estimators, but the cost is that they can be harder to calculate.
4 Method of Moments
Idea: For a theoretical distribution with k parameters, find parameter estimates so that the first k population moments match the data's sample moments.
Population moments: \mu'_k = E(Y^k). E.g., the first population moment of Y is just the expected value of Y, E(Y).
Sample moments: m'_k = \frac{1}{n} \sum_{i=1}^n Y_i^k. E.g., the first sample moment of Y is just the sample mean, \bar{Y}.
5 Method of Moments
Approach: Choose as estimates those values of the parameters that are solutions of the equations \mu'_k = m'_k, for k = 1, ..., t, where t is the number of parameters.
I.e., find the values of the parameters so that
E(Y) = \mu'_1 = m'_1 = \frac{1}{n} \sum_{i=1}^n Y_i
E(Y^2) = \mu'_2 = m'_2 = \frac{1}{n} \sum_{i=1}^n Y_i^2
...
E(Y^t) = \mu'_t = m'_t = \frac{1}{n} \sum_{i=1}^n Y_i^t
6 Again, What's a Population Moment?
For a distribution with pdf f(y), the kth population moment is \mu'_k = E(Y^k) = \int y^k f(y) \, dy.
So E(Y) is the first population moment, E(Y^2) is the second population moment, etc.
Useful fact: We know Var(Y) = E(Y^2) - [E(Y)]^2, so E(Y^2) = Var(Y) + [E(Y)]^2.
For example, for Y with a normal distribution:
- First population moment: E(Y) = \mu
- Second population moment: E(Y^2) = Var(Y) + [E(Y)]^2 = \sigma^2 + \mu^2
7 And, What's a Sample Moment?
For Y_1, Y_2, ..., Y_n, a random sample from the distribution, the kth sample moment is m'_k = \frac{1}{n} \sum_{i=1}^n Y_i^k.
Thus, the first sample moment is m'_1 = \frac{1}{n} \sum_{i=1}^n Y_i = \bar{Y}, the second sample moment is m'_2 = \frac{1}{n} \sum_{i=1}^n Y_i^2, etc.
8 Steps in Method of Moments
1. Determine how many parameters you need to estimate. E.g., one for the exponential (\lambda), two for the normal (\mu, \sigma^2), etc.
2. Write out as many equations equating the population moments to the sample moments as there are parameters.
3. Express the population moments in terms of the parameters and substitute into your equations.
4. Solve the t equations for the t unknown parameters.
9 Method of Moments by the Numbers for the Normal Distribution
1. The normal has two parameters to estimate: \mu and \sigma^2.
2. So, we need two equations, \mu'_1 = m'_1 and \mu'_2 = m'_2, which are E(Y) = \frac{1}{n} \sum_{i=1}^n Y_i and E(Y^2) = \frac{1}{n} \sum_{i=1}^n Y_i^2.
3. Two slides back we expressed the population moments in terms of the parameters, so substituting we have \mu = \bar{Y} and \sigma^2 + \mu^2 = \frac{1}{n} \sum_{i=1}^n Y_i^2.
4. So, now we have to solve the two equations for the two unknown parameters.
10 Solving for Method of Moments Estimates for the Normal Distribution
Solution: The first equation gives \hat{\mu} = \bar{Y}. Substituting into the second, \hat{\sigma}^2 + \hat{\mu}^2 = \frac{1}{n} \sum_{i=1}^n Y_i^2, so \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n Y_i^2 - \bar{Y}^2 = \frac{1}{n} \sum_{i=1}^n (Y_i - \bar{Y})^2.
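As a numerical sanity check of the normal method-of-moments solution (\hat{\mu} = \bar{Y}, \hat{\sigma}^2 = m'_2 - \bar{Y}^2), here is a small Python sketch. The code is not part of the slides, and the simulated true values (\mu = 5, \sigma = 2) are made up for illustration.

```python
import random

random.seed(1)
n = 10_000
# Simulate a normal sample with (made-up) true mu = 5, sigma = 2
y = [random.gauss(5, 2) for _ in range(n)]

# First and second sample moments
m1 = sum(y) / n
m2 = sum(yi**2 for yi in y) / n

# Method of moments estimates: mu-hat = m1', sigma^2-hat = m2' - (m1')^2
mu_hat = m1
sigma2_hat = m2 - m1**2

print(mu_hat, sigma2_hat)  # should be near 5 and 4
```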
11 Example #2
Let Y_1, Y_2, ..., Y_n be a random sample of service times for n customers at a logistics facility. Assume the underlying distribution is exponential with parameter \lambda.
Since there is only one unknown parameter, we relate E(Y) = \bar{Y} and solve.
For an exponential distribution, we know E(Y) = 1/\lambda. So, we set 1/\lambda = \bar{Y}. Solving, we get the estimator \hat{\lambda} = 1/\bar{Y}.
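A quick simulation of the exponential service-time estimator \hat{\lambda} = 1/\bar{Y} (illustrative Python, not from the slides; the true rate 0.5 is made up):

```python
import random

random.seed(2)
n = 10_000
true_lambda = 0.5  # made-up rate; mean service time is 1/lambda = 2
y = [random.expovariate(true_lambda) for _ in range(n)]

# Method of moments: set E(Y) = 1/lambda equal to the sample mean
lambda_hat = 1 / (sum(y) / n)
print(lambda_hat)  # should be near 0.5
```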
12 Example 9.11
Let Y_1, Y_2, ..., Y_n be a random sample from a uniform distribution over the interval [0, \theta]. Use the method of moments to derive an estimator for \theta.
Solution: Since E(Y) = \theta/2, setting \theta/2 = \bar{Y} and solving gives \hat{\theta} = 2\bar{Y}.
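Example 9.11's moment estimator works out to \hat{\theta} = 2\bar{Y}, since E(Y) = \theta/2 for Y ~ U[0, \theta]. A simulation sketch (Python, not from the slides; \theta = 4 is a made-up value):

```python
import random

random.seed(3)
n = 10_000
theta = 4.0  # made-up true value
y = [random.uniform(0, theta) for _ in range(n)]

# E(Y) = theta/2, so setting theta/2 = Ybar gives theta-hat = 2 * Ybar
theta_hat = 2 * sum(y) / n
print(theta_hat)  # should be near 4
```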
13 Example 9.12 (not req'd to know)
Show that \hat{\theta} = 2\bar{Y}, from Example 9.11, is a consistent estimator for \theta.
Solution: E(\hat{\theta}) = 2E(\bar{Y}) = \theta, and Var(\hat{\theta}) = 4 Var(Y)/n = 4(\theta^2/12)/n = \theta^2/(3n) \to 0 as n \to \infty, so by Chebyshev's inequality \hat{\theta} is consistent.
14 Example 9.13
Let Y_1, Y_2, ..., Y_n be a random sample from a gamma distribution with parameters \alpha and \beta. We know that E(Y) = \alpha\beta and Var(Y) = \alpha\beta^2. Find the moment estimators for \alpha and \beta.
15 Example 9.13 (cont'd)
Solution: The two moment equations are \alpha\beta = m'_1 = \bar{Y} and \alpha\beta^2 + (\alpha\beta)^2 = m'_2 (since E(Y^2) = Var(Y) + [E(Y)]^2). Subtracting \bar{Y}^2 from the second gives \alpha\beta^2 = m'_2 - \bar{Y}^2, so \hat{\beta} = (m'_2 - \bar{Y}^2)/\bar{Y} and \hat{\alpha} = \bar{Y}/\hat{\beta} = \bar{Y}^2/(m'_2 - \bar{Y}^2).
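The gamma moment estimators can be checked by simulation: m'_1 = \bar{Y} estimates \alpha\beta and m'_2 - \bar{Y}^2 estimates \alpha\beta^2, giving \hat{\beta} = (m'_2 - \bar{Y}^2)/\bar{Y} and \hat{\alpha} = \bar{Y}/\hat{\beta}. Illustrative Python (not from the slides; the true values \alpha = 3, \beta = 2 are made up):

```python
import random

random.seed(4)
n = 50_000
alpha, beta = 3.0, 2.0  # made-up true values
y = [random.gammavariate(alpha, beta) for _ in range(n)]

m1 = sum(y) / n                  # sample mean, estimates alpha * beta
m2 = sum(yi**2 for yi in y) / n  # second sample moment
s2 = m2 - m1**2                  # estimates Var(Y) = alpha * beta^2

# Solving alpha*beta = m1 and alpha*beta^2 = s2 for the two parameters:
beta_hat = s2 / m1
alpha_hat = m1 / beta_hat
print(alpha_hat, beta_hat)  # should be near 3 and 2
```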
16 On Method of Moments Estimators
The main benefit of method of moments (MoM) estimators is that they are often easy to derive.
Also, they're consistent estimators (a topic we haven't really covered; just know it is another good characteristic for an estimator).
However, MoM estimators are sometimes not very efficient (another characteristic we haven't really covered; it means they can have larger standard errors than other estimators).
Finally, in many cases, MoM estimators are biased.
17 Maximum Likelihood
Idea: For a theoretical distribution, find the parameters of the distribution that make the observed data most likely.
In a picture: Which distribution is more likely to have generated the observed data?
18 Illustrating the Idea
You've got an urn with three balls in it, and each ball is either red or white. You pull out two and observe they're both red.
Let r be the total number of red balls. Before we sampled the two balls, r \in \{0, 1, 2, 3\}. After collecting our data, we know r \in \{2, 3\}.
How do we decide on a good estimate of the total number of red balls?
19 Illustrating the Idea (continued)
The probability of observing y red balls out of n balls sampled from an urn with r red balls and N total balls follows a hypergeometric distribution (see chpt. 3.7):
Pr(Y = y) = \binom{r}{y} \binom{N-r}{n-y} / \binom{N}{n}
Here we have N = 3 balls, and we have observed y = n = 2 red balls.
20 Illustrating the Idea (continued)
So, we know Pr(Y = 2) = \binom{r}{2} \binom{3-r}{0} / \binom{3}{2}, and we ask: what value of r maximizes the probability of having observed two red balls?
For r = 2: Pr(Y = 2) = \binom{2}{2} \binom{1}{0} / \binom{3}{2} = 1/3.
For r = 3: Pr(Y = 2) = \binom{3}{2} \binom{0}{0} / \binom{3}{2} = 1.
Thus, we decide \hat{r} = 3 because it makes the observed data most likely.
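The urn calculation is easy to reproduce; a minimal Python sketch of the likelihood comparison (the code is mine, not part of the slides):

```python
from math import comb

N, n, y = 3, 2, 2  # 3 balls total, sampled 2, observed 2 red

def prob(r):
    """Pr(Y = y) for an urn with r red balls (hypergeometric pmf)."""
    return comb(r, y) * comb(N - r, n - y) / comb(N, n)

# Only r = 2 and r = 3 are consistent with seeing two red balls
likelihoods = {r: prob(r) for r in (2, 3)}
print(likelihoods)  # {2: 0.333..., 3: 1.0}

r_hat = max(likelihoods, key=likelihoods.get)
print(r_hat)  # 3
```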
21 Maximum Likelihood (1)
Let Y_1, Y_2, ..., Y_n have a joint distribution f(y_1, ..., y_n; \theta_1, ..., \theta_k) with parameters \theta_1, \theta_2, ..., \theta_k.
For observed sample values y_1, y_2, ..., y_n, f(y_1, y_2, ..., y_n; \theta_1, \theta_2, ..., \theta_k) is called the likelihood function when treated as a function of \theta_1, \theta_2, ..., \theta_k. We write it as L(y_1, y_2, ..., y_n; \theta_1, \theta_2, ..., \theta_k).
The maximum likelihood estimators (MLEs) \hat{\theta}_1, \hat{\theta}_2, ..., \hat{\theta}_k are those values that maximize the likelihood function, so that
L(y_1, y_2, ..., y_n; \hat{\theta}_1, \hat{\theta}_2, ..., \hat{\theta}_k) \ge L(y_1, y_2, ..., y_n; \theta_1, \theta_2, ..., \theta_k)
for any values of \theta_1, ..., \theta_k.
22 Maximum Likelihood (2)
The idea is to find parameter estimates that make the observed data most likely.
If Y_1, Y_2, ..., Y_n are iid, the likelihood function is
L(Y_1, ..., Y_n; \theta_1, \theta_2, ..., \theta_k) = \prod_{i=1}^n f(Y_i; \theta_1, \theta_2, ..., \theta_k)
It is usually easier to maximize the log-likelihood:
l(Y_1, ..., Y_n; \theta_1, \theta_2, ..., \theta_k) = \sum_{i=1}^n \ln f(Y_i; \theta_1, \theta_2, ..., \theta_k)
To maximize, remember your calculus:
- Take the (partial) derivative(s) of the log-likelihood function with respect to the \theta's
- Set the derivative(s) equal to zero and solve for the \theta's
23 Example: MLE for Exponential
Given a random sample Y_1, Y_2, ..., Y_n from an exponential distribution, find the MLE for \lambda.
Solution: L(\lambda) = \prod_{i=1}^n \lambda e^{-\lambda Y_i} = \lambda^n e^{-\lambda \sum Y_i}, so l(\lambda) = n \ln \lambda - \lambda \sum_{i=1}^n Y_i. Setting dl/d\lambda = n/\lambda - \sum_{i=1}^n Y_i = 0 gives \hat{\lambda} = n / \sum_{i=1}^n Y_i = 1/\bar{Y}.
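The exponential MLE works out to \hat{\lambda} = n / \sum Y_i = 1/\bar{Y}. A simulation sketch that also checks the log-likelihood is indeed highest at the MLE (Python, not from the slides; the true \lambda = 1.5 is made up):

```python
import math
import random

random.seed(5)
n = 5_000
y = [random.expovariate(1.5) for _ in range(n)]  # made-up true rate 1.5

# Closed-form MLE from setting d/dlambda [n*ln(lambda) - lambda*sum(y)] = 0
lambda_mle = n / sum(y)

def loglik(lam):
    """Exponential log-likelihood: n*ln(lambda) - lambda * sum(y)."""
    return n * math.log(lam) - lam * sum(y)

# Sanity check: the log-likelihood at the MLE beats nearby values
assert loglik(lambda_mle) >= loglik(lambda_mle * 0.9)
assert loglik(lambda_mle) >= loglik(lambda_mle * 1.1)
print(lambda_mle)  # should be near 1.5
```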
24 Example 9.14
Let Y_1, Y_2, ..., Y_n be iid according to a Bernoulli distribution with probability p. That is,
Pr(Y = y) = p for y = 1 and Pr(Y = y) = 1 - p for y = 0.
Find the MLE for p.
25 Example 9.14 (continued)
Solution: L(p) = \prod_{i=1}^n p^{Y_i} (1-p)^{1-Y_i} = p^{\sum Y_i} (1-p)^{n - \sum Y_i}, so l(p) = (\sum Y_i) \ln p + (n - \sum Y_i) \ln(1-p). Setting dl/dp = \sum Y_i / p - (n - \sum Y_i)/(1-p) = 0 gives \hat{p} = \bar{Y}.
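The Bernoulli MLE works out to \hat{p} = \bar{Y}, the sample proportion of successes. A simulation sketch (Python, not from the slides; the true p = 0.3 is made up):

```python
import random

random.seed(6)
n = 10_000
p = 0.3  # made-up true value
y = [1 if random.random() < p else 0 for _ in range(n)]

# MLE: p-hat = Ybar, the sample proportion of 1's
p_hat = sum(y) / n
print(p_hat)  # should be near 0.3
```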
26 Example 9.15
Given a random sample Y_1, Y_2, ..., Y_n from a normal distribution, find the MLEs for \mu and \sigma^2.
27 Example 9.15 (continued)
Solution: l(\mu, \sigma^2) = -\frac{n}{2} \ln(2\pi\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^n (Y_i - \mu)^2. Setting \partial l / \partial \mu = 0 gives \hat{\mu} = \bar{Y}; setting \partial l / \partial \sigma^2 = 0 gives \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n (Y_i - \bar{Y})^2.
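The normal MLEs are \hat{\mu} = \bar{Y} and \hat{\sigma}^2 = \frac{1}{n} \sum (Y_i - \bar{Y})^2 (note the divisor n, not n-1). A simulation sketch (Python, not from the slides; \mu = 10, \sigma = 3 are made up):

```python
import random

random.seed(7)
n = 10_000
y = [random.gauss(10, 3) for _ in range(n)]  # made-up true mu = 10, sigma = 3

ybar = sum(y) / n
# MLEs: mu-hat = Ybar; sigma^2-hat uses divisor n (not the n-1 of the
# usual unbiased sample variance)
mu_mle = ybar
sigma2_mle = sum((yi - ybar) ** 2 for yi in y) / n
print(mu_mle, sigma2_mle)  # should be near 10 and 9
```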
28 Example 9.16: Sometimes Calculus Does Not Work
Back to the reaction time problem: Y ~ U[0, \theta] and we want to estimate \theta with a random sample Y_1, Y_2, ..., Y_n.
Since f(y; \theta) = 1/\theta for 0 < y < \theta (and 0 otherwise):
L(y_1, ..., y_n; \theta) = 1/\theta^n if 0 \le y_1 \le \theta, ..., 0 \le y_n \le \theta, and 0 otherwise.
But the maximum occurs at a point of discontinuity, so calculus does not work. Graphically, though, it's clear that \hat{\theta} = \max(Y_i).
* Figure from Probability and Statistics for Engineering and the Sciences, 7th ed., Duxbury Press, 2008.
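Because the likelihood 1/\theta^n is decreasing in \theta over the region where it is nonzero, it is maximized by the smallest \theta consistent with the data, namely max(Y_i). A sketch (Python, not from the slides; \theta = 5 is made up):

```python
import random

random.seed(8)
theta = 5.0  # made-up true value
y = [random.uniform(0, theta) for _ in range(1_000)]

# The likelihood 1/theta^n decreases in theta, so it is maximized by the
# smallest theta consistent with the data: theta-hat = max(Y_i)
theta_mle = max(y)
print(theta_mle)  # just below 5
```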
29 Large Sample Behavior of MLEs
Under very general conditions on the joint distribution, when n is large:
- The MLE of any parameter \theta is approximately unbiased
- The variance of the MLE of \theta is nearly as small as can be achieved by any other estimator
That is, the MLE of \theta is approximately the MVUE of \theta. A good thing!
30 MLEs for Functions of Parameters
The Invariance Property: Let \hat{\theta}_1, \hat{\theta}_2, ..., \hat{\theta}_k be the MLEs of the parameters \theta_1, \theta_2, ..., \theta_k. Then the MLE of any one-to-one function h(\theta_1, \theta_2, ..., \theta_k) of these parameters is the function of the MLEs: h(\hat{\theta}_1, \hat{\theta}_2, ..., \hat{\theta}_k).
Example: As we showed, in the normal case the MLEs for \mu and \sigma^2 are \hat{\mu} = \bar{Y} and \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n (Y_i - \bar{Y})^2.
So, if we want the MLE for \sigma, we have h(\mu, \sigma^2) = \sqrt{\sigma^2}. Thus: \hat{\sigma} = \sqrt{\frac{1}{n} \sum_{i=1}^n (Y_i - \bar{Y})^2}.
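The invariance property in the normal example can be checked numerically: compute the MLE of \sigma^2 and take its square root. Sketch (Python, not from the slides; the true \sigma = 2 is made up):

```python
import math
import random

random.seed(9)
n = 10_000
y = [random.gauss(0, 2) for _ in range(n)]  # made-up true sigma = 2

ybar = sum(y) / n
sigma2_mle = sum((yi - ybar) ** 2 for yi in y) / n

# Invariance: the MLE of sigma = h(sigma^2) = sqrt(sigma^2) is the square
# root of the MLE of sigma^2
sigma_mle = math.sqrt(sigma2_mle)
print(sigma_mle)  # should be near 2
```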
31 What We Covered in this Module
Learned about two methods for finding good point estimators:
- Method of moments
- Maximum likelihood
32 Homework
WM&S chapter 9. Required exercises: 69, 71, 76, 80 part a, 81, 88.
Also, do the following additional problem: Let Y_1, Y_2, ..., Y_n be a random sample from a gamma distribution with parameters \alpha and \beta. Find the method of moments estimators for \alpha and \beta.
Extra credit: 79, 92.
Hints and instructions:
- Ex. 81: Ignore the hint in the book. Instead, use the Invariance Principle we discussed in class.
33 Homework
Hints and instructions continued:
- Ex. 88: To compare estimators, do so empirically with \theta = 1.
  - To generate a random observation from the pdf f(y) = 2y, 0 < y < 1, just take the square root of a random uniform.
  - Then, first, empirically demonstrate that the estimators are unbiased by showing the sample mean of a large number of \hat{\theta} values is very close to 1.
  - Second, estimate the standard errors for both estimators via simulation and see if the standard error of one estimator is smaller than the standard error of the other.
Math 6, roblem set Due //. (4..8) Determine the mean variance of the mean X of a rom sample of size 9 from a distribution having pdf f(x) = 4x, < x
More informationMultivariate distributions
CHAPTER Multivariate distributions.. Introduction We want to discuss collections of random variables (X, X,..., X n ), which are known as random vectors. In the discrete case, we can define the density
More informationThis midterm covers Chapters 6 and 7 in WMS (and the notes). The following problems are stratified by chapter.
This midterm covers Chapters 6 and 7 in WMS (and the notes). The following problems are stratified by chapter. Chapter 6 Problems 1. Suppose that Y U(0, 2) so that the probability density function (pdf)
More informationLinear Regression with 1 Regressor. Introduction to Econometrics Spring 2012 Ken Simons
Linear Regression with 1 Regressor Introduction to Econometrics Spring 2012 Ken Simons Linear Regression with 1 Regressor 1. The regression equation 2. Estimating the equation 3. Assumptions required for
More informationIntroduction to Simple Linear Regression
Introduction to Simple Linear Regression Yang Feng http://www.stat.columbia.edu/~yangfeng Yang Feng (Columbia University) Introduction to Simple Linear Regression 1 / 68 About me Faculty in the Department
More informationCS 361: Probability & Statistics
March 14, 2018 CS 361: Probability & Statistics Inference The prior From Bayes rule, we know that we can express our function of interest as Likelihood Prior Posterior The right hand side contains the
More informationAn introduction to plotting data
An introduction to plotting data Eric D. Black California Institute of Technology v2.0 1 Introduction Plotting data is one of the essential skills every scientist must have. We use it on a near-daily basis
More informationMathematical statistics
October 1 st, 2018 Lecture 11: Sufficient statistic Where are we? Week 1 Week 2 Week 4 Week 7 Week 10 Week 14 Probability reviews Chapter 6: Statistics and Sampling Distributions Chapter 7: Point Estimation
More informationMathematics 375 Probability and Statistics I Final Examination Solutions December 14, 2009
Mathematics 375 Probability and Statistics I Final Examination Solutions December 4, 9 Directions Do all work in the blue exam booklet. There are possible regular points and possible Extra Credit points.
More informationChapter 11. Regression with a Binary Dependent Variable
Chapter 11 Regression with a Binary Dependent Variable 2 Regression with a Binary Dependent Variable (SW Chapter 11) So far the dependent variable (Y) has been continuous: district-wide average test score
More informationMathematics Qualifying Examination January 2015 STAT Mathematical Statistics
Mathematics Qualifying Examination January 2015 STAT 52800 - Mathematical Statistics NOTE: Answer all questions completely and justify your derivations and steps. A calculator and statistical tables (normal,
More informationMLE and GMM. Li Zhao, SJTU. Spring, Li Zhao MLE and GMM 1 / 22
MLE and GMM Li Zhao, SJTU Spring, 2017 Li Zhao MLE and GMM 1 / 22 Outline 1 MLE 2 GMM 3 Binary Choice Models Li Zhao MLE and GMM 2 / 22 Maximum Likelihood Estimation - Introduction For a linear model y
More informationSTAT 512 sp 2018 Summary Sheet
STAT 5 sp 08 Summary Sheet Karl B. Gregory Spring 08. Transformations of a random variable Let X be a rv with support X and let g be a function mapping X to Y with inverse mapping g (A = {x X : g(x A}
More informationNCERT solution for Integers-2
NCERT solution for Integers-2 1 Exercise 6.2 Question 1 Using the number line write the integer which is: (a) 3 more than 5 (b) 5 more than 5 (c) 6 less than 2 (d) 3 less than 2 More means moving right
More informationVariations. ECE 6540, Lecture 10 Maximum Likelihood Estimation
Variations ECE 6540, Lecture 10 Last Time BLUE (Best Linear Unbiased Estimator) Formulation Advantages Disadvantages 2 The BLUE A simplification Assume the estimator is a linear system For a single parameter
More information