Statistical inference


Contents: 1. Main definitions 2. Estimation 3. Testing

L. Trapani, MSc Induction - Statistical inference

1 Introduction: definitions and preliminary theory

In this chapter, we shall spell out some important definitions and two important results in statistics, namely the Law of Large Numbers (LLN) and the Central Limit Theorem (CLT). Both results are frequently used to prove properties of estimators and, even though a profound knowledge of them is not necessary, they are worth considering.

The Law of Large Numbers

To state this result formally, consider:
- a sample Y_1, ..., Y_n of independent and identically distributed (iid) random variables, with E[Y_j] = μ;
- the arithmetic average Ȳ = (1/n) Σ_{j=1}^n Y_j.

Then it holds that

    lim_{n→∞} P[ |Ȳ − μ| > ε ] = 0   for any ε > 0

Some comments:
- the LLN is an asymptotic result, i.e. it describes a situation whereby the sample size is very large;
- roughly speaking, the LLN states that when one estimates the expected value μ by means of the average, there is a zero chance that the average and the expected value differ by more than an arbitrarily small number: thus, there is a zero chance that the average misses the true value of the expected value μ;
- strictly speaking, this result holds if one has an infinite number of observations.

It is indeed true that the LLN holds exactly only if one has an infinite number of observations; however, when the number of observations is sufficiently large, the LLN is a good approximation of the degree of accuracy of the average as an estimator of the mean.

Some terminology: the average (as the sample size n grows large) converges in probability to the expected value; this is denoted as

    plim Ȳ = μ   or, equivalently,   Ȳ →p μ
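As a quick illustration (not part of the original slides), the LLN can be checked by simulation. The exponential distribution, the value of μ and the sample sizes below are all arbitrary choices made for the example:

```python
import random
import statistics

# Illustrative sketch: draw iid samples with expected value mu = 2 and watch
# the sample average approach mu as n grows, as the LLN predicts.
random.seed(42)
mu = 2.0

def sample_average(n):
    # average of n iid exponential draws with mean mu (the distribution is
    # arbitrary: the LLN only needs iid data with a finite expected value)
    return statistics.fmean(random.expovariate(1 / mu) for _ in range(n))

err_small = abs(sample_average(50) - mu)
err_large = abs(sample_average(50_000) - mu)
print(err_small, err_large)  # the large-n error is very close to zero
```

Typically the second error is an order of magnitude smaller than the first, in line with the fact that the dispersion of the average shrinks as n grows.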

The Central Limit Theorem

We know that, according to the LLN, the expected value of an iid population can be estimated arbitrarily accurately as the sample size n grows large. It would be interesting to know more about the behaviour of the estimation error: in other words, as the sample size n increases, what happens to Ȳ − μ?

It is well known, from the LLN, that this quantity will tend to zero. More informatively, the Central Limit Theorem (in its basic version) states that: given a sample Y_1, ..., Y_n of independent and identically distributed (iid) random variables, with E[Y_j] = μ and finite variance equal to σ², it holds (approximately, for large n) that

    √n (Ȳ − μ) ~ N(0, σ²)

Some comments:
- the CLT is again an asymptotic result;
- roughly speaking, the CLT states that when estimating the expected value μ by means of the average, the estimation error, magnified by √n, has a normal distribution.

This is a very powerful and versatile result:
- it refers to an estimation error: not only do we know that this error will collapse to zero (LLN), but we also know along which pattern it will collapse to zero;
- irrespective of the true distribution of the random variables (which we do not need to know), the estimation error will always have, asymptotically, a normal distribution, i.e. a distribution which is well known.

More precisely, note that (with a slight abuse of notation)

    Ȳ ~ N(μ, σ²/n),   with σ²/n → 0 as n → ∞

Some terminology: the (scaled) estimation error is said to converge in distribution to a normal random variable; this is denoted as

    √n (Ȳ − μ) →d N(0, σ²)   or   √n (Ȳ − μ)/σ →d N(0, 1)
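The CLT can also be illustrated by simulation (an example added here, not taken from the slides). For Uniform(0,1) data, μ = 0.5 and σ² = 1/12; the scaled errors √n(Ȳ − μ) computed over many replications should then have mean close to 0 and standard deviation close to σ, whatever the shape of the underlying distribution. The sample size and number of replications below are arbitrary:

```python
import math
import random
import statistics

# Hypothetical illustration of the CLT with Uniform(0,1) data:
# mu = 0.5, sigma^2 = 1/12.
random.seed(0)
n, reps = 400, 2_000
mu, sigma = 0.5, math.sqrt(1 / 12)

scaled_errors = []
for _ in range(reps):
    ybar = statistics.fmean(random.random() for _ in range(n))
    scaled_errors.append(math.sqrt(n) * (ybar - mu))

# mean close to 0 and standard deviation close to sigma = 0.2887
print(statistics.fmean(scaled_errors), statistics.pstdev(scaled_errors))
```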

2 The properties of estimators

In this chapter, we shall provide only definitions of the most commonly studied properties of estimators. These definitions are of paramount importance, and will occur very frequently in econometrics.

Estimation

In the previous section, the notions of random variable and PDF were discussed; consider e.g. the normal case, X ~ N[μ, σ²]. Suppose that the PDF of a random variable Y has the form ƒ(y; ϑ): this could be e.g. the normal distribution. The PDF depends on one or more parameters ϑ = (ϑ_1, ..., ϑ_k), which lie within a set of possible values, say Ω, called the parameter space. The goal of estimation is finding a good guess (an estimate) for ϑ.

Consider the random variable Y, with PDF ƒ(y; ϑ), where ϑ is a parameter that we would like to estimate. Let Y_1, ..., Y_n be a sample from this population: in other words, Y_1, ..., Y_n is a collection of n observations drawn from the random variable Y (e.g. the returns on an asset observed over 100 days); the number of observations n is called the sample size. Then an estimator of ϑ is a function, or rule, of the form

    ϑ̂ = ϑ̂(Y_1, ..., Y_n)

Thus it can be noted that:
- the estimator is a function (or a transformation) of the observations Y_1, ..., Y_n;
- the estimator itself is a random variable, because it is a function/transformation of random variables.

There are various techniques to compute an estimator; to name the most frequently employed ones: (Ordinary) Least Squares; (Generalised) Method of Moments; Maximum Likelihood.

The definition of estimator given above is very general. Thus, various estimators could potentially be proposed, and it becomes necessary to choose the appropriate one. In order to do so, one needs to consider the properties of each estimator and select the estimator that best suits the problem at hand.

To tackle this question, some properties of estimators are to be evaluated. It is common to distinguish these properties into:
- small sample properties: properties/definitions that hold when the sample size n is finite, i.e. not close to infinity (strictly speaking, n is always finite);
- large sample properties: properties that hold (strictly speaking) when the sample size n is very large, actually growing to infinity.

It is worth noting that:
- small sample properties hold for any n, whether small or large;
- even though one can only have a finite number of observations, when n is big enough large sample properties can be a good approximation of the behaviour of the estimates;
- note, as a general rule, that there is no theorem to tell when n is large enough; in particular, it is NOT true that, as sometimes said, n = 30 is large enough to pretend that one has an infinite number of observations.

Small sample properties

Three main properties, known as: unbiasedness; efficiency; precision.

Unbiasedness

Intuition: an estimator is said to be unbiased if on average it yields the true parameter value; in other words, an estimator is unbiased if, were the underlying experiment repeated infinitely many times by drawing samples of size n, the average value of the estimates from all those samples would be ϑ.

Formally, ϑ̂ is unbiased if and only if

    E[ϑ̂] = ϑ   or, equivalently,   δ = E[ϑ̂] − ϑ = 0

The quantity δ is also known as the bias of the estimator.
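A classic example (added here as an illustration, not from the slides) is the sample variance: with divisor n it is a biased estimator of σ², while with divisor n − 1 it is unbiased. Averaging each estimator over many replications mimics the "repeat the experiment infinitely many times" intuition above; all numbers below are arbitrary choices:

```python
import random
import statistics

# Sketch: sample variance with divisor n vs divisor n - 1, for N(0, 1) data
# (true sigma^2 = 1).  Small samples (n = 5) make the bias clearly visible.
random.seed(1)
n, reps = 5, 50_000

biased, unbiased = [], []
for _ in range(reps):
    y = [random.gauss(0, 1) for _ in range(n)]
    m = statistics.fmean(y)
    ss = sum((v - m) ** 2 for v in y)
    biased.append(ss / n)          # E[.] = sigma^2 (n-1)/n, so delta = -sigma^2/n
    unbiased.append(ss / (n - 1))  # E[.] = sigma^2, so delta = 0

# averages over the replications: roughly 0.8 versus roughly 1.0
print(statistics.fmean(biased), statistics.fmean(unbiased))
```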

Exhibit 1: Unbiased and biased estimators. [Figure: two normal density functions with σ = 1, plotted against the sample value; the true value of the population mean is μ = 0, and the dotted line denotes a biased estimator.]

Efficiency

Intuition: if we compare estimators that are unbiased, the estimator with the smaller variance is preferred, and is said to be more efficient. It is important to note that "efficient" estimators do not exist in absolute terms: we only have more efficient estimators; efficiency is a criterion to choose among unbiased estimators.

Formally: ϑ̂_1 is more efficient than ϑ̂_2 if

    Var(ϑ̂_1) < Var(ϑ̂_2)
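For a concrete (assumed, not from the slides) comparison: for normal data, both the sample mean and the sample median are unbiased estimators of μ, but the mean has the smaller variance and is therefore the more efficient of the two. The settings below are arbitrary:

```python
import random
import statistics

# Hypothetical efficiency comparison: sample mean vs sample median for
# N(0, 1) data, both unbiased for mu = 0.
random.seed(7)
n, reps = 25, 20_000

means, medians = [], []
for _ in range(reps):
    y = [random.gauss(0, 1) for _ in range(n)]
    means.append(statistics.fmean(y))
    medians.append(statistics.median(y))

# Var(mean) = 1/n = 0.04 is smaller than Var(median), roughly (pi/2)/n = 0.063
print(statistics.pvariance(means), statistics.pvariance(medians))
```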

Exhibit 2: [Figure: the density function of the more efficient estimator is exemplified by a normal density with σ = 0.5, plotted against the sample value; the dotted line indicates a less efficient estimator with σ = 1.]

Precision

Problem: efficiency is a criterion to select among unbiased estimators; what if one needs to compare biased estimators?

Intuition: the two estimators can be compared on the grounds of both their bias and their variance: both should be small.

Formally, the indicator that is commonly employed is the mean squared error (MSE) of the estimator, defined as

    MSE(ϑ̂) = E[(ϑ̂ − ϑ)²] = Var(ϑ̂) + δ²

Some comments:
- the criterion is (obviously): pick the estimator with the smaller MSE;
- note that the MSE is a criterion that gives equal weight to efficiency and bias (i.e. they are considered equally important);
- the MSE is employed to compare estimators, rather than to assess the goodness of one single estimator.
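The MSE criterion can be seen at work in a small simulation (an illustrative example added here, with arbitrary numbers): a deliberately biased "shrinkage" estimator 0.9·Ȳ trades a little bias for a smaller variance, and in this setup attains a smaller MSE than the plain average:

```python
import random
import statistics

# Sketch of the MSE = Var + delta^2 decomposition: plain average vs a biased
# shrinkage estimator 0.9 * Ybar, for N(0.2, 1) data with samples of size 20.
random.seed(3)
mu, n, reps = 0.2, 20, 40_000

def mse(estimates, true_value):
    delta = statistics.fmean(estimates) - true_value   # simulated bias
    return statistics.pvariance(estimates) + delta ** 2  # Var + delta^2

plain, shrunk = [], []
for _ in range(reps):
    ybar = statistics.fmean(random.gauss(mu, 1) for _ in range(n))
    plain.append(ybar)
    shrunk.append(0.9 * ybar)  # biased, but with a smaller variance

# here the biased estimator wins: roughly 0.041 versus roughly 0.050
print(mse(shrunk, mu), mse(plain, mu))
```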

Large sample properties

Three main properties, known as: consistency; limiting distribution; asymptotic efficiency.

Consistency

Intuition: an estimator is consistent if, as the sample size n increases, the estimated value collapses onto the true value of the parameter ϑ.

Formally (note the link with the LLN):

    lim_{n→∞} P[ |ϑ̂_n − ϑ| > ε ] = 0   for any ε > 0

also written as plim ϑ̂_n = ϑ or ϑ̂_n →p ϑ.

Some comments:
- consistency is a very important property, and it is common to discard estimators that are not consistent;
- consistency can be seen as the large sample counterpart to unbiasedness, but: an estimator does not need to be unbiased to be consistent, and an unbiased estimator is not necessarily consistent;
- roughly speaking, consistency means that, as n grows, the estimator collapses on the true value of the parameter: thus we do have asymptotic unbiasedness; however, unlike unbiasedness, consistency also requires that the dispersion of the estimator around the true value go to zero.
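The "biased but consistent" case can be sketched with an assumed example (not from the slides): the sample variance with divisor n is biased for σ² in small samples, yet it collapses onto the true value as n grows:

```python
import random
import statistics

# Sketch of "biased but consistent": sample variance with divisor n,
# for N(0, 1) data, so the true sigma^2 = 1.
random.seed(2)

def var_div_n(n):
    y = [random.gauss(0, 1) for _ in range(n)]
    m = statistics.fmean(y)
    return sum((v - m) ** 2 for v in y) / n  # divisor n, not n - 1

small_n, large_n = var_div_n(10), var_div_n(100_000)
print(small_n, large_n)  # the large-n value is very close to 1
```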

Limiting distribution

Intuition: the limiting distribution of an estimator is the PDF (or, more precisely, the distribution) of the estimator as n tends to infinity. Example: many estimators have a distribution which is asymptotically normal with mean ϑ (the true value of the parameter) for large values of n; this is referred to as asymptotic normality. In finite samples, the limiting distribution is a (sometimes good) approximation of the true PDF of the estimator.

Asymptotic efficiency

Intuition: this notion is the same as in the small sample case, but here the variance of the estimator is computed for n tending to infinity:
- the asymptotic variance of an estimator is the variance of its limiting distribution;
- once again, asymptotic efficiency is employed to compare estimators, rather than to assess the goodness of one estimator;
- the notion of asymptotic efficiency, similarly to the small sample case, can be applied only to consistent estimators.

Confidence intervals

As we know, save for the special and rather unrealistic case whereby one has an infinite number of observations, an estimator will guess the true value of a parameter only up to an estimation error. Thus, it makes sense to acknowledge that the estimator is not fully accurate: instead of estimating the parameter ϑ by simply using the estimator, it is common to find an interval wherein the true value of the parameter could lie with a certain probability. Using the estimator as a raw guess of ϑ is referred to as point estimation; using an interval is known as interval estimation.

Definition: an interval estimate is a range of values where the true, unobserved ϑ lies, associated with the probability that ϑ lies within this range. In other words, instead of estimating ϑ by means of a single number (the estimator, or point estimate), ϑ is said to range in an interval [a, b] with probability p; p is also referred to as the confidence level.

Most often, an estimator has (at least asymptotically) a normal distribution; therefore, confidence intervals are constructed as follows. Suppose the estimator has distribution

    ϑ̂ ~ N(ϑ, σ²/n)

Then a possible, and commonly employed, confidence interval is

    [ ϑ̂ − 1.96 σ/√n , ϑ̂ + 1.96 σ/√n ]

Some comments:
- here, the confidence level is p = 0.95 (see Lecture 2, slide 73);
- the interval estimate can be read as "there is a 95% chance that the true value of ϑ belongs to the confidence interval";
- note that the width of the interval itself contains some important information, as it tells us about the accuracy of the estimator: the larger the interval, the higher the uncertainty about ϑ, and therefore the worse the precision of the estimator.

Note that the width of the confidence interval depends on three factors:
- the variance σ²: this is usually a characteristic of the data one has; the larger σ², the less precise the estimator;
- the number of observations n: the larger n, the more precise the estimator; in other words, the more information one has, the more accurate the estimates;
- the confidence level p that one chooses: the larger p, the wider the confidence interval.
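The commonly employed 95% interval Ȳ ± 1.96 σ/√n can be computed in a few lines (an illustrative sketch added here; σ is treated as known, and μ, σ and n are arbitrary choices):

```python
import math
import random
import statistics

# Minimal sketch of a 95% confidence interval for the mean, with known sigma.
random.seed(5)
mu, sigma, n = 10.0, 1.0, 100

y = [random.gauss(mu, sigma) for _ in range(n)]
ybar = statistics.fmean(y)
half_width = 1.96 * sigma / math.sqrt(n)
lower, upper = ybar - half_width, ybar + half_width
print((lower, upper))  # an interval of width 2 * 1.96 * 0.1 = 0.392 around Ybar
```

In repeated samples, an interval built this way covers the true μ roughly 95% of the time; note how the width shrinks with √n and grows with σ, matching the three factors listed above.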

3 Hypothesis testing

Hypothesis testing is an issue of paramount importance, and it will unfold during the various modules in econometrics: several tests will be applied, and the mechanism of hypothesis testing will become clearer and clearer. The purpose of this note is twofold. First, some useful definitions are provided. Secondly, a quick (and very easy, but rigorous) rule of thumb to run hypothesis tests is presented.

Hypothesis testing

It is an issue of paramount importance, and no deep theoretical background is needed (which is good news). A test is a procedure/decision rule which makes use of a sample of available data (say Y_1, ..., Y_n) and an estimator of a parameter ϑ to verify whether a certain hypothesis on ϑ holds true or not.

Some important definitions: a test is represented as a choice between
- the hypothesis we would like to verify (the null hypothesis), i.e. a statement about the value of the parameter ϑ; and
- an alternative, which is the negation of the null (the alternative hypothesis).

The representation of a test is (almost universally)

    H0: ϑ ∈ Ω0   versus   H1: ϑ ∈ Ω1,   with Ω0 ∪ Ω1 = Ω and Ω0 ∩ Ω1 = ∅

The null hypothesis can be rejected (i.e. declared false) or not rejected (i.e., with a slight abuse of terminology, we can say it is true).

Sometimes a test can be wrong: there is a chance of rejecting the null when it is true, or of not rejecting it when it is false:
- the probability of rejecting the null when it is true is called the size of the test; we would like it to be small, and usually the test is designed in such a way that the size is 0.05;
- the probability of rejecting the null when it is false is called the power; we would like this to be large, as close to 1 as possible;
- there exists a trade-off between size and power: namely, the smaller the size, the smaller the power.

P-value

This is an extremely important definition. Whenever running a test, the output of any software includes (together with a great deal of other results) a quantity called the p-value:
- the p-value is a number between 0 and 1: it represents a probability;
- with an abuse of terminology, we can employ the following rule of thumb to run a hypothesis test: the p-value is the probability that the null hypothesis is true (based on the data).

Thus, with this rule of thumb, running a test means making the following decision: is the null hypothesis likely enough to be true? If we think/decide it is, then we conclude that we cannot reject the null hypothesis (i.e. we accept it, we think it is true, and we base our subsequent analysis on it as if it were true). We therefore need a criterion to decide whether the null hypothesis is plausible enough.

A commonly employed criterion (which is almost universally accepted, even though completely arbitrary) is:
- if p-value > 0.05, accept the null;
- if p-value < 0.05, reject the null.

Thus, to run a test we need to know TWO elements: the null hypothesis (and, obviously, the alternative) and the p-value.
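The whole procedure can be sketched with a simple two-sided z-test (an assumed example, not from the slides): test H0: μ = 0 against H1: μ ≠ 0 with a known σ = 1, compute the p-value, and apply the 0.05 rule of thumb. The data below are generated with a true mean of 1, so the test should reject:

```python
import math
import random
import statistics

# Illustrative two-sided z-test of H0: mu = 0, with sigma = 1 known.
random.seed(11)
n = 50
y = [random.gauss(1.0, 1) for _ in range(n)]  # the data actually have mu = 1

z = math.sqrt(n) * statistics.fmean(y)       # test statistic under H0: mu = 0
p_value = math.erfc(abs(z) / math.sqrt(2))   # P(|N(0,1)| > |z|), two-sided
decision = "reject H0" if p_value < 0.05 else "do not reject H0"
print(round(p_value, 4), decision)
```

In practice, of course, the software computes the p-value for us; the only two ingredients we must supply are the null hypothesis and the 0.05 cut-off.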

THE LAST SLIDE

Nearly, but not quite: there is still an optional set of Q&A sessions. But, if I don't see you, good luck for everything: enjoy your MSc and have a happy life!


More information

Regression #3: Properties of OLS Estimator

Regression #3: Properties of OLS Estimator Regression #3: Properties of OLS Estimator Econ 671 Purdue University Justin L. Tobias (Purdue) Regression #3 1 / 20 Introduction In this lecture, we establish some desirable properties associated with

More information

Probability and Statistics

Probability and Statistics Probability and Statistics Kristel Van Steen, PhD 2 Montefiore Institute - Systems and Modeling GIGA - Bioinformatics ULg kristel.vansteen@ulg.ac.be CHAPTER 4: IT IS ALL ABOUT DATA 4a - 1 CHAPTER 4: IT

More information

Business Statistics. Lecture 10: Course Review

Business Statistics. Lecture 10: Course Review Business Statistics Lecture 10: Course Review 1 Descriptive Statistics for Continuous Data Numerical Summaries Location: mean, median Spread or variability: variance, standard deviation, range, percentiles,

More information

Review. December 4 th, Review

Review. December 4 th, Review December 4 th, 2017 Att. Final exam: Course evaluation Friday, 12/14/2018, 10:30am 12:30pm Gore Hall 115 Overview Week 2 Week 4 Week 7 Week 10 Week 12 Chapter 6: Statistics and Sampling Distributions Chapter

More information

Terminology Suppose we have N observations {x(n)} N 1. Estimators as Random Variables. {x(n)} N 1

Terminology Suppose we have N observations {x(n)} N 1. Estimators as Random Variables. {x(n)} N 1 Estimation Theory Overview Properties Bias, Variance, and Mean Square Error Cramér-Rao lower bound Maximum likelihood Consistency Confidence intervals Properties of the mean estimator Properties of the

More information

Hypothesis Testing. 1 Definitions of test statistics. CB: chapter 8; section 10.3

Hypothesis Testing. 1 Definitions of test statistics. CB: chapter 8; section 10.3 Hypothesis Testing CB: chapter 8; section 0.3 Hypothesis: statement about an unknown population parameter Examples: The average age of males in Sweden is 7. (statement about population mean) The lowest

More information

MA Advanced Econometrics: Applying Least Squares to Time Series

MA Advanced Econometrics: Applying Least Squares to Time Series MA Advanced Econometrics: Applying Least Squares to Time Series Karl Whelan School of Economics, UCD February 15, 2011 Karl Whelan (UCD) Time Series February 15, 2011 1 / 24 Part I Time Series: Standard

More information

ACE 564 Spring Lecture 8. Violations of Basic Assumptions I: Multicollinearity and Non-Sample Information. by Professor Scott H.

ACE 564 Spring Lecture 8. Violations of Basic Assumptions I: Multicollinearity and Non-Sample Information. by Professor Scott H. ACE 564 Spring 2006 Lecture 8 Violations of Basic Assumptions I: Multicollinearity and Non-Sample Information by Professor Scott H. Irwin Readings: Griffiths, Hill and Judge. "Collinear Economic Variables,

More information

Business Statistics. Lecture 5: Confidence Intervals

Business Statistics. Lecture 5: Confidence Intervals Business Statistics Lecture 5: Confidence Intervals Goals for this Lecture Confidence intervals The t distribution 2 Welcome to Interval Estimation! Moments Mean 815.0340 Std Dev 0.8923 Std Error Mean

More information

Bayesian vs frequentist techniques for the analysis of binary outcome data

Bayesian vs frequentist techniques for the analysis of binary outcome data 1 Bayesian vs frequentist techniques for the analysis of binary outcome data By M. Stapleton Abstract We compare Bayesian and frequentist techniques for analysing binary outcome data. Such data are commonly

More information

the time it takes until a radioactive substance undergoes a decay

the time it takes until a radioactive substance undergoes a decay 1 Probabilities 1.1 Experiments with randomness Wewillusethetermexperimentinaverygeneralwaytorefertosomeprocess that produces a random outcome. Examples: (Ask class for some first) Here are some discrete

More information

Preliminary Statistics. Lecture 5: Hypothesis Testing

Preliminary Statistics. Lecture 5: Hypothesis Testing Preliminary Statistics Lecture 5: Hypothesis Testing Rory Macqueen (rm43@soas.ac.uk), September 2015 Outline Elements/Terminology of Hypothesis Testing Types of Errors Procedure of Testing Significance

More information

Introductory Econometrics. Review of statistics (Part II: Inference)

Introductory Econometrics. Review of statistics (Part II: Inference) Introductory Econometrics Review of statistics (Part II: Inference) Jun Ma School of Economics Renmin University of China October 1, 2018 1/16 Null and alternative hypotheses Usually, we have two competing

More information

Least Squares Estimation-Finite-Sample Properties

Least Squares Estimation-Finite-Sample Properties Least Squares Estimation-Finite-Sample Properties Ping Yu School of Economics and Finance The University of Hong Kong Ping Yu (HKU) Finite-Sample 1 / 29 Terminology and Assumptions 1 Terminology and Assumptions

More information

2 Prediction and Analysis of Variance

2 Prediction and Analysis of Variance 2 Prediction and Analysis of Variance Reading: Chapters and 2 of Kennedy A Guide to Econometrics Achen, Christopher H. Interpreting and Using Regression (London: Sage, 982). Chapter 4 of Andy Field, Discovering

More information

Statistical Distribution Assumptions of General Linear Models

Statistical Distribution Assumptions of General Linear Models Statistical Distribution Assumptions of General Linear Models Applied Multilevel Models for Cross Sectional Data Lecture 4 ICPSR Summer Workshop University of Colorado Boulder Lecture 4: Statistical Distributions

More information

Econometrics Summary Algebraic and Statistical Preliminaries

Econometrics Summary Algebraic and Statistical Preliminaries Econometrics Summary Algebraic and Statistical Preliminaries Elasticity: The point elasticity of Y with respect to L is given by α = ( Y/ L)/(Y/L). The arc elasticity is given by ( Y/ L)/(Y/L), when L

More information

MATH2206 Prob Stat/20.Jan Weekly Review 1-2

MATH2206 Prob Stat/20.Jan Weekly Review 1-2 MATH2206 Prob Stat/20.Jan.2017 Weekly Review 1-2 This week I explained the idea behind the formula of the well-known statistic standard deviation so that it is clear now why it is a measure of dispersion

More information

Parameter estimation! and! forecasting! Cristiano Porciani! AIfA, Uni-Bonn!

Parameter estimation! and! forecasting! Cristiano Porciani! AIfA, Uni-Bonn! Parameter estimation! and! forecasting! Cristiano Porciani! AIfA, Uni-Bonn! Questions?! C. Porciani! Estimation & forecasting! 2! Cosmological parameters! A branch of modern cosmological research focuses

More information

Monte Carlo Studies. The response in a Monte Carlo study is a random variable.

Monte Carlo Studies. The response in a Monte Carlo study is a random variable. Monte Carlo Studies The response in a Monte Carlo study is a random variable. The response in a Monte Carlo study has a variance that comes from the variance of the stochastic elements in the data-generating

More information

Notes for Week 13 Analysis of Variance (ANOVA) continued WEEK 13 page 1

Notes for Week 13 Analysis of Variance (ANOVA) continued WEEK 13 page 1 Notes for Wee 13 Analysis of Variance (ANOVA) continued WEEK 13 page 1 Exam 3 is on Friday May 1. A part of one of the exam problems is on Predictiontervals : When randomly sampling from a normal population

More information

Parameter Estimation

Parameter Estimation Parameter Estimation Consider a sample of observations on a random variable Y. his generates random variables: (y 1, y 2,, y ). A random sample is a sample (y 1, y 2,, y ) where the random variables y

More information

Central Limit Theorem and the Law of Large Numbers Class 6, Jeremy Orloff and Jonathan Bloom

Central Limit Theorem and the Law of Large Numbers Class 6, Jeremy Orloff and Jonathan Bloom Central Limit Theorem and the Law of Large Numbers Class 6, 8.5 Jeremy Orloff and Jonathan Bloom Learning Goals. Understand the statement of the law of large numbers. 2. Understand the statement of the

More information

Applied Econometrics (MSc.) Lecture 3 Instrumental Variables

Applied Econometrics (MSc.) Lecture 3 Instrumental Variables Applied Econometrics (MSc.) Lecture 3 Instrumental Variables Estimation - Theory Department of Economics University of Gothenburg December 4, 2014 1/28 Why IV estimation? So far, in OLS, we assumed independence.

More information

Probability theory basics

Probability theory basics Probability theory basics Michael Franke Basics of probability theory: axiomatic definition, interpretation, joint distributions, marginalization, conditional probability & Bayes rule. Random variables:

More information

Regression Estimation - Least Squares and Maximum Likelihood. Dr. Frank Wood

Regression Estimation - Least Squares and Maximum Likelihood. Dr. Frank Wood Regression Estimation - Least Squares and Maximum Likelihood Dr. Frank Wood Least Squares Max(min)imization Function to minimize w.r.t. β 0, β 1 Q = n (Y i (β 0 + β 1 X i )) 2 i=1 Minimize this by maximizing

More information

2. Linear regression with multiple regressors

2. Linear regression with multiple regressors 2. Linear regression with multiple regressors Aim of this section: Introduction of the multiple regression model OLS estimation in multiple regression Measures-of-fit in multiple regression Assumptions

More information

MS&E 226: Small Data

MS&E 226: Small Data MS&E 226: Small Data Lecture 12: Frequentist properties of estimators (v4) Ramesh Johari ramesh.johari@stanford.edu 1 / 39 Frequentist inference 2 / 39 Thinking like a frequentist Suppose that for some

More information

Statistical Inference

Statistical Inference Statistical Inference Classical and Bayesian Methods Revision Class for Midterm Exam AMS-UCSC Th Feb 9, 2012 Winter 2012. Session 1 (Revision Class) AMS-132/206 Th Feb 9, 2012 1 / 23 Topics Topics We will

More information

Monte Carlo Simulations and the PcNaive Software

Monte Carlo Simulations and the PcNaive Software Econometrics 2 Monte Carlo Simulations and the PcNaive Software Heino Bohn Nielsen 1of21 Monte Carlo Simulations MC simulations were introduced in Econometrics 1. Formalizing the thought experiment underlying

More information

Introduction to Design of Experiments

Introduction to Design of Experiments Introduction to Design of Experiments Jean-Marc Vincent and Arnaud Legrand Laboratory ID-IMAG MESCAL Project Universities of Grenoble {Jean-Marc.Vincent,Arnaud.Legrand}@imag.fr November 20, 2011 J.-M.

More information

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A. 1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n

More information

Universität Potsdam Institut für Informatik Lehrstuhl Maschinelles Lernen. Hypothesis testing. Anna Wegloop Niels Landwehr/Tobias Scheffer

Universität Potsdam Institut für Informatik Lehrstuhl Maschinelles Lernen. Hypothesis testing. Anna Wegloop Niels Landwehr/Tobias Scheffer Universität Potsdam Institut für Informatik Lehrstuhl Maschinelles Lernen Hypothesis testing Anna Wegloop iels Landwehr/Tobias Scheffer Why do a statistical test? input computer model output Outlook ull-hypothesis

More information

Regression Estimation Least Squares and Maximum Likelihood

Regression Estimation Least Squares and Maximum Likelihood Regression Estimation Least Squares and Maximum Likelihood Dr. Frank Wood Frank Wood, fwood@stat.columbia.edu Linear Regression Models Lecture 3, Slide 1 Least Squares Max(min)imization Function to minimize

More information

Applied Quantitative Methods II

Applied Quantitative Methods II Applied Quantitative Methods II Lecture 4: OLS and Statistics revision Klára Kaĺıšková Klára Kaĺıšková AQM II - Lecture 4 VŠE, SS 2016/17 1 / 68 Outline 1 Econometric analysis Properties of an estimator

More information

If we want to analyze experimental or simulated data we might encounter the following tasks:

If we want to analyze experimental or simulated data we might encounter the following tasks: Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction

More information

Regression with a Single Regressor: Hypothesis Tests and Confidence Intervals

Regression with a Single Regressor: Hypothesis Tests and Confidence Intervals Regression with a Single Regressor: Hypothesis Tests and Confidence Intervals (SW Chapter 5) Outline. The standard error of ˆ. Hypothesis tests concerning β 3. Confidence intervals for β 4. Regression

More information

INTRODUCTION TO ANALYSIS OF VARIANCE

INTRODUCTION TO ANALYSIS OF VARIANCE CHAPTER 22 INTRODUCTION TO ANALYSIS OF VARIANCE Chapter 18 on inferences about population means illustrated two hypothesis testing situations: for one population mean and for the difference between two

More information

Business Statistics: Lecture 8: Introduction to Estimation & Hypothesis Testing

Business Statistics: Lecture 8: Introduction to Estimation & Hypothesis Testing Business Statistics: Lecture 8: Introduction to Estimation & Hypothesis Testing Agenda Introduction to Estimation Point estimation Interval estimation Introduction to Hypothesis Testing Concepts en terminology

More information

1/24/2008. Review of Statistical Inference. C.1 A Sample of Data. C.2 An Econometric Model. C.4 Estimating the Population Variance and Other Moments

1/24/2008. Review of Statistical Inference. C.1 A Sample of Data. C.2 An Econometric Model. C.4 Estimating the Population Variance and Other Moments /4/008 Review of Statistical Inference Prepared by Vera Tabakova, East Carolina University C. A Sample of Data C. An Econometric Model C.3 Estimating the Mean of a Population C.4 Estimating the Population

More information

STAT 135 Lab 5 Bootstrapping and Hypothesis Testing

STAT 135 Lab 5 Bootstrapping and Hypothesis Testing STAT 135 Lab 5 Bootstrapping and Hypothesis Testing Rebecca Barter March 2, 2015 The Bootstrap Bootstrap Suppose that we are interested in estimating a parameter θ from some population with members x 1,...,

More information

Eco517 Fall 2014 C. Sims MIDTERM EXAM

Eco517 Fall 2014 C. Sims MIDTERM EXAM Eco57 Fall 204 C. Sims MIDTERM EXAM You have 90 minutes for this exam and there are a total of 90 points. The points for each question are listed at the beginning of the question. Answer all questions.

More information

Applied Statistics and Econometrics

Applied Statistics and Econometrics Applied Statistics and Econometrics Lecture 6 Saul Lach September 2017 Saul Lach () Applied Statistics and Econometrics September 2017 1 / 53 Outline of Lecture 6 1 Omitted variable bias (SW 6.1) 2 Multiple

More information

Introduction to Econometrics

Introduction to Econometrics Introduction to Econometrics Lecture 2 : Causal Inference and Random Control Trails(RCT) Zhaopeng Qu Business School,Nanjing University Sep 18th, 2017 Zhaopeng Qu (Nanjing University) Introduction to Econometrics

More information

Statistical Inference with Regression Analysis

Statistical Inference with Regression Analysis Introductory Applied Econometrics EEP/IAS 118 Spring 2015 Steven Buck Lecture #13 Statistical Inference with Regression Analysis Next we turn to calculating confidence intervals and hypothesis testing

More information

Economic modelling and forecasting

Economic modelling and forecasting Economic modelling and forecasting 2-6 February 2015 Bank of England he generalised method of moments Ole Rummel Adviser, CCBS at the Bank of England ole.rummel@bankofengland.co.uk Outline Classical estimation

More information

exp{ (x i) 2 i=1 n i=1 (x i a) 2 (x i ) 2 = exp{ i=1 n i=1 n 2ax i a 2 i=1

exp{ (x i) 2 i=1 n i=1 (x i a) 2 (x i ) 2 = exp{ i=1 n i=1 n 2ax i a 2 i=1 4 Hypothesis testing 4. Simple hypotheses A computer tries to distinguish between two sources of signals. Both sources emit independent signals with normally distributed intensity, the signals of the first

More information

1 Random walks and data

1 Random walks and data Inference, Models and Simulation for Complex Systems CSCI 7-1 Lecture 7 15 September 11 Prof. Aaron Clauset 1 Random walks and data Supposeyou have some time-series data x 1,x,x 3,...,x T and you want

More information

review session gov 2000 gov 2000 () review session 1 / 38

review session gov 2000 gov 2000 () review session 1 / 38 review session gov 2000 gov 2000 () review session 1 / 38 Overview Random Variables and Probability Univariate Statistics Bivariate Statistics Multivariate Statistics Causal Inference gov 2000 () review

More information

Multiple Regression Analysis

Multiple Regression Analysis Multiple Regression Analysis y = β 0 + β 1 x 1 + β 2 x 2 +... β k x k + u 2. Inference 0 Assumptions of the Classical Linear Model (CLM)! So far, we know: 1. The mean and variance of the OLS estimators

More information

Econometrics 2, Class 1

Econometrics 2, Class 1 Econometrics 2, Class Problem Set #2 September 9, 25 Remember! Send an email to let me know that you are following these classes: paul.sharp@econ.ku.dk That way I can contact you e.g. if I need to cancel

More information

Intelligent Embedded Systems Uncertainty, Information and Learning Mechanisms (Part 1)

Intelligent Embedded Systems Uncertainty, Information and Learning Mechanisms (Part 1) Advanced Research Intelligent Embedded Systems Uncertainty, Information and Learning Mechanisms (Part 1) Intelligence for Embedded Systems Ph. D. and Master Course Manuel Roveri Politecnico di Milano,

More information

Mathematical Statistics

Mathematical Statistics Mathematical Statistics MAS 713 Chapter 8 Previous lecture: 1 Bayesian Inference 2 Decision theory 3 Bayesian Vs. Frequentist 4 Loss functions 5 Conjugate priors Any questions? Mathematical Statistics

More information